So, you're essentially suggesting that we adopt a moral system that is directly derived from not only evolution, but our ability to now look back at how we got here, realize where we are, and determine where to go from here.
Close, but I think you might have a common misconception. You seem to be confusing the known with the actual.
Within the framework of our current evolution there is likely an actual "best" behavior that would allow humans to thrive most efficiently (or--at least--behaviors that are much, much better than others), but this perfect behavior is not known. We can start to approximate that behavior, however, by looking at past behaviors and using them to build models that predict what future behaviors would do. We would be trying to find that "best behavior" by looking back at "how we got here." You know, using the scientific method.
(Note: I think this is an awesome misfiring, and I'm glad it works the way it does, but it is a misfire.)
Is it?
I would point out that helping other nations isn't necessarily 'fitness amoral.' Getting Africa into a better position in the world would certainly affect the US, for example.
Would one person sending one dollar make Africa an industrial nation? No, but it's a step in that direction, and I would not call it fitness amoral.
Alright, Mockingbird, I don't think I'm ready for this encounter's challenge rating, but what the heck:
Bold claim. Please say more, because this doesn't answer why I should follow any sort of teleological purpose when I can assign something a new purpose. For example, let's say that a book's telos is to share information with a reader. Is there something wrong with my reassigning the book's telos to stabilizing a wobbly table? Why or why not?
You certainly could use a book to stabilize your table, but would it be the best use of the book? Would it be the best way to stabilize your table? I would say no. It works, yes, but it is a suboptimal use of both the book and the table, and it's not what either was designed for. Within that framework, it's neither the worst thing you could do with the book nor the best. However, it's closer to the one than the other.
You gloss over a pretty important question of why shouldn't we change our telos/own human nature? Also, what is "man's fitness" and why should I aim for it?
I point out below that such a question would be beyond the scope of this normative ethics system. Once you change man's nature, you've moved the goal post, which is illegal in football.
First, "If a behavior works better within that nature, those men following it thrive" is a conditional statement relying on me believing that a man working with their surrounding nature is better conditioned to survive. Sure, I'll bite that there could be a correlation, but that doesn't mean there is a causation. And furthermore, nature is violent. No matter how well conditioned I am to survive in Arkansas, USA, it won't help me if I get hit by a meteor or any other random occurrence that kills my genes.
I am not speaking of "surrounding nature" in the environmentalist sense, but of the "nature of man." As in, our natural inclinations and abilities, like advanced communication.
But, jumping ahead in your statements, I hope we would all agree that getting hit by a meteor is something best avoided.
Second, I think you're leaping from we can observe behaviors to we can evaluate actions in a moral way. I think we could evaluate which behaviors are effective for the telos (that we make up), but that doesn't mean we've made a moral judgment.
I am saying we can observe behaviors and say that some behaviors would help obtain our telos and some would not; and that there is a range.
THEN I am saying--within that framework--you can safely call actions against it "bad" and actions for it "good."
Maybe I was mistaken, but I thought once you have a goal you implicitly move from a descriptive evaluation to a normative evaluation.
That's a pretty serious oversight in my opinion, because one of the conditions of evolution is that it has not stopped and will continue. To me this statement is a complete disregard for the future and makes a normative evaluation impossible. If I'm way off the mark, please explain how.
I would not call a moral system that allows for amoral actions a "pretty serious oversight," so I guess I disagree with your opinion.
Anyway, evolution is continuing, correct, but the changes are not drastic. So maybe the "best behavior" is changing, but not fast enough for our approximations to be invalid day to day. When the changes caused by progressive evolution are outside of our error bars, THEN we can start worrying about them. To do so otherwise, I would say, would be premature.
Yes, individuals can benefit from altruism directed at unrelated (or less-related) individuals through reciprocation. However, it's pretty safe to say that in a large society, holding a door open for a stranger at the mall, or donating money for starving children halfway around the world, does little if anything to increase your individual fitness. This is essentially a misfiring of altruism.
Even if you're right and it has no fitness value, can it be called a misfiring? That assumes the purpose of the altruism instinct is to increase fitness. But the evolutionary logic of fitness and survival can be analyzed, and possibly better so, as an efficient cause rather than a telos - an explanation for how the thing came to be, rather than an explanation of its purpose.
Is the purpose of a heart to increase fitness, or is the purpose of a heart to pump blood? Would a heart be a bad heart, would it be misfiring, if it continued to pump blood perfectly, but circumstances were (somehow) such that this activity was no longer fitness-improving?
Vive, vale. Siquid novisti rectius istis,
candidus inperti; si nil, his utere mecum.
Even if you're right and it has no fitness value, can it be called a misfiring? That assumes the purpose of the altruism instinct is to increase fitness.
But the evolutionary logic of fitness and survival can be analyzed, and possibly better so, as an efficient cause rather than a telos - an explanation for how the thing came to be, rather than an explanation of its purpose.
It's sort of one and the same here. Altruism arose (and this is an extremely simplistic explanation) because one way to improve the chances that your genes pass on to the next generation is to increase your relatives' chances to have reproductive success. It also, obviously, has the purpose or effect of improving your fitness.
Is the purpose of a heart to increase fitness, or is the purpose of a heart to pump blood? Would a heart be a bad heart, would it be misfiring, if it continued to pump blood perfectly, but circumstances were (somehow) such that this activity was no longer fitness-improving?
You're playing with words and confusing things. Obviously, the purpose of a heart is to pump blood, and it is also necessary for survival. If a heart is no longer contributing to fitness, it is now a vestigial organ. Altruism, although it misfires, is still useful for fitness (we are disproportionally altruistic towards relatives, as would be expected).
Note: I feel I should clarify...the evolutionary purpose of altruism is to increase fitness, but this is not something that is done consciously.
It also, obviously, has the purpose or effect of improving your fitness.
Warning! Warning! There's a world of difference between a purpose and an effect. It is not the purpose of cars to pump smog into the atmosphere, but it is one of their effects.
Yes, I got that, I'm trying to tell you that that claim is really complicated and fraught with puzzles and pitfalls.
It seems to me it depends wholly on how you wish to define purpose. You can say:
1. Altruism has the effect of conferring additional fitness upon the participant.
2. Altruism's purpose is to confer additional fitness upon the participant.
Both are true, and maybe I'm wrong, but it seems you disagree that this is the case?
You certainly could use a book to stabilize your table, but would it be the best use of the book? Would it be the best way to stabilize your table?
Yes, it would. I think that it would be great that a book's teleological purpose would be to fix another object's teleological purpose. And the greatest good, in addition to the aid the book gives the table, would be that it increases the freedom I have with my wallet, because I don't have to replace a table anymore. That is much better than reading a book. But I'd be interested to hear why it would not be.
It works, yes, but is a suboptimal use of both the book and the table, and it's not what either were designed for. Within that framework, its both not the worst thing you could do with the book, nor the best. However, it's closer to the one than the other.
Unless of course, one comes to the realization that I am arbitrarily redefining objects' telos in such a way to demonstrate that science cannot declare a teleological purpose because the individual assigns teleological purposes based on the context of the situation. To carry my book analogy further, reading a book may be the teleological purpose when I want to learn or be entertained by what it has to say, but that teleological purpose is (allegedly) finite. After I've been entertained or learned what the book has to say, it needs a new teleological purpose.
Or, using a different line of thought, why should I believe that reading the book is the teleological purpose just because I've seen it be an effective telos? I've observed it be (in my purview) an equally or more effective table support. Between individuals or even groups of people there can be huge discrepancies with telos. I suppose, to drive this point hard, I have a much more polarizing question than the one relating to books: what is the teleological purpose of an individual who happens to be a woman?
You gloss over a pretty important question of why shouldn't we change our telos/own human nature? Also, what is "man's fitness" and why should I aim for it?
I point out below that such a question would be beyond the scope of this normative ethics system. Once you change man's nature, you've moved the goal post, which is illegal in football.
I'll have more to say below, but what I am asking is whether it is possible to decide on a goalpost in the first place.
First, "If a behavior works better within that nature, those men following it thrive" is a conditional statement relying on me believing that a man working with their surrounding nature is better conditioned to survive. Sure, I'll bite that there could be a correlation, but that doesn't mean there is a causation. And furthermore, nature is violent. No matter how well conditioned I am to survive in Arkansas, USA, it won't help me if I get hit by a meteor or any other random occurrence that kills my genes.
I am not speaking of "surrounding nature" in the environmentalist sense, but of the "nature of man." As in, our natural inclinations and abilities, like advanced communication.

First, you have to establish that man has a nature (whatever that means). Second, we have to pick the inclinations that we want to promote. After all, there are inclinations within people that are frowned upon. The Seven Deadly Sins are examples of such inclinations. What makes them immoral (or moral, if you disagree with the sentiment that some may be moral)? One to keep in mind is lust.
Second, I think you're leaping from we can observe behaviors to we can evaluate actions in a moral way. I think we could evaluate which behaviors are effective for the telos (that we make up), but that doesn't mean we've made a moral judgment.
I am saying we can observe behaviors and say that some behaviors would help obtain our telos and some would not; and that there is a range.
THEN I am saying--within that framework--you can safely call actions against it "bad" and actions for it "good."
Maybe I was mistaken, but I thought once you have a goal you implicitly move from a descriptive evaluation to a normative evaluation.
We haven't reached a goal yet. And that's the point I'm trying to make. There is no goal.
That's a pretty serious oversight in my opinion, because one of the conditions of evolution is that it has not stopped and will continue. To me this statement is a complete disregard for the future and makes a normative evaluation impossible. If I'm way off the mark, please explain how.
I would not call a moral system that allows for amoral actions a "pretty serious oversight," so I guess I disagree with your opinion.
Anyway, evolution is continuing, correct, but the changes are not drastic. So maybe the "best behavior" is changing, but not fast enough for our approximations to be invalid day to day. When the changes caused by progressive evolution are outside of our error bars, THEN we can start worrying about them. To do so otherwise, I would say, would be premature.
Just because our nature (as the human race) isn't changing drastically within short periods of time does not mean that our environment is not. And a key part of evolution is taking the environment into account, because which aspects of human nature are teleologically most important depends on what the surrounding environment dictates, doesn't it?
Yes, it would. I think that it would be great that a book's teleological purpose would be to fix another object's teleological purpose. And the greatest good, in addition to the aid the book gives the table, would be that it increases the freedom I have with my wallet, because I don't have to replace a table anymore. That is much better than reading a book. But I'd be interested to hear why it would not be.
Certainly within the framework I set up you could show the book's purpose was to do just that in a number of ways. However, I am unsure what justification you can use to say the purpose of the table supersedes the purpose of the book.
Unless of course, one comes to the realization that I am arbitrarily redefining objects' telos in such a way to demonstrate that science cannot declare a teleological purpose because the individual assigns teleological purposes based on the context of the situation. To carry my book analogy further, reading a book may be the teleological purpose when I want to learn or be entertained by what it has to say, but that teleological purpose is (allegedly) finite. After I've been entertained or learned what the book has to say, it needs a new teleological purpose.
If you are assigning your telos ad hoc, and I am using a reasoned method, then my understanding is my argument is a priori stronger than yours.
It is my understanding that the time to offhandedly dismiss an argument is at the axiomatic part, not the corollary. Thus, I would be justified in offhandedly dismissing your arbitrary telos assignment, but you would not be justified in offhandedly dismissing mine, since it is one step forward in the reasoning chain.
You would instead have to either show that mine does not follow from the axiom, or dismiss the axiom. I believe that's how it works, at any rate.
Or using a different line of thought, why should I believe that reading the book is the teleological purpose just because I've seen it be an effective telos?
Because, in my framework, the purpose it was designed for is to convey knowledge, and it is still quite useful in that capacity. Unless this is some other kind of book that isn't, but that would only change my response, not the framework.
Second, we have to pick the inclinations that we want to promote. After all, there are inclinations within people that are frowned upon. The Seven Deadly Sins are examples of such inclinations. What makes them immoral (or moral, if you disagree with the sentiment that some may be moral)? One to keep in mind is lust.
There must be justification to remove something within the framework, including 'sins.' Do these 'sins' help or hurt our species? To what extent? IDK, we would have to find out ways of determining that.
Science is not about cutting large swaths of possibilities out of something, but a slow and steady march where each step must be justified.
Just because our nature (as the human race) isn't changing drastically within short periods of time does not mean that our environment is not. And a key part of evolution is taking the environment into account, because which aspects of human nature are teleologically most important depends on what the surrounding environment dictates, doesn't it?
One of the skills we have evolved is the ability to change our environment. The extent to which we can do that is fairly great. That would also need to be taken into account, no doubt. Which simply makes everything more complicated, but it's not like it was an easy problem to solve from the start.
However, maybe I am misunderstanding: are you talking about some catastrophe so great it would wipe out mankind, and there would be nothing mankind could do to stop it? I dismissed the implication because I don't see its relevance, except to say that--within this framework--it would be the very definition of "bad."
Science can only take into account physical/material aspects of the equation. How can it determine the best end result if it is not taking into account any metaphysical/spiritual elements that are present? Leaving them out would create a faulty end result.
Science can only take into account physical/material aspects of the equation. How can it determine the best end result if it is not taking into account any metaphysical/spiritual elements that are present? Leaving them out would create a faulty end result.
If it creates a "faulty end result", that's an observable effect, and science can and does take it into account. Scientists regularly investigate lots of phenomena that are invisible and immaterial, from the force that makes apples fall and holds the planets in their orbits to the subtleties of human emotion. These phenomena are deemed real - or "physical", if you like - precisely because they have observable effects. But it would make no sense for a scientist to observe an effect and say, "Oh, that's spiritual, I'd better not take it into account"; anything with an effect is fair game. The only people who make the distinction between "spiritual" and "physical" are those who want to believe in certain things in spite of the fact that they don't have observable effects. And if they don't have observable effects, clearly they can't produce any faulty end results, or make their existence known to humanity in any other way.
On the misuse of the word "metaphysical", I have already given you a lecture.
Isn't "objective experience" a contradiction in terms? The desk that my computer monitor is sitting on is objectively there. I experience it subjectively.
Might be a bit far afield, sorry if I'm nitpicking, I just prefer precision in language when doing philosophy.
One of the skills we have evolved is the ability to change our environment. The extent to which we can do that is fairly great. That would also need to be taken into account, no doubt. Which simply makes everything more complicated, but it's not like it was an easy problem to solve from the start.
Wouldn't you say that our ability to change our environment can be attributed more to the accumulation of knowledge and the need to support larger groups of people by creating cities? Unless, of course, you are attributing that to our more developed brain, which actually doesn't vary too much in terms of cc compared to, say, Homo erectus and even Neanderthals of around 33,000 years ago.
I would not claim to be an expert, but my understanding is that it is the human capacity for speech that makes us different. The fact we can share complex and abstract thoughts with one another allows us to pool our collective brain power in ways unlike any other animal.
I would not claim to be an expert, but my understanding is that it is the human capacity for speech that makes us different. The fact we can share complex and abstract thoughts with one another allows us to pool our collective brain power in ways unlike any other animal.
The jury is still out, but it is very possible that Neanderthals had language abilities equivalent to ours.
[Transferred from the "would you kill a stranger..." thread.]
While evolution certainly does generate selection pressure that pushes us towards moral behaviors, it doesn't follow that evolution explains or underwrites morality. You wouldn't say that the police (even when they are acting entirely in their proper capacity) explain morality -- while they are a force that tends to push morally-errant agents back on track, and you could even "map out" the track by doing random things and writing down whether the police intervene or not, their presence isn't what makes the track the way it is. The same is true of evolution.
If we created some sort of sentient AI that didn't evolve (by virtue of us having created it), it would still be a conscious creature and one would still expect it to attain a sufficient understanding of moral principles such that it could both give and receive moral treatment, even in the absence of any evolutionary pressure whatsoever. Arguments about the well-being of conscious creatures (i.e. all moral arguments) cannot turn on those creatures having evolved, because it may well be possible for there to be a conscious creature that didn't or can't evolve -- yet we'd still expect a proper moral framework to encompass it all the same.
Evolution should not be used as a proxy for "the contingent facts about how a conscious creature got the way it is." It happens to be the way we became how we are, but it's not a logical necessity and introducing it into morality ties your morality to contingent matters in a way that is at the very least not necessary and at worst just wrong.
If, instead of having evolved, we were snapped into existence by God just as we are now, it would still be wrong to, say, pluck out a child's eyes, because our eyes would remain just as spectacularly important to our well-being as non-evolved creatures as they are to our well-being as evolved creatures. Evolution is the process that happened to result in our having eyes -- it is not the reason that eyes are useful to us. In fact, it's the other way around -- a visual system is useful because there is a niche carved out by the physical nature of our corner of the universe for sighted animals, and the genetic algorithm that is evolution felt out all the various paths of the search space and settled onto the peak corresponding to the niche.
Similarly, referring back to the classical argument about the failure of a society that permits murder to thrive under evolutionary pressure, it's not that evolution "makes" murder bad -- rather, it's that the badness of murder "makes" evolution favor those societies that are apt to avoid it.
The upshot of all of this is that placing evolution in a position that is ontologically prior to morality is putting the cart before the horse. Evolution can explain how a creature came to occupy a particular niche of its fitness landscape, but it cannot explain the shape of the fitness landscape itself.
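The fitness-landscape point can be made concrete with a toy sketch (everything here--the landscape, the numbers, the loop--is invented purely for illustration, not a model of real biology): the fitness function is fixed before the search ever starts, and the evolutionary loop only discovers where its peak is; nothing inside the loop explains why the peak sits where it does.

```python
import random

# A toy, hand-picked "fitness landscape": its shape is fixed by us in
# advance, and the search procedure below does not create or alter it.
def fitness(x):
    return -(x - 7.0) ** 2  # a single peak, arbitrarily placed at x = 7

def evolve(generations=200, pop_size=20, seed=0):
    rng = random.Random(seed)
    # random initial population of "genotypes" (here, just numbers)
    pop = [rng.uniform(-10, 10) for _ in range(pop_size)]
    for _ in range(generations):
        # selection: keep the fitter half of the population
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]
        # reproduction with mutation: offspring are noisy copies
        pop = survivors + [s + rng.gauss(0, 0.1) for s in survivors]
    return max(pop, key=fitness)

best = evolve()
print(best)  # a value close to the peak at x = 7
```

The population reliably climbs toward x = 7, but inspecting the loop tells you nothing about why the peak is there; that fact lives entirely in `fitness`, the stand-in for the "shape of the fitness landscape" that the search can occupy but not explain.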
While evolution certainly does generate selection pressure that pushes us towards moral behaviors, it doesn't follow that evolution explains or underwrites morality. You wouldn't say that the police (even when they are acting entirely in their proper capacity) explain morality -- while they are a force that tends to push morally-errant agents back on track, and you could even "map out" the track by doing random things and writing down whether the police intervene or not, their presence isn't what makes the track the way it is. The same is true of evolution.
I think this is just a terminology issue. Let me see if we can clear it up a bit.
Things can’t travel faster than the speed of light (there are some theories that would interject, but just go with it for a second). As you approach this “universe speed limit” it takes more and more energy for less and less velocity increase. If you ask a physicist why that is and what’s causing it, they will likely reply “Relativity,” that being the name of the theory which explains the observed phenomenon. Now, is “Relativity” the force that causes this? Is “Relativity” the thing impeding the particle? Well, sorta, but that’s how we would refer to that whole thing in physics.
So, when I say “evolution,” whether correctly or incorrectly, I’m thinking about the observed phenomena that word is meant to explain. If someone were to ask me “why do animals with these traits die off while these other animals don’t?” correctly or incorrectly I would say “evolution.” I don’t know what to call “the track;” I just know “evolution” is the observable phenomenon that is “sketching” out that track for me.
I don’t think we disagree, I just think you are used to dealing with the rules of a system directly, and I am used to dealing with what the rules physically cause. "The track" is really what we are both talking about; we are just calling it different things and I’m using “evolution” to mean something more inclusive than you are.
If we created some sort of sentient AI that didn't evolve (by virtue of us having created it), it would still be a conscious creature and one would still expect it to attain a sufficient understanding of moral principles such that it could both give and receive moral treatment, even in the absence of any evolutionary pressure whatsoever. Arguments about the well-being of conscious creatures (i.e. all moral arguments) cannot turn on those creatures having evolved, because it may well be possible for there to be a conscious creature that didn't or can't evolve -- yet we'd still expect a proper moral framework to encompass it all the same.
Except these created beings could have any number of totally different biological needs than other things we’ve encountered.
Let’s assume these creatures needed to feel searing pain inflicted on them by another every 24 hours or they die (or some other equally unfamiliar quirk; Illithids, fictional creatures physically unable to experience positive emotions like 'love,' spring to mind). Would it be safe to assume their moral code should be the same as ours? No, I don’t think it would. A society of these creatures would be unlike our societies, and their social evolution (not biological evolution, since you say that’s out of the picture for this example) would happen differently. Their reasoning on how “conscious creatures” should treat one another would not lead them to the same conclusions that our reasoning would.
Anyway, I understand what you’re saying here. You’re pointing out that evolution isn’t necessary for “the track” to exist, and I agree with you. However--for us--our biology is the context that our “well-being” needs to be evaluated in. And--for us--that biology was hewn by evolutionary forces. That’s why I keep bringing it up.
Similarly, referring back to the classical argument about the failure of a society that permits murder to thrive under evolutionary pressure, it's not that evolution "makes" murder bad -- rather, it's that the badness of murder "makes" evolution favor those societies that are apt to avoid it.
I thought that was what I was saying, assuming you mean social evolution here.
There are two kinds of evolution happening here, biological and social. Biological evolution is what defined our biology. Our biology defines what increases or decreases our “well-being.” Social evolution--then--is the pressure you are talking about that ‘bumps’ us back onto “the track” towards maximum well-being.
But, "the track" WAS created by biological evolution--in our case--because our biology is what defines "the track."
The upshot of all of this is that placing evolution in a position that is ontologically prior to morality is putting the cart before the horse. Evolution can explain how a creature came to occupy a particular niche of its fitness landscape, but it cannot explain the shape of the fitness landscape itself.
I still assert that most of the disagreements we are having are semantic, stemming from my inability to articulate what I’m trying to say correctly.
However, if our biology defines what increases or decreases our “well-being,” then it is not incorrect to put biological evolution before our morality, only social evolution.
Things can’t travel faster than the speed of light (there are some theories that would interject, but just go with it for a second). As you approach this “universe speed limit” it takes more and more energy for less and less velocity increase. If you ask a physicist why that is and what’s causing it, they will likely reply “Relativity,” that being the name of the theory which explains the observed phenomenon. Now, is “Relativity” the force that causes this? Is “Relativity” the thing impeding the particle? Well, sorta, but that’s how we would refer to that whole thing in physics.
Well, I don't know that I concur about your narrative about what physicists would say. I do know what the greatest physicist of all time said when he was faced with a similar question, and it seems that if you are right our physicists have become quite tragically inarticulate in the intervening 300 years.
So, when I say “evolution,” whether correctly or incorrectly, I’m thinking about the observed phenomena that word is meant to explain.
The problem is that evolution is not an explanatory terminus or a brute inductive observation in the way that gravity was at the time of Newton or relativity is (maybe) today. In fact, (biological) evolution is explicable in terms of more elementary physics: "reproduction" is a physical process traceable back to cell division; "mutation" is a physical process that can be explained in terms of chromosomal crossover combined with external mutating influences, and "selection" is a physical process that can be explained by all of the various ways life may be reduced to a state where it cannot further reproduce.
So it would be a kind of deliberate ignorance to say hypotheses non fingo to evolution -- you could, if you wished to blind yourself to the rest of physical science, observe evolution and treat it as though it were as fundamental a force as gravity, but that would be a mistake.
If someone were to ask me “why do animals with these traits die off while these other animals don’t?” correctly or incorrectly I would say “evolution.” I don’t know what to call “the track;” I just know “evolution” is the observable phenomenon that is “sketching” out that track for me. I don’t think we disagree, I just think you are used to dealing with the rules of a system directly, and I am used to dealing with what the rules physically cause. "The track" is really what we are both talking about; we are just calling it different things and I’m using “evolution” to mean something more inclusive than you are.
It sounds to me like I'm talking about the track and you're talking about one possible device that you use to measure the track. These are very plainly different things. But since you believe this is reducible to a semantic problem, I'd like you to give a precise definition of evolution as you are using it.
Except these created beings could have any number of totally different biological needs than other things we’ve encountered.
Yes, I expect they would. For instance, if we encountered a conscious being, call it X, that thrived under extreme heat, there might be a rule that says "don't burn human beings while they're alive" and also a rule "do burn X while it is alive." How is this supposed to stand in contradiction to or raise a question with anything I've said?
Would it be safe to assume their moral code should be the same as ours? No, I don’t think it would.
Their rules for treating humans would be perforce the same as our rules for treating humans, because rules for treating humans are based on the well-being of humans, which is objective. Similarly, rules for treating X's would be based on the well-being of X's, which is objective.
A society of these creatures would be unlike our societies, and their social evolution (not biological evolution, since you say that’s out of the picture for this example) would happen differently. Their reasoning on how “conscious creatures” should treat one another would not lead them to the same conclusions that our reasoning would.
Yes it would. Just as you and I are having this discussion, and we are both intelligent enough to realize that a conscious creature of kind X might achieve its well-being in different ways from us, so could the X's have the same discussion about humans and reach the same conclusions. In fact, if they were extraordinarily intelligent and capable they could, without knowing anything of human history or evolution, deduce our moral principles purely from an accurate map of our brains and bodies. They would learn exactly what causes us physical and mental suffering and/or happiness.
Anyway, I understand what you’re saying here. You’re pointing out that evolution isn’t necessarily for “the track” to exist, and I agree with you. However--for us--our biology is the context that our “well-being” needs to be evaluated in. And--for us--that biology was hewn by evolutionary forces. That’s why I keep bringing it up.
You keep bringing it up, and I keep insisting that it is a non sequitur. It is as though we have been tasked to measure the temperature of a beaker of water, and you think that said measurement turns on whether the beaker was heated by a hot plate or a Bunsen burner. When attempting to figure out how hot the water is, how it became hot is irrelevant.
Evolution is a description of the physical process of how we got to be the way we are -- how the "water" was "heated." It is not a proxy for the way we actually are -- it's not the "temperature" of the "water."
I thought that was what I was saying, assuming you mean social evolution here.
I mean both kinds of evolution. Evolution isn't the reason murder is bad; the badness of murder is the reason evolution avoids it. Similarly, evolution isn't the reason sight is good; the goodness of sight is the reason evolution selects for it.
There are two kinds of evolution happening here, biological and social. Biological evolution is what defined our biology. Our biology defines what increases or decreases our “well-being.” Social evolution--then--is the pressure you are talking about that ‘bumps’ us back onto “the track” towards maximum well-being.
But, "the track" WAS created by biological evolution--in our case--because our biology is what defines "the track."
But biological (or social) evolution has prior inputs -- the environment in which the evolving is happening, most importantly. You seem to be treating it as a kind of deus ex machina or independent, self-generating force. This is a mistake.
The track wasn't created by biological (or social) evolution. Rather, the track is a facet of the socio+biological environment that is sought out by those processes as they take place. Evolution is, in the abstract, a search algorithm.
I still assert that most of the disagreements we are having are semantic, stemming from my inability to articulate what I’m trying to say correctly.
I don't think our disagreement is entirely semantic. I read your post carefully several times and I could not find a way of interpreting it that I did not feel contained serious ontological mistakes. If you could identify the terms you think are semantically mismatched between us and give your definitions, that might help. (I know "evolution" is one you mentioned, and a pretty important one at that.)
I don't think our disagreement is "entirely semantic" either. That's why I said "most of the disagreements we are having are semantic" not "all." I'll also do my best to ignore the unnecessary barbs at the start of your post.
I think it's time I try to recap what's being said to see if we're both on the same page or if we are talking past each other instead. (And I don't think I'm qualified to give a full definition/description of 'evolution,' both social and biological. So, I'll skip that question as well if you don't mind)
Anyway,
Now, your point—I believe—is that there is no reason to bring up evolutionary biology in this discussion. That it really does not matter where we got our biology when talking about what causes us to flourish and what increases our well-being. We could have been snapped into existence by God or made in a laboratory by aliens, neither matters when talking about what we should or should not do. You feel that my talk of biological evolution is pointless at best and “the cart before the horse” at worst.
I agree with you IF you have already assumed--axiomatically--that the well-being and flourishing of “conscious creatures” is the only concern of ethics. We have to start somewhere, and it seems you feel it reasonable to start there. I—however—don’t like starting there.
I don’t feel it is correct for a conscious creature to assume axiomatically that conscious creatures are all that should matter, any more than I would accept a mathematician telling me it was axiomatic that math is the best subject or a physicist telling me it is axiomatic that physics is the best subject. While the latter might be true, it still weakens the argument.
I listened to Sam Harris explain in The Moral Landscape his reasoning about why consciousness necessarily makes consciousness matter, and I can tell you--unless I missed something--the argument was circular. Even ignoring the circular nature of the argument, consciousness validating consciousness leads to issues with things like the experience machine. So, I feel very uncomfortable starting the chain where it seems you wish to start it. I would rather go one “why” back. WHY should the well-being and flourishing of “conscious creatures” be the only concern of ethics?
Because I'm taking one step back, I felt it was necessary to start with Telos. What is the purpose of man? What gives us that purpose? What were we created to do? That is why I start my moral reasoning with biological evolution. I use it to justify why we should care about flourishing and why our ethics should be built around well-being. Biological evolution hewed us for that purpose, so it is our telos.
If you have another reason why we should care about the well-being and flourishing of “conscious creatures” other than an axiomatic, or circular, argument I would like to hear it. But, I don’t think you’ve given one yet.
There is one other part of the discussion I wish to address directly:
They would learn exactly what causes us physical and mental suffering and/or happiness.
Except they could be incapable of physical and mental suffering and/or happiness, so I don’t know why giving those things value would necessarily occur to them. They could also be incapable of empathy—like the Illithid—so they wouldn’t care about “seeing things our way.” Remember, pure logic can’t provide axioms. If these creatures have different starting axioms for what they fundamentally care about, then their logical conclusions are going to be different as well. There is no reason to assume they would be naturally inclined to value things we think "conscious creatures" should value.
I think it's time I try to recap what's being said to see if we're both on the same page or if we are talking past each other instead. (And I don't think I'm qualified to give a full definition/description of 'evolution,' both social and biological. So, I'll skip that question as well if you don't mind)
I'm afraid I do mind, at least in some sense. I mean, obviously you don't have to say anything you don't want to say -- but I think one should know what one's claims are, and by asking you to define evolution I am only asking what you think you mean when you use the word. I am not trying to goad you into providing an "incorrect" definition of evolution so I can say "ha ha, your definition isn't technically correct." Definitions can't be wrong. My goal is to gain an understanding of what you're asserting. If we're having a semantic disagreement, the only way to get to the bottom of it is to try to analyze it in more primitive terms that we do semantically agree on.
Now, your point—I believe—is that there is no reason to bring up evolutionary biology in this discussion. That it really does not matter where we got our biology when talking about what causes us to flourish and what increases our well-being. We could have been snapped into existence by God or made in a laboratory by aliens, neither matters when talking about what we should or should not do. You feel that my talk of biological evolution is pointless at best and “the cart before the horse” at worst.
I would say this is a good summary of one of my core points, yes.
I agree with you IF you have already assumed--axiomatically--that the well-being and flourishing of “conscious creatures” is the only concern of ethics. We have to start somewhere, and it seems you feel it reasonable to start there. I—however—don’t like starting there.
I don’t feel it is correct for a conscious creature to assume axiomatically that conscious creatures are all that should matter, any more than I would accept a mathematician telling me it was axiomatic that math is the best subject or a physicist telling me it is axiomatic that physics is the best subject. While the latter might be true, it still weakens the argument.
I would say that it's a definitional matter, rather than an axiomatic one. Ethics, insofar as it is understood to be concerned with the search for the good, must be concerned with the well-being of conscious creatures. If that's not a part of the good, then I don't know what is. To say anything else would be to misuse the word.
As with any definitional matter, you can argue or disagree, but it would be semantic. The problem with these types of semantic swamps is that they are usually a waste of time. If you get into an argument with a mathematician about why 1+1=2, he's going to have to tell you that it comes down to how those terms are defined, and that's really just the end of it. If you want to dispute that definition and say that 1+1=3, you had damned well better follow it up with an interesting application of your new definition -- otherwise the mathematician is going to walk away.
The problem with many ethicists and the is-ought gap is that they drag the discussion into this semantic swamp and never say anything interesting as a result. In fact, they get very angry at people who do try to say interesting things and their response always seems to be to try to yank them back into the swamp, which is literally defined by the inability of those mired in it to ever articulate anything interesting. Well, I'll only go into the swamp if I'm promised an interesting way out of it.
(Since you now appear to be a critic of Sam Harris, you might benefit from reading his response to critics if you haven't already. In addition to answering some of your complaints here and below, he also quotes Thomas Nagel, who gives another excellent response to this particular point.)
I listened to Sam Harris explain in The Moral Landscape his reasoning about why consciousness necessarily makes consciousness matter, and I can tell you--unless I missed something--the argument was circular.
When we were discussing this in the other thread, I thought you agreed with the argument. In a universe devoid of conscious creatures, there would be no morality, because there would be no evaluative context for moral claims. Morality would be a sort of category mistake.
Even ignoring the circular nature of the argument, consciousness validating consciousness leads to issues with things like the experience machine.
Harris responds to the "Experience machine" argument himself, and it certainly is a valid question to pose. My answer would be different from Harris's -- I would say that if the classical Cartesian argument tells us anything, it's that the universe is indistinguishable from an experience machine. Thus we can expect no empirically-grounded theory of anything to be able to distinguish an experience machine from a non-experience-machine. If our criterion for rejecting theories is that they don't deal very well with experience machines, then we must summarily reject substantially every theory ever. Our criteria for accepting (empirical) theories are ultimately grounded in experience, for Chrissake.
However, the problem is that once again it strikes me as completely tangential to what we're supposed to be discussing. Your modifications to the theory don't address the experience machine at all! Okay, so it was our evolution that shaped our brains such that certain brain states please us, with the further (wrong) stipulation that evolution is an actual primitive force that does this rather than a search algorithm acting on anterior inputs. It still remains the case that certain brain states please us, it still remains the case that the experience machine gives us access to these states, and it still remains a question of whether to go in the machine that is not in the least bit helped by the introduction of the non-sequitur that is evolutionary theory.
So, I feel very uncomfortable starting the chain where it seems you wish to start it. I would rather go one “why” back. WHY should the well-being and flourishing of “conscious creatures” be the only concern of ethics?
Why should 1+1 be 2 rather than 3? Here, let me quote Nagel's defense of Harris:
Quote from Thomas Nagel »
The true culprit behind contemporary professions of moral skepticism is the confused belief that the ground of moral truth must be found in something other than moral values. One can pose this type of question about any kind of truth. What makes it true that 2 + 2 = 4? What makes it true that hens lay eggs? Some things are just true; nothing else makes them true. Moral skepticism is caused by the currently fashionable but unargued assumption that only certain kinds of things, such as physical facts, can be “just true” and that value judgments such as “happiness is better than misery” are not among them. And that assumption in turn leads to the conclusion that a value judgment could be true only if it were made true by something like a physical fact. That, of course, is nonsense.
Quote from Taylor »
Because I'm taking one step back, I felt it was necessary to start with Telos. What is the purpose of man? What gives us that purpose? What were we created to do? That is why I start my moral reasoning with biological evolution. I use it to justify why we should care about flourishing and why our ethics should be built around well-being. Biological evolution hewed us for that purpose, so it is our telos.
Perhaps unsurprisingly, "telos" has similar ontological properties to "ethics." For one thing, only a conscious creature can assign a "telos" to a thing. Suppose in our hypothetical lifeless universe of rocks, one of the rocks happens, through its collisions and erosions and bouncing around in space, to have a brutally sharp edge that could be used for cutting (if there were anything to cut) and a blunt handle at the other end that could be used for grasping (if there were anyone around to grasp it) -- in other words, the rock could be used as a knife, and a good one at that, if there were anyone around to do so. Would you call this state of affairs teleological?
No, says I, for there is no extrinsic finality -- there is nothing outside of the knife-rock that we could say wanted it to be that way; the universe didn't conspire to give it a knife-shape in any sense deeper than possible physical determinism and is as "happy" about it being knife-shaped as it would be about it being a spheroid. Nor is there intrinsic finality; the knife-rock itself does not "care" that it's a knife-shape; it could equally well have been a spheroid. It has no internal state or frame of reference that one can appeal to that would "bless" one configuration over the other.
Now if there were a conscious thing in the universe that wanted a knife, it could give that knife-rock a teleology, it could act such as to assign finality to the knife-rock's present state of affairs, and it could act in such a way as to preserve that state of affairs -- but not bloody well until the barrier of consciousness is crossed can such things happen.
That's what's unique about consciousness; that's why it's the line in the sand the crossing of which triggers all these other conclusions. And of course, since evolution isn't conscious, it can't assign telos to things.
(Incidentally, I don't think an evolutionary account of morality should begin by asking "what were we created to do?")
If you have another reason why we should care about the well-being and flourishing of “conscious creatures” other than an axiomatic, or circular, argument I would like to hear it. But, I don’t think you’ve given one yet.
Ex nihilo nihil fit. I can't reason from no axioms or definitions. If you want me to do that, then I'm afraid I'll have to disappoint you. If this is what it comes down to, then you were right in the first place: our disagreement is semantic and irresolvable.
Except they could be incapable of physical and mental suffering and/or happiness, so I don’t know why giving those things value would necessarily occur to them. They could also be incapable of empathy—like the Illithid—so they wouldn’t care about “seeing things our way.”
You're falling into some kind of qualia trap of the form "only experiencing something can allow you to understand that thing." Nobody's experienced a black hole; that doesn't prevent us from reasoning about them. Similarly, a creature that never experienced suffering is not precluded from making the logical deduction that it shouldn't be inflicted upon creatures that can experience it.
I've never been starving. I still understand, in the abstract, that famine is to be fought against and prevented where possible.
Remember, pure logic can’t provide axioms. If these creatures have different starting axioms for what they fundamentally care about, then their logical conclusions are going to be different as well. There is no reason to assume they would be naturally inclined to value things we think "conscious creatures" should value.
Why would a creature, in attempting to determine what it is that we fundamentally care about, start only from hypotheses about themselves? Once you've got a specimen of humanity in front of you to query and investigate, you are no longer constrained to whatever axioms or inputs you were using before -- you have new data which you can use to derive new conclusions.
(I would certainly not deny the possibility of an Ender's Game-esque scenario, where a conscious creature -- out of ignorance concerning the states of the other conscious creature's well-being -- treats another conscious creature horribly. But I would expect that conscious creature to have the capacity to condemn its own actions as immoral once the data about well-being becomes available to it -- just as, indeed, takes place in the Ender's Game scenario.)
Evolution is the natural process by which a thing or group of things becomes more fit. This occurs by desirable traits being selected for and undesirable traits being removed. More fit traits and behaviors flourish. Just as the second law of thermodynamics pushes things towards a state of maximum entropy, evolution moves things towards a state of maximum fitness.
Biological evolution is this process happening to biology (traits), and social evolution is this process happening to a society (behaviors).
Ethics, insofar as it is understood to be concerned with the search for the good, must be concerned with the well-being of conscious creatures. If that's not a part of the good, then I don't know what is.
To say anything else would be to misuse the word.
As with any definitional matter, you can argue or disagree, but it would be semantic...
Since legitimizing a reason behind a definition for a secular morality is the very point of this thread, this is a question beg.
You are saying I need to either accept or reject your definition of "ethics," when an establishment of a definition of "ethics" is the very point of the debate we are having.
I see no fundamental difference between what you're doing here and someone who says "Ethics comes solely from God, and if you can't accept that then we have nothing to discuss." Sure, when I disagree or say I want to discuss reasons for that definition, you can walk away, but what have you accomplished at that point?
(Since you now appear to be a critic of Sam Harris, you might benefit from reading his response to critics if you haven't already. In addition to answering some of your complaints here and below, he also quotes Thomas Nagel, who gives another excellent response to this particular point.)
When we were discussing this in the other thread, I thought you agreed with the argument. In a universe devoid of conscious creatures, there would be no morality, because there would be no evaluative context for moral claims. Morality would be a sort of category mistake.
I agree with many of Sam's conclusions, but I don't agree with all of his reasoning--certainly not the reasoning of his that I brought up here. I agree with the conclusion that reasoning leads him to, but I don't agree with the reasoning itself. Certainly, consciousness is something which gives value meaning, but it does not follow that the first thing consciousness must give value to is consciousness.
Harris responds to the "Experience machine" argument himself, and it certainly is a valid question to pose. My answer would be different from Harris's -- I would say that if the classical Cartesian argument tells us anything, it's that the universe is indistinguishable from an experience machine. Thus we can expect no empirically-grounded theory of anything to be able to distinguish an experience machine from a non-experience-machine. If our criterion for rejecting theories is that they don't deal very well with experience machines, then we must summarily reject substantially every theory ever. Our criteria for accepting (empirical) theories are ultimately grounded in experience, for Chrissake.
My theory gets around this by having consciousness first assign value to telos, not consciousness.
However, the problem is that once again it strikes me as completely tangential to what we're supposed to be discussing. Your modifications to the theory don't address the experience machine at all! Okay, so it was our evolution that shaped our brains such that certain brain states please us, with the further (wrong) stipulation that evolution is an actual primitive force that does this rather than a search algorithm acting on anterior inputs. It still remains the case that certain brain states please us, it still remains the case that the experience machine gives us access to these states, and it still remains a question of whether to go in the machine that is not in the least bit helped by the introduction of the non-sequitur that is evolutionary theory.
Except my theory values fitness before happiness. Clearly, the experience machine doesn't augment fitness; it is of no use to the evolutionary process, either biological or social, and is in fact detrimental in many cases. So, trivially, my theory would reject it.
Why should 1+1 be 2 rather than 3? Here, let me quote Nagel's defense of Harris:
I just gave you a reason as to why something is true, and you push it aside in favor of "it just is?" I'd ask you if this strikes you as ironic, but clearly (and surprisingly) it doesn't.
Perhaps unsurprisingly, "telos" has similar ontological properties to "ethics." For one thing, only a conscious creature can assign a "telos" to a thing.
Why? Where is this property of "conscious creatures" coming from? Where does it originate?
I ask, since clearly the linchpin of my argument is that this is not true, and you seem to be pushing it aside with nothing more than an "it must be so."
Suppose in our hypothetical lifeless universe of rocks, one of the rocks happens, through its collisions and erosions and bouncing around in space, to have a brutally sharp edge that could be used for cutting (if there were anything to cut) and a blunt handle at the other end that could be used for grasping (if there were anyone around to grasp it) -- in other words, the rock could be used as a knife, and a good one at that, if there were anyone around to do so. Would you call this state of affairs teleological?
No, says I, for there is no extrinsic finality -- there is nothing outside of the knife-rock that we could say wanted it to be that way; the universe didn't conspire to give it a knife-shape in any sense deeper than possible physical determinism and is as "happy" about it being knife-shaped as it would be about it being a spheroid. Nor is there intrinsic finality; the knife-rock itself does not "care" that it's a knife-shape; it could equally well have been a spheroid. It has no internal state or frame of reference that one can appeal to that would "bless" one configuration over the other.
Except evolution isn't happening randomly; it clearly has a tendency towards maximum fitness. There is a definite direction to evolution and--at least theoretically--an end goal(s). This definite direction existing doesn't depend on consciousness, I might add.
Ex nihilo nihil fit. I can't reason from no axioms or definitions.
I'm not asking you to. I'm asking you to start from a DIFFERENT set of axioms and use them to justify your definition of ethics, instead of just begging the question.
I've never been starving. I still understand, in the abstract, that famine is to be fought against and prevented where possible.
But, you need empathy to care about another's discomfort. What if these creatures were created without empathy? (I notice you completely glossed over that example of mine, so I'll keep pointing it out)
Why would a creature, in attempting to determine what it is that we fundamentally care about, start only from hypotheses about themselves? Once you've got a specimen of humanity in front of you to query and investigate, you are no longer constrained to whatever axioms or inputs you were using before -- you have new data which you can use to derive new conclusions.
My real question is why would they CARE about us? Or--more to the point--what if they were created to not care about us?
A race composed entirely of creatures genetically predisposed to be sociopaths or Nazis or whatever will not come to the same moral conclusions we do.
Evolution is the natural process by which a thing or group of things becomes more fit. This occurs by desirable traits being selected for and undesirable traits being removed. More fit traits and behaviors flourish. Just as the second law of thermodynamics pushes things towards a state of maximum entropy, evolution moves things towards a state of maximum fitness.
Biological evolution is this process happening to biology (traits), and social evolution is this process happening to a society (behaviors).
Well, I think we more or less agree on the basic definition. I don't think the semantic problem is here.
(Incidentally, here is the definition I operate under: evolution is the tendency of a population undergoing selection, mutation, and reproduction to move with successive generations towards local maxima of reproductive fitness.)
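(To make that "search algorithm" abstraction concrete, here is a minimal toy sketch in Python -- the textbook "one-max" problem, with invented parameters, not a biological model -- showing a population under selection, mutation, and reproduction climbing towards a local maximum of fitness:)

```python
import random

random.seed(0)  # deterministic run for illustration

def fitness(genome):
    # Toy fitness function: count of 1-bits (the "one-max" problem).
    return sum(genome)

def evolve(pop_size=20, genome_len=16, generations=60, mutation_rate=0.05):
    # Random starting population of bit-string "genomes".
    pop = [[random.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: the fitter half survives to reproduce.
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]
        # Reproduction with mutation: each child copies a parent,
        # with each bit flipping with small probability.
        children = [[1 - bit if random.random() < mutation_rate else bit
                     for bit in parent]
                    for parent in parents]
        pop = parents + children
    return max(fitness(g) for g in pop)

best = evolve()
print(best)  # climbs towards the maximum possible fitness of 16
```

Nothing in the loop "knows" where the maximum is; the population simply drifts uphill on the fitness landscape, which is the sense in which the track is sought out rather than created by the process.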
Definitions are axioms; you either accept them or you don't.
This is, to quote Pauli, not even wrong. A definition is a substitution; anywhere you see a defined thing, you may replace it with its definition. Definitions are neither true nor false. Axioms, on the other hand, are basic presupposed truths at the bottom of an ontology. Crucially, changing definitions never changes what's true (other than by the superficial replacement entailed by the definitional swap). Changing axioms, however, can change what's true.
What? No it bloody isn't. I'm not saying anything of the form "P is not known to be true, therefore it must be false."
I will restate: my claim is that to fail to recognize the application of the word "good" to the well-being of conscious creatures is to either misuse or redefine the word, making the argument semantic from that point forward. This claim does not have the form "P is not known to be true, therefore it must be false" for any P.
Since legitimizing a reason behind a definition for a secular morality is the very point of this thread, this is a question beg.
You are saying I need to either accept or reject your definition of "ethics," when an establishment of a definition of "ethics" is the very point of the debate we are having.
Definitions are not debatable. There is nothing to be said in a semantic argument that could ever make one party right and the other wrong. Nor can they be "rejected" in the sense that an actual truth claim could. You must accept my definition of ethics in order to understand my claims. That doesn't mean you have to agree with my claims or that my claims can't be wrong -- it means that I intend them to be interpreted in a certain way and I am making that manifest by providing the definitions.
I see no fundamental difference between what you're doing here and someone who says "Ethics comes solely from God, and if you can't accept that then we have nothing to discuss." Sure, when I disagree or say I want to discuss reasons for that definition, you can walk away, but what have you accomplished at that point?
Definitions don't do any work. A person who defines ethics as coming from God still has to prove that his newly-defined ethics exists and can be used to draw conclusions. That is what an intelligent discussion with a person who defines ethics as coming from God would entail. Such a discussion might end with the other party saying "the thing named by your definition doesn't exist" or "your definition doesn't mesh with empirical reality" -- but it could never end with "your definition is wrong/circular/question-begging." Arguments are question-begging, not definitions.
Now once you come to understand the claims I am making by plugging in the definitions I'm telling you I'm using, then I am happy to proceed forward and argue that my claims make sense: that the well-being of conscious creatures is a thing that exists and can be increased and/or decreased in various ways. But I thought you already believed that.
We may need to unwind this all the way back to basic philosophy. Clearly it is not possible to apply epistemics or ontology to problems of ethics when we are not on the same page about the basic rules.
I agree with many of Sam's conclusions, but I don't agree with all of his reasoning--certainly not the reasoning of his that I brought up here. I agree with the conclusion that reasoning leads him to, but I don't agree with the reasoning itself. Certainly, consciousness is something which gives value meaning, but it does not follow that the first thing consciousness must give value to is consciousness.
I can find nothing in this paragraph that questions the reasoning at hand. Can you cite the exact argument of Harris' that you're criticising with this and point to where it applies?
My theory gets around this by having consciousness first assign value to telos, not consciousness.
I don't see how this helps. Whatever your framework purports to be of value, if it arises from an empirical world state, the experience machine can give you an experience that will increase your perception of that value. If telos comes from evolution, then we'll load a program into the experience machine that gives you the experience of being vastly more evolutionarily-fit. Then what?
Quote from »
Except my theory values fitness before happiness. Clearly, the experience machine doesn't augment fitness; it is of no use to the evolutionary process, either biological or social, and is, in fact, detrimental in many cases. So, trivially, my theory would reject it.
You seem to be underestimating the power of the experience machine. When you jump into the experience machine, you are being given a whole new reality, one in which you can spend your entire existence just reproducing over and over and over again. That state of being is much more fit than the one you're in now. You might not make any babies out here, but inside experience-world, your progeny will number as the stars. Your fitness in experience-world will be positively astronomical, and as experience-world's population undergoes evolution you and your offspring will truly be at its pinnacle. And since the only feedback from all these processes is ultimately empirical, whenever you are in experience-world you must measure your state of affairs by what your experience-world experiences indicate it to be.
This is why the experience machine is such a *****. You can't really cheat your way around it like this. I mean, if this answer worked, anybody who had ever faced an experience-machine problem (including Harris himself) could just say the same thing: you're not really making the world more happy by jumping into a fake world where everyone is happy, are you?
I just gave you a reason as to why something is true, and you push it aside in favor of "it just is?" I'd ask you if this strikes you as ironic, but clearly (and surprisingly) it doesn't.
I don't find anything ironic in the basic philosophical misunderstandings that are taking place here.
Why? Where is this property of "conscious creatures" coming from? Where does it originate?
I ask, since clearly the linchpin of my argument is that this is not true, and you seem to be pushing it aside with nothing more than a "it must be so."
I can only assume that you didn't read the argument for my position on teleology that immediately follows my statement thereof when you wrote this. So: why didn't you read the argument for my position on teleology that immediately follows my statement thereof when you wrote this?
Consciousness is the line because only the assertion of a conscious creature can make true the notions of finality on which teleology depends.
Except evolution isn't happening randomly; it clearly has a tendency towards maximum fitness. There is a definite direction to evolution and--at least theoretically--an end goal(s). This definite direction existing doesn't depend on consciousness, I might add.
There is a direction to e.g. gravity as well, and even an end "goal" if you insist on abusing that word, namely the center of mass of a system. This does not imply teleology. The only things capable of generating teleology by satisfying the finality requirements thereof are conscious beings; beings who can verify statements of the form "I want this thing to do this."
I'm not asking you to. I'm asking you to start from a DIFFERENT set of axioms and use them to justify your definition of ethics, instead of just begging the question.
Why should I start from a different set of axioms? I'm perfectly happy with the ones I've got now and you've pointed me to no consequence of my axioms which gives me even the slightest pause to reconsider them. In fact, better my set than yours, because yours seem to evince some truly devastating errors that mine do not.
In fact, let's unwind this back to topic. I didn't actually come here to make an affirmative claim about the underpinnings of ethics in the first place. The question I raised is whether or not your affirmatively-stated basis for ethics has irrefragable ontological (and now possibly epistemological) problems. I hereby drop all my affirmative claims about the basis for ethics until such time as we have settled that question.
But, you need empathy to care about another's discomfort. What if these creatures were created without empathy? (I notice you completely glossed over that example of mine, so I'll keep pointing it out)
I don't accept the premise. (Well, I do accept the premise about the creatures existing without empathy; I don't accept the premise about empathy being necessary for the recognition of others' well-being.) Why do I need empathy to reach a conclusion that tells me to act to avoid another's discomfort? Why couldn't I have a rule that says "avoid others' discomfort" irrespective of my own ability to process discomfort or emotion? A blind man can't see color, but he can still know that there are colors.
My real question is why would they CARE about us? Or--more to the point--what if they were created to not care about us?
A race composed entirely of creatures genetically predisposed to be sociopaths or Nazis or whatever will not come to the same moral conclusions we do.
This is the most bizarre argument of all. I hear it all the time, yet I truly don't understand how it is supposed to have any force. Okay, let's say there's some really bad dudes who don't care, and they go around with their not-caring attitude, always maximizing the suffering of conscious creatures at every turn with their uncaring, bad selves.
Well, then they are immoral. Full stop. Their behavior reads directly as the negation of morality. Was that supposed to be hard? I mean, honestly, given all the good questions that can be asked of any account of morality -- experience machines, trolley problems, etc etc -- isn't this one just a complete waste of time?
I think this argument only makes sense to those who are neck-deep in the swamp of moral skepticism I alluded to in my previous post.
Close, but I think you might have a common misconception. You seem to be confusing the known with the actual.
Within the framework of our current evolution there is likely an actual "best" behavior that would allow humans to thrive most efficiently (or--at least--ones that are much much better than others), but this perfect behavior is not known. We can start to approximate that behavior, however, by looking at past behaviors and using them to make models to predict what future behaviors would do. We would be trying to find that "best behavior" by looking back at "how we got here." You know, using the scientific method.
Is it?
I would point out that helping other nations isn't necessarily 'fitness amoral.' Getting Africa into a better position in the world would certainly affect the US, for example.
Would one person sending one dollar make Africa an industrial nation? No, but it's a step in that direction, and I would not call it fitness amoral.
Alright, Mockingbird, I don't think I'm ready for this encounter's challenge rating, but what the heck:
You certainly could use a book to stabilize your table, but would it be the best use of the book? Would it be the best way to stabilize your table? I would say no. It works, yes, but is a suboptimal use of both the book and the table, and it's not what either were designed for. Within that framework, it's neither the worst thing you could do with the book nor the best. However, it's closer to the one than the other.
The same way any competing scientific theories battle it out.
I point out below that such a question would be beyond the scope of this normative ethics system. Once you change man's nature, you've moved the goal post, which is illegal in football.
It would be safe to say I'm not 'sure' about a single thing on any topic, but go on...
I am not speaking of "surrounding nature" in the environmentalist sense, but of the "nature of man." As in, our natural inclinations and abilities, like advanced communication.
But, jumping ahead in your statements, I hope we would all agree that getting hit by a meteor is something best avoided.
I am saying we can observe behaviors and say that some behaviors would help obtain our telos and some would not; and that there is a range.
THEN I am saying--within that framework--you can safely call actions against it "bad" and actions for it "good."
Maybe I was mistaken, but I thought once you have a goal you implicitly move from a descriptive evaluation to a normative evaluation.
I would not call a moral system that allows for amoral actions a "pretty serious oversight," so I guess I disagree with your opinion.
Anyway, evolution is continuing, correct, but the changes are not drastic. So maybe the "best behavior" is changing, but not fast enough for our approximations to be invalid day to day. When the changes caused by progressive evolution are outside of our error bars, THEN we can start worrying about them. To do so otherwise, I would say, would be premature.
Even if you're right and it has no fitness value, can it be called a misfiring? That assumes the purpose of the altruism instinct is to increase fitness. But the evolutionary logic of fitness and survival can be analyzed, and possibly better so, as an efficient cause rather than a telos - an explanation for how the thing came to be, rather than an explanation of its purpose.
Is the purpose of a heart to increase fitness, or is the purpose of a heart to pump blood? Would a heart be a bad heart, would it be misfiring, if it continued to pump blood perfectly, but circumstances were (somehow) such that this activity was no longer fitness-improving?
candidus inperti; si nil, his utere mecum.
This is a pretty safe assumption.
It's sort of one and the same here. Altruism arose (and this is an extremely simplistic explanation) because one way to improve the chances that your genes pass on to the next generation is to increase your relatives' chances to have reproductive success. It also, obviously, has the purpose or effect of improving your fitness.
You're playing with words and confusing things. Obviously, the purpose of a heart is to pump blood, and it is also necessary for survival. If a heart is no longer contributing to fitness, it is now a vestigial organ. Altruism, although it misfires, is still useful for fitness (we are disproportionally altruistic towards relatives, as would be expected).
Note: I feel I should clarify...the evolutionary purpose of altruism is to increase fitness, but this is not something that is done consciously.
Warning! Warning! There's a world of difference between a purpose and an effect. It is not the purpose of cars to pump smog into the atmosphere, but it is one of their effects.
I'm trying to clarify things, and you didn't actually answer my question.
I don't think it is wrong at all to say that the purpose of altruism is to increase fitness.
It seems to me it depends wholly on how you wish to define purpose. You can say:
1. Altruism has the effect of conferring additional fitness upon the participant.
2. Altruism's purpose is to confer additional fitness upon the participant.
Both are true, and maybe I'm wrong, but it seems you disagree that this is the case?
Unless of course, one comes to the realization that I am arbitrarily redefining objects' telos in such a way to demonstrate that science cannot declare a teleological purpose because the individual assigns teleological purposes based on the context of the situation. To carry my book analogy further, reading a book may be the teleological purpose when I want to learn or be entertained by what it has to say, but that teleological purpose is (allegedly) finite. After I've been entertained or learned what the book has to say, it needs a new teleological purpose.
Or using a different line of thought, why should I believe that reading the book is the teleological purpose just because I've seen it be an effective telos? I've observed it be (in my purview) an equally or more effective table support. Between individuals or even groups of people there can be huge discrepancies with telos. I suppose to drive this point hard I have a much more polarizing question than that relating to books: what's the teleological purpose of an individual who happens to be a woman?
Interesting... but how do competing scientific theories battle it out?
I'll have more to say below, but what I am asking is that if it is possible to decide on a goalpost in the first place.
First, you have to establish man has a nature (whatever that means). Second, we have to pick the ones that we want to promote. After all, there are inclinations within people that are frowned upon. The Seven Deadly Sins are examples of such inclinations. What makes them immoral (or moral, if you disagree with the sentiment that some may be moral)? One to keep in mind is lust.
We haven't reached a goal yet. And that's the point I'm trying to make. There is no goal.
Just because nature (the human race) isn't changing drastically within short periods of time does not mean that our nature (environment) is not. And a key part of evolution is taking into account the environment because which aspects of human nature are teleologically most important is dependent on what the environment around that dictates, isn't it?
~~~~~
Certainly within the framework I set up you could show the book's purpose was to do just that in a number of ways. However, I am unsure what justification you can use to say the purpose of the table supersedes the purpose of the book.
If you are assigning your telos ad hoc, and I am using a reasoned method, then my understanding is my argument is a priori stronger than yours.
It is my understanding that the time to offhandedly dismiss an argument is at the axiomatic part, not the corollary. Thus, I would be justified in offhandedly dismissing your arbitrary telos assignment, but you would not be justified in offhandedly dismissing mine, since it is one step forward in the reasoning chain.
You would instead have to either show that mine does not follow from the axiom, or dismiss the axiom. I believe that's how it works, at any rate.
Because, in my framework, the purpose it was designed for is to convey knowledge, and it is still quite useful in that capacity. Unless this is some other kind of book and it's not; but that would only change my response, not the framework.
The roles of making a species thrive--for both men and women--are too varied for this to be given one single pigeonholed answer.
By choosing theories that best conform to experiments, and choosing experiments that best conform to reality.
I updated the OP to that effect.
There must be justification to remove something within the framework, including 'sins.' Do these 'sins' help or hurt our species? To what extent? IDK, we would have to find out ways of determining that.
Science is not about cutting large swaths of possibilities out of something, but a slow and steady march where each step must be justified.
One of the skills we have evolved is the ability to change our environment. The extent to which we can do that is fairly great. That would also need to be taken into account, no doubt. Which simply makes everything more complicated, but it's not like it was an easy problem to solve from the start.
However, maybe I am misunderstanding, are you talking about some catastrophe so great it would wipe out mankind and there would be nothing mankind could do to stop it? I dismissed the implication because I don't see its relevance, only to say--within this framework--it would be the very definition of 'bad.'
[Clan Flamingo]
If it creates a "faulty end result", that's an observable effect, and science can and does take it into account. Scientists regularly investigate lots of phenomena that are invisible and immaterial, from the force that makes apples fall and holds the planets in their orbits to the subtleties of human emotion. These phenomena are deemed real - or "physical", if you like - precisely because they have observable effects. But it would make no sense for a scientist to observe an effect and say, "Oh, that's spiritual, I'd better not take it into account"; anything with an effect is fair game. The only people who make the distinction between "spiritual" and "physical" are those who want to believe in certain things in spite of the fact that they don't have observable effects. And if they don't have observable effects, clearly they can't produce any faulty end results, or make their existence known to humanity in any other way.
On the misuse of the word "metaphysical", I have already given you a lecture.
Might be a bit far afield, sorry if I'm nitpicking, I just prefer precision in language when doing philosophy.
Wouldn't you say that our ability to change our environment can be more attributed to the accumulation of knowledge and the need to support larger groups of people by creating cities? Unless of course you are attributing that to our more developed brain, which actually doesn't vary too much in terms of cc compared to, say, Homo erectus and even Neanderthals of around 33,000 years ago.
The jury is still out, but it is very possible that Neanderthals had language abilities equivalent to ours.
(Though, if we did--indeed--interbreed with them I guess they might not have been 'competitors.')
And also a... Contributor? I've heard there was interbreeding between Neanderthals and homo sapiens.
Similarly, referring back to the classical argument about the failure of a society that permits murder to thrive under evolutionary pressure, it's not that evolution "makes" murder bad -- rather, it's that the badness of murder "makes" evolution favor those societies that are apt to avoid it.
The upshot of all of this is that placing evolution in a position that is ontologically prior to morality is putting the cart before the horse. Evolution can explain how a creature came to occupy a particular niche of its fitness landscape, but it cannot explain the shape of the fitness landscape itself.
Which if thou dost not use for clearing away the clouds from thy mind
It will go and thou wilt go, never to return.
Things can’t travel faster than the speed of light (there are some theories that would interject, but just go with it for a second). As you approach this “universe speed limit” it takes more and more energy for less and less velocity increase. If you ask a physicist why that is and what’s causing it, they will likely reply “Relativity.” That being the name of the theory which explains that observed phenomenon. Now, is “Relativity” the force that causes this? Is “Relativity” the thing impeding the particle? Well, sorta, but that’s how we would refer to that whole thing in physics.
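To make the "more and more energy for less and less velocity" point concrete, here's a quick toy calculation (a sketch in Python; the 1 kg mass and the step size are my own illustrative choices, not anything from the discussion). It uses the standard relativistic kinetic energy formula, (γ − 1)mc², and shows that the same small bump in speed costs vastly more energy near the speed limit:

```python
import math

C = 299_792_458.0  # speed of light in m/s

def kinetic_energy(v, m=1.0):
    """Relativistic kinetic energy: (gamma - 1) * m * c^2."""
    gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
    return (gamma - 1.0) * m * C ** 2

# Energy needed for the SAME velocity increase (0.001c),
# starting from increasing fractions of c, for a 1 kg mass.
for frac in (0.10, 0.50, 0.90, 0.99):
    v = frac * C
    dv = 0.001 * C
    cost = kinetic_energy(v + dv) - kinetic_energy(v)
    print(f"{frac:.2f}c -> {frac + 0.001:.3f}c costs {cost:.3e} J")
```

The cost of each identical 0.001c step climbs steeply as you approach c, which is the observed pattern the word "Relativity" gets used to name.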
So, when I say “evolution,” whether correctly or incorrectly, I’m thinking about the observed phenomena that word is meant to explain. If someone were to ask me “why do animals with these traits die off while these other animals don’t?” Correctly or incorrectly I would say “evolution.” I don’t know what to call “the track;” I just know “evolution” is the observable phenomenon that is “sketching” out that track for me.
I don’t think we disagree, I just think you are used to dealing with the rules of a system directly, and I am used to dealing with what the rules physically cause. "The track" is really what we are both talking about; we are just calling it different things and I’m using “evolution” to mean something more inclusive than you are.
Except these created beings could have any number of totally different biological needs than other things we’ve encountered.
Let’s assume these creatures needed to feel searing pain inflicted on them by another every 24 hours or they die (or some other equally unfamiliar quirk; Illithids, a fictional creature physically unable to experience positive emotions like 'love,' spring to mind). Would it be safe to assume their moral code should be the same as ours? No, I don’t think it would. A society of these creatures would be unlike our societies, and their social evolution (not biological evolution, since you say that’s out of the picture for this example) would happen differently. Their reasoning on how “conscious creatures” should treat one another would not lead them to the same conclusions that our reasoning would.
Anyway, I understand what you’re saying here. You’re pointing out that evolution isn’t necessarily for “the track” to exist, and I agree with you. However--for us--our biology is the context that our “well-being” needs to be evaluated in. And--for us--that biology was hewn by evolutionary forces. That’s why I keep bringing it up.
I thought that was what I was saying, assuming you mean social evolution here.
There are two kinds of evolution happening here, biological and social. Biological evolution is what defined our biology. Our biology defines what increases or decreases our “well-being.” Social evolution--then--is the presser you are talking about that ‘bumps’ us back onto “the track” towards maximum well-being.
But, "the track" WAS created by biological evolution--in our case--because our biology is what defines "the track."
I still assert that most of the disagreements we are having are semantic, stemming from my inability to articulate what I’m trying to say correctly.
However, if our biology defines what increases or decreases our “well-being,” then it is not incorrect to put biological evolution before our morality, only social evolution.
Well, I don't know that I concur about your narrative about what physicists would say. I do know what the greatest physicist of all time said when he was faced with a similar question, and it seems that if you are right our physicists have become quite tragically inarticulate in the intervening 300 years.
The problem is that evolution is not an explanatory terminus or a brute inductive observation in the way that gravity was at the time of Newton or relativity is (maybe) today. In fact, (biological) evolution is explicable in terms of more elementary physics: "reproduction" is a physical process traceable back to cell division; "mutation" is a physical process that can be explained in terms of chromosomal crossover combined with external mutating influences, and "selection" is a physical process that can be explained by all of the various ways life may be reduced to a state where it cannot further reproduce.
So it would be a kind of deliberate ignorance to say hypotheses non fingo to evolution -- you could, if you wished to blind yourself to the rest of physical science, observe evolution and treat it as though it were as fundamental a force as gravity, but that would be a mistake.
It sounds to me like I'm talking about the track and you're talking about one possible device that you use to measure the track. These are very plainly different things. But since you believe this is reducible to a semantic problem, I'd like you to give a precise definition of evolution as you are using it.
Yes, I expect they would. For instance, if we encountered a conscious being, call it X, that thrived under extreme heat, there might be a rule that says "don't burn human beings while they're alive" and also a rule "do burn X while it is alive." How is this supposed to stand in contradiction to or raise a question with anything I've said?
Their rules for treating humans would be perforce the same as our rules for treating humans, because rules for treating humans are based on the well-being of humans, which is objective. Similarly, rules for treating X's would be based on the well-being of X's, which is objective.
Yes it would. Just as you and I are having this discussion, and we are both intelligent enough to realize that a conscious creature of kind X might achieve its well-being in different ways from us, so could the X's have the same discussion about humans and reach the same conclusions. In fact, if they were extraordinarily intelligent and capable they could, without knowing anything of human history or evolution, deduce our moral principles purely from an accurate map of our brains and bodies. They would learn exactly what causes us physical and mental suffering and/or happiness.
You keep bringing it up, and I keep insisting that it is a non sequitur. It is as though we have been tasked to measure the temperature of a beaker of water, and you think that said measurement turns on whether the beaker was heated by a hot plate or a Bunsen burner. When attempting to figure out how hot the water is, how it became hot is irrelevant.
Evolution is a description of the physical process of how we got to be the way we are -- how the "water" was "heated." It is not a proxy for the way we actually are -- it's not the "temperature" of the "water."
I mean both kinds of evolution. Evolution isn't the reason murder is bad; the badness of murder is the reason evolution avoids it. Similarly, evolution isn't the reason sight is good; the goodness of sight is the reason evolution selects for it.
But biological (or social) evolution has prior inputs -- the environment in which the evolving is happening, most importantly. You seem to be treating it as a kind of deus ex machina or independent, self-generating force. This is a mistake.
The track wasn't created by biological (or social) evolution. Rather, the track is a facet of the socio+biological environment that is sought out by those processes as they take place. Evolution is, in the abstract, a search algorithm.
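The "evolution is a search algorithm" point can be illustrated with a toy sketch (everything here -- the landscape, the names, the parameters -- is my own hypothetical illustration, not part of the discussion). The key feature is that the fitness landscape is an *input* the search explores; the search does not create it:

```python
import random

# A toy fitness landscape with a peak at 10. The search below does not
# define or create this; it is an anterior input, like the environment.
def fitness(genome):
    return -(genome - 10) ** 2

def evolve(population, generations=200, mutation=1.0):
    """Minimal evolutionary search: random variation, then selection."""
    pop = list(population)
    for _ in range(generations):
        # mutation: each individual produces a randomly varied offspring
        offspring = [g + random.gauss(0, mutation) for g in pop]
        # selection: whichever of parent/offspring sits higher on the
        # landscape survives to the next generation
        pop = [max(p, o, key=fitness) for p, o in zip(pop, offspring)]
    return pop

random.seed(0)
final = evolve([0.0, 50.0, -20.0])
print(final)  # every lineage climbs toward the landscape's peak near 10
```

Swap in a different `fitness` function and the same mutation-plus-selection loop climbs toward a different peak -- the "track" belongs to the landscape, not to the algorithm.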
I don't think our disagreement is entirely semantic. I read your post carefully several times and I could not find a way of interpreting it that I did not feel contained serious ontological mistakes. If you could identify the terms you think are semantically mismatched between us and give your definitions, that might help. (I know "evolution" is one you mentioned, and a pretty important one at that.)
I think it's time I try to recap what's being said to see if we're both on the same page or if we are talking past each other instead. (And I don't think I'm qualified to give a full definition/description of 'evolution,' both social and biological. So, I'll skip that question as well if you don't mind)
Anyway,
Now, your point—I believe—is that there is no reason to bring up evolutionary biology in this discussion. That it really does not matter where we got our biology when talking about what causes us to flourish and what increases our well-being. We could have been snapped into existence by God or made in a laboratory by aliens, neither matters when talking about what we should or should not do. You feel that my talk of biological evolution is pointless at best and “the cart before the horse” at worst.
I agree with you IF you have already assumed--axiomatically--that the well-being and flourishing of “conscious creatures” is the only concern of ethics. We have to start somewhere, and it seems you feel it reasonable to start there. I—however—don’t like starting there.
I don’t feel it is correct for a conscious creature to assume axiomatically that conscious creatures are all that should matter, any more than I would a mathematician telling me it was axiomatic that math is the best subject or a physicist telling me it is axiomatic that physics is the best subject. While the latter might be true, it still weakens the argument.
I listened to Sam Harris explain in The Moral Landscape his reasoning about why consciousness necessarily makes consciousness matter, and I can tell you--unless I missed something--the argument was circular. Even ignoring the circular nature of the argument, consciousness validating consciousness leads to issues with things like the experience machine. So, I feel very uncomfortable starting the chain where it seems you wish to start it. I would rather go one “why” back. WHY should the well-being and flourishing of “conscious creatures” be the only concern of ethics?
Because I'm taking one step back, I felt it was necessary to start with Telos. What is the purpose of man? What gives us that purpose? What were we created to do? That is why I start my moral reasoning with biological evolution. I use it to justify why we should care about flourishing and why our ethics should be built around well-being. Biological evolution hewed us for that purpose, so it is our telos.
If you have another reason why we should care about the well-being and flourishing of “conscious creatures” other than an axiomatic, or circular, argument I would like to hear it. But, I don’t think you’ve given one yet.
There is one other part of the discussion I wish to address directly:
Except they could be incapable of physical and mental suffering and/or happiness, so I don’t know why giving those things value would necessarily occur to them. They could also be incapable of empathy—like the Illithid—so they wouldn’t care about “seeing things our way.” Remember, pure logic can’t provide axioms. If these creatures have different starting axioms for what they fundamentally care about, then their logical conclusions are going to be different as well. There is no reason to assume they would be naturally inclined to value things we think "conscious creatures" should value.
I'm afraid I do mind, at least in some sense. I mean, obviously you don't have to say anything you don't want to say -- but I think one should know what one's claims are, and by asking you to define evolution I am only asking what you think you mean when you use the word. I am not trying to goad you into providing an "incorrect" definition of evolution so I can say "ha ha, your definition isn't technically correct." Definitions can't be wrong. My goal is to gain an understanding of what you're asserting. If we're having a semantic disagreement, the only way to get to the bottom of it is to try to analyze it in more primitive terms that we do semantically agree on.
I would say this is a good summary of one of my core points, yes.
I would say that it's a definitional matter, rather than an axiomatic one. Ethics, insofar as it is understood to be concerned with the search for the good, must be concerned with the well-being of conscious creatures. If that's not a part of the good, then I don't know what is. To say anything else would be to misuse the word.
As with any definitional matter, you can argue or disagree, but it would be semantic. The problem with these types of semantic swamps is that they are usually a waste of time. If you get into an argument with a mathematician about why 1+1=2, he's going to have to tell you that it comes down to how those terms are defined, and that's really just the end of it. If you want to dispute that definition and say that 1+1=3, you had damned well better follow it up with an interesting application of your new definition -- otherwise the mathematician is going to walk away.
The problem with many ethicists and the is-ought gap is that they drag the discussion into this semantic swamp and never say anything interesting as a result. In fact, they get very angry at people who do try to say interesting things and their response always seems to be to try to yank them back into the swamp, which is literally defined by the inability of those mired in it to ever articulate anything interesting. Well, I'll only go into the swamp if I'm promised an interesting way out of it.
(Since you now appear to be a critic of Sam Harris, you might benefit from reading his response to critics if you haven't already. In addition to answering some of your complaints here and below, he also quotes Thomas Nagel, who gives another excellent response to this particular point.)
When we were discussing this in the other thread, I thought you agreed with the argument. In a universe devoid of conscious creatures, there would be no morality, because there would be no evaluative context for moral claims. Morality would be a sort of category mistake.
Harris responds to the "Experience machine" argument himself, and it certainly is a valid question to pose. My answer would be different than Harris's -- I would say that if the classical Cartesian argument tells us anything, it's that the universe is indistinguishable from an experience machine. Thus we can expect no empirically-grounded theory of anything to be able to distinguish an experience machine from a non-experience-machine. If our criterion for rejecting theories is that they don't deal very well with experience machines, then we must summarily reject substantially every theory ever. Our criterion for accepting (empirical) theories is ultimately grounded in experience, for Chrissake.
However, the problem is that once again it strikes me as completely tangential to what we're supposed to be discussing. Your modifications to the theory don't address the experience machine at all! Okay, so it was our evolution that shaped our brains such that certain brain states please us, with the further (wrong) stipulation that evolution is an actual primitive force that does this rather than a search algorithm acting on anterior inputs. It still remains the case that certain brain states please us, it still remains the case that the experience machine gives us access to these states, and it still remains a question of whether to go in the machine that is not in the least bit helped by the introduction of the non-sequitur that is evolutionary theory.
Why should 1+1 be 2 rather than 3? Here, let me quote Nagel's defense of Harris:
Perhaps unsurprisingly, "telos" has similar ontological properties to "ethics." For one thing, only a conscious creature can assign a "telos" to a thing. Suppose in our hypothetical lifeless universe of rocks, one of the rocks happens, through its collisions and erosions and bouncing around in space, to have a brutally sharp edge that could be used for cutting (if there were anything to cut) and a blunt handle at the other end that could be used for grasping (if there were anyone around to grasp it) -- in other words, the rock could be used as a knife, and a good one at that, if there were anyone around to do so. Would you call this state of affairs teleological?
No, says I, for there is no extrinsic finality -- there is nothing outside of the knife-rock that we could say wanted it to be that way; the universe didn't conspire to give it a knife-shape in any sense deeper than possible physical determinism and is as "happy" about it being knife-shaped as it would be about it being a spheroid. Nor is there intrinsic finality; the knife-rock itself does not "care" that it's knife-shaped; it could equally well have been a spheroid. It has no internal state or frame of reference that one can appeal to that would "bless" one configuration over the other.
Now if there were a conscious thing in the universe that wanted a knife, it could give that knife-rock a teleology, it could act such as to assign finality to the knife-rock's present state of affairs, and it could act in such a way as to preserve that state of affairs -- but not bloody well until the barrier of consciousness is crossed can such things happen.
That's what's unique about consciousness; that's why it's the line in the sand the crossing of which triggers all these other conclusions. And of course, since evolution isn't conscious, it can't assign telos to things.
(Incidentally, I don't think an evolutionary account of morality should begin by asking "what were we created to do?")
Ex nihilo nihil fit. I can't reason from no axioms or definitions. If you want me to do that, then I'm afraid I'll have to disappoint you. If this is what it comes down to, then you were right in the first place: our disagreement is semantic and irresolvable.
You're falling into some kind of qualia trap of the form "only experiencing something can allow you to understand that thing." Nobody's experienced a black hole; that doesn't prevent us from reasoning about them. Similarly, a creature that never experienced suffering is not precluded from making the logical deduction that it shouldn't be inflicted upon creatures that can experience it.
I've never been starving. I still understand, in the abstract, that famine is to be fought against and prevented where possible.
Why would a creature, in attempting to determine what it is that we fundamentally care about, start only from hypotheses about themselves? Once you've got a specimen of humanity in front of you to query and investigate, you are no longer constrained to whatever axioms or inputs you were using before -- you have new data which you can use to derive new conclusions.
(I would certainly not deny the possibility of an Ender's Game-esque scenario, where a conscious creature -- out of ignorance concerning the states of the other conscious creature's well-being -- treats another conscious creature horribly. But I would expect that conscious creature to have the capacity to condemn its own actions as immoral once the data about well-being becomes available to it -- just as, indeed, takes place in the Ender's Game scenario.)
Which if thou dost not use for clearing away the clouds from thy mind
It will go and thou wilt go, never to return.
Biological evolution is this process happening to a population's biology (traits), and social evolution is the same process happening to a society (behaviors).
Definitions are axioms; you either accept them or you don't.
You do know this is a textbook example of argumentum ad ignorantiam, right?
Since justifying a definition for a secular morality is the very point of this thread, this is question-begging.
You are saying I need to either accept or reject your definition of "ethics," when an establishment of a definition of "ethics" is the very point of the debate we are having.
I see no fundamental difference between what you're doing here and someone who says "Ethics comes solely from God, and if you can't accept that then we have nothing to discuss." Sure, when I disagree or say I want to discuss reasons for that definition, you can walk away, but what have you accomplished at that point?
I'll take a look.
I agree with many of Sam's conclusions, but not with all of his reasoning -- certainly not with the piece of his reasoning I brought up here. Consciousness is certainly something which gives value meaning, but it does not follow that the first thing consciousness must give value to is consciousness.
My theory gets around this by having consciousness first assign value to telos, not consciousness.
Except my theory values fitness before happiness. Clearly, the experience machine doesn't augment fitness; it is of no use to the evolutionary process, either biological or social, and is in fact detrimental in many cases. So, trivially, my theory would reject it.
I just gave you a reason as to why something is true, and you push it aside in favor of "it just is"? I'd ask you if this strikes you as ironic, but clearly (and surprisingly) it doesn't.
Why? Where is this property of "conscious creatures" coming from? Where does it originate?
I ask, since clearly the linchpin of my argument is that this is not true, and you seem to be pushing it aside with nothing more than an "it must be so."
Except evolution isn't happening randomly; it clearly has a tendency towards maximum fitness. There is a definite direction to evolution and--at least theoretically--an end goal (or goals). This definite direction existing doesn't depend on consciousness, I might add.
I'm not asking you to. I'm asking you to start from a DIFFERENT set of axioms and use them to justify your definition of ethics, instead of just begging the question.
But you need empathy to care about another's discomfort. What if these creatures were created without empathy? (I notice you completely glossed over that example of mine, so I'll keep pointing it out)
My real question is why would they CARE about us? Or--more to the point--what if they were created to not care about us?
A race composed entirely of creatures genetically predisposed to be sociopaths or Nazis or whatever will not come to the same moral conclusions we do.
Well, I think we more or less agree on the basic definition. I don't think the semantic problem is here.
(Incidentally, here is the definition I operate under: evolution is the tendency of a population undergoing selection, mutation, and reproduction to move with successive generations towards local maxima of reproductive fitness.)
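That definition can be made concrete as a search procedure. Here is a minimal sketch in Python (the names `evolve`, the toy fitness landscape, and the mutation step are all illustrative assumptions of mine, not anything stated in the thread): successive generations of selection, mutation, and reproduction climb toward a local maximum of the fitness function, with no consciousness anywhere in the loop.

```python
import random

def evolve(population, fitness, mutate, generations=100):
    """Minimal selection-mutation-reproduction loop: each generation
    keeps the fitter half and refills the population with mutated
    copies of the survivors."""
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: len(population) // 2]
        population = survivors + [mutate(p) for p in survivors]
    return max(population, key=fitness)

# Toy one-dimensional landscape: fitness peaks at x = 3.
fitness = lambda x: -(x - 3.0) ** 2
mutate = lambda x: x + random.gauss(0, 0.1)

random.seed(0)
best = evolve([random.uniform(-10, 10) for _ in range(20)], fitness, mutate)
print(best)  # expected to land near the fitness peak at 3.0
```

Note that the "direction" here is entirely a property of the selection rule plus the landscape; the algorithm converges on a local maximum without any goal being represented anywhere in it.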
This is, to quote Pauli, not even wrong. A definition is a substitution; anywhere you see a defined thing, you may replace it with its definition. Definitions are neither true nor false. Axioms, on the other hand, are basic presupposed truths at the bottom of an ontology. Crucially, changing definitions never changes what's true (other than by the superficial replacement entailed by the definitional swap). Changing axioms, however, can change what's true.
What? No it bloody isn't. I'm not saying anything of the form "P is not known to be true, therefore it must be false."
I will restate: my claim is that to fail to recognize the application of the word "good" to the well-being of conscious creatures is to either misuse or redefine the word, making the argument semantic from that point forward. This claim does not have the form "P is not known to be true, therefore it must be false" for any P.
Definitions are not debatable. There is nothing to be said in a semantic argument that could ever make one party right and the other wrong. Nor can they be "rejected" in the sense that an actual truth claim could. You must accept my definition of ethics in order to understand my claims. That doesn't mean you have to agree with my claims or that my claims can't be wrong -- it means that I intend them to be interpreted in a certain way and I am making that manifest by providing the definitions.
Definitions don't do any work. A person who defines ethics as coming from God still has to prove that his newly-defined ethics exists and can be used to draw conclusions. That is what an intelligent discussion with a person who defines ethics as coming from God would entail. Such a discussion might end with the other party saying "the thing named by your definition doesn't exist" or "your definition doesn't mesh with empirical reality" -- but it could never end with "your definition is wrong/circular/question-begging." Arguments are question-begging, not definitions.
Now once you come to understand the claims I am making by plugging in the definitions I'm telling you I'm using, then I am happy to proceed forward and argue that my claims make sense: that the well-being of conscious creatures is a thing that exists and can be increased and/or decreased in various ways. But I thought you already believed that.
We may need to unwind this all the way back to basic philosophy. Clearly it is not possible to apply epistemics or ontology to problems of ethics when we are not on the same page about the basic rules.
I can find nothing in this paragraph that questions the reasoning at hand. Can you cite the exact argument of Harris's that you're criticising with this and point to where it applies?
I don't see how this helps. Whatever your framework purports to be of value, if it arises from an empirical world state, the experience machine can give you an experience that will increase your perception of that value. If telos comes from evolution, then we'll load a program into the experience machine that gives you the experience of being vastly more evolutionarily-fit. Then what?
You seem to be underestimating the power of the experience machine. When you jump into the experience machine, you are being given a whole new reality, one in which you can spend your entire existence just reproducing over and over and over again. That state of being is much more fit than the one you're in now. You might not make any babies out here, but inside experience-world, your progeny will number as the stars. Your fitness in experience-world will be positively astronomical, and as experience-world's population undergoes evolution you and your offspring will truly be at its pinnacle. And since the only feedback from all these processes is ultimately empirical, whenever you are in experience-world you must measure your state of affairs by what your experience-world experiences indicate it to be.
This is why the experience machine is such a *****. You can't really cheat your way around it like this. I mean, if this answer worked, anybody who had ever faced an experience-machine problem (including Harris himself) could just say the same thing: you're not really making the world more happy by jumping into a fake world where everyone is happy, are you?
I don't find anything ironic in the basic philosophical misunderstandings that are taking place here.
I can only assume that you didn't read the argument for my position on teleology that immediately follows my statement thereof when you wrote this. So: why didn't you read the argument for my position on teleology that immediately follows my statement thereof when you wrote this?
Consciousness is the line because only the assertion of a conscious creature can make true the notions of finality on which teleology depends.
There is a direction to e.g. gravity as well, and even an end "goal" if you insist on abusing that word, namely the center of mass of a system. This does not imply teleology. The only things capable of generating teleology by satisfying the finality requirements thereof are conscious beings; beings who can verify statements of the form "I want this thing to do this."
Why should I start from a different set of axioms? I'm perfectly happy with the ones I've got now and you've pointed me to no consequence of my axioms which gives me even the slightest pause to reconsider them. In fact, better my set than yours, because yours seem to evince some truly devastating errors that mine do not.
In fact, let's unwind this back to topic. I didn't actually come here to make an affirmative claim about the underpinnings of ethics in the first place. The question I raised is whether or not your affirmatively-stated basis for ethics has irrefragable ontological (and now possibly epistemological) problems. I hereby drop all my affirmative claims about the basis for ethics until such time as we have settled that question.
I don't accept the premise. (Well, I do accept the premise about the creatures existing without empathy; I don't accept the premise about empathy being necessary for the recognition of others' well-being.) Why do I need empathy to reach a conclusion that tells me to act to avoid another's discomfort? Why couldn't I have a rule that says "avoid others' discomfort" irrespective of my own ability to process discomfort or emotion? A blind man can't see color, but he can still know that there are colors.
This is the most bizarre argument of all. I hear it all the time, yet I truly don't understand how it is supposed to have any force. Okay, let's say there's some really bad dudes who don't care, and they go around with their not-caring attitude, always maximizing the suffering of conscious creatures at every turn with their uncaring, bad selves.
Well, then they are immoral. Full stop. Their behavior reads directly as the negation of morality. Was that supposed to be hard? I mean, honestly, given all the good questions that can be asked of any account of morality -- experience machines, trolley problems, etc etc -- isn't this one just a complete waste of time?
I think this argument only makes sense to those who are neck-deep in the swamp of moral skepticism I alluded to in my previous post.