Freezing up would be irrational. As I said, this person prefers both A and B over nothing. If they fall into a dilemma until the time runs out, that would reveal a trace of economic irrationality in people.
I've done my share of reading in behavioral economics, and my opinion is that all studies that reveal irrational behavior are very poorly executed and never replicated (while studies verifying the validity of rationality conditions are widespread). There's a reason why the state of the art considers human beings rational decision makers when it comes to choice theory (economics). This result would seriously confront the state of the art.
"Rationality" implies a parent appeal for justification. In decision theory, you assume some value, goal, preference, or interest coming in. But that value itself isn't "rational" unless it is in turn justified by some parent value.
This creates an interesting problem that yields either existentialism or nihilism: there is no ultimate rational value. That's because "ultimate" means "has no parent" and "rational" means "has a parent."
In economics, actors are treated as "rational" insofar as it is assumed that they generally are good at making decisions that satisfy their values, whatever those values are. Aggregate those actors together, and the net aim ought to be toward the value aggregate. There is a broad consensus in terms of many human values (most humans feel compelled to fill their stomachs and protect children, for example), so most individuals are cool with systems that optimize the value aggregate.
The thing is, this is a generalization we make because there's no way to practically model reality, which we know consists of people who are bad at making decisions in service of their values (especially higher-order values), and the solution is a shotgun that can only satisfy most; outliers may have eccentric values, and will be dissatisfied by a system that advances the interests of the value aggregate.
"Economics depends on this assumption" doesn't mean "this assumption is impeccably true." In fact, we know it's an imperfect whitewash.
In any case, all decisions ultimately make appeals to appetitious desires over which we have little arbitrary control. People have suffered brain damage that robs them of their appetitious desires. The ancient philosophers would expect that these people would make purely rational decisions. Instead, they don't make decisions. Put one in a cereal aisle to pick a cereal, and they have no appetites by which to choose one box over another, and thus they freeze. Even "objective-seeming" things like calorie counts and nutrients don't motivate if you have no appetite to be healthy and alive, which is ultimately irrational (like every value or interest).
In A.I., we run into this. A decision function looks like this:
[returns a Decision] GetOptimum(value V)
We have this waking delusion that we don't have to pass a V to get a decision. Turns out we do. It's just that we don't usually notice that we're passing Vs all the time, nor do we have a perfect understanding of what Vs we're passing (though we can figure out some of the big ones).
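A minimal sketch of the point, with assumed names: without a value function V there is no ranking over options, and so no "optimum" exists at all.

```python
# Hypothetical sketch of the decision function described above: the value
# function v is a required parameter, not an optional nicety.
def get_optimum(options, v):
    """Return the option that maximizes the value function v."""
    return max(options, key=v)

# A value referent must be passed in; the ranking it induces IS the decision.
cereals = ["bran", "sugar-bomb", "granola"]
prefers_sweet = lambda c: 1.0 if c == "sugar-bomb" else 0.0
print(get_optimum(cereals, prefers_sweet))  # -> sugar-bomb
```

Calling `get_optimum(cereals)` with no `v` simply raises an error, which is the whole point: the machinery cannot decide without being told what counts.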
One way you can circumvent this is by randomizing its Vs, and then having the A.I. mutate (including its Vs), propagate, and then have some sort of selector that kills or makes sterile. Evolution by natural selection happens over time, crafting a set of Vs. But to what end? Well, the V set is crafted toward the meta-V of whatever selector you were applying.
But this is disappointing, because, as the programmer, you arbitrated the meta-V of the selector.
If the selector happens to be a natural thing in the world, however, then it can seem like your A.I. has "real values" that aren't just contrived. Feels "magical," even if it's as stupid as "he likes grass because, over time, his ancestral mutants that acquired that preference were less likely to wander into the mountains and freeze."
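The randomize-mutate-select loop above can be sketched as a toy genetic algorithm. All names and numbers here are illustrative assumptions: each agent's V is reduced to a single weight, and the selector's meta-V rewards Vs near 1.0 (say, "prefers grass over mountains").

```python
import random

# Toy sketch: agents carry a value weight V, mutation perturbs it, and the
# selector (the meta-V) culls agents whose V strays from what it rewards.
random.seed(0)

def selector_fitness(v):
    # The meta-V: this environment rewards agents whose V is near 1.0.
    return -abs(v - 1.0)

population = [random.uniform(-5, 5) for _ in range(20)]  # random initial Vs
for _ in range(100):
    population.sort(key=selector_fitness, reverse=True)
    survivors = population[:10]                               # the rest "die"
    children = [v + random.gauss(0, 0.1) for v in survivors]  # mutated Vs
    population = survivors + children

# The surviving Vs cluster around the selector's meta-V (a value near 1.0).
print(round(sum(population) / len(population), 1))
```

The crafted V set "likes grass" only because the selector was written to kill everything else, which is exactly the sense in which the resulting values trace back to the selector's meta-V.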
Anyway, a lot of sci-fi that explores A.I. is bad because it assumes a robot would have self-interested motivations "magically," without appealing to a programmer's whims or a meta-V applying cut-throat selective pressure over time.
I understand these words (because I looked them up), but I don't know what bearing "generativity for Homo sapiens" has on this discussion. You're going to have to break it down for me if you want me to answer.
You're the one who used that word when you defined "flourishing"!
Quote from Taylor »
Because we have defined the process that created Homo sapiens as being good/desirable. I will point out--yet again--that this was an axiomatic assertion, and subject to all of the weaknesses thereto.
I must have missed this the first time. Once you assert a good, whether it be the resilience of the human race or the conquest of the planet by fungi, you'd be correct that an objective morality flows therefrom on the applied-side.
Quote from Taylor »
Survivability is objectively good. We just happen to be humans--therefor--that is the survivability we evolved to care about.
Okay... stuff like this is why I'm having trouble understanding your position. Your previous response -- "Because we have defined the process that created Homo sapiens as being good/desirable. I will point out--yet again--that this was an axiomatic assertion, and subject to all of the weaknesses thereto." -- should apply here, and yet now you're again talking about objective morality.
If it makes a reference to something somebody cares about, then it's not an objective good. An objective good is something that is good intrinsically without an appeal to anybody's tastes, preferences, or desires. This is why nothing is objectively good.
When we're talking about "fitness[]," I want you to fill in that "[]" with something that is not "flourishing" or "fitness." Something coherent and single-faced.
Of course, when you do, I will say, "There's no ultimately rational reason that ought to be valued." This is because objective morality is false.
Do you take a relativist position, or something else (say, a subjectivist position)?
Both, in the sense that all moral statements are relative to preference referents.
This isn't to say that I'm a pure relativist across the board who denies the veracity of moral statements. As long as all preference referents are explicated, moral statements can have truth values (because they have, by explicating those referents, been thus reduced to mundane world-facts).
Consider the moral dungeon:
You're at the circular platform on the top. Once you hop down, there's no way to get to the other treasure chests. The question is, "Which way should you go?"
This question makes an appeal to 2 things:
(1) Which treat is desired? "Should" has an implicit value referent in the form of what you're actually going for. If you're gluten intolerant, you don't want a donut.
(2) Which way actually goes to a chest containing the desired thing?
The latter is objective, and thus right decision-making does have an important objective component. But the former is completely subjective, that is, it makes an appeal to the preferences of some preferring agent (or group of agents; perhaps you were charged with this mission by a village of donut-eating, sentient mice).
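The dungeon can be put in code. The assumed layout below is illustrative: which chest each path reaches is an objective world-fact, while the desired treat is the subjective preference referent that "should" silently depends on.

```python
# Objective part: a world-fact about which path leads to which treasure chest.
paths_to_chests = {"left": "donut", "middle": "popsicle", "right": "lollipop"}

def which_way_should_you_go(paths, desired_treat):
    # "Should" only resolves once a value referent (desired_treat) is supplied.
    for direction, treat in paths.items():
        if treat == desired_treat:
            return direction
    return None

# Two agents can agree on every world-fact and still part ways at the platform,
# because they differ on the subjective part alone.
print(which_way_should_you_go(paths_to_chests, "popsicle"))  # -> middle
print(which_way_should_you_go(paths_to_chests, "donut"))     # -> left
```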
Moral objectivists want to say that you can get #1 without making such an appeal. "Popsicles are just correct," they might say, "regardless of what anybody thinks."
Now, that's ludicrous on its face, and so they'll cloak that absurdity inside an ambiguous word, like "scrumtrilescence." "Nobody can deny that it is right to seek that which is scrumtrilescent," they'll proclaim. "It is objectively moral." And when you hear them say that, you might think, "Sure, I guess."
But then you ask what "scrumtrilescence" means, and they're like, "Oh, everyone knows what it means!" And you're like, "No, really, tell me." And they say things like, "You know... things that are awesome. It is a word with connotations of dolphin-riding, facepainting, and popsicle-acquiring."
If you gave me examples--like in those other fields I mentioned--I would have an easier time giving you what I want.
Here's an example. Let's say we're talking about fitness[basketball]. Basketball involves many different skills, and many different metrics. You could rate a person's skill at ball-handling, inside shooting, free-throw shooting, three-point shooting, passing, blocking, stealing, rebounding, etc. You could then give each of these skills a weight, and then do a weighted average to get a final "basketball skill" metric.
Even though "basketball skill" is many-faced, it is coherent. It doesn't require "connotations" or "notions" or "impressions"; each of its subcomponents can be put in terms of plausibly measurable things, and being a 10 in one skill does not create logical problems with being a 10 in any other skill. You being absolutely perfect at inside shooting does not necessarily mean you have to be worse at three-point shooting, for instance.
At the same time, we can also meaningfully talk about whether a certain skill ought to be considered basketball-skill-contributory. We might say that a person's singing skill should have zero weight contribution into the above weighted average.
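The weighted average described above can be made concrete. The skills, ratings, and weights here are illustrative assumptions, not a real scouting rubric; the point is that every subcomponent is measurable and the aggregation rule is explicit.

```python
# Illustrative ratings (0-10) for one player's measurable subcomponents.
skills = {
    "ball_handling": 7, "inside_shooting": 9, "free_throws": 6,
    "three_pointers": 8, "passing": 5, "rebounding": 4,
}
# Illustrative weights, summing to 1.0; contested skills can be given a
# zero contributory weight, as with singing below.
weights = {
    "ball_handling": 0.2, "inside_shooting": 0.2, "free_throws": 0.1,
    "three_pointers": 0.2, "passing": 0.2, "rebounding": 0.1,
    "singing": 0.0,
}
skill_score = sum(skills[s] * weights[s] for s in skills)  # weighted average
print(skill_score)  # -> 6.8
```

Note that "basketball skill" stays coherent here precisely because each input is single-faced and the debate about what belongs in the metric is carried out in the open, as a debate about weights.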
In the real world, there's a real discussion about whether body fat percentage should be given contributory weight into what is considered "healthy." High body fat percentage might be generally correlated with various things that are consensus "unhealthy," like morbidity and diseases, but it might not be specifically correlated with an individual who is disease-free, will live for a long time, but who is also fat.
Now, there are various parties that want to gloss over that controversy, and brazenly proceed to maintain the use of "healthy" with implications of "low body fat" in their arguments, discussion, and product marketing. Is the proper response to their usage, "Oh, healthiness just has various connotations, we basically get it"? No. The proper response is, "What do you mean, in specific and measurable terms, when you say 'healthy?'"
And if they say "Healthiness is the optimization of health," they have said nothing.
So, the question is, when you talk about a word like "flourishing" that has "connotations" of various other things, I am challenging each of those other things by asking for their justifications. Why is generativity for Homo sapiens objectively good? Why is growth of the species Homo sapiens objectively good? Why is the resilience of species Homo sapiens objectively good?
And if you wish to defend your moral objectivism, you need to answer those questions with something other than, "Don't you agree X is good?" Consensus of subjects is not objectivism.
could you give me an example of what exactly you're looking for?
When we're talking about "fitness[]," I want you to fill in that "[]" with something that is not "flourishing" or "fitness." Something coherent and single-faced.
Of course, when you do, I will say, "There's no ultimately rational reason that ought to be valued." This is because objective morality is false.
Again, this would only be relevant if you deny that you can measure fitness as a relative quantity, i.e., that something can be shown to be more fit than something else.
Are you denying that is possible?
This is literally impossible without a value referent. It's like talking about "excellence at specific sport X" without telling me what X is. Fitness in evolution is "excellence at persistence in terms of what the environment demands."
Quote from Taylor »
I am not trying to maximizes the fitness of genetic traits; I am trying to maximizes the fitness of moral behaviors.
I wasn't talking about "fitness," I was talking about the metric to which "fitness" is supposed to be referring. You NEED to stop confusing "excellence" and "the sport at which you're excellent."
If you're trying to maximize the fitness[] of moral behaviors, you need to fill in that "[]" with the goal you're reaching for. You can't put "fitness" in there. "Fitness[fitness]" is completely meaningless. Fitness[flourishing] is near-useless because the term "flourishing" is many-faced and ambiguous to the point of incoherence. Evolution employs "fitness[gene persistence]," but you say that's not what you're talking about.
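The "fitness[]" notation can be sketched as a higher-order function: fitness is not a metric itself, but a function parameterized by a goal metric. The goal functions below are illustrative assumptions.

```python
# fitness[goal]: fitness is parameterized by a goal metric; it has no meaning
# until that bracket is filled with something coherent.
def fitness(goal_metric):
    """Return a scorer that ranks candidates by the supplied goal metric."""
    return lambda candidate: goal_metric(candidate)

# fitness[gene_persistence] -- the bracket evolution fills in.
gene_persistence = lambda organism: organism["surviving_descendants"]
score = fitness(gene_persistence)

org = {"surviving_descendants": 12}
print(score(org))  # -> 12

# By contrast, fitness(fitness) bottoms out in no goal metric at all,
# just as "fitness[fitness]" above fails to refer to anything.
```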
Quote from Taylor »
I believe YOU already answered this question for all objective morality earlier on the thread. As in, no one has to agree with me.
When I said that, it was because I was talking about the semantics of a term. I might define justice as "retributing in proportion to the infraction with the goal of deterrence, social protection, and rehabilitation," but someone else might define it as "a hot dog."
There is no purely objective morality, as in, morality divorced from the preferences of one or more agents. Folks can disagree with me on that, but they'll be absolutely mistaken to do so.
Quote from Taylor »
I am not attempting to make any value determinations on genetic traits, only behaviors.
You contradict this with the next thing you say:
Quote from Taylor »
I've had the teleology argument with Crashing00, and I agree. You do have to presuppose that you value the survivability of the human race, correct.
Do you?
The human race is a collection of organisms with a fuzzy, but limited, scope of genes. And yet, you're saying you don't care about genes. Which is it? You need to understand that telling me "I don't care about genetic persistence" and also "I care about human persistence" is a contradiction.
I value the survival of the human race, and also the survival of many other species of organisms. Also, the survival of various world heritage sites and natural formations. Perhaps what I value is also valued, 100%, by every other organism of species Homo sapiens. Great. That means we can create a "morality by universal Homo sapiens consensus." Again, we have not left port for "objective morality El Dorado." Calling my dog a unicorn does not make magical horse-like beings, each with a single horn, exist.
Here is the thought experiment you have to solve in order to truly make the "objective morality unicorn" exist: An army of extra-terrestrials named the Krol'Tar comes to Earth in an invasion force, and decides to start killing humans willy-nilly. You manage to make contact with their general, and tell him that their invasion is morally wrong. He asks, "Why?" You have to be able to tell him something that cogently answers his question without appeals to anyone's subjective interests (including attempts at sympathy).
Well, evolutionary biology already supplies a pretty good definition of the term "fitness." http://en.wikipedia.org/wiki/Fitness_(biology)
They'd have to or it wouldn't be much of a field. It takes very little legwork to adapt that definition to apply to behaviors instead of genetic traits.
It takes the leg-work of "referring to a metric." In the case of evolution, it's "genetic persistence." If "genetic persistence" is what you're trying to optimize, then we're back to the questions I was asking several posts ago.
Those questions include:
* Why ought that be the metric from which an objective morality flows? (Note that by "ought," I am demanding a parent justifying goal. If you proceed to provide that parent justifying goal, I will then ask why ought we care about that goal, demanding a parent justifying goal for that. It doesn't end; this is the realization of existentialism -- there is no ultimately rational source of values or morality -- and it is completely true.)
* If we ultimately value genetic persistence, it means we don't like mutation. Is genetic stagnation really part of your proposal?
Again, evolution doesn't "like" fitness. Evolution "likes" the fit and the unfit alike; it's just that it kills the unfit. You have to value survival walking in; evolution won't tell you that you should want you and your genetic progeny to survive.
I think what I am proposing is very analogous to the field of economics. What is the "optimal" state for the economy to be in? What is the "right number" for our GDP, GWP, or the Dow Jones Industrial Average to be at forever?
Resilience is a metric. "That which simultaneously connotes each of resilience and growth and generativity" is not a metric.
In short, claiming that the stuff in my swimming pool is not water would be confusing and misleading, just as claiming that the vacuum of space is actually a luminiferous aether would be confusing and misleading.
I think this is the crux of it, regardless of the historical origins of the terms or that to which they're associated. "Semantic ethics" is about molding nomenclature in service of coherent, consistent conference of information, and most (if not all) philosophical conversations reduce to semantic ethics unless kept aloft by sirens of mysticism or incoherence (like libertarian free will).
EDIT:
I shouldn't so brazenly declare the goal of semantic ethics, as if there's "A goal." The goal could be anything. The goals motivating my advocacy of semantic conservatism with regard to free will are:
1) I believe it's helpful to reject libertarian free will and the "buck stops here responsibility" folklore that comes along with it.
2) I think it's actually dangerous to throw the volitional dictionary (will, choice, responsibility, etc.) into the bonfire, because people in general are not astute enough to realize that fatalism does NOT follow from a lack of libertarian free will. Like the many folks who thought evolution implied eugenics, many think determinism implies fatalism, and the Sam Harris route of "you don't make choices, you're not responsible, and you have no free will" is literally socially dangerous.
"Fitness" is not a value referent at all. It's like saying "excellence" is a value referent. It means nothing in a vacuum, but requires an implicit value referent to receive meaning (for instance, "excellence at playing basketball").
There is no "health" metric. You can talk about your weight, your body fat percentage, the degree to which you're capable of doing physical activities of which you're fond, whether you know you're dying and how long you have, etc. But "health" itself is too incoherent, and/or ambiguous, and/or many-faced to be used for measurement.
You can measure fitness, however, and state something is more fit than something else.
Sure. In the example above, blondes would be most fit. Fitness means nothing except against a selective environment, which could select for anything. Evolutionary fitness is not synonymous with "goodness" in terms of things humans value in life.
But, you can use it as a comparative metric. We know that the behavior of murdering everyone would be less fit than prohibiting murder.
And if your metric is population size, then forcing every fertile woman to have children would be more fit than not. You need a coherent goal metric. Bandying about words like fitness in a vacuum of value fails to find the El Dorado of moral solutions; it doesn't even leave port.
Flourishing is “to live within an optimal range of human functioning,
"To live within an optimal range of human functioning" has a dangling value reference in the word "optimal." This definition so far adds no information.
one that connotes generativity, growth, and resilience.”
No. Vague connotation of a panoply of items does not get us coherence. That's textbook ambiguous "many-faced-ness," like the "metric" of "health."
It's one thing for a definition to be fuzzy. Fuzziness is blurred boundaries along a single dimension, like "warm <-> hot." "Flourishing," however, like "health," is like a 20-sider with various faces, some of which are incommensurable.
It's very easy to mistakenly think that ambiguous, many-faced ideas are coherent. That's because when you imagine the concept, you are provoked to imagine discrete things. Your brain creates an analogy scene. Perhaps you imagine flowers growing in fast-forward across a field, shining silver buildings popping up, a rainbow over a bustling morning marketplace, a happy family holding their newborn child, a spaceship blasting off, two young girls playing a cooperative video game together, two enemy warriors shaking hands in a truce, a Native American tribal dance around a fire, two monks giving a toast over a new batch of beer, a serial killer being found guilty, a pod of dolphins saving a toddler from drowning, etc.
Maybe that's what you think of when you think "flourishing." That's what I think of!
But that doesn't make it have a definition coherent enough to be used as a metric.
I think you need to understand that you can't "help evolution." Evolution is simply what "is."
For example, I say I'm "helping evolution" by killing every human without naturally blonde hair. The result will be a population entirely of blondes. That will be the "evolved" population, because "evolution" is the change in the genetics of a body of organisms as genetic drift occurs and selection takes its toll -- in this case, I represented the deadly selective environment. But it's absurd to say that what I did was "right" or "moral." Evolution doesn't "want" anything whatsoever. It doesn't "want" the "flourishing" of any species. It's just that organisms with traits adapted to their environments tend to persist.
Fitness, from the biological idea of fitness. "Population size" is too narrow a metric, because depending on context that can be helpful or detrimental for the continued flourishing of a society.
"Fitness" is not a target metric, fitness is the degree to which an organism, based on its properties, facilitates a target metric. Your target metric appears to be the word "flourishing," which apparently is not "has a large population size."
What does "flourishing" mean? Does it have a coherent definition, or is it a philosophical siren, dooming every conversation that depends on it to endlessness and fruitlessness? Everything I have read indicates that it is the latter; it's a word that lacks a positive definition, and is instead what is left after realizing that every coherent target metric, if declared, seems to yield absurda.
If the word means, "Psh, you know, the good stuff," then we have just woken up to find ourselves still docked at port.
I'm not talking about mutations; I am talking about 'social fitness,' which I am defining as the ability of that society to propagate its morals/culture. The fitness of its cultural meme.
That's what I was talking about, too. I was talking about the stagnation vs. volatility of meme mutation. By "propagate its morals," do you mean, the capacity to maintain a conservative zeitgeist against what might be risky mutations, or do you mean a quickly-moving progressive zeitgeist, daring to try new things?
I agree that things that will happen will happen. That may be the most pathetic moral claim ever, though (in the sense that it is a non-moral redundancy).
"Rationality" implies a parent appeal for justification. In decision theory, you assume some value, goal, preference, or interest coming in. But that value itself isn't "rational" unless it is in turn justified by some parent value.
This creates an interesting problem that yields either existentialism or nihilism: there is no ultimate rational value. That's because "ultimate" means "has no parent" and "rational" means "has a parent."
In economics, actors are treated as "rational" insofar as it is assumed that they generally are good at making decisions that satisfy their values, whatever those values are. Aggregate those actors together, and the net aim ought to be toward the value aggregate. There is a broad consensus in terms of many human values (most humans feel compelled to fill their stomachs and protect children, for example), so most individuals are cool with systems that optimize the value aggregate.
The thing is, this is a generalization we make because there's no way to practically model reality, which we know consists of people who are bad at making decisions in service of their values (especially higher-order values), and the solution is a shotgun that can only satisfy most; outliers may have eccentric values, and will be dissatisfied by a system that advances the interests of the value aggregate.
"Economics depends on this assumption" doesn't mean "this assumption is impeccably true." In fact, we know it's an imperfect whitewash.
In any case, all decisions make appeals to, ultimately, appetitious desires over which we have little arbitrary control. People have suffered brain damage that robs them of their appetitious desires. The ancient philosophers would expect that these people would make purely rational decisions. Instead, they don't make decisions. Put one on a cereal aisle to pick a cereal, and they have no appetites by which to choose one over the other, and thus are frozen. Even "objective-seeming" things like calorie counts and nutrients don't motivate if you have no appetite to be healthy and alive, which is ultimately irrational (like every value or interest).
In A.I., we run into this. A decision function looks like this:
[returns a Decision] GetOptimum(value V)
We have this waking delusion that we don't have to pass a V to get a decision. Turns out we do. It's just that we don't usually notice that we're passing Vs all the time, nor do we have a perfect understanding of what Vs we're passing (though we can figure out some of the big ones).
One way you can circumvent this is by randomizing its Vs, and then having the A.I. mutate (including its Vs), propagate, and then have some sort of selector that kills or makes sterile. Evolution by natural selection happens over time, crafting a set of Vs. But to what end? Well, the V set is crafted toward the meta-V of whatever selector you were applying.
But, this is disappointing, because as programmer, you arbitrated the meta-V of the selector.
If the selector happens to be a natural thing in the world, however, then it can seem like your A.I. has "real values" that aren't just contrived. Feels "magical," even if it's as stupid as "he likes grass because, over time, his ancestral mutants that acquired that preference were less likely to wander into the mountains and freeze."
Anyway, a lot of sci-fi that explores A.I. is bad because it assumes a robot would have self-interested motivations "magically," without appealing to a programmer's whims or a meta-V applying cutt-throat, selective pressure over time.
You're the one who used that word when you defined "flourishing"!
I must have missed this the first time. Once you assert a good, whether it be the resilience of the human race or the conquest of the planet by fungi, you'd be correct that an objective morality flows therefrom on the applied-side.
Okay... stuff like this is why I'm having trouble understanding your position. Your previous response -- "Because we have defined the process that created Homo sapiens as being good/desirable. I will point out--yet again--that this was an axiomatic assertion, and subject to all of the weaknesses thereto." -- should apply here, and yet now you're again talking about objective morality.
If it makes a reference to something somebody cares about, then it's not an objective good. An objective good is something that is good intrinsically without an appeal to anybody's tastes, preferences, or desires. This is why nothing is objectively good.
Both, in the sense that all moral statements are relative to preference referents.
This isn't to say that I'm a pure relativist across the board who denies the veracity of moral statements. As long as all preference referents are explicated, moral statements can have truth values (because they have, by explicating those referents, been thus reduced to mundane world-facts).
Consider the moral dungeon:
You're at the circular platform on the top. Once you hop down, there's no way to get to the other treasure chests. The question is, "Which way should you go?"
This question makes an appeal to 2 things:
(1) Which treat is desired? "Should" has an implicit a value referent in the form of what you're actually going for. If you're gluten intolerant, you don't want a donut.
(2) Which way actually goes to a chest containing the desired thing?
The latter is objective, and thus right decisionmaking does have an important objective component. But the former is completely subjective, that is, it makes an appeal to the preferences of some preferring agent (or group of agents; perhaps you were charged with this mission by a village of donut-eating, sentient mice).
Moral objectivists want to say that you can get #1 without making such an appeal. "Popsicles are just correct," they might say, "regardless of what anybody thinks."
Now, that's ludicrous on its face, and so they'll cloak that absurdity inside an ambiguous word, like "scrumtrilescence." "Nobody can deny that it is right to seek that which is scrumtrilescent," they'll proclaim. "It is objectively moral." And when you hear them say that, you might think, "Sure, I guess."
But then you ask what "scrumtrilescence" means, and they're like, "Oh, everyone knows what it means!" And you're like, "No, really, tell me." And they say things like, "You know... things that are awesome. It is a word with connotations of dolphin-riding, facepainting, and popsicle-acquiring."
Here's an example. Let's say we're talking about fitness[basketball]. Basketball involves many different skills, and many different metrics. You could rate a person's skill at ball-handling, inside shooting, free-throw shooting, three-point shooting, passing, blocking, stealing, rebounding, etc. You could then give each of these skills a weight, and then do a weighted average to get a final "basketball skill" metric.
Even though "basketball skill" is many-faced, it is coherent. It doesn't require "connotations" or "notions" or "impressions"; each of its subcomponents can be put in terms of plausibly measurable things, and being a 10 in one skill does not create logical problems with being a 10 in any other skill. You being absolutely perfect at inside shooting does not necessarily mean you have to be worse at three-point shooting, for instance.
At the same time, we can also meaningfully talk about whether a certain skill ought to be considered basketball-skill-contributory. We might say that a person's singing skill should have zero weight contribution into the above weighted average.
In the real world, there's a real discussion about whether body fat percentage should be given contributory weight into what is considered "healthy." High body fat percentage might be generally correlated with various things that are consensus "unhealthy," like morbidity and diseases, but it might not be specifically correlated with an individual who is disease-free, will live for a long time, but who is also fat.
Now, there are various parties that want to gloss over that controversy, and brazenly proceed to maintain the use of "healthy" with implications of "low body fat" in their arguments, discussion, and product marketing. Is the proper response to their usage, "Oh, healthiness just has various connotations, we basically get it"? No. The proper response is, "What do you mean, in specific and measurable terms, when you say 'healthy?'"
And if they say "Healthiness is the optimization of health," they have said nothing.
So, the question is, when you talk about a word like "flourishing" that has "connotations" of various other things, I am challenging each of those other things by asking for their justifications. Why is generativity for Homo sapiens objectively good? Why is growth of the species Homo sapiens objectively good? Why is the resilience of species Homo sapiens objectively good?
And if you wish to defend your moral objectivism, you need to answer those questions with something other than, "Don't you agree X is good?" Consensus of subjects is not objectivism.
Of what term?
"Flourishing?" There is no biological term "flourishing" that is not hopelessly incoherent.
"Fitness?" As I said above, the biological definition of fitness is "fitness[gene persistence]," which you say you're not talking about.
When we're talking about "fitness[]," I want you to fill in that "[]" with something that is not "flourishing" or "fitness." Something coherent and single-faced.
Of course, when you do, I will say, "There's no ultimately rational reason why that ought to be valued." This is because objective morality is false.
This is literally impossible without a value referent. It's like talking about "excellence at specific sport X" without telling me what X is. Fitness in evolution is "excellence at persistence in terms of what the environment demands."
I wasn't talking about "fitness," I was talking about the metric to which "fitness" is supposed to be referring. You NEED to stop confusing "excellence" and "the sport at which you're excellent."
If you're trying to maximize the fitness[] of moral behaviors, you need to fill in that "[]" with the goal you're reaching for. You can't put "fitness" in there. "Fitness[fitness]" is completely meaningless. Fitness[flourishing] is near-useless because the term "flourishing" is many-faced and ambiguous to the point of incoherence. Evolution employs "fitness[gene persistence]," but you say that's not what you're talking about.
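The "fitness[]" point above can be sketched as a higher-order function: "fitness" computes nothing until a concrete goal metric is passed in to fill the brackets. The two metrics below are hypothetical illustrations of declared goals, not endorsements of either.

```python
# Sketch: "fitness" parameterized by a goal metric. Without a concrete
# metric supplied, fitness(...) is undefined -- the analogue of the claim
# that "fitness[fitness]" is meaningless.
def fitness(organism, metric):
    """Degree to which an organism scores on a declared goal metric."""
    return metric(organism)

# Two hypothetical, *declared* metrics -- the "[]" filled in:
def gene_persistence(organism):  # fitness[gene persistence]
    return organism["expected_surviving_offspring"]

def population_contribution(organism):  # fitness[population size]
    return organism["offspring_count"]

org = {"expected_surviving_offspring": 2.5, "offspring_count": 3}
print(fitness(org, gene_persistence))         # score on this metric: 2.5
print(fitness(org, population_contribution))  # score on this metric: 3
```

The design choice is the point: the metric is an argument, not a property of `fitness` itself, so the same organism gets different scores depending on which goal you declare.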
When I said that, it was because I was talking about the semantics of a term. I might define justice as "retributing in proportion to the infraction with the goal of deterrence, social protection, and rehabilitation," but someone else might define it as "a hot dog."
There is no purely objective morality, as in, morality divorced from the preferences of one or more agents. Folks can disagree with me on that, but they'll be absolutely mistaken to do so.
You contradict this with the next thing you say:
The human race is a collection of organisms with a fuzzy, but limited, scope of genes. And yet, you're saying you don't care about genes. Which is it? You need to understand that telling me "I don't care about genetic persistence" and also "I care about human persistence" is a contradiction.
I value the survival of the human race, and also the survival of many other species of organisms. Also, the survival of various world heritage sites and natural formations. Perhaps what I value is also valued, 100%, by every other organism of species Homo sapiens. Great. That means we can create a "morality by universal Homo sapiens consensus." Again, we have not left port for "objective morality El Dorado." Calling my dog a unicorn does not make magical horse-like beings, each with a single horn, exist.
Here is the thought experiment you have to solve in order to truly make the "objective morality unicorn" exist: An army of extra-terrestrials named the Krol'Tar comes to Earth in an invasion force, and decides to start killing humans willy-nilly. You manage to make contact with their general, and tell him that their invasion is morally wrong. He asks, "Why?" You have to be able to tell him something that cogently answers his question without appeals to anyone's subjective interests (including attempts at sympathy).
It takes the leg-work of "referring to a metric." In the case of evolution, it's "genetic persistence." If "genetic persistence" is what you're trying to optimize, then we're back to the questions I was asking several posts ago.
Those questions include:
* Why ought that be the metric from which an objective morality flows? (Note that by "ought," I am demanding a parent justifying goal. If you proceed to provide that parent justifying goal, I will then ask why ought we care about that goal, demanding a parent justifying goal for that. It doesn't end; this is the realization of existentialism -- there is no ultimately rational source of values or morality -- and it is completely true.)
* If we ultimately value genetic persistence, it means we don't like mutation. Is genetic stagnation really part of your proposal?
Again, evolution doesn't "like" fitness. Evolution "likes" the fit and the unfit alike; it's just that it "kills" the unfit. You have to value survival walking in; evolution won't tell you that you should want you and your genetic progeny to survive.
Resilience is a metric. "That which simultaneously connotes each of resilience and growth and generativity" is not a metric.
I think this is the crux of it, regardless of the historical origins of the terms or that to which they're associated. "Semantic ethics" is about molding nomenclature in service of coherent, consistent conference of information, and most (if not all) philosophical conversations reduce to semantic ethics unless kept aloft by sirens of mysticism or incoherence (like libertarian free will).
EDIT:
I shouldn't so brazenly declare the goal of semantic ethics, as if there's "A goal." The goal could be anything. The goals motivating my advocacy of semantic conservatism with regard to free will are:
1) I believe it's helpful to reject libertarian free will and the "buck stops here responsibility" folklore that comes along with it.
2) I think it's actually dangerous to throw the volitional dictionary (will, choice, responsibility, etc.) into the bonfire, because people in general are not astute enough to realize that fatalism does NOT follow from a lack of libertarian free will. Like the many folks who thought evolution implied eugenics, many think determinism implies fatalism, and the Sam Harris route of "you don't make choices, you're not responsible, and you have no free will" is literally socially dangerous.
I'm a radical on the issue of luminiferous aether, like nearly everyone.
There is no "health" metric. You can talk about your weight, your body fat percentage, the degree to which you're capable of doing physical activities of which you're fond, whether you know you're dying and how long you have, etc. But "health" itself is too incoherent, and/or ambiguous, and/or many-faced to be used for measurement.
Sure. In the example above, blondes would be most fit. Fitness means nothing except against a selective environment, which could select for anything. Evolutionary fitness is not synonymous with "goodness" in terms of things humans value in life.
And if your metric is population size, then forcing every fertile woman to have children would be more fit than not. You need a coherent goal metric. Bandying about words like fitness in a vacuum of value fails to find the El Dorado of moral solutions; it doesn't even leave port.
"To live within an optimal range of human functioning" has a dangling value reference in the word "optimal." This definition so far adds no information.
It concludes...
No. Vague connotation of a panoply of items does not get us coherence. That's textbook ambiguous "many-faced-ness," like the "metric" of "health."
It's one thing for a definition to be fuzzy. Fuzziness is blurred boundaries along a single dimension, like "warm <-> hot." "Flourishing," however, like "health," is like a 20-sider with various faces, some of which are incommensurable.
It's very easy to mistakenly think that ambiguous, many-faced ideas are coherent. That's because when you imagine the concept, you are provoked to imagine discrete things. Your brain creates an analogy scene. Perhaps you imagine flowers growing in fast-forward across a field, shining silver buildings popping up, a rainbow over a bustling morning marketplace, a happy family holding their newborn child, a spaceship blasting off, two young girls playing a cooperative video game together, two enemy warriors shaking hands in a truce, a Native American tribal dance around a fire, two monks giving a toast over a new batch of beer, a serial killer being found guilty, a pod of dolphins saving a toddler from drowning, etc.
Maybe that's what you think of when you think "flourishing." That's what I think of!
But that doesn't give it a definition coherent enough to be used as a metric.
Then Tufts is admirable.
Functionalism is the best way for science and philosophy to play together, avoiding the trappings of mysticism.
For example, I say I'm "helping evolution" by killing every human without naturally blonde hair. The result will be a population entirely of blondes. That will be the "evolved" population, because "evolution" is the change in the genetics of a body of organisms as genetic drift occurs and selection takes its toll -- in this case, I represented the deadly selective environment. But it's absurd to say that what I did was "right" or "moral." Evolution doesn't "want" anything whatsoever. It doesn't "want" the "flourishing" of any species. It's just that organisms with traits adapted to their environments tend to persist.
"Fitness" is not a target metric, fitness is the degree to which an organism, based on its properties, facilitates a target metric. Your target metric appears to be the word "flourishing," which apparently is not "has a large population size."
What does "flourishing" mean? Does it have a coherent definition, or is it a philosophical siren, dooming every conversation that depends on it to endlessness and fruitlessness? Everything I have read indicates that it is the latter; it's a word that lacks a positive definition, and is instead what is left after realizing that every coherent target metric, if declared, seems to yield absurda.
If the word means, "Psh, you know, the good stuff," then we have just woken up to find ourselves still docked at port.
That's what I was talking about, too. I was talking about the stagnation vs. volatility of meme mutation. By "propagate its morals," do you mean, the capacity to maintain a conservative zeitgeist against what might be risky mutations, or do you mean a quickly-moving progressive zeitgeist, daring to try new things?
So... you ARE talking about population size? What on Earth is the metric, here?
I agree that things that will happen will happen. That may be the most pathetic moral claim ever, though (in the sense that it is a non-moral redundancy).