I have become quite interested in the science of morality recently. I was hoping to start a discussion about it.
I have been especially interested in causes of flourishing. I have read a few books that show--I think pretty clearly--that the morality of the human race is moving in a particular direction. Certain ethical systems are more beneficial to those who hold them than others. If an ethics system causes a society to be more efficient than another, that society will eventually absorb, destroy, or convert surrounding societies. This has been happening throughout human history. Just like anything else, given an assumed desired outcome, some kinds of ethics simply work better than others towards that goal. And that can be scientifically measured.
There has been a push recently for a more scientific approach to ethics, and also a pushback. Critics say there is not a sound philosophical basis for such an ethics system--the old Is-Ought problem. They say science cannot tell us what we ought to do, only what is. However, if we have a goal to work towards, science can tell us the best way to get there. Given how science is used to make things "better," that argument seems stale to me, but I am not a philosopher. I don't know what philosophical reasons there would be to say Normative Evolutionary Ethics is worse than other Normative Ethics systems.
Many older ethical systems have made axiomatic arguments to frame what is good and what is bad. Why can't the science of morality simply do the same and find ways of getting there?
Part of what I am trying to get at is--strictly speaking--it does not matter if people adopt an Evolutionary Ethics mindset or not, because it will happen anyway. The ethics systems which give populations an advantage will do well simply because they give their populations an advantage. This will happen regardless. My point is that we can acknowledge that clear fact, and try to harness it.
If we do or do not, the system that works better will still beat out other systems. Our acknowledgment of that--or not--will not change the fact. However, if we DO start to acknowledge it we can start to work within that framework instead of allowing more random elements to decide what to try next.
Alright, so as is typical with me, after reading some responses and thinking about it more I have a clearer understanding of what I was trying to articulate. I'm going to give it another go and people can go back to telling me what is weak or misworded in it:
Evolution gives man his purpose.
That is not to say that there is a "purpose" to evolution, or the like. However, it is to say that evolution has things within it that work, while others do not. "Fit," I believe the term is. Man has evolved so as to be fit, and to be fit in a certain way. The "purpose" that evolution gives man is to maximize his own fitness. Nature has sculpted* man to have a certain nature. To be a 'good man' in the way that a knife is a 'good knife' if it cuts and a 'bad knife' if it does not. 'Telos,' as BS said, and man's telos is to do what the forces of evolution sculpted* him to do.
Now--with his telos in hand--we can evaluate the behavior of man. Certain things that man does would take away from his fitness and some things man does would make him more fit. This is not to say that man should be evolving his genes, because that would be changing the nature of man; changing his telos. However, working within the structure already provided by evolution, behaviors can be seen to increase man's fitness.
And--indeed--this has already happened. If a behavior works better within that nature, those men following it thrive. This fitness is something physical and can be physically quantified. Thus, it can be modeled and studied by the scientific method. Science can tell us which behaviors would work within our nature (our genes) to make us more fit, and which would not. Within this behavioral structure we can start to call some behaviors "good" and some "bad" (or if you want to be more dramatic, "evil").
Thus, starting from evolution, we can now evaluate actions in a moral way. A normative ethics system is formed.**
*'sculpted' is being used here in the same way that a river sculpted the Grand Canyon, and it's not meant to imply more.
**This system, however, would not be able to morally evaluate changing the course of human evolution, or would claim such an action to be immoral.
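The claim above that fitness can be quantified and modeled can be illustrated with a toy simulation. This is just a hypothetical sketch (the two-behavior replicator model and the payoff numbers are my own assumptions, not anything from the discussion): if one behavior gives even a modest fitness edge, it comes to dominate the population over time.

```python
# Toy replicator-dynamics model: two behaviors, A and B, compete.
# The behavior with the higher average fitness payoff spreads.

def replicator_step(p, w_a, w_b):
    """One generation: p is the share of the population following
    behavior A; w_a and w_b are the fitness payoffs of A and B."""
    mean_w = p * w_a + (1 - p) * w_b
    return p * w_a / mean_w

p = 0.1  # behavior A starts rare
for _ in range(100):
    p = replicator_step(p, w_a=1.2, w_b=1.0)  # A is 20% "fitter"

print(round(p, 3))  # prints 1.0: A has all but taken over
```

The numbers are arbitrary; the point is only that the dynamics the post describes (fitter behaviors winning out) can be written down and measured.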
I have become quite interested in the science of morality recently. I was hoping to start a discussion about it.
I have been especially interested in causes of flourishing. I have read a few books that show--I think pretty clearly--that the morality of the human race is moving in a particular direction. Certain ethical systems are more beneficial to those who hold them than others. If an ethics system causes a society to be more efficient than another, that society will eventually absorb, destroy, or convert surrounding societies. This has been happening throughout human history. Just like anything else, given an assumed desired outcome, some kinds of ethics simply work better than others towards that goal. And that can be scientifically measured.
There has been a push recently for a more scientific approach to ethics, and also a pushback. Critics say there is not a sound philosophical basis for such an ethics system--the old Is-Ought problem. They say science cannot tell us what we ought to do, only what is. However, if we have a goal to work towards, science can tell us the best way to get there. Given how science is used to make things "better," that argument seems stale to me, but I am not a philosopher. I don't know what philosophical reasons there would be to say Normative Evolutionary Ethics is worse than other Normative Ethics systems.
Many older ethical systems have made axiomatic arguments to frame what is good and what is bad. Why can't the science of morality simply do the same and find ways of getting there?
Because the determination of what is good is not science. The problem I see with this suggestion is that it necessitates determining via your morality what the end goal is, and then determining the most utilitarian way to get there via science.
But it doesn't address the underlying issue of what the end goal is, the way morality does. It just picks one.
But it doesn't address the underlying issue of what the end goal is, the way morality does. It just picks one.
I'm not sure how that differentiates it from other ethics systems.
Why is the way morality picks an end goal better than the way the science of morality picks it? And, if it is better, why can't the science of morality use the same method?
But it doesn't address the underlying issue of what the end goal is, the way morality does. It just picks one.
I'm not sure how that differentiates it from other ethics systems.
Why is the way morality picks an end goal better than the way the science of morality picks it? And, if it is better, why can't the science of morality use the same method?
I think you missed my point -- my point was not that morality is *better* at picking the end goal, my point was that morality is *required* to pick an end goal.
Science is incapable of dictating what the end goal should be or what the end goal is. Science is capable of telling us the utilitarian best way of getting to an end goal. But, since we need the "morality" that is not dictated by science to determine the end goal in the first place, it's impossible to get there purely by science.
Science is incapable of dictating what the end goal should be or what the end goal is. Science is capable of telling us the utilitarian best way of getting to an end goal. But, since we need the "morality" that is not dictated by science to determine the end goal in the first place, it's impossible to get there purely by science.
I reject that assertion based on the simple fact we have science missions and the like.
Science could not move forward without motivation.
Or are you saying the motivation and what not was not part of "science" and are claiming some different system was responsible? If so, which?
I also think you two are arguing about this the wrong way. Science and morality are two lenses through which humans view the universe. Morality is a subjective filter, while science is an attempt at an objective one. Science would not try to pass a value judgment on which system is 'better', only on which systems are more effective.
The most efficient route is not always the 'best' route. When examining any moral system, the moral system of the examiners has to be taken into account. Everything about morality is a value judgment.
What science can tell us is which societies were most successful against a list of possible factors. But by scientific logic, slavery and the societal systems that are abhorrent to most modern people were by far more successful.
Going back to the original question - Science could define a beneficial system, if it were possible to accurately pinpoint the traits that led to successful and long-lived societies. The problem is that the very nature of the term 'successful' is a loaded value judgment. Are they more successful because they are happier? Long-lived? More in balance with available resources? Most expansive? Most progressive? Strongest economically or militarily? And what happens when the traits that define those are mutually exclusive? Ultimately it again becomes a human value judgment as to which traits are more important, and that human's decision is based on their own moral system.
Ultimately, the best thing for a species is the combination of balance in the ecosystem and propagation of the species in terms of the species' longevity. But those two factors don't take into account the subjective experience of the species.
Science is incapable of dictating what the end goal should be or what the end goal is. Science is capable of telling us the utilitarian best way of getting to an end goal. But, since we need the "morality" that is not dictated by science to determine the end goal in the first place, it's impossible to get there purely by science.
I reject that assertion based on the simple fact we have science missions and the like.
Science could not move forward without motivation.
Or are you saying the motivation and what not was not part of "science" and are claiming some different system was responsible? If so, which?
Let me use an example:
We can scientifically determine what the best way to promote propagation of the human species is, and use that to determine a utilitarian system to get there.
What we can't scientifically determine is that promoting the propagation of the human species is, in fact, a good thing. We have to assume that basic underlying moral question before we can determine scientifically how to get there.
Edit: also, welcome back! I hope you're back around here for a while and not just stopping in temporarily
Science would not try to pass a value judgment on which system is 'better', only on which systems are more effective.
"Better" and "effective" are only different if you haven't worked out all of the goals.
If you know all of what is important, then the most effective way is the best way.
We can scientifically determine what the best way to promote propagation of the human species is, and use that to determine a utilitarian system to get there.
Well, propagation and survivability. Those are the two goals evolution designed us for.
A thing should do what it is designed to do.
What we can't scientifically determine is that promoting the propagation of the human species is, in fact, a good thing. We have to assume that basic underlying moral question before we can determine scientifically how to get there.
Well, we would not need to determine it, we would take it axiomatically. That's what ethics systems do--don't they? They axiomatically assume things like "justice" or "fairness" or "the powerful's will" are desirable.
Why can't we just take the motivation of "proliferation and survivability" that evolution provides us with as axiomatically good and go from there, just like every other ethics system?
Edit: also, welcome back! I hope you're back around here for a while and not just stopping in temporarily
I am quite happy to see you as well.
But, I'd rather keep the personal discussion here, if you don't mind. I'm trying to keep things... less messy this go around.
We can scientifically determine what the best way to promote propagation of the human species is, and use that to determine a utilitarian system to get there.
Well, propagation and survivability. Those are the two goals evolution designed us for.
A thing should do what it is designed to do.
Why? (I don't disagree, I just couldn't help myself)
What we can't scientifically determine is that promoting the propagation of the human species is, in fact, a good thing. We have to assume that basic underlying moral question before we can determine scientifically how to get there.
Well, we would not need to determine it, we would take it axiomatically. That's what ethics systems do--don't they? They axiomatically assume things like "justice" or "fairness" or "the powerful's will" are desirable.
Why can't we just take the motivation of "proliferation and survivability" that evolution provides us with as axiomatically good and go from there, just like every other ethics system?
We can. But that's not scientific. In other words, we haven't scientifically determined morality at all. We've just abstracted it, and then filled in the details from the abstract scientifically.
We can. But that's not scientific. In other words, we haven't scientifically determined morality at all. We've just abstracted it, and then filled in the details from the abstract scientifically.
But, science can tell us what we were designed to do. What our purpose is.
I read The Moral Landscape (ok, listened to it on tape) and had some people tell me it was crap philosophy, but I rather liked it (excluding chapter 4).
His argument about why we should care about consciousness seems horribly circular to me, but after thinking about it a bit it seemed to me most ethics systems just went axiomatic anyway. But, you would know better than I.
What does it take to be taken seriously as an ethics system in philosophy nowadays? What's the problem with the science of morality when compared to other moral philosophies?
But, science can tell us what we were designed to do. What our purpose is.
[Note: 'design' is a somewhat loaded word when talking about evolution.]
It can? Science can tell us how we evolved, what our ancestors were like, how we as a species survived and multiplied, but attaching purpose is a different matter entirely. A tool has a purpose, but does a human have a well-defined purpose (beyond the evolutionary imperative to pass on one's genes)?
You should try to find ways of helping humanity proliferate and increase its survivability.
I'm not sure that proliferation is a good goal. I'd certainly highlight happiness as among the most important. Life, or survival, and happiness would seem to be the simplest goals.
[Note: 'design' is a somewhat loaded word when talking about evolution.]
But the concept of function, purpose, or telos is an important one in an Aristotelian-influenced theory of morality, which is actually what Harris is stumbling towards in my understanding (I've only read reviews of the book). The basic idea is that being morally good means being a good person in the same way that a knife is a good knife: morality is the human function.
But the concept of function, purpose, or telos is an important one in an Aristotelian-influenced theory of morality, which is actually what Harris is stumbling towards in my understanding (I've only read reviews of the book).
I'm not sure he was, but he certainly should've been if he wasn't.
It can? Science can tell us how we evolved, what our ancestors were like, how we as a species survived and multiplied, but attaching purpose is a different matter entirely. A tool has a purpose, but does a human have a well-defined purpose (beyond the evolutionary imperative to pass on one's genes)?
There are certain things evolution finds work better than others; there is a direction to evolution. If you stray too far off that path, you disappear.
We know--roughly--what the direction of evolution is, and science can tell us how best to continue on that path, and tell us if we are straying off it.
I'm not sure that proliferation is a good goal. I'd certainly highlight happiness as among the most important. Life, or survival, and happiness would seem to be the simplest goals.
Well, evolution tells us a certain level of unhappiness is a good thing, doesn't it?
Proliferation will help us get off this rock, but I would agree that survivability is probably the greater imperative.
There are certain things evolution finds work better than others; there is a direction to evolution. If you stray too far off that path, you disappear.
There are traits that are favored over others depending on the selective force being applied, but there isn't a defined, long-term direction of evolution at all. What it means to be "fit" can change rapidly and suddenly.
We know--roughly--what the direction of evolution is, and science can tell us how best to continue on that path, and tell us if we are straying off it.
Well, evolution tells us a certain level of unhappiness is a good thing, doesn't it?
When I say happiness, I mean in the aggregate. Surely a girl who is dumped by her boyfriend will be sad, or someone who falls off their bike and breaks their leg will not be happy in the immediate, but it doesn't mean they aren't living lives of happiness. I'm sure I could state it better if I had more time, but I think you see what I'm getting at?
Certainly pain and sadness are good learning devices to help us learn to better navigate our world, but not when they rise to levels that they become our realities.
Proliferation will help us get off this rock, but I would agree that survivability is probably the greater imperative.
I don't see how proliferation gets us off this rock. We're already drastically and unsustainably overpopulated as a species. Moreover, I don't think getting "off this rock" is an admirable goal (if you mean it as abandoning a spent/unlivable Earth), except to explore and colonize other places.
There are traits that are favored over others depending on the selective force being applied, but there isn't a defined, long-term direction of evolution at all. What it means to be "fit" can change rapidly and suddenly.
I see no reason why you can't make a definition that is flexible. Something like "increase survivability" means different things in different contexts, for example.
Increased survivability, if I'm not mistaken. Certain ethic systems help that, and others hinder it. Those that help it cause the people who subscribe to them to do well and flourish, and the converse is also true.
When I say happiness, I mean in the aggregate. Surely a girl who is dumped by her boyfriend will be sad, or someone who falls off their bike and breaks their leg will not be happy in the immediate, but it doesn't mean they aren't living lives of happiness. I'm sure I could state it better if I had more time, but I think you see what I'm getting at?
I do, but I see happiness as more of an after effect, a byproduct. If you feel happy because you did something meaningful, it's not really the happiness that matters, it's the fact you did something meaningful.
Happiness is a trophy for a job well done, but getting a trophy just for the trophy's sake is pointless--as is happiness without context.
Certainly pain and sadness are good learning devices to help us learn to better navigate our world, but not when they rise to levels that they become our realities.
I don't see how proliferation gets us off this rock. We're already drastically and unsustainably overpopulated as a species. Moreover, I don't think getting "off this rock" is an admirable goal (if you mean it as abandoning a spent/unlivable Earth), except to explore and colonize other places.
Well, clearly survivability is more important.
But, maybe you're right, maybe I should not have emphasized proliferation at all. I think I was more thinking about "spreading" or something along those lines, but it seems like the real goal was survivability. I think I was getting my "means and ends" confused.
I see no reason why you can't make a definition that is flexible. Something like "increase survivability" means different things in different contexts, for example.
I'm not quite sure what you're getting at, but survival is only part of the game of evolution. The current trend in human longevity, for example, isn't a result of an increase in gene frequencies of FOXO3A, at least as far as I'm aware, because such genes would have little, if any, benefit to fitness. Rather, increases in medical care, availability of nutrients, and hygiene are much more likely causes.
In what sense? You could easily argue that there's always a trend towards increased survivability to reproduction in species, because that would be strongly selected for. Survivability, even if accurate, is kind of a non-answer.
Certain ethic systems help that, and others hinder it. Those that help it cause the people who subscribe to them to do well and flourish, and the converse is also true.
The converse isn't necessarily true. Quite often someone, or some animal, who breaks the social norms in such a way that exploits those in compliance with the ethic gains a distinct advantage. There are examples in game theory that illustrate this idea nicely.
(I'm not sure where you are wishing to inject evolution into this bigger picture, perhaps you could clarify?)
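One standard game-theory example of the point above is the one-shot prisoner's dilemma, where a defector exploiting a cooperator earns a strictly higher payoff than the cooperators around it. A minimal sketch (the payoff numbers are the usual textbook values, not anything specified in the thread):

```python
# One-shot prisoner's dilemma: a defector exploiting cooperators
# gains a distinct advantage, even though mutual cooperation beats
# mutual defection. Payoffs are the standard textbook values.

PAYOFF = {  # (my move, their move) -> my payoff
    ("C", "C"): 3,  # reward for mutual cooperation
    ("C", "D"): 0,  # sucker's payoff for the exploited cooperator
    ("D", "C"): 5,  # temptation payoff for the lone defector
    ("D", "D"): 1,  # punishment for mutual defection
}

# A lone defector among cooperators does strictly better than they do...
assert PAYOFF[("D", "C")] > PAYOFF[("C", "C")]
# ...yet if everyone defects, everyone is worse off than under cooperation.
assert PAYOFF[("D", "D")] < PAYOFF[("C", "C")]
print("defection invades, but universal defection is worse for all")
```

Which is exactly the tension being described: the norm-breaker profits individually even while undermining the system that makes the profit possible.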
I do, but I see happiness as more of an after effect, a byproduct. If you feel happy because you did something meaningful, it's not really the happiness that matters, it's the fact you did something meaningful.
Happiness is a trophy for a job well done, but getting a trophy just for the trophy's sake is pointless--as is happiness without context.
I guess I kind of disagree here. I see happiness (not selfishly, but for everyone, or as many as possible) as the main goal, with most other things a means to that end. If something is meaningful, it should be so because it accomplishes something that we all agree that we want, so while it may make you feel good about doing it, it also contributes to total well-being of the group.
I'm not quite sure what you're getting at, but survival is only part of the game of evolution. The current trend in human longevity, for example, isn't a result of an increase in gene frequencies of FOXO3A, at least as far as I'm aware, because such genes would have little, if any, benefit to fitness. Rather, increases in medical care, availability of nutrients, and hygiene are much more likely causes.
Well, longevity might be bad for the species. Hard to say, but imagine if everyone that lived through the Civil War could still vote. That might offset the advantage of having Einstein still alive. Hard to say, but it's the kind of thing you could start to test once longevity comes to the table. I dare say--if some societies refuse the treatment for one reason or another--it WILL be tested when it comes to the table. Which group will do better, long term?
In what sense? You could easily argue that there's always a trend towards increased survivability to reproduction in species, because that would be strongly selected for. Survivability, even if accurate, is kind of a non-answer.
I see "increased survivability" a goal like "get to the moon." You are right that I'm not really getting into how one might achieve it, but that's sorta the point of science.
The issue in this case, however, is justifying the goal (I'm pretty sure that was what BS was saying, and I agree). The "How" would be the next step.
The converse isn't necessarily true. Quite often someone, or some animal, who breaks the social norms in such a way that exploits those in compliance with the ethic gains a distinct advantage. There are examples in game theory that illustrate this idea nicely.
Individuals gain advantages, but I did not say "individuals."
Or at least, when I said "people" I meant large groups, societies. However, I can see now my choice of wording was poor.
I guess I kind of disagree here. I see happiness (not selfishly, but for everyone, or as many as possible) as the main goal, with most other things a means to that end. If something is meaningful, it should be so because it accomplishes something that we all agree that we want, so while it may make you feel good about doing it, it also contributes to total well-being of the group.
So, you would hook yourself up to the Experience Machine given a chance?
I see "increased survivability" a goal like "get to the moon." You are right that I'm not really getting into how one might achieve it, but that's sorta the point of science.
Well, you gave "increased survivability" as an answer for the direction of evolution.
I have become quite interested in the science of morality recently. I was hoping to start a discussion about it.
I have been especially interested in causes of flourishing. I have read a few books that show--I think pretty clearly--that the morality of the human race is moving in a particular direction. Certain ethic systems are more beneficial for those that hold them than others. If an ethics system causes a society to be more efficient than another, that society will eventually absorb, destroy, or convert surrounding societies. This has been happening throughout human history. Just like anything else, given an assumed desired outcome, some kinds of ethics simply work better than others towards that goal. And that can be scientifically measured.
There has a push recently for a more scientific approach to ethics, and also a push back. Critics say there is not a sound philosophical bases for such an ethics system, the old Is-Ought problem. They say science cannot tell us what we ought to do, only what is. However, if we have a goal to work towards, science can tell us the best way to get there. Given how science is used to make things "better" that argument against seems stale to me, but I am not a philosopher. I don't know what philosophical reasons there would be to say Normative Evolutionary Ethics is worse than other Normative Ethics systems.
Many older ethical systems have made axiomatic arguments to frame what is good and what is bad. Why can't the science of morality simply do the same and find ways of getting there?
Part of what I am trying to get at is--strictly speaking--it does not matter if people adopt an Evolutionary Ethics mindset or not, because it will happen anyway. The Ethics systems which give populations an advantage will do well simply because they give their population an advantage. This will happen regardless. My point is that we can acknowledged that clear fact, and try to harness it.
If we do or do not, the system that works better will still beat out other systems. Our acknowledgment of that--or not--will not change the fact. However, if we DO start to acknowledge it we can start to work within that framework instead of allowing more random elements to decide what to try next.
Alright, so as it typical with me, after reading some responds and thinking about it more I have a clearer understanding of what I was trying to articulate. I'm going to give it another go and people can go back to telling me what is weak or misworded in it:
Evolution gives man his purpose.
That is not to say that there is a "purpose" to evolution, or the like. However, it is to say that evolution has things within it that work, while others do not. "Fit" I believe the term is. Man has evolved so as to be fit, and to be fit in a certain way. The "purpose" that evolution gives man is to maximize his own fitness. Nature has sculpted* man to have a certain nature. To be a 'good man,' in the way that a knife is a 'good knife' if it cuts and a bad knife if it does not. 'Telos' as BS said, and man's telos is to do what the forces of evolution sculpted* him to do.
Now--with his telos inhand--we can evaluate the behavior of man. Certain things that man does would take away from his fitness and some things man does would make him more fit. This is not to say that man should be evolving his genes, because that would be changing the nature of man; changing his telos. However, working within the structure already provided by evolution, behaviors can be seen to increase man's fitness.
And--indeed--this has already happened. If a behavior works better within that nature, those man following it thrive. This fitness is something physical and can be physically quantified. Thus, it can be modeld and studied by the scientific method. Science can tell us which behaviors would work within our nature(our genes) to make us more fit, and which would not. Within this behavioral structure we can start to call some behaviors "good" and some "bad" (or if you want to be more dramatic, "evil").
Thus, starting from evolution, we can now evaluate actions in a moral way. A normative ethics system is formed.**
*'sculpted' is being used here in the same way that a river sculpted the Grand Canyon, and its not meant to imply more.
**This system, however, woud not be able to morally evaluate what changing the course of human evolution woud be, or would claim such an action to be immoral.
Because the determination of what is good is not science. The problem I see with this suggestion is that it necessitates determining via your morality what the end goal is, and then determining the most utilitarian way to get there via science.
But it doesn't address the underlying issue of what is the end goal the way morality does. It just picks one.
I'm not sure how that differentiates it from other ethics systems.
Why is the way morality picks an end goal better than the way the science of morality picks it? And, if it is better, why can't the science of morality use the same method?
I think you missed my point -- my point was not that morality is *better* at picking the end goal, my point was that morality is *required* to pick an end goal.
Science is incapable of dictating what the end goal should be or what the end goal is. Science is capable of telling us the utilitarian best way of getting to an end goal. But, since we need the "morality" that is not dictated by science to determine the end goal in the first place, its impossible to get there purely by science.
I reject that assertion based on the simple fact we have science missions and the like.
Science could not move forward without motivation.
Or are you saying the motivation and what not was not part of "science" and are claiming some different system was responsible? If so, which?
The most efficient route is not always the 'best' route. When examining any moral system, the moral system of the examiners has to be taken into account. Everything about morality is a value judgment.
What science can tell us is which societies were most successful against a lit of possible factors. But by scientific logic, slavery and the societal systems that are abhorrent to most modern people were by far more successful.
Going back to the original question - Science could define a beneficial system, if it were possible to accurately pinpoint the traits that led to successful and long-lived societies. The problem is that the very nature of the term 'successful' is a loaded value judgment. Are they more successful because they are happier? Long-lived? More in balance with available resources? Most expansive? Most progressive? Strongest economically or militarily? And what happens when the traits that define those are mutually exclusive? Ultimately it again becomes a human value judgment as to which traits are more important, and that human's decision is based on their own moral system.
Ultimately, the best thing for a species is the combination of balance in the ecosystem and propagation of the species in terms of the species' longevity. But those two factors don't take into account the subjective experience of the species.
Let me use an example:
We can scientifically determine what the best way to promote propagation of the human species is, and use that to determine a utilitarian system to get there.
What we can't scientifically determine is that promoting the propagation of the human species is, in fact, a good thing. We have to assume that basic underlying moral question before we can determine scientifically how to get there.
Edit: also, welcome back! I hope you're back around here for a while and not just stopping in temporarily
"Better" and "effective" are only different if you haven't worked out all of the goals.
If you know all of what is important, then the most effective way is the best way.
Well, propagation and survivability. Those are the two goals evolution designed us for.
A thing should do what it is designed to do.
Well, we would not need to determine it, we would take it axiomatically. That's what ethics systems do--don't they? They axiomatically assume things like "justice" or "fairness" or "the powerful's will" are desirable.
Why can't we just take the motivation of "proliferation and survivability" that evolution provides us with as axiomatically good and go from there, just like every other ethics system?
I am quite happy to see you as well.
But, I'd rather keep the personal discussion here, if you don't mind. I'm trying to keep things... less messy this go around.
Why? ( I don't disagree, I just couldn't help myself)
We can. But that's not scientific. In other words, we haven't scientifically determined morality at all. We've just abstracted it, and then filled in the details from the abstract scientifically.
No. Well, except for the crappy ones.
Why not?
But, science can tell us what we were designed to do. What our purpose is.
Well, what do the good ones do then?
I read The Moral Landscape (ok, listened to it on tape) and had some people tell me it was crap philosophy, but I rather liked it (excluding chapter 4).
His argument about why we should care about consciousness seems horribly circular to me, but after thinking about it a bit it seemed to me most ethics systems just went axiomatic anyway. But, you would know better than I.
What does it take to be taken seriously as an ethics system in philosophy nowadays? What's the problem with the science of morality when compared to other moral philosophies?
Pick the most prosperous/powerful culture, and then emulate whatever their ethical system is?
[Note: 'design' is a somewhat loaded word when talking about evolution.]
It can? Science can tell us how we evolved, what our ancestors were like, how we as a species survived and multiplied, but attaching purpose is a different matter entirely. A tool has a purpose, but does a human have a well-defined purpose (beyond the evolutionary imperative to pass on one's genes)?
I'm not sure that proliferation is a good goal. I'd certainly highlight happiness as among the most important. Life, or survival, and happiness would seem to be the simplest goals.
Justify their claims about the nature of the good in one way or another.
The ability to be argued, for starters. You can't argue an axiom.
But the concept of function, purpose, or telos is an important one in an Aristotelian-influenced theory of morality, which is actually what Harris is stumbling towards in my understanding (I've only read reviews of the book). The basic idea is that being morally good means being a good person in the same way that a knife is a good knife: morality is the human function.
I'm not sure he was, but he certainly should've been if he wasn't.
There are certain things evolution finds work better than others; there is a direction to evolution. If you stray too far off that path, you disappear.
We know--roughly--what the direction of evolution is, and science can tell us how best to continue on that path, and tell us if we are straying off it.
Well, evolution tells us a certain level of unhappiness is a good thing, doesn't it?
Proliferation will help us get off this rock, but I would agree that survivability is probably the greater imperative.
There are traits that are favored over others depending on the selective force being applied, but there isn't a defined, long-term direction of evolution at all. What it means to be "fit" can change rapidly and suddenly.
What direction is human evolution going?
When I say happiness, I mean in the aggregate. Surely a girl who is dumped by her boyfriend will be sad, or someone who falls off their bike and breaks their leg will not be happy in the immediate, but it doesn't mean they aren't living lives of happiness. I'm sure I could state it better if I had more time, but I think you see what I'm getting at?
Certainly pain and sadness are good learning devices to help us learn to better navigate our world, but not when they rise to levels that they become our realities.
I don't see how proliferation gets us off this rock. We're already drastically and unsustainably overpopulated as a species. Moreover, I don't think getting "off this rock" is an admirable goal (if you mean it as abandoning a spent/unlivable Earth), except to explore and colonize other places.
Increased survivability, if I'm not mistaken. Certain ethics systems help that, and others hinder it. Those that help it cause the people who subscribe to them to do well and flourish, and the converse is also true.
I do, but I see happiness as more of an after effect, a byproduct. If you feel happy because you did something meaningful, it's not really the happiness that matters, it's the fact you did something meaningful.
Happiness is a trophy for a job well done, but getting a trophy just for the trophy's sake is pointless--as is happiness without context.
I don't disagree.
Well, clearly survivability is more important.
But, maybe you're right, maybe I should not have emphasized proliferation at all. I think I was more thinking about "spreading" or something along those lines, but it seems like the real goal was survivability. I think I was getting my "means and ends" confused.
I'm not quite sure what you're getting at, but survival is only part of the game of evolution. The current trend in human longevity, for example, isn't a result of an increase in gene frequencies of FOXO3A, at least as far as I'm aware, because such genes would have little, if any, benefit to fitness. Rather, increases in medical care, availability of nutrients, and hygiene are much more likely causes.
In what sense? You could easily argue that there's always a trend towards increased survivability to reproduction in species, because that would be strongly selected for. Survivability, even if accurate, is kind of a non-answer.
The converse isn't necessarily true. Quite often someone, or some animal, who breaks the social norms in such a way that exploits those in compliance with the ethic gains a distinct advantage. There are examples in game theory that illustrate this idea nicely.
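The game-theory point above can be sketched with the classic one-shot Prisoner's Dilemma: a lone defector in a population of cooperators scores higher than any of the cooperators do. This is a minimal illustrative sketch, not drawn from any particular post in this thread, and the payoff values are just the standard textbook ones.

```python
# Standard one-shot Prisoner's Dilemma payoffs (illustrative values):
# (my payoff, opponent's payoff) for each pair of moves.
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),  # reward for mutual cooperation
    ("cooperate", "defect"):    (0, 5),  # sucker's payoff vs. temptation
    ("defect",    "cooperate"): (5, 0),  # temptation vs. sucker's payoff
    ("defect",    "defect"):    (1, 1),  # punishment for mutual defection
}

def total_payoff(my_move, opponents):
    """Sum my payoff from playing one round against each opponent."""
    return sum(PAYOFFS[(my_move, opp)][0] for opp in opponents)

# A population of nine norm-following cooperators.
population = ["cooperate"] * 9

cooperator_score = total_payoff("cooperate", population)  # 9 * 3 = 27
defector_score = total_payoff("defect", population)       # 9 * 5 = 45

print(defector_score > cooperator_score)  # the norm-breaker comes out ahead
```

The point is only that breaking the shared ethic can pay off for an individual while the compliant majority foots the bill; whether that advantage survives repeated play or punishment is a separate question.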
(I'm not sure where you are wishing to inject evolution into this bigger picture, perhaps you could clarify?)
I guess I kind of disagree here. I see happiness (not selfishly, but for everyone, or as many as possible) as the main goal, with most other things a means to that end. If something is meaningful, it should be so because it accomplishes something that we all agree that we want, so while it may make you feel good about doing it, it also contributes to total well-being of the group.
I see "increased survivability" as a goal like "get to the moon." You are right that I'm not really getting into how one might achieve it, but that's sorta the point of science.
The issue in this case, however, is justifying the goal (I'm pretty sure that was what BS was saying, and I agree). The "How" would be the next step.
Individuals gain advantages, but I did not say "individuals."
Or at least, when I said "people" I meant large groups, societies. However, I can see now my choice of wording was poor.
So, you would hook yourself up to the Experience machine given a chance?
I tell you, I would not.
Would you plug yourself into an immortality machine that lets you live forever, but prevents you from interacting with the outside world in any way?
Well, you gave "increased survivability" as an answer for the direction of evolution.
No, I would not.
Had to think about it for a bit, but not being able to interact with another sentient thing was too much of a deal breaker; immortality or no.
I would concede that you likely know better than I. So, am I wrong? And--if so--do you know a better answer?
Then something matters to you more than contextless happiness.
Okay, so does your refusal to do so mean that survival does not constitute moral good?
However, you are making me remember the exact reason I went with "survivability and proliferation" to start.