The main Debate forum is making me facepalm pretty hard right now, so how about a "lighthearted" diversion into ethics?
An ethicist by the name of Peter Singer (almost certainly not the first to come up with this notion, but he puts a fine point on it) suggests the following ethical thought experiment: imagine a man has just left a boutique store having purchased an expensive coat for, say, $200. As he walks home wearing his new coat, he encounters a boy who can't swim drowning in a pond. No one else is capable of intervening to save the child. What would we think of this man upon finding out that he elects to let the child drown because he doesn't want to ruin his new coat?
Singer points out, quite correctly I think, that most people would call that monstrous. So the first half of the paradox is what we might call Singer's premise: in a moral tradeoff between human life on one hand and some luxury or quality-of-life improvement on the other, human life should win.
The second half of the paradox can be seen by taking Singer's premise and running with it. Singer himself took the first step; he points out that one does not have to wait until one happens upon a drowning boy while wearing nice clothes in order to realize this moral tradeoff. Opportunities to save human life abound. Probably the best example is malaria remedies: those who study the efficacy of such things estimate that every $10 spent on malaria prevention and treatment measures saves one human life. So those of you who accept Singer's premise are all (almost certainly) already monsters by your own lights: every movie you've been to see has come at the cost of a life, your dinner bill might be worth five or six lives if you eat at nice places, et cetera.
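A quick back-of-the-envelope sketch of that arithmetic, taking the $10-per-life figure at face value -- the ticket and dinner prices below are just placeholder guesses:

```python
# Back-of-the-envelope: lives forgone per luxury purchase, taking the
# $10-per-life malaria figure at face value. Prices are placeholders.
COST_PER_LIFE = 10.0  # dollars per life saved, per the estimate above

def lives_forgone(luxury_cost: float) -> float:
    """How many lives the same money could have saved instead."""
    return luxury_cost / COST_PER_LIFE

for item, cost in [("movie ticket", 12), ("nice dinner", 60), ("the coat", 200)]:
    print(f"{item}: {lives_forgone(cost):.1f} lives")
```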
But that's not really the paradox; in fact, it's an argument that Singer and others in his school take perfectly seriously. One can consistently believe people are just obliged to be that radically altruistic. The paradox comes when we consider an additional argument made by Derek Parfit, which goes as follows: Singer's premise, when consistently applied, leads one to the belief that a world (World Z) of many billions of people, all of whom have quality-of-life just barely at subsistence level, is morally superior to a world (World A) where there are only a few billions of people, all of whom lead drastically better lives than the best life led by anyone in World Z.
The argument goes as follows. Start in World A, with a certain population and a good average quality of life. Construct a new world, World B, from World A by having some set of people in World A make a collection of ethical tradeoffs consistent with Singer's premise. Then World B has a higher population than World A (some number of lives have been saved/prolonged) but a lower average quality of life (some quality of life items have been traded for those lives.) Repeat this procedure until every possible ethical tradeoff consistent with Singer's premise has been made and you end up in World Z. (Note, of course, that this is dependent on the premise that Singerian ethical tradeoffs can basically always be made, but this premise should be fairly uncontroversial -- there's always something someone can do to help someone else.)
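To see the march from A to Z mechanically, here is a toy simulation of the iteration. Every number in it is invented; the only structural assumptions are that each round converts some quality-of-life surplus into extra lives at a rate that leaves total welfare no worse off, and that the process stops at the subsistence floor:

```python
# Toy model of the World A -> World B -> ... -> World Z iteration.
# Invented numbers: each round trades away 10% of the quality-of-life
# surplus above subsistence for enough saved lives that total welfare
# (population x quality) ticks up 1%.
SUBSISTENCE = 1.0                  # the "barely worth living" floor
population, quality = 4e9, 100.0   # World A: a few billion good lives

rounds = 0
while quality > SUBSISTENCE + 0.01:        # a tradeoff is still possible
    new_quality = SUBSISTENCE + 0.9 * (quality - SUBSISTENCE)
    population *= (quality / new_quality) * 1.01  # lives bought with QoL
    quality = new_quality
    rounds += 1

print(f"World Z after {rounds} rounds: {population / 1e9:,.0f} billion "
      f"people at quality {quality:.2f}, versus 4 billion at 100.00")
```

Every step along the way is an improvement by the total-welfare accounting, yet the endpoint is hundreds of billions of people living just above the floor.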
Now we can state Parfit's premise: World Z is bad. Much like Singer's premise, most people are willing to grant this, though for some leftists World Z actually represents a utopia, so this may be slightly more controversial ab initio than Singer's premise.
So from Singer's premise it follows that World Z is good, which is inconsistent with Parfit's premise. This is the paradox. As usual in philosophy, there are only so many ways to attack an argument, so here are the three questions I propose for discussion:
1) Is Singer's premise true or false? Why?
2) Is Parfit's premise true or false? Why?
3) If you answered "true" to both 1 and 2, then you must also have found a mistake in the preceding argument. What is it?
Discuss!
This is an interesting ethical debate. I believe both of the premises are half-true. Yes, when you can save a life directly by being there in the moment, it should be saved, but you are also making life worse for everyone by increasing population growth. Therefore, I take no stance in this argument and think the choice to save a life should be made on a situation-by-situation basis.
This should be an interesting thread to watch though.
Preface: I probably am about to say a few rather monstrous things. This does not reflect my actions in life, nor my stances on ethics apart from how I see this ethic playing out. Secondly, I foolishly endeavored to type this on my phone - I have big fingers, a lousy autocorrect, and no red squiggly for misspelled words: I believe I caught most mistakes, but please forgive the remainder.
Both premises are obviously false, mainly because both derive from the idea of a life-money fungibility (which I hope is a word). I cannot indefinitely forgo money for the sake of malaria, especially since some of the things at one point considered luxuries were required to have a malaria cure/treatment to begin with - a.k.a. technological progress. Which gets us to the idea that it would be subsistence living a la Parfit (I want to make a joke about parfaits at this point, but it is too multilayered a topic). In a subsistence world, it is clear that there are not many billions of people; in fact, there are many fewer than today. How do we know? We can look at history - in a subsistence world, people die younger.
This appears, of course, to be an optimization problem at that point. Surely, Mr. Gates could give more to charity. That in turn would save more people, we just need to stop at the inflection point of the equation.
But unfortunately, we now need to figure out what is luxury. Let's use something obviously luxurious - commercial space flight. This is something which does nothing to benefit humanity, and literally billions are spent on it. But, perhaps, this leads to a capacity for interstellar travel. At that point, it could ostensibly save many more lives over the course of hundreds of years. The idea of a car was once a luxury, but now being able to deliver a heart from Chicago to Milwaukee in an hour means many people live. Assuredly, dyed clothing was once a luxury, and it now keeps many hunters alive (and remember, in a subsistence world, hunting will be essential). How can we, before the fact, know what will happen? The man who lets his coat get ruined may drown due to its weight (although we will never know why he didn't simply shed it in either situation - and to be clear, this parenthetical is a pedantic joke, not a point).
To be clear, I am not really proposing a situational ethic, simply stating that a fungibility of money and lives is inherently flawed without omniscience of future events; it is the inherent flaw of any utilitarian ethic. In fact, we can easily see how repulsed we are by this when we look at the fact that insurance companies factor in education level and job when doing the actuarial work behind life insurance or death in a car crash. Even more so if they were simply to give that money to malaria research (after all, if lives = money then lives = lives, and they can do a whole lot better than paying a widow by saving hundreds, even if the widow dies due to not receiving recompense).
As a side note, reaching the end of this, I realize another major problem is simply this being a combination of the worst parts of deontology with the worst parts of utilitarianism.
Both premises are obviously false, mainly because both derive from the idea of a life-money fungibility (which I hope is a word). I cannot indefinitely forgo money for the sake of malaria
Here I must point out that Singer's premise does not insist that you indefinitely forgo money for malaria cures -- only when such money is being otherwise allocated to luxury or quality of life improvements. When such money becomes essential to someone's survival, you may stop making the tradeoff. (Though I still think you raise a good point about the details of the tradeoffs.)
especially since some of the things at one point considered luxuries were required to have a malaria cure/treatment to begin with - a.k.a. technological progress.
For the record, I agree with you that Singer's premise is false, but I want to play devil's advocate here: it's not clear that the development of a malaria cure is contingent on luxury spending beyond some very minimum level, at which level the paradox will still be present.
Which gets us to the idea that it would be subsistence living a la Parfit (I want to make a joke about parfaits at this point, but it is too multilayered a topic). In a subsistence world, it is clear that there are not many billions of people; in fact, there are many fewer than today. How do we know? We can look at history - in a subsistence world, people die younger.
How sure can we be that a future subsistence world will look like the past state of subsistence? For instance, there is no reason we can't keep what technology we already have. Moreover, we could extend "subsistence" to include the development of those and only those technologies that would enhance our ability to subsist. Maybe "subsistence" is no longer the correct word for this. Let's call it "subsistence +1". I claim the paradox remains even under "subsistence +1".
This appears, of course, to be an optimization problem at that point. Surely, Mr. Gates could give more to charity. That in turn would save more people, we just need to stop at the inflection point of the equation.
I don't understand what the second term of the equation is supposed to be, such that we could even begin to evaluate where the inflection point is or if there is one. (Well, to be fair, I don't understand either term, but I really don't understand the second.) If there's one term for life and one term for quality-of-life, what does the term for quality-of-life look like?
But unfortunately, we now need to figure out what is luxury. Let's use something obviously luxurious - commercial space flight. This is something which does nothing to benefit humanity, and literally billions are spent on it. But, perhaps, this leads to a capacity for interstellar travel. At that point, it could ostensibly save many more lives over the course of hundreds of years. The idea of a car was once a luxury, but now being able to deliver a heart from Chicago to Milwaukee in an hour means many people live. Assuredly, dyed clothing was once a luxury, and it now keeps many hunters alive (and remember, in a subsistence world, hunting will be essential). How can we, before the fact, know what will happen? The man who lets his coat get ruined may drown due to its weight (although we will never know why he didn't simply shed it in either situation - and to be clear, this parenthetical is a pedantic joke, not a point).
I totally agree that we can't predict what the far-future consequences of our actions will be with any accuracy, and you're right to point it out. But note that this argument can be made by anyone, including someone on the opposing position, and is in essence a reductio of all of ethics if permitted. The cheap version is something I'm sure you've heard before: "What if the kid grows up to be the next Hitler?" The expensive version is that any action you take could lead, through some causal chain you can't foresee, to something monstrous, so you are forbidden to take any action whatever.
The general upshot of all these arguments is this: We can't use 20-20 hindsight to evaluate ethical decisions. We have to go with the best information we can obtain at the time of making the decision, and at the time of making a decision re: Singer's premise, we must go with our current knowledge on what constitutes an extravagance, even if that extravagance would later lead to a greater utilitarian benefit, provided we were unable to predict that benefit in advance. Ethics can't be plagued by hindsight bias.
(P.S.: not playing devil's advocate here, I believe keeping hindsight out of ethics is an important principle which this argument of yours violates.)
To be clear, I am not really proposing a situational ethic, simply stating that a fungibility of money and lives is inherently flawed without omniscience of future events; it is the inherent flaw of any utilitarian ethic. In fact, we can easily see how repulsed we are by this when we look at the fact that insurance companies factor in education level and job when doing the actuarial work behind life insurance or death in a car crash. Even more so if they were simply to give that money to malaria research (after all, if lives = money then lives = lives, and they can do a whole lot better than paying a widow by saving hundreds, even if the widow dies due to not receiving recompense).
Utilitarians would deny that this is a flaw. Inability to solve a mathematical model due to lack of information or mathematical ability does not make that model bad. Insurance companies approximating that model in the way that you suggest is a thing that seems to be "working." People buy insurance voluntarily and feel at least a subjective benefit from doing so.
As a side note, reaching the end of this, I realize another major problem is simply this being a combination of the worst parts of deontology with the worst parts of utilitarianism.
I'm not sure I can agree. Certainly I simplified the presentation greatly in the name of brevity and glossed over a lot of details, but I believe most of those details can be fixed upon examination. I think both halves of this dichotomy are capturing the "correct" essence of those two ethical philosophies.
P.S. You said both premises are obviously false. (Hopefully I've convinced you to drop "obviously.") You said a lot about Singer's premise, but you haven't said why Parfit's premise ("World Z is bad") is false. Why do you think that?
I agree with Tim_T (at least on some level; it's not entirely clear): morality is situational, and universal rules are mostly unnecessary and overcomplicating when we try to make them all work practically. The only universal moral rule for me is to promote the wellbeing of life; everything else is contextually determined.
I don't get Parfit's premise: why is World Z bad? I think World Z is very poorly defined. People may consider it bad because of the loaded language used ("barely at subsistence level", "drastically better lives"), but we don't know what this subsistence level looks like. If we include room for technological maintenance and development, for buffers to cope with catastrophes, and for some other societal elements that may be considered required for a base quality of life, the notion of bare subsistence becomes much less controversial.
Unfortunately, ethics appears to be inextricably enmeshed with vaguely defined terms, and you definitely can't do ethics without loaded language. After all, "loaded" in other contexts just means something like "carries a potentially unjustified ethical connotation" -- which in this context is precisely the intent! (Though obviously, in this case, one hopes the connotation can be justified at least somewhat.)
That being said, I think you're right to try to get some details about World Z, and I think that we can derive some interesting properties of World Z from its definition, vague though it may be.
World Z is the world that results when no further ethical tradeoffs that are consistent with Singer's premise can be made. In other words, nobody in World Z is ethically permitted to take any action that he cannot himself predict, in advance, will inure to the subsistence (or subsistence +1, etc) of the society.
So, let's put ourselves into that mindset and consider some actions which are ethically impermissible: leisure reading (in fact, we could burn the books to create energy to heat more homes to shelter more people...), listening to music (making sounds requires energy which could be used to grow more food...), sleeping in (that extra 5 minutes of work could be used to...), in fact any hour not spent working other than the minimal quantity of sleep (extra time could be used to...), et cetera.
(There is a thought device in communities that like to discuss the future of AI known as "gray goo" -- a connected mesh of nanomachines programmed with only the directive to make more of itself, and possessed of the intelligence necessary to carry out that directive. The gray goo cannot do anything "fun" even if it is intelligent enough to do so, because that energy could instead be used to make more goo. Eventually such a system turns all mass-energy in the universe into boring gray goo. World Z is gray goo made out of humans instead of nanomachines.)
So this is what World Z is like -- now use your moral intuition. Or apply a Rawlsian heuristic: would you want to live in this world?
We also can't know whether lives in either world are better, because we have not even considered a metric for how to value a life.
Here's a simple one. It's only a binary metric, but it puts a fine enough point upon the issue: I open a portal to World Z right now; you can walk through or not. Do you?
At the same time, the thought experiment's simplicity also hurts its implications for political discourse. How would such a society be enforced, for instance (how great is a new Holodomor?)
Any consideration of this question seems to push one toward agreement with Parfit's premise. (Well, unless one thinks a Holodomor is good, I suppose.)
Indeed, the very fact that your first (and correct) instinct is that the conditions of World Z need to be enforced on people, rather than the majority going along voluntarily, suggests immediately that World Z is bad!
How would free-riders be treated?
Is this relevant? Suppose they would be killed off; does that fix the paradox? Suppose they would be given subsistence; does that fix the paradox?
How would the extra considerations for subsistence+1 be enforced?
See previous answer about enforcement.
You have more or less stated why Parfit's premise is false - we don't get there; we can't. We stop, as you stated, at subsistence+1, but really we don't even truly get there. My belief (not an argument, simply a statement which colors parts of what I am about to say) is that the things developed for luxury are the things that will eventually be used for doing good.
My historical description is based heavily in this, and it is why I am calling out utilitarianism (and really always have, not here but personally) - an ethical system unable to be followed is by nature flawed; heck, that is the *only* argument against most ethical systems. Rephrased: a system meant to dictate the correct course of action that fails to lead me to a correct course of action is useless. It does not matter if we start with language or paper or the printing press or typewriters or computers or whatever comes next - what is essential now was luxury once. What is luxury now is perhaps essential tomorrow. I guess what I am saying is that since hindsight informs us that we are certainly wrong about what will save people long term, we would be deluded to think a course of action chosen now is right if these are our metrics.
On my reference to an inflection point, I was more simply imagining the shape of a logistic function. Although perhaps a type of bell function is better. The main point is that as charitable action increases (x axis), population is affected (y axis). So at some point on the curve population would peak, and that would be the amount of charitable action we should do. Any more is not truly charitable, as we are measuring the word charitable in some way by number of lives saved.
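Roughly this shape, if I sketch it -- the curve and all its numbers are invented, and the point is only that "the amount of charity we should do" is the peak:

```python
import math

def population_effect(charity: float) -> float:
    """Hypothetical bell-shaped response: lives saved rise with charitable
    action, then fall once giving starts to erode the economy that funds
    the giving. Shape and numbers are invented for illustration."""
    return math.exp(-((charity - 40.0) ** 2) / (2 * 15.0 ** 2))

# Grid search for the peak: the level of giving past which more
# "charity" stops being charitable by the lives-saved metric.
levels = [x / 10 for x in range(0, 1001)]
peak = max(levels, key=population_effect)
print(f"give at level {peak:.1f}; past that, stop")  # ~40.0 by construction
```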
There is a large part of this that is merely a spitball; I didn't type either reply with a real goal in mind, and we are watching my thoughts as they work their way out, as I haven't actively argued a point with so mathematical a utilitarian viewpoint before - usually people start with Mill instead of his offshoots (though Singer is likely more a descendant of Bentham and his Felicific Calculus). I say this by way of apology for the ill-defined structure of my posts; I do tend to meander.
An ethicist by the name of Peter Singer (almost certainly not the first to come up with this notion, but he puts a fine point on it) suggests the following ethical thought experiment: imagine a man has just left a boutique store having purchased an expensive coat for, say, $200. As he walks home wearing his new coat, he encounters a boy who can't swim drowning in a pond. No one else is capable of intervening to save the child. What would we think of this man upon finding out that he elects to let the child drown because he doesn't want to ruin his new coat?
Singer points out, quite correctly I think, that most people would call that monstrous. So the first half of the paradox is what we might call Singer's premise: in a moral tradeoff between human life on one hand and some luxury or quality-of-life improvement on the other, human life should win.
Are we assuming the person has the ability to save the child? Can he swim? Two people drowning instead of one does not make him more ethical.
The second half of the paradox can be seen by taking Singer's premise and running with it. Singer himself took the first step; he points out that one does not have to wait until one happens upon a drowning boy while wearing nice clothes in order to realize this moral tradeoff. Opportunities to save human life abound. Probably the best example is malaria remedies: those who study the efficacy of such things estimate that every $10 spent on malaria prevention and treatment measures saves one human life. So those of you who accept Singer's premise are all (almost certainly) already monsters by your own lights: every movie you've been to see has come at the cost of a life, your dinner bill might be worth five or six lives if you eat at nice places, et cetera.
So only if you give all your money to charity and become as poor as the starving, malaria-ridden children of Africa are you a morally good person?
The argument goes as follows. Start in World A, with a certain population and a good average quality of life. Construct a new world, World B, from World A by having some set of people in World A make a collection of ethical tradeoffs consistent with Singer's premise. Then World B has a higher population than World A (some number of lives have been saved/prolonged) but a lower average quality of life (some quality of life items have been traded for those lives.) Repeat this procedure until every possible ethical tradeoff consistent with Singer's premise has been made and you end up in World Z. (Note, of course, that this is dependent on the premise that Singerian ethical tradeoffs can basically always be made, but this premise should be fairly uncontroversial -- there's always something someone can do to help someone else.)
Or you could just get a lot of drowning victims.
I think the initial thought experiment is flawed from an economics/rational choice theory point of view. We are discussing which we should prefer: a 'coat' or a 'random kid's life'. However, microeconomics has already concluded that comparing individual goods is an utterly futile exercise (see cardinal utility vs. ordinal utility). We have good reasons to believe we make choices between sets of things and not the things themselves.
Then the dilemma consists of (1,0) vs. (0,1), the reference being ('coat', 'random kid's life'). Our first conclusion is that (0,1) > (1,0) [where >, < and ~ stand for ethical preference relationships, the preference relation of an ethically ideal human being]. Singer is under the impression this first conclusion implies that (a,b) > (c,d) for all b > d, but that's not logically true. You need additional hypotheses on the properties of > to make this deduction, hypotheses that need some justification.
In other words, saying that saving the first kid is the right choice in no way implies that it's the right choice if you later face similar circumstances. There's no contradiction in saving the kid one day and spending 10 bucks on movies later that night.
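To make the logical gap concrete, here is a minimal counterexample: a deliberately artificial utility over (coat, lives) bundles that agrees with Singer on the original case but refuses the generalization. Whether such a ranking is ethically defensible is beside the point; it only shows the inference needs an extra premise about >.

```python
# Bundles are (coats, lives). A contrived "rescue-duty" utility: the
# first life saved carries great weight, further lives very little.
# Purely a logical counterexample, not a proposed ethic.
def u(coats: int, lives: int) -> float:
    first_life = 10.0 * min(lives, 1)
    extra_lives = 0.1 * max(lives - 1, 0)
    return first_life + extra_lives + 1.0 * coats

assert u(0, 1) > u(1, 0)  # agrees with Singer: save the kid, lose the coat
assert u(2, 1) > u(0, 2)  # ...yet here a bundle with MORE lives loses
print("(0,1) > (1,0) holds, but (a,b) > (c,d) for all b > d does not")
```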
I think Singer's premise is true while Parfit's is misleading.
It depends on who you ask. The question "would you want to live in World Z?" is misleading. No one would choose to live in World Z over World A, but that presumes they are one of the lucky ones with the choice. The better question to ask is: would you rather die (presumably a horrible death from starvation or a preventable disease) or live in World Z? Most people would choose World Z. But we can't ask them; they are dead.
The second half of the paradox can be seen by taking Singer's premise and running with it. Singer himself took the first step; he points out that one does not have to wait until one happens upon a drowning boy while wearing nice clothes in order to realize this moral tradeoff. Opportunities to save human life abound. Probably the best example is malaria remedies: those who study the efficacy of such things estimate that every $10 spent on malaria prevention and treatment measures saves one human life. So those of you who accept Singer's premise are all (almost certainly) already monsters by your own lights: every movie you've been to see has come at the cost of a life, your dinner bill might be worth five or six lives if you eat at nice places, et cetera.
I'm probably wading into the deep end, but I would surmise this premise is flawed. Money is a utility. It is what provides the means to save some lives, in some situations, if the opportunity to do so exists. If you do not have the opportunity to effectively use the money, it's wasted, in any endeavor. In some places, people do not starve due to lack of resources, but rather due to corruption or other nefarious acts. In some cases, providing money or resources to keep a person from starving can start wars.
I'm probably wading into the deep end, but I would surmise this premise is flawed.
Which premise? Singer's premise that "in a moral tradeoff between human life on one hand and some luxury or quality-of-life improvement on the other, human life should win"? Or Crashing00's that "every $10 spent on malaria prevention and treatment measures saves one human life"?
Both. Money, alone or accompanying other factors, does not guarantee a life will be saved. The guy could drown saving the boy (wasting the money and costing another life), and the $10 could be stolen to buy guns that take many more lives.
I'll elaborate more; maybe someone can tell me where I'm mistaken.
I think about the boy and the rich person, and the choice that was presented. It's not as simple a decision as the story portrays. First, it presents two choices with two potential outcomes. In my mind, weighing right from wrong involves an incomprehensible number of variables, and it's just plain lazy to limit those choices or outcomes to an overly simplistic situation. The ethical result of your choice would ultimately depend on the sheer number of known and unknown variables, to the point that you would have no clue whether you made the ethically correct decision. You could make an argument that, if you knew all the variables and their outcomes, it may ethically be better to let the kid die.
You can only make decisions based upon the information you have; whether it's ethically correct is impossible to figure out, except in the most general of terms, and those terms are largely dictated by the social norms with which you were raised and will end up being incorrect/correct half the time due to ignorance of variables, outcomes, and learning. You'd consider the variables you can and make your decision (including using your value system). I do not think the decision is a question of ethics, but rather a question of risk/reward (or what's more likely to be right according to what's important to the individual), where some of the time you are going to be wrong and some of the time you will be right, once it's all said and done.
I do not think the decision is a question of ethics, but rather a question of risk/reward
Singer's contention, following that of other utilitarians like Jeremy Bentham and John Stuart Mill, is that this is precisely what ethics is. "The morally right action to perform is the one which, given the information available to you, is most likely to result in a net favorable outcome" is a coherent ethical claim. The question here is: is it the correct claim?
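As a schematic, the claim reduces to expected-value maximization over the information at hand. A sketch with made-up weights -- note that the "value of a life" number is doing exactly the work this thread is arguing about:

```python
# Schematic utilitarian decision rule: pick the action whose
# probability-weighted value, given current information, is highest.
# All weights are invented; they are moral weights, not prices.
LIFE, COAT = 1_000_000, 200

actions = {
    # 90% chance the rescue works; the coat is ruined either way.
    "wade in": [(0.90, LIFE - COAT), (0.10, -COAT)],
    # Baseline: coat intact, child drowns.
    "walk on": [(1.00, 0)],
}

def expected_value(outcomes):
    return sum(p * v for p, v in outcomes)

best = max(actions, key=lambda name: expected_value(actions[name]))
print(best)  # "wade in", under these weights
```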
I do not think the decision is a question of ethics, but rather a question of risk/reward
Singer's contention, following that of other utilitarians like Jeremy Bentham and John Stuart Mill, is that this is precisely what ethics is. "The morally right action to perform is the one which, given the information available to you, is most likely to result in a net favorable outcome" is a coherent ethical claim. The question here is: is it the correct claim?
I think it's entirely dependent on one's values.
I'm sure you are aware of this:
Moral relativism is the theory that moral standards vary from society to society, and from time to time in history. Under this theory, ethical principles are not universal and are instead social products. This theory argues that there is no objective moral order or absolute truth.
This is the part where I guess I have to bow out, as after moral relativism, it gets into an entire meta where you have to compute so much, it becomes impossible for me to keep track and I'm not sure how others are able to do it and speak with any degree of certainty. My guess is, they try to do it with numbers and numbers are abstract to me with no context, consequently limiting my understanding. I'm going to go back to the shallow end. I appreciate the response.