Hi, I'm the guy from the no-free-will thread. I'm just a newbie in these matters and I'm here to ask some questions, since you guys are the best source of info I have available. Off we go!
Under material determinism, decisions are the outcome of a material, determined brain process, not an arbitrary free will, correct? If that's the case, how would this problem be solved:
- Someone has to choose between A and B within a limited time frame. They prefer either A or B over nothing.
- The same brain process that makes A desirable also makes B desirable in precisely the same amount. Everything else is constant.
If determined forces make A and B 'draw', wouldn't the brain fall into an eternal dilemma?
Possible solution:
- The brain can trigger a random experiment to decide. It doesn't need to be truly random, just random within the brain's own domain (for example, 'I will pick left if it's night time and right if it's day time' - the outcome is not truly random, but it is random to the brain, since the brain doesn't control the time of the experiment).
It might sound weird to present both a problem and a solution, but bear with me! I want to know if there's a scientifically proven solution to this, or even if the problem is formulated in a valid manner.
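(For concreteness, a minimal Python sketch of the proposed tiebreaker - my own illustration, not anything from the thread; the 6:00-18:00 daytime window is an arbitrary assumption. The rule is fully deterministic, but its outcome is unpredictable to an agent that doesn't control or track the clock.)

from datetime import datetime

def tiebreak(option_a, option_b):
    """Deterministic rule keyed to an external fact the agent doesn't control:
    'pick left if it's night time and right if it's day time'."""
    hour = datetime.now().hour
    is_daytime = 6 <= hour < 18  # arbitrary assumption for 'day time'
    return option_b if is_daytime else option_a

print(tiebreak('left', 'right'))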
- Someone has to choose between A and B within a limited time frame. They prefer either A or B over nothing.
- The same brain process that makes A desirable also makes B desirable in precisely the same amount. Everything else is constant.
If determined forces make A and B 'draw', wouldn't the brain fall into an eternal dilemma?
Two-stage models might be interesting to you. Robert Kane, specifically, writes well on this problem.
- Someone has to choose between A and B within a limited time frame. They prefer either A or B over nothing.
- The same brain process that makes A desirable also makes B desirable in precisely the same amount. Everything else is constant.
The same way as in an indeterministic universe. A very poor intellect might be paralyzed by choice. Most actual consciousnesses will use a tiebreaker.
This tiebreaker probably has to be indeterministic from the standpoint of the consciousness - that is, it doesn't have to be genuinely indeterministic, it's sufficient that *I* can't predict the outcome right now.
If determined forces make A and B 'draw', wouldn't the brain fall into an eternal dilemma?
Possible solution:
- The brain can trigger a random experiment to decide. It doesn't need to be truly random, just random within the brain's own domain (for example, 'I will pick left if it's night time and right if it's day time' - the outcome is not truly random, but it is random to the brain, since the brain doesn't control the time of the experiment).
It might sound weird to present both a problem and a solution, but bear with me! I want to know if there's a scientifically proven solution to this, or even if the problem is formulated in a valid manner.
Genuine randomness simply doesn't matter here. Nobody would argue that most computers (as macro level constructs) are indeterministic in practice, and yet they don't suffer from this problem.
It's very easy to write a totally deterministic rule for a system along the following lines:
1. Weight each option according to desirability versus cost to achieve.
2. If the weights are different, choose the highest weight.
3. If the weights of the two (or more) most desirable options are identical, look at an external physical event complex enough to be beyond this system's power to compute the outcome (e.g. a coin toss, a roll of some dice), and tie the various potential results of that event to the options tied as most desirable. E.g. if the coin flip comes up heads, choose A, otherwise choose B.
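(A minimal Python sketch of that rule - my own illustration; random.choice stands in for the external coin toss or dice roll that is beyond the system's power to compute in advance.)

import random

def decide(options):
    """options: dict mapping option name -> (desirability, cost)."""
    # 1. Weight each option according to desirability versus cost to achieve.
    weights = {name: desirability - cost
               for name, (desirability, cost) in options.items()}
    best = max(weights.values())
    tied = [name for name, weight in weights.items() if weight == best]
    # 2. If the weights are different, choose the highest weight.
    if len(tied) == 1:
        return tied[0]
    # 3. Otherwise defer to an event the system can't compute in advance;
    #    random.choice stands in for the coin toss or dice roll.
    return random.choice(tied)

print(decide({'A': (10, 3), 'B': (9, 2)}))  # both weigh 7 -> external tiebreak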
Well, I think we have real world examples of something close.
The Blind Taste Test.
Let's say that you prefer Coke, and you would always choose Coke if given the choice over other drinks.
Choosing Coke is not random; you choose it because of many factors: taste, price, loyalty, habit, polar bear commercials, etc.
But let's say that you are participating in a blindfolded taste test and are asked to choose which drink is best out of, say, 4 different drinks, and you aren't told which is which.
Some of the information you usually use to determine your choice of Coke isn't available. Whoops - you tasted all 4 and ended up choosing Shasta Cola.
The choice you made was "freer" because a bare minimum of causal determinants (taste alone) was present when making the choice.
Is this sort of what you meant? If not, then I'll just agree with everything Taylor pointed out.
If determined forces make A and B 'draw', wouldn't the brain fall into an eternal dilemma?
Why is this a problem? My computer sometimes falls into a loop and needs to be rebooted.
As you said, there is a time limit. The brain could just 'freeze' until the time expires. Lots of people 'freeze up' in some situations, why would this be different?
Interestingly enough, there was a ST: Voyager episode where an AI had this same dilemma.
- Someone has to choose between A and B within a limited time frame. They prefer either A or B over nothing.
- The same brain process that makes A desirable also makes B desirable in precisely the same amount. Everything else is constant.
If determined forces make A and B 'draw', wouldn't the brain fall into an eternal dilemma?
Two-stage models might be interesting to you. Robert Kane, specifically, writes well on this problem.
- Someone has to choose between A and B within a limited time frame. They prefer either A or B over nothing.
- The same brain process that makes A desirable also makes B desirable in precisely the same amount. Everything else is constant.
The same way as in an indeterministic universe. A very poor intellect might be paralyzed by choice. Most actual consciousnesses will use a tiebreaker.
This tiebreaker probably has to be indeterministic from the standpoint of the consciousness - that is, it doesn't have to be genuinely indeterministic, it's sufficient that *I* can't predict the outcome right now.
If determined forces make A and B 'draw', wouldn't the brain fall into an eternal dilemma?
Possible solution:
- The brain can trigger a random experiment to decide. It doesn't need to be truly random, just random within the brain's own domain (for example, 'I will pick left if it's night time and right if it's day time' - the outcome is not truly random, but it is random to the brain, since the brain doesn't control the time of the experiment).
It might sound weird to present both a problem and a solution, but bear with me! I want to know if there's a scientifically proven solution to this, or even if the problem is formulated in a valid manner.
Genuine randomness simply doesn't matter here. Nobody would argue that most computers (as macro level constructs) are indeterministic in practice, and yet they don't suffer from this problem.
It's very easy to write a totally deterministic rule for a system along the following lines:
1. Weight each option according to desirability versus cost to achieve.
2. If the weights are different, choose the highest weight.
3. If the weights of the two (or more) most desirable options are identical, look at an external physical event complex enough to be beyond this system's power to compute the outcome (e.g. a coin toss, a roll of some dice), and tie the various potential results of that event to the options tied as most desirable. E.g. if the coin flip comes up heads, choose A, otherwise choose B.
Thanks a lot for the reference. The way I see it, this is the solution I have proposed: genuine randomness doesn't matter, it only needs to be random from the brain's perspective.
I was only asking whether scientists have discovered that the brain actually engages in a random tiebreak in those situations.
If determined forces make A and B 'draw', wouldn't the brain fall into an eternal dilemma?
Why is this a problem? My computer sometimes falls into a loop and needs to be rebooted.
As you said, there is a time limit. The brain could just 'freeze' until the time expires. Lots of people 'freeze up' in some situations, why would this be different?
Is this sort of what you meant? If not, then I'll just agree with everything Taylor pointed out.
Sorry I deleted my post before reading this because I thought it was off topic.
It was essentially just a quote from Compatibilism, and didn't really answer the question.
Freezing up would be irrational. As I said, this person prefers both A and B over nothing. If they fall into a dilemma until the time ends, that would reveal a trace of economic irrationality in people.
I've done my share of reading in behavioral economics, and my opinion is that the studies that reveal irrational behavior are very poorly executed and never replicated (while studies verifying the validity of rationality conditions are widespread). There's a reason why the state of the art considers human beings perfectly rational decision makers when it comes to choice theory (economics). This result would seriously challenge the state of the art.
Until the exact nanosecond the time expires, freezing seems to be the only rational choice. Also, people have been known to be irrational, especially in time-limited situations.
But I think your answer - that the person would just do an "eeny-meeny-miny-moe" - is the correct one. If the choice is too hard to make, most people will just "flip a coin" and pick one.
IF "Choice too hard" THEN "Flip Coin"
Situations like this are exactly why eeny-meeny-miny-moe is in common use today.
If the choice is too hard, then flip a coin is correct. If you can't decide between the two, and it's not because you haven't thought it through - you aren't missing any crucial considerations by just being hasty - then the 'weight' of the two choices is virtually identical and wasting more time agonizing over that is just that - a waste of resources.
Our brains have evolved to be extremely robust over a wide range of practical scenarios - any brain that "got stuck" in such situations would be selected against. As to the actual mechanism that allows this - well, that's a complicated subject. The first thing that comes to mind is that they are more similar to analog computers than digital ones - they're not deterministic from the design/intentional stance.
I've done my share of reading in behavioral economics, and my opinion is that the studies that reveal irrational behavior are very poorly executed and never replicated
Not sure where you're getting that. Humans exhibit a large number of significant, and often obvious and costly cognitive biases and otherwise irrational behavior. Many of these biases seem to be heuristic shortcuts, originally useful but conceived in our pre-technological ancestral environment.
Our brains have evolved to be extremely robust over a wide range of practical scenarios - any brain that "got stuck" in such situations would be selected against. As to the actual mechanism that allows this - well, that's a complicated subject. The first thing that comes to mind is that they are more similar to analog computers than digital ones - they're not deterministic from the design/intentional stance.
I've done my share of reading in behavioral economics, and my opinion is that the studies that reveal irrational behavior are very poorly executed and never replicated
Not sure where you're getting that. Humans exhibit a large number of significant, and often obvious and costly cognitive biases and otherwise irrational behavior. Many of these biases seem to be heuristic shortcuts, originally useful but conceived in our pre-technological ancestral environment.
What we call rational is the observance of the completeness, transitivity, and binary independence axioms: http://en.wikipedia.org/wiki/Rational_choice_theory
The breakdowns appear only in bounded-rationality studies (such as those involving imperfect information). One can't be considered irrational if one was rational 'inside the boundaries'. Picking the wrong cookie because you have imperfect information about it is not irrationality; it's bounded rationality.
Someone would never 'freeze', fail to pick A or B, and end up with nothing if it is known to them that they would end up with nothing. Freezing until the time ends would be irrational and thus would never happen in this experiment (as it is described).
Freezing up would be irrational. As I said, this person prefers both A and B over nothing. If they fall into a dilemma until the time ends, that would reveal a trace of economic irrationality in people.
I've done my share of reading in behavioral economics, and my opinion is that the studies that reveal irrational behavior are very poorly executed and never replicated (while studies verifying the validity of rationality conditions are widespread). There's a reason why the state of the art considers human beings perfectly rational decision makers when it comes to choice theory (economics). This result would seriously challenge the state of the art.
"Rationality" implies a parent appeal for justification. In decision theory, you assume some value, goal, preference, or interest coming in. But that value itself isn't "rational" unless it is in turn justified by some parent value.
This creates an interesting problem that yields either existentialism or nihilism: there is no ultimate rational value. That's because "ultimate" means "has no parent" and "rational" means "has a parent."
In economics, actors are treated as "rational" insofar as it is assumed that they generally are good at making decisions that satisfy their values, whatever those values are. Aggregate those actors together, and the net aim ought to be toward the value aggregate. There is a broad consensus in terms of many human values (most humans feel compelled to fill their stomachs and protect children, for example), so most individuals are cool with systems that optimize the value aggregate.
The thing is, this is a generalization we make because there's no way to practically model reality, which we know consists of people who are bad at making decisions in service of their values (especially higher-order values), and the solution is a shotgun that can only satisfy most; outliers may have eccentric values, and will be dissatisfied by a system that advances the interests of the value aggregate.
"Economics depends on this assumption" doesn't mean "this assumption is impeccably true." In fact, we know it's an imperfect whitewash.
In any case, all decisions make appeals to, ultimately, appetitious desires over which we have little arbitrary control. People have suffered brain damage that robs them of their appetitious desires. The ancient philosophers would expect that these people would make purely rational decisions. Instead, they don't make decisions. Put one on a cereal aisle to pick a cereal, and they have no appetites by which to choose one over the other, and thus are frozen. Even "objective-seeming" things like calorie counts and nutrients don't motivate if you have no appetite to be healthy and alive, which is ultimately irrational (like every value or interest).
In A.I., we run into this. A decision function looks like this:
[returns a Decision] GetOptimum(value V)
We have this waking delusion that we don't have to pass a V to get a decision. Turns out we do. It's just that we don't usually notice that we're passing Vs all the time, nor do we have a perfect understanding of what Vs we're passing (though we can figure out some of the big ones).
One way you can circumvent this is by randomizing its Vs, and then having the A.I. mutate (including its Vs), propagate, and then have some sort of selector that kills or makes sterile. Evolution by natural selection happens over time, crafting a set of Vs. But to what end? Well, the V set is crafted toward the meta-V of whatever selector you were applying.
But this is disappointing, because, as the programmer, you arbitrated the meta-V of the selector.
If the selector happens to be a natural thing in the world, however, then it can seem like your A.I. has "real values" that aren't just contrived. Feels "magical," even if it's as stupid as "he likes grass because, over time, his ancestral mutants that acquired that preference were less likely to wander into the mountains and freeze."
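(To make that concrete, a toy Python sketch - entirely my own, with hypothetical names like get_optimum and make_v: a decision function that returns nothing until some V is passed in, plus a loop that crafts the Vs toward whatever meta-V the selector embodies.)

import random

def get_optimum(options, v):
    """No decision comes out unless some value function V is passed in."""
    return max(options, key=v)

def make_v(weights):
    """A 'V' here is just a weight vector scoring an option's features."""
    return lambda option: sum(w * f for w, f in zip(weights, option))

def evolve_values(selector, generations=100, pop_size=20, n_features=3):
    """Random Vs, mutation, and a selector (the meta-V) that culls each round."""
    population = [[random.uniform(-1, 1) for _ in range(n_features)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=selector, reverse=True)  # the meta-V picks survivors
        survivors = population[:pop_size // 2]
        population = survivors + [
            [w + random.gauss(0, 0.1) for w in random.choice(survivors)]
            for _ in range(pop_size - len(survivors))
        ]
    return population[0]  # a V set crafted toward the selector's meta-V

# Example: a selector that rewards caring about feature 0 - the evolved agent
# ends up 'liking' options strong in feature 0, with nothing magical involved.
v = make_v(evolve_values(selector=lambda weights: weights[0]))
print(get_optimum([(1, 0, 0), (0, 1, 0), (0, 0, 1)], v))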
Anyway, a lot of sci-fi that explores A.I. is bad because it assumes a robot would have self-interested motivations "magically," without appealing to a programmer's whims or a meta-V applying cut-throat selective pressure over time.
Freezing up would be irrational. As I said, this person prefers both A and B over nothing. If they fall into a dilemma until the time ends, that would reveal a trace of economic irrationality in people.
I've done my share of reading in behavioral economics, and my opinion is that the studies that reveal irrational behavior are very poorly executed and never replicated (while studies verifying the validity of rationality conditions are widespread). There's a reason why the state of the art considers human beings perfectly rational decision makers when it comes to choice theory (economics). This result would seriously challenge the state of the art.
"Rationality" implies a parent appeal for justification. In decision theory, you assume some value, goal, preference, or interest coming in. But that value itself isn't "rational" unless it is in turn justified by some parent value.
This creates an interesting problem that yields either existentialism or nihilism: there is no ultimate rational value. That's because "ultimate" means "has no parent" and "rational" means "has a parent."
No idea why you believe rationality needs a parent appeal for justification. That kind of discussion always seems completely semantic: what each of us likes to call rational.
Rationality, as defined by choice theory, exists in its bounded form (bounded rationality). This is the rationality I'm referring to, and it's useful for arguing against the 'eternal dilemma' outcome of the problem I proposed.
You can argue 'rationality' is not a proper name for the propensity of human beings to make internally consistent choices given the boundaries, but that is not in dispute here. You can call what choice theory defines as rationality whatever you want, and the whole argument stays the same.
In economics, actors are treated as "rational" insofar as it is assumed that they generally are good at making decisions that satisfy their values, whatever those values are. Aggregate those actors together, and the net aim ought to be toward the value aggregate. There is a broad consensus in terms of many human values (most humans feel compelled to fill their stomachs and protect children, for example), so most individuals are cool with systems that optimize the value aggregate.
The fundamental idea is that the only thing that keeps people from making perfectly good choices is ignorance, or a lack of perfect information. In experiments, revealed preferences are very consistent at any given information set, and when you expand the information set, people tend to make progressively better choices.
This leads us to believe rationality exists as long as you respect the boundaries (human information and the capacity to compute that information).
Also, aggregating preferences is an area of dispute in economics. Arrow's impossibility theorem sets up the rules under which choices can be aggregated. In reality, value aggregation likely doesn't exist outside of consensus cases (since Arrow's conditions for aggregating values are very hard to come by). I also don't understand why you're bringing in value aggregates when discussing rationality, since rationality is defined, and works, at the individual level. Is that due to your need for a 'rational goal'? (This is a legitimate question; I'm not being ironic.)
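(A toy Python illustration - mine - of why aggregate value is so hard to come by: the classic Condorcet cycle, a simpler cousin of Arrow's result rather than the theorem itself. Three voters with perfectly transitive individual preferences produce an intransitive majority aggregate.)

voters = [
    ['A', 'B', 'C'],  # voter 1: A > B > C (transitive)
    ['B', 'C', 'A'],  # voter 2: B > C > A (transitive)
    ['C', 'A', 'B'],  # voter 3: C > A > B (transitive)
]

def majority_prefers(x, y):
    """True if a majority of voters rank x above y."""
    wins = sum(1 for ranking in voters if ranking.index(x) < ranking.index(y))
    return wins > len(voters) / 2

# Each individual is rational, yet the aggregate preference cycles:
print(majority_prefers('A', 'B'))  # True
print(majority_prefers('B', 'C'))  # True
print(majority_prefers('C', 'A'))  # True -> A > B > C > A: no aggregate ranking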
The thing is, this is a generalization we make because there's no way to practically model reality, which we know consists of people who are bad at making decisions in service of their values (especially higher-order values), and the solution is a shotgun that can only satisfy most; outliers may have eccentric values, and will be dissatisfied by a system that advances the interests of the value aggregate.
It seems you're mixing up decision theory and welfare economics (which studies choices, and the outcomes of choices, in groups), as I don't see the relevance of 'eccentric values' and value aggregation to this discussion.
You can argue outliers exist in decision theory, but only if you don't respect the boundaries. For example, a mad person might make non-transitive choices, but this person is mad and is bounded by their lack of capacity to compute information.
"Economics depends on this assumption" doesn't mean "this assumption is impeccably true." In fact, we know it's an imperfect whitewash.
This is a classic misreading of the subject. The assumption (most of) economics makes, and that people (wrongly) criticize, is the lack of boundaries, not rationality itself. Most economists don't just assume the validity of rational choice; they also assume perfect information and a perfect capacity to compute that information.
Economists do this because they are interested in deriving a set of laws and relationships without delving into information and computation issues. But those laws function 'inside those boundaries'.
Some people (as I believe is your case) also criticize what economists and choice theorists define as 'rational'. And again, this is a purely semantic and moot point.
In any case, all decisions make appeals to, ultimately, appetitious desires over which we have little arbitrary control. People have suffered brain damage that robs them of their appetitious desires. The ancient philosophers would expect that these people would make purely rational decisions. Instead, they don't make decisions. Put one on a cereal aisle to pick a cereal, and they have no appetites by which to choose one over the other, and thus are frozen. Even "objective-seeming" things like calorie counts and nutrients don't motivate if you have no appetite to be healthy and alive, which is ultimately irrational (like every value or interest).
Again, you're arguing semantics here. A person who chooses not to eat is rational, and the only way to call that 'irrational' is if you define rationality in a different manner.
Does this person make transitive choices? Is their choice set complete? Do their choices satisfy IIA (independence of irrelevant alternatives)?
If the answer to all three questions is yes, the person is considered rational by the given definition of rationality.
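(For illustration, a small Python sketch - my own - of checking the first two conditions on a finite weak-preference relation. IIA is a property of choices across different menus, so it can't be read off a single relation like this.)

# prefs[(x, y)] = True means 'x is weakly preferred to y'.
options = ['A', 'B', 'C']
prefs = {('A', 'B'): True, ('B', 'A'): False,
         ('B', 'C'): True, ('C', 'B'): False,
         ('A', 'C'): True, ('C', 'A'): False}

def complete(options, prefs):
    """Completeness: every pair of distinct options is comparable."""
    return all(prefs.get((x, y)) or prefs.get((y, x))
               for x in options for y in options if x != y)

def transitive(options, prefs):
    """Transitivity: if x >= y and y >= z, then x >= z."""
    return all(not (prefs.get((x, y)) and prefs.get((y, z))) or prefs.get((x, z))
               for x in options for y in options for z in options)

print(complete(options, prefs), transitive(options, prefs))  # True True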
Again, you're arguing semantics here. A person who chooses not to eat is rational, and the only way to call that 'irrational' is if you define rationality in a different manner.
Does this person make transitive choices? Is their choice set complete? Do their choices satisfy IIA (independence of irrelevant alternatives)?
If the answer to all three questions is yes, the person is considered rational by the given definition of rationality.
He's not arguing that choosing not to eat is irrational. He's arguing that having no desire to be healthy and alive is not rational, and further, that having a desire to be healthy and alive is not rational either.
I dispute (and, unlike his argument, which is quite substantive, this is a semantic argument - but bear in mind that semantic arguments have a place in philosophy, even if the term 'semantic' in everyday discussion is taken to be synonymous with 'irrelevant') terming them 'irrational' - irrational implies that it could be rational, but isn't (as in an irrational fear - a fear that presents itself in such a way as to override your rational decision-making when in the presence of its object). Basic desires are simply non-rational - they're not the kind of thing that could be rational in the first place. They're non-rational by definition. It's the distinction between immoral and amoral.
Minor quibble, and maybe even just clarification of something we agree on, but there it is.
In economics, actors are treated as "rational" insofar as it is assumed that they generally are good at making decisions that satisfy their values, whatever those values are.
I don't think this is quite right. For instance, in the simple market paradigm, a rational actor is one that "buys low and sells high." An actor that, e.g., "buys high and sells low" is deemed irrational by the model, irrespective of whether his personal goal is to lose money, or utiles, or whatever commodity is being traded on the market. It is the presence of a great number of these irrational actors that breaks simple market theories.
Aggregate those actors together, and the net aim ought to be toward the value aggregate. There is a broad consensus in terms of many human values (most humans feel compelled to fill their stomachs and protect children, for example), so most individuals are cool with systems that optimize the value aggregate.
But it is incurious to treat this as arbitrary or to stop the chain of inquiry at this point. Why is there a broad consensus? Why is it possible to agglomerate these values that you are calling arbitrary and ungrounded and get something on the other end that at least appears to be neither?
Put one on a cereal aisle to pick a cereal, and they have no appetites by which to choose one over the other, and thus are frozen. Even "objective-seeming" things like calorie counts and nutrients don't motivate if you have no appetite to be healthy and alive, which is ultimately irrational (like every value or interest).
Right, but like the amoral "moral dungeon" you posted in the evolutionary morality thread, this is another case where you have chosen your example a little too carefully. A person who has been robbed of their appetitive desires can't make an appetitive decision, e.g. picking tasty cereal. But can they really not make a decision between life and death? Will they really allow themselves to starve? I don't know because I haven't seen any data on cases like these, but I would like to know whether this is speculative or buttressed by some evidence.
If the selector happens to be a natural thing in the world, however, then it can seem like your A.I. has "real values" that aren't just contrived. Feels "magical," even if it's as stupid as "he likes grass because, over time, his ancestral mutants that acquired that preference were less likely to wander into the mountains and freeze."
I don't understand how that isn't precisely the kind of thing that would count as a real value that isn't just contrived.
This is an interesting read. I've never studied or heard of Determinism, and I'm a little bit confused by a few things...
But from what I can gather, the OP is saying that Determinism is a theory that basically says, "There is no free will, or unpredictable behaviour in the human brain; everything can be predicted and determined..... in some fashion."
Am I correct so far?
So, after that, the OP is asking: if a person has to choose between A and B, how would you predict which they would choose?
http://forums.mtgsalvation.com/showthread.php?t=409478
I just thought of this example; can people let me know if it helps with the whole "determinism is incompatible with free will" question?
There are two people (Jon and Tim). Tim must pick between chocolate ice cream and vanilla.
Jon says to Tim: "I know you will pick chocolate."
Now, if Tim DOES pick chocolate, that means that Jon has perfectly predicted his actions, right? So, if you buy into Incompatibilism, it would mean Tim's actions were predetermined, and thus he didn't have free will for that choice.
Therefore, Tim MUST pick vanilla if he wants to have free will for this decision.
But, if he MUST do something, then how can the action be made from free will? If he is forced into picking vanilla by Jon's statement, it can't be Tim's 'choice.'
The only way out I can see would be to assume that Jon's knowledge of Tim's future choice must have no bearing on Tim's free will. It can't matter what Jon knows or does not know.
Right?
Not necessarily, at least not according to the standard philosophical notion of libertarian free will -- which holds that an agent is free just when he could have done otherwise given the same past.
Even if someone makes a prediction and the agent in question bears out that prediction, it does not militate against libertarian free will as long as the agent could just as well have gone against the prediction.
You can throw a wrench in the works by making the prediction infallible or somehow metaphysically connected to the act itself, but a prediction isn't infallible just because it comes out right; it's only infallible if it couldn't have been wrong.
If Jon says Tim picks chocolate but Tim picks vanilla, even in a determined universe, it means Jon didn't understand the system well enough to make a prediction.
If Jon says Tim will pick chocolate, and Tim picks chocolate, how are you determining (within the bounds of this hypothetical situation) that Jon didn't have perfect knowledge of the outcome of this event?
His prediction was 100% correct without error.
Not necessarily, at least not according to the standard philosophical notion of libertarian free will -- which holds that an agent is free just when he could have done otherwise given the same past.
Even if someone makes a prediction and the agent in question bears out that prediction, it does not militate against libertarian free will as long as the agent could just as well have gone against the prediction.
You can throw a wrench in the works by making the prediction infallible or somehow metaphysically connected to the act itself, but a prediction isn't infallible just because it comes out right; it's only infallible if it couldn't have been wrong.
After googling http://en.wikipedia.org/wiki/Libertarianism_(metaphysics) I think I see where they're coming from.
But my response to it would be very similar to my response to gumOnShoe. If Jon says Tim will pick chocolate, and Tim picks chocolate, how are you determining (within the bounds of this hypothetical situation) that Jon didn't have perfect knowledge of the outcome of this event? His prediction was 100% correct, without error.
It seems (to me, anyway) that you would have to presuppose determinism false to conclude that Jon's perfect prediction couldn't be a deterministic one (in the case where he was right).
Well.....
...I guess it's because we are assuming at the start he could be right or wrong? Alright, that makes sense to me then. I guess my example is a better fit for the 'omniscience vs free will' argument in that case.
If Jon says Tim will pick chocolate, and Tim picks chocolate, how are you determining (within the bounds of this hypothetical situation) that Jon didn't have perfect knowledge of the outcome of this event?
How are you determining that he did?
His prediction was 100% correct without error.
That may be so, but you haven't shown a connection between the prediction and the decision. I watch Magic on Twitch.tv and call the plays exactly as they're played out from time to time; are the two events connected in such a manner, am I abrogating a player's free will by calling his play?
That may be so, but you haven't shown a connection between the prediction and the decision. I watch Magic on Twitch.tv and call the plays exactly as they're played out from time to time; are the two events connected in such a manner, am I abrogating a player's free will by calling his play?
No, you're not. Your correct predictions based on past knowledge wouldn't have any bearing on their free will.
That's my point.