You know how we've taught computers to play chess at a very high level (defeating grandmasters and such), would it be possible to teach a computer to play MtG? Further, would it be possible to get computers to "solve" a meta-game? Like, given a particular format / meta-game the program should be able to construct a deck that can win tournaments. Alternatively, it should also be able to play limited formats.
You're comparing a game with very few "moves" available at any given time - all visible on a static board - to one where everything is random and the possible "moves" are nearly infinite.
It's theoretically possible, should the rules ever stop getting modified and new "moves" (cards) stop getting added, but it would take a TON of time - many computers working a very long time at it.
He's right. MtG is far too complex to program a computer for. Chess gets less complicated as you remove pieces from the board. It's the same reason nobody has gotten a computer to play Go at much more than a moderate amateur level.
Can we? In the theoretical sense, yes. Can we teach them to play at a high level? Not easily.
Magic IS a fairly easy game for computers. There are defined zones and attributes, with rules that essentially function as programming logic (When and If-Then statements). The difficulty would be in the intercomplexity.
But even that can be overcome - while there can be many different board states at a given time, it wouldn't be hard to program a computer to maximize their attacks and spell usage. The problem would come from the computer anticipating the opponent. In that way, Magic is more like poker than Chess - and while you can program a computer to play Poker well, you can't program it to call a bluff.
Another big issue would probably be to actually implement strategies more difficult than "kill it dead". Yes, it is doable, by making a list of things that are good to do, and others which would be bad, and adjusting the 'danger level' of different creatures and stuff. But specific strategies would have to be more or less spelled out in extreme detail. And that's not even going into responding to the opponent's moves and strategy. You probably could, by looking at the opponent's board, comparing it with decks in the current metagame and whatnot, but it'd still be overly complicated.
In chess, it's quite easy to program: look at all possible moves, do that for X turns, and see what the optimal move is.
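That look-ahead idea can be sketched as a minimax search. This toy version works over an explicit game tree (leaves are numeric scores, inner nodes are lists of children) rather than a real chess engine, which would generate moves and cut off at a depth limit with a heuristic - but the shape of the algorithm is the same.

```python
def minimax(node, maximizing):
    """Score a game-tree node: leaves are numeric scores, inner nodes
    are lists of child subtrees. The maximizing player picks the best
    child; the minimizing player picks the worst."""
    if isinstance(node, (int, float)):
        return node                      # reached the horizon
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)
```

For example, `minimax([[3, 5], [2, 9]], True)` returns 3: the minimizing opponent would hold each branch to its worst leaf (3 and 2), and the maximizer picks the better of those.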
Actually, I think you can program a computer to play a game of MtG well, pairing up one deck vs. another deck and programming it to specifically play that matchup.
I think you can run simulations between two decks and exhaust general combinations of interaction, especially since most games don't go much beyond 20 turns, and many will be far shorter.
But perfect play is not the issue in MtG, since perfect play (vs. just very good play) might net you 5% more wins. Even perfect MtG play with the "best deck in the format" only nets you so high a win rate in some formats.
-
So the real task is programming a computer that can design decks, analyze card lists in metagames, and create decks that do best against a field, AND anticipate how the field might change.
-
A computer can only be programmed to do a given task by a person who understands how to break down the process of doing that task. I don't think there are any players around (let alone players who program well) who can quantify, in a generalizable way, exactly how they construct decks, pick decks, study metas, and play games.
But you can probably eventually program a computer that can be as good as any player in the game at MTGO at all of the above...
You create classes to categorize deck types first: how they win, and how they interact with other decks. You have the computers run exhaustive numbers of simulations, and actually see what play wins in a given situation vs. what loses, in types of matchups. Decks that interact. Decks that don't interact much. Disruption, direct and indirect. But nothing beats running simulations, and the simulations usually do not have to go that deep.
During each step, there is a finite number of plays available to you.
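The simulation idea above reduces to a short loop once you assume a rules engine exists. In this sketch, `play_game` is a placeholder for that engine plus two deck-playing policies - which is, of course, the genuinely hard part being glossed over.

```python
import random

def estimate_win_rate(play_game, n_games=10000, seed=42):
    """Estimate deck A's win rate against deck B by simulation.
    `play_game(rng)` is a hypothetical stand-in for a full rules
    engine plus two deck-piloting policies; it returns True when
    deck A wins a single simulated game."""
    rng = random.Random(seed)
    wins = sum(1 for _ in range(n_games) if play_game(rng))
    return wins / n_games
```

With enough games, the estimate converges on the matchup's true win rate, which is why the posts above lean so heavily on "just run simulations."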
-
Of course card sets change. Metas change, and quickly. So new mechanics can potentially alter the nature of the simulations, but overall, I do feel that a computer program could match the performance of the best human Magic player, across the board, except for the "card playing" part, where you READ your opponent's tells, intimidate, etc.
You know how we've taught computers to play chess at a very high level (defeating grandmasters and such), would it be possible to teach a computer to play MtG? Further, would it be possible to get computers to "solve" a meta-game? Like, given a particular format / meta-game the program should be able to construct a deck that can win tournaments. Alternatively, it should also be able to play limited formats.
Do you think this is a real possibility?
Well, Duels of the Planeswalkers has an AI opponent, so computers can already play Magic to an extent. How good is that AI? That part is difficult to tell since the environment is so constrained, and you can't build tier 1 decks to play against other tier 1 decks.
In that way, Magic is more like poker than Chess - and while you can program a computer to play Poker well, you can't program it to call a bluff.
Just for the record, this is not true. The best poker AIs can read bluffs and run their own bluffs.
Quote from dcartist »
A computer can only be programmed to do a given task by a person who understands how to break down the process of doing that task.
You are talking about a so-called "rule-based" AI. It is true that a rule-based AI is in general no better than its programmer, but it is not the only way to develop an AI. (An interesting article here elucidates some different ways of developing an AI in the context of poker.)
Chess is a game of perfect information, Magic is not. Even the best computer isn't going to be better than a good human at beating human opponents in long games with imperfect information (Hearts, Poker, etc.).
The Polaris poker AI won a series of recorded matches with human professionals with a record of two wins, one loss, and one draw in 2008.
Just for the record, this is not true. The best poker AIs can read bluffs and run their own bluffs.
Computer AIs don't "read bluffs". There is no known facial recognition software that detects "tells" or "lying". The AI can take into account the possibility of a bluff in a situation based on previous history, but poker AIs cannot READ a bluff, because that involves seeing the other player and reading general body language, which is beyond the ability of any publicly announced computer system.
You are talking about a so-called "rule-based" AI. It is true that a rule-based AI is in general no better than its programmer, but it is not the only way to develop an AI. (An interesting article here elucidates some different ways of developing an AI in the context of poker.)
I am not talking about rule-based AI, nor did I say that the AI cannot be better than its programmer. I merely pointed out that the programmer needs to know what the problem is that he's trying to solve, and currently the problem to be solved when creating a Magic AI is not entirely clear. There are constraints that simplify the problem, obviously, based on the limited size of the total card pool, and a general idea that the minimum legal number of cards in a deck (or close to it) tends to be optimal, but nobody even knows where to begin to design a computer that has the creativity to DESIGN decks, or analyze a metagame.
I do think that the brute-force approach can be used to create an AI that plays optimally for one known deck against another known deck, including optimizing mulligans, etc.
The Polaris poker AI won a series of recorded matches with human professionals with a record of two wins, one loss, and one draw in 2008.
I agree computers can deal with imperfect information as long as you can figure out a way to "weigh" the imperfect information. And of course, one on one, poker AIs can challenge individual high-level pros. But I think that it would be hard for AIs to beat the best pros at a world series, for example. The best players have a huge advantage against the weaker players in early rounds, whom they can exploit beyond typical value betting to build bigger bankrolls, while a computer AI cannot, because AIs cannot read a face or a weak player's tells, goad them into bigger bets, or read when they're on tilt. In the early rounds of WSOPs, there are plenty of scrubs to brutalize, and the AI will leave money on the table against them, compared to a pro who can really read people.
The size of the pot you bring along with you really increases your chance of an ultimate win.
Just for the record, this is not true. The best poker AIs can read bluffs and run their own bluffs.
That isn't quite what I meant, though. Poker bluffs are easy - there is a finite number of cards in every deck, and a good program can tell you the probabilities from this. Hands can be ranked quantitatively, so that one hand is always greater than, less than, or equal to another.
Magic hands are much more qualitative and context-sensitive. An AI would need complex programming on the interrelationships of the cards in its own deck, the actual cards in the opponent's deck, the relationships of the cards in the opponent's deck, the number of cards in the opponent's deck, and the number of each [Card Name] in each deck. It's a lot more to work from - it isn't so much 'is this hand better?' as 'is this worth playing now or later, and do they have an answer?'
Shandalar has an AI. And that was made more than ten years ago.
There's tons of other card games with AIs that are often just as complex as mtg. Look at the Yugioh video games.
Hmm. No, I don't. Do tell.
Deep Blue didn't know how to play chess and it wasn't taught. Deep Blue searched lookup tables. It did it really fast. It did it super, super well. Hot damn, can Deep Blue search those lookup tables.
Here's something odd. Set up a board position that doesn't usually occur in a game of chess, and Deep Blue can't play it. Have you ever met a person who, having been taught chess, literally spasms with cognitive paralysis at the sight of a not-quite-chess board? Did that person really learn chess?
Regarding the link of Thomas Bakker: it's odd that this has been linked to prove a point about what computers can play, when he is only discussing what a program can do. Artificial Intelligence is not limited to the 'classical' programming techniques he presumes in his discussion - even before he gets around to distinguishing three kinds of approaches. He is only talking about data transformation in one paradigm of machine algorithms; the very principle of working within some system run by a compiler or interpreter with sequence, branching, and looping is something called Good Old-Fashioned AI, and is not the only thing a computer can do - although hangers-on to GOFAI would say that if GOFAI can't do it, that's the end of what anyone ever meant by a computer doing it. (These people are confused when it comes to characterizing thought. Strictly speaking, what I described is imperative programming, but non-imperative programming all comes down to that in the computer's actual hardware; the whole idea of specifying and enacting, of Turing machines, is GOFAI.)
If beast89 is committed to some view that only humans can ever play MtG, then I'd want to challenge him. But if his question, given his experience, only meant something about whether -programs- can play MtG or not, then the preceding goes away, and I proceed to argue that Thomas Bakker offers no systematic elimination of possible approaches; he doesn't give me a reason to believe him in his "And why should I believe anything you say?" section. However, it becomes more appreciable that perhaps programs can't play MtG. Because programs, in my opinion, can never be equivalent to thought.
That isn't what machines can't do, though.
Here's something odd. Set up a board position that doesn't usually occur in a game of chess, and Deep Blue can't play it. Have you ever met a person who, having been taught chess, literally spasms with cognitive paralysis at the sight of a not-quite-chess board? Did that person really learn chess?
Are you sure you don't have your facts backwards? Because my recollection was that Deep Blue was better at impossible board positions than human masters. And I know for a fact that - you know that trick chessmasters can do where they quickly and accurately reproduce a board position from memory? They can't do that with impossible boards. Whether the board position is possible or impossible matters for human players' psychology. But I don't see how it would matter for Deep Blue, which as I understand it was the pinnacle of the brute-force approach.
Yeah, because of how memory works, they aren't loading "file12354123.sav" - they're actually replaying a good chunk of a game, just very fast, because they know the flow of the game.
Don't newer chess computers work less like Deep Blue and more like that computer from War Games, playing the game over and over and figuring out what wins that way? I was reading a pop-psych book and it mentioned one that played backgammon: at first it was terrible, making near-random moves, but after it played against some grandmasters (or whatever backgammon has) and was then set to play against itself, it learned to be really good.
Hmm. No, I don't. Do tell.
Deep Blue didn't know how to play chess and it wasn't taught. Deep Blue searched lookup tables. It did it really fast. It did it super, super well. Hot damn, can Deep Blue search those lookup tables.
Here's something odd. Set up a board position that doesn't usually occur in a game of chess, and Deep Blue can't play it. Have you ever met a person who, having been taught chess, literally spasms with cognitive paralysis at the sight of a not-quite-chess board? Did that person really learn chess?
I don't think this is true. Deep Blue used lookups to resolve certain end positions and openers, but it would be impossible to create a look-up table for the entire game. Rather, Deep Blue tried each possible move, and then each possible response, and so forth, until it reached a certain depth, and then evaluated the resulting positions by heuristics. I don't see any reason why it couldn't handle a board position which is unreachable but playable.
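The "evaluate by heuristics" step can be illustrated with the simplest such heuristic, a raw material count. Note that it scores any position you hand it, reachable or not - which is why an unreachable-but-playable board poses no special problem for this kind of engine. (The values are the textbook ones; a real evaluator adds many positional terms.)

```python
# Textbook material values; kings score 0 since both sides always have one.
PIECE_VALUES = {"p": 1, "n": 3, "b": 3, "r": 5, "q": 9, "k": 0}

def material_score(pieces):
    """pieces: iterable of (piece_letter, is_white) tuples for every
    piece on the board. Positive scores favor White."""
    score = 0
    for letter, is_white in pieces:
        value = PIECE_VALUES[letter.lower()]
        score += value if is_white else -value
    return score
```

A search like Deep Blue's applies a function of this general kind (plus many refinements) at every leaf of its move tree.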
Yeah, because of how memory works, they aren't loading "file12354123.sav" - they're actually replaying a good chunk of a game, just very fast, because they know the flow of the game.
Don't newer chess computers work less like Deep Blue and more like that computer from War Games, playing the game over and over and figuring out what wins that way? I was reading a pop-psych book and it mentioned one that played backgammon: at first it was terrible, making near-random moves, but after it played against some grandmasters (or whatever backgammon has) and was then set to play against itself, it learned to be really good.
Deep Blue's learning was actually similar to this - it was fed a large number of grandmaster games, and used them to derive a heuristic function for evaluating positions.
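Deriving an evaluation function from example games can be caricatured as fitting weights so that a linear scorer matches known position values. This toy uses plain stochastic gradient descent on synthetic data; Deep Blue's actual tuning process was far more involved, so treat every detail here as illustrative only.

```python
def tune_weights(feature_rows, target_scores, lr=0.1, epochs=200):
    """Fit weights w so that dot(w, features) approximates the target
    score of each example position, via stochastic gradient descent.
    feature_rows: list of feature vectors; target_scores: desired scores."""
    n = len(feature_rows[0])
    w = [0.0] * n
    for _ in range(epochs):
        for x, y in zip(feature_rows, target_scores):
            err = sum(wi * xi for wi, xi in zip(w, x)) - y
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
    return w
```

Given consistent training data, the weights converge on values that reproduce the target scores - the crude analogue of "deriving a heuristic from grandmaster games."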
Are you sure you don't have your facts backwards? Because my recollection was that Deep Blue was better at impossible board positions than human masters. And I know for a fact that - you know that trick chessmasters can do where they quickly and accurately reproduce a board position from memory? They can't do that with impossible boards. Whether the board position is possible or impossible matters for human players' psychology. But I don't see how it would matter for Deep Blue, which as I understand it was the pinnacle of the brute-force approach.
Brute-force approach? Is brute-force really thinking?
I don't think this is true. Deep Blue used lookups to resolve certain end positions and openers, but it would be impossible to create a look-up table for the entire game. Rather, Deep Blue tried each possible move, and then each possible response, and so forth, until it reached a certain depth, and then evaluated the resulting positions by heuristics. I don't see any reason why it couldn't handle a board position which is unreachable but playable.
Even still, is that playing chess? Is it doing anything smart? Human experts don't deliberate over everything. That's how they achieve smart plans: they already intuit what to think about.
But I've gone too far: Perhaps a computer can play the game, but not like an expert. Maybe only like an amateur. That would still be something. Well, for space, I'll leave arguing that aside if I can.
I argue this point about whether the machine "plays chess" because, if Deep Blue or anything else 'learned', 'thought', or 'knew' how to play something, then machines already can play anything. One is already doing the same thing - and it's not like they're made of unique, unreproducible components. They don't have complex histories like neurons; they're just transistors.
The answer to beast89 is a yes, but a hollow yes; it's what should be an unsatisfying 'yes' if it is won that way. And if it's so unsatisfying, it must be a mistake that it seems to be an answer, unless someone can prove the question utterly meant nothing.
It doesn't mean nothing because no one can actually say why Pat Chapin knows how to play Magic. But he sorta can. So asking if things can do that is something of a mystery.
---
I'm interested in hearing the answer to if a computer can "solve" a metagame. This falls squarely within computational tractability, but it's not quite exponentially hard, if I don't miss my math, to make a deck given the card pool's size. It's just polynomial and huge. But the meta is some function again of the possible decks.
If P = NP, it might be possible to verify that a given metagame is at its height - assuming that this matchup matrixing problem isn't even harder than NP. But there's good reason to think P is not NP.
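For a sense of the deck-space size being discussed: ignoring basic lands and other deckbuilding rules, the number of distinct 60-card decks from a pool of n distinct cards (up to 4 copies each) is the coefficient of x^60 in (1 + x + x^2 + x^3 + x^4)^n. A small generating-function counter, purely illustrative:

```python
def deck_count(pool_size, deck_size=60, max_copies=4):
    """Count decks of `deck_size` cards from `pool_size` distinct cards,
    each allowed up to `max_copies` times: the coefficient of
    x**deck_size in (1 + x + ... + x**max_copies) ** pool_size."""
    coeffs = [1]                          # the polynomial "1" to start
    for _ in range(pool_size):
        new = [0] * (deck_size + 1)       # truncate above deck_size
        for degree, c in enumerate(coeffs):
            for copies in range(max_copies + 1):
                if degree + copies <= deck_size:
                    new[degree + copies] += c
        coeffs = new
    return coeffs[deck_size] if len(coeffs) > deck_size else 0
```

Even for modest pool sizes the count is astronomical, which is the real obstacle to any brute-force "solve the metagame" scheme, regardless of whether the growth is technically polynomial in the pool size.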
I'm sure if you went and looked at Deep Blue's heuristic function for evaluating game states, you wouldn't be able to explain to me "why" it was good at chess.
Computer AIs don't "read bluffs". There is no known facial recognition software that detects "tells" or "lying". The AI can take into account the possibility of a bluff for a situation based on previous history, but poker AIs cannot READ a bluff, because that involves seeing the other player and general body language reading, which is beyond the ability of any publicly announced computer system.
You are assuming that the only way of reading a bluff is by examining bodily tells, which is of course not the case. Amongst professional players, who have trained themselves to control their tells, abnormal betting patterns are more useful in detecting bluffs than bodily cues.
It took me all of ten seconds to find that, by the way. I wish people would stop saying that things that have been done can't be done.
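The betting-pattern approach can be caricatured in a few lines: track how often each opponent's big bets turn out, at showdown, to have been made with a weak hand. Everything here - the events observed, the smoothing, the class itself - is an invented simplification of what real poker AIs do, not a description of any actual system.

```python
from collections import defaultdict

class BettingModel:
    """Toy opponent model: estimate P(bluff | big bet) per player
    from observed showdowns, with no facial cues involved."""
    def __init__(self):
        self.big_bets = defaultdict(int)
        self.bluffs = defaultdict(int)    # big bets revealed as weak hands

    def observe(self, player, big_bet, showed_weak_hand):
        if big_bet:
            self.big_bets[player] += 1
            if showed_weak_hand:
                self.bluffs[player] += 1

    def bluff_rate(self, player):
        # Laplace smoothing: unseen opponents start at 50/50.
        return (self.bluffs[player] + 1) / (self.big_bets[player] + 2)
```

A model like this "reads" a bluff purely from betting history, which is the point being made above: bodily tells are not the only signal.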
Quote from dcartist »
But I think that it would be hard for AIs to beat the best pros at a world series, for example. The best players have a huge advantage against the weaker players in early rounds, whom they can exploit beyond typical value betting to build bigger bankrolls, while a computer AI cannot, because AIs cannot read a face or a weak player's tells, goad them into bigger bets, or read when they're on tilt. In the early rounds of WSOPs, there are plenty of scrubs to brutalize, and the AI will leave money on the table against them, compared to a pro who can really read people.
You are right that inexperienced players do have bodily tells that the software would miss (at least until they install the aforementioned facial analysis software). However, facial cues are not the only way of reading a weak player. The article I linked even says that all of today's best poker AIs are what the author calls "exploitative AIs" -- in other words, they actually do most of the stuff you are saying they don't do, including "milking" cash-cow noobs.
Would Polaris win a WSOP right now? I agree with you, probably not. Finish in the money, though? I'd bet on it. I guarantee you that there would be countless individuals and companies willing to put up a $10k buy-in for an AI if the rules permitted it.
But this is really just quibbling. The point is that poker AI isn't some fever dream of a mad scientist. Humans aren't uniquely capable of playing poker where a machine can't.
Quote from Jay13x »
Magic hands are much more qualitative and context-sensitive. An AI would need complex programming on the interrelationships of the cards in its own deck, the actual cards in the opponent's deck, the relationships of the cards in the opponent's deck, the number of cards in the opponent's deck, and the number of each [Card Name] in each deck. It's a lot more to work from - it isn't so much 'is this hand better?' as 'is this worth playing now or later, and do they have an answer?'
Magic is way more complicated than poker, no doubt about it. I'm only saying if there is an answer to "Why can't a computer play Magic well?" it is not going to be "Because the concept of bluffing is a uniquely human element that can't be computerized" -- because clearly that is not the case.
Quote from Horseshoe_Hermit »
Regarding the link of Thomas Bakker: it's odd that this has been linked to prove a point about what computers can play, when he is only discussing what a program can do. Artificial Intelligence is not limited to the 'classical' programming techniques he presumes in his discussion - even before he gets around to distinguishing three kinds of approaches. He is only talking about data transformation in one paradigm of machine algorithms; the very principle of working within some system run by a compiler or interpreter with sequence, branching, and looping is something called Good Old-Fashioned AI, and is not the only thing a computer can do - although hangers-on to GOFAI would say that if GOFAI can't do it, that's the end of what anyone ever meant by a computer doing it.
You're right, of course. Thomas Bakker is an online poker pro, not by any means an AI expert. The point of linking that article was to point out that even the work of non-experts is far more advanced than many posters here are giving credit for, and also because it's a good layman's-terms overview. I never would hold it up as an exhaustive enumeration of approaches to AI or poker.
Quote from Horseshoe_Hermit »
Even still, is that playing chess? Is it doing anything smart? Human experts don't deliberate over everything. That's how they achieve smart plans: they already intuit what to think about.
But I've gone too far: Perhaps a computer can play the game, but not like an expert. Maybe only like an amateur. That would still be something. Well, for space, I'll leave arguing that aside if I can.
If "playing chess better than an expert" means anything coherent, then consistently beating experts at chess ought to be a confirmatory instance. Computers do that, so they "play chess better than experts."
It seems like you want to simply define playing as something only humans can do, effectively legislating your unsupported intuitions into the language. But that is just engendering confusion and begging the question. Under those auspices, statements like "Deep Blue couldn't play chess and Deep Blue beat Garry Kasparov at chess." would be true. Down that road lies madness.
Crashing00 has got it. You are a programmer, right?
If they can make Yu-Gi-Oh games whose very simple AI plays well against humans, then MtG is no different.
Shandalar has an AI. And that was made more than ten years ago.
There's tons of other card games with AIs that are often just as complex as mtg. Look at the Yugioh video games.
Have you tried playing against the Shandalar AI? It's terrible.
I don't think all the chess analogies are really helping, because chess and magic aren't really very similar games.
The biggest barrier to developing a good Mtg AI, in my opinion, is the extraordinary difficulty involved in judging how much a card is worth at any stage of the game.
Let us suppose we are playing Magic 2011 (yes, 2011) limited. I have drafted a u/w flyer deck, and my opponent is playing some kind of b/r thing. I have a Stormfront Pegasus and my opponent attacks me with a Goblin Piker. He has 3 mana available, I have none, and we're both at 20 life. Under these circumstances, I would almost certainly not block, because my flyer is worth more than his piker. This kind of interaction isn't too much of a difficulty for the computer, since it involves the kind of comparison that is easy to program.
Now, let's change the scenario. Same cards, but this time I'm at 7 life, and he's at 20. He has 5 mana available, I have none, and he's holding 3 cards. This time, I would have to block, because I know that Lava Axe exists and if I take the damage, I may very well die. This is a little more difficult to program - you'd have to make the computer know what every possible card in the set is, and be able to guess that there is a possibility of being lava axed.
Let's again change the scenario. Same as the last scenario, but he's at 2 life instead of 20. Do I block this time? Way harder to judge now - this would all depend on personal playstyle. Personally, I would rather take the risk of not blocking and losing to a hypothetical lava axe, with the possibility of winning on my next attack step. How do you program that?
This changes again depending on what game it is. Suppose it's game two, and I lost game one due to getting lava axed twice from 10 life. Now I will almost certainly block, since I know my opponent has at least two lava axes in his deck. This is again something that we'd have to weight in the AI.
Do you see the problem here? In chess, the value of a piece is relatively static - it is hard to argue that a pawn is worth more than a rook. In Magic, the value of a card changes constantly based on an uncountable number of different variables. In addition, we can't just measure the current value of a card - we also have to consider the future worth of it. Turn 2, do I trade my Llanowar Elf for my opponent's Elite Vanguard? There aren't a lot of creatures that are as strong as a vanguard on turn 2. But the elf goes up in value a lot when I untap, because it lets me play a three-drop a turn early. How do you quantify that?
In addition, you cannot evaluate cards in a vacuum- you HAVE to consider the environment. Let's return to the original scenario - pegasus, goblin piker, turn 2. However, we make one minor change - we change sets. Now we're playing in Magic 2012 land.
This time, blocking no longer seems like such a terrible idea. If I know my opponent is B/R, then there is a non-trivial possibility that taking any damage will result in me needing to face down a pumped up Blood Ogre. Can this be quantified?
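As a rough illustration, one slice of these scenarios can be forced into an expected-risk rule. Every number and threshold here is an invented placeholder (the 5 damage matches Lava Axe, per the scenarios above), and the function ignores almost everything a real evaluation would need - which is exactly the point being argued.

```python
def should_block(my_life, attacker_power, burn_copies_unseen,
                 cards_unseen, burn_damage=5):
    """Crude block/no-block rule for the scenarios above: block when
    the attack alone is lethal, or when surviving it would leave us
    inside burn range while the opponent plausibly holds the burn spell.
    The 0.05 plausibility threshold is an arbitrary placeholder."""
    life_after = my_life - attacker_power
    if life_after <= 0:
        return True                       # taking the hit loses outright
    p_burn = burn_copies_unseen / cards_unseen if cards_unseen else 0.0
    return p_burn >= 0.05 and life_after <= burn_damage
```

Even this toy already needs the life totals, the set's burn suite, and a count of unseen cards as inputs - and it still captures none of the playstyle, metagame, or game-two knowledge the scenarios describe.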
Now, keep in mind that we are still only discussing the very most basic type of MtG interaction - every card so far mentioned is a core set common (and one uncommon). I didn't even mention instant-speed combat tricks and bluffing (mostly because I don't really believe bluffing is useful tactically except at the highest levels of play). How much more complicated and difficult to compute do these interactions get when you tie in all the thousands and thousands of cards that have already been printed?
So no, I don't think a good MtG AI is possible, at least without some huge increase in processing power and data capacity.
To post a comment, please login or register a new account.
Do you think this is a real possibility?
BRG Loam Control (Assault - Loam) BRG
W Mono White Control (Martyr - Proc) W
It's theoretically possible should the rules ever stop getting modified and new "moves" (cards) stop getting added with a TON of time however but it would take many computers working a very long time at that.
Re: People misusing the term Vanilla to describe a flying, unleash (sometimes trample) critter.
Magic IS a fairly easy game for computers. There are defined zones and attributes, with rules that essentially function as programming logic (When and If-Then statements). The difficulty would be in the intercomplexity.
But even that can be overcome - while there can be many different board states at a given time, it wouldn't be hard to program a computer to maximize their attacks and spell usage. The problem would come from the computer anticipating the opponent. In that way, Magic is more like poker than Chess - and while you can program a computer to play Poker well, you can't program it to call a bluff.
TerribleBad at Magic since 1998.A Vorthos Guide to Magic Story | Twitter | Tumblr
[Primer] Krenko | Azor | Kess | Zacama | Kumena | Sram | The Ur-Dragon | Edgar Markov | Daretti | Marath
In chess, it's quite easy to program; look at all possible moves, doe that for X turns and see what the optimal move is.
Avatar by Numotflame96 of Maelstrom Graphics
Sig banner thanks to DarkNightCavalier of Heroes of the Plane Studios!
I think you can run simulations between two decks and exhaust general combinations of interaction, especially since most games don't go much beyond 20 turns, and many will be far shorter.
But perfect play is not the issue in MtG, since perfect play (vs just very good play) might net you 5% more wins. Perfect MtG play with the "best deck in the format" in some formats, might net you a % win rate.
-
So the real task is programming a computer that can design decks, analyze card lists in metagames, and create decks that do best against a field, AND anticipate how the field might change.
-
A computer can only be programmed to do a given task by a person who understands how to break down the process of doing that task. I don't think there's any players around (let alone players who program well) who can quantify (in a generalizable way) exactly how they construct decks, pick decks, study metas, play games.
But you can probably eventually program a computer that can be as good as any player in the game at MTGO at all of the above...
You create classes to categorize deck types, first, and how they win, how they interact with other decks. You have the computers run exhaustive numbers of simulations, and actually see what play wins in a given situation, vs what loses in types of matchups. Decks that interact. Decks that don't interact much. Disruption, direct and indirect. But nothing beats running simulations, and the simulations usually do not have to go that deep.
During each step, there is a finite number of plays available to you.
-
Of course card sets change. Metas change, and quickly. So new mechanics can potentially alter the nature of the simulations, but overall I do feel that a computer program could match the performance of the best human Magic player across the board, except for the "card playing" part, where you READ your opponent's tells, intimidate them, etc.
Well, Duels of the Planeswalkers has an AI opponent, so computers can already play Magic to an extent. How good is that AI? That part is difficult to tell since the environment is so constrained, and you can't build tier 1 decks to play against other tier 1 decks.
Just for the record, this is not true. The best poker AIs can read bluffs and run their own bluffs.
You are talking about a so-called "rule-based" AI. It is true that a rule-based AI is in general no better than its programmer, but it is not the only way to develop an AI. (An interesting article here elucidates some different ways of developing an AI in the context of poker.)
The Polaris poker AI won a series of recorded matches with human professionals with a record of two wins, one loss, and one draw in 2008.
I am not talking about rules based AI, nor did I say that the AI cannot be better than its programmer. I merely pointed out that the programmer needs to know what the problem is that he's trying to solve, and currently the problem to be solved when creating a Magic AI, is not entirely clear. There are constraints that simplify the problems, obviously, based on the limited size of the total card pool, and a general idea that the minimum legal number of cards in a deck (or close to it) tends to be optimal, but nobody even knows where to begin to design a computer that has the creativity to DESIGN decks, or analyze a metagame.
I do think that the brute-force approach can be used to create an AI that plays optimally for one known deck against another known deck, including optimizing mulligans, etc.
I agree computers can deal with imperfect information as long as you can figure out a way to "weigh" the imperfect information. And of course, one on one, poker AIs can challenge individual high-level pros. But I think it would be hard for AIs to beat the best pros at a World Series, for example. The best players have a huge advantage against the weaker players in early rounds, whom they can exploit beyond typical value betting to build bigger bankrolls, while a computer AI cannot, because AIs cannot read a face or a weak player's tells, goad them into bigger bets, or read when they're on tilt. In the early rounds of WSOPs there are plenty of scrubs to brutalize, and the AI will leave money on the table against them, compared to a pro who can really read people.
The size of the pot you bring along with you really increases your chance of an ultimate win.
That isn't quite what I meant, though. Poker bluffs are easy to model: there is a discrete number of cards in every deck, and a good program can compute probabilities from this. Hands can be ranked quantitatively, so that one hand will always be greater than, less than, or equal to another.
Magic hands are much more qualitative and context-sensitive. An AI would need complex programming covering the interrelationships of the cards in its own deck, the actual cards in the opponent's deck and their relationships, and the number of each [Card Name] in each deck. It's a lot more to work from - it isn't so much "is this hand better?" as "is this worth playing now, or later, and do they have an answer?"
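The "total order" property of poker hands can be made concrete, sketched here at category level only (kickers and ties within a category omitted for brevity):

```python
# Every poker hand maps to a single number, so "greater / less / equal"
# is always well defined.
HAND_RANKS = {
    "high card": 0, "pair": 1, "two pair": 2, "three of a kind": 3,
    "straight": 4, "flush": 5, "full house": 6, "four of a kind": 7,
    "straight flush": 8,
}

def compare_hands(a, b):
    """Return 1 if a beats b, -1 if b beats a, 0 on a (category) tie."""
    return (HAND_RANKS[a] > HAND_RANKS[b]) - (HAND_RANKS[a] < HAND_RANKS[b])
```

No such single number exists for a Magic hand, which is exactly the point above: its value depends on the board, the matchup, and the turn, not on the cards alone.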
There's tons of other card games with AIs that are often just as complex as mtg. Look at the Yugioh video games.
Back up.
Further...
Hmm. No, I don't. Do tell.
Deep Blue didn't know how to play chess and it wasn't taught. Deep Blue searched lookup tables. It did it really fast. It did it super, super well. Hot damn, can Deep Blue search those lookup tables.
Here's something odd: make a board that doesn't usually occur in a game of chess, and Deep Blue can't play it. Have you ever met any person whom you have taught chess who literally spasms with cognitive paralysis at the sight of a not-quite-chess board? Did that person really learn chess?
Regarding the link of Thomas Bakker: it's odd that this has been linked to prove a point about what computers can play, when he is only discussing what a program can do. Artificial intelligence is not limited to the "classical" programming techniques he presumes in his discussion, even before he gets around to distinguishing three kinds of approaches. He is only talking about data transformation in one paradigm of machine algorithms; the very principle of working within some system run by a compiler or interpreter, with sequence, branching, and looping, is called Good Old-Fashioned AI (GOFAI), and is not the only thing a computer can do - although hangers-on to GOFAI would say that if GOFAI can't do it, that's the end of what anyone ever meant by a computer doing it. (These people are confused when it comes to characterizing thought.)
(Strictly, that describes imperative programming, but non-imperative programming all comes down to the same thing in the ultimate hardware of computers. All of those, though - the idea of specifying and enacting, of Turing machines - amount to GOFAI.)
If beast89 is committed to some view about only Humans being able to play MtG, ever, then I'd want to challenge him. But if his question, by his experience, only meant something about whether -programs- can play MtG or not, then the preceding goes away, and I proceed to argue about Thomas Bakker having no systematic elimination of possible approaches; he doesn't give me a reason to believe him in his "And why should I believe anything you say?" section. However it becomes more appreciable that perhaps programs can't play MtG. Because programs, in my opinion, can never be equivalent to thought.
That isn't what machines can't do, though.
Are you sure you don't have your facts backwards? Because my recollection was that Deep Blue was better at impossible board positions than human masters. And I know for a fact that - you know that trick chessmasters can do where they quickly and accurately reproduce a board position from memory? They can't do that with impossible boards. Whether the board position is possible or impossible matters for human players' psychology. But I don't see how it would matter for Deep Blue, which as I understand it was the pinnacle of the brute-force approach.
Don't newer chess computers work less like Deep Blue and more like the computer from WarGames, playing the game over and over again and figuring out what wins that way? I was reading a pop-psych book that mentioned a program that played backgammon: at first it was terrible, making near-random moves, but after it played against some grandmasters (or whatever backgammon has) and was then set to play against itself, it learned how to be really good.
I don't think this is true. Deep Blue used lookups to resolve certain end positions and openers, but it would be impossible to create a lookup table for the entire game. Rather, Deep Blue tried each possible move, and then each possible response, and so forth, until it reached a certain depth, and then evaluated the resulting positions by heuristics. I don't see any reason why it couldn't handle a board position which is unreachable but playable.
Deep Blue's learning was actually similar to this - it was fed a large number of grandmaster games, and used them to derive a heuristic function for evaluating positions.
Brute-force approach? Is brute-force really thinking?
Even still, is that playing chess? Is it doing anything smart? Human experts don't deliberate about everything; that's how they achieve smart plans. They already intuit what to think about.
But I've gone too far: Perhaps a computer can play the game, but not like an expert. Maybe only like an amateur. That would still be something. Well, for space, I'll leave arguing that aside if I can.
I argue this point about whether the machine "plays Chess" because, if Deep Blue or anything else, 'learned', 'thought' or 'knew' how to play something.. then machines already can play anything. One is already doing the same thing - and it's not like they're made of unique, unreproducible components. They don't have complex histories like neurons; they're just transistors.
The answer to beast89 is a yes, but a hollow yes; it's what should be an unsatisfying 'yes' if it is won that way. And if it's so unsatisfying, it must be a mistake that it seems to be an answer, unless someone can prove the question utterly meant nothing.
It doesn't mean nothing because no one can actually say why Pat Chapin knows how to play Magic. But he sorta can. So asking if things can do that is something of a mystery.
---
I'm interested in hearing the answer to whether a computer can "solve" a metagame. This falls squarely within questions of computational tractability, but if I don't miss my math, constructing a deck from a card pool of a given size is not quite exponentially hard; it's just polynomial (in the pool size, for a fixed deck size) and huge. But the meta is some function again of the possible decks.
If P = NP, it might be possible to verify that a given metagame is at its height - assuming that this matchup matrixing problem isn't even harder than NP. But there's good reason to think P is not NP.
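The "polynomial but huge" point can be illustrated with a rough count (my numbers, not the poster's): even ignoring multiple copies of a card, choosing 60 distinct cards from a pool of n is C(n, 60), which grows like n**60 / 60! - polynomial in n for a fixed deck size, yet astronomically large.

```python
import math

def distinct_deck_count(pool_size, deck_size=60):
    """Number of ways to pick deck_size distinct cards from pool_size.

    This ignores the 4-copy rule and basic lands, so it's only a
    lower-bound-flavored illustration, not a real deck count.
    """
    return math.comb(pool_size, deck_size)
```

For a modest pool of 250 distinct cards this is already a number of roughly 60 digits, before the 4-of rule, sideboards, or the matchup matrix over all those decks even enter the picture.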
EDIT: Removed incorrectness.
You are assuming that the only way of reading a bluff is by examining bodily tells, which is of course not the case. Amongst professional players, who have trained themselves to control their tells, abnormal betting patterns are more useful in detecting bluffs than bodily cues.
Also, again for the record, it appears that software that analyzes facial cues for lie detection purposes is extant, publicly announced, and coming soon to an airport near you: http://news.cnet.com/8301-17938_105-20106077-1/lie-detecting-camera-tracks-facial-blood-flow/
It took me all of ten seconds to find that, by the way. I wish people would stop saying that things that have been done can't be done.
You are right that inexperienced players do have bodily tells that the software would miss (at least until they install the aforementioned facial analysis software). However, facial cues are not the only way of reading a weak player. The article I linked even says that all of today's best poker AIs are what the author calls "Exploitative AI's" -- in other words they actually do most of the stuff you are saying they don't do, including "milking" cash-cow noobs.
Would Polaris win a WSOP right now? I agree with you, probably not. Finish in the money, though? I'd bet on it. I guarantee you that there would be countless individuals and companies willing to put up a $10k buy-in for an AI if the rules permitted it.
But this is really just quibbling. The point is that poker AI isn't some fever dream of a mad scientist. Humans aren't uniquely capable of playing poker where a machine can't.
Magic is way more complicated than poker, no doubt about it. I'm only saying if there is an answer to "Why can't a computer play Magic well?" it is not going to be "Because the concept of bluffing is a uniquely human element that can't be computerized" -- because clearly that is not the case.
You're right, of course. Thomas Bakker is an online poker pro, not by any means an AI expert. The point of linking that article was to point out that even the work of non-experts is far more advanced than many posters here are giving credit for, and also because it's a good layman's-terms overview. I never would hold it up as an exhaustive enumeration of approaches to AI or poker.
If "playing chess better than an expert" means anything coherent, then consistently beating experts at chess ought to be a confirmatory instance. Computers do that, so they "play chess better than experts."
It seems like you want to simply define playing as something only humans can do, effectively legislating your unsupported intuitions into the language. But that is just engendering confusion and begging the question. Under those auspices, statements like "Deep Blue couldn't play chess and Deep Blue beat Garry Kasparov at chess." would be true. Down that road lies madness.
If they can make Yu-Gi-Oh games whose very simple AI plays well against humans, then MtG is no different.
Have you tried playing against the Shandalar AI? It's terrible.
I don't think all the chess analogies are really helping, because chess and magic aren't really very similar games.
The biggest barrier to developing a good Mtg AI, in my opinion, is the extraordinary difficulty involved in judging how much a card is worth at any stage of the game.
Let us suppose we are playing Magic 2011 (yes, 2011) limited. I have drafted a u/w flyer deck, and my opponent is playing some kind of b/r thing. I have a Stormfront Pegasus and my opponent attacks me with a Goblin Piker. He has 3 mana available, I have none, and we're both at 20 life. Under these circumstances, I would almost certainly not block, because my flyer is worth more than his piker. This kind of interaction isn't too much of a difficulty for the computer, since it involves the kind of comparison that is easy to program.
Now, let's change the scenario. Same cards, but this time I'm at 7 life, and he's at 20. He has 5 mana available, I have none, and he's holding 3 cards. This time, I would have to block, because I know that Lava Axe exists and if I take the damage, I may very well die. This is a little more difficult to program - you'd have to make the computer know what every possible card in the set is, and be able to guess that there is a possibility of being lava axed.
Let's again change the scenario. Same as the last scenario, but he's at 2 life instead of 20. Do I block this time? Way harder to judge now - this would all depend on personal playstyle. Personally, I would rather take the risk of not blocking and losing to a hypothetical lava axe, with the possibility of winning on my next attack step. How do you program that?
This changes again depending on what game it is. Suppose it's game two, and I lost game one due to getting lava axed twice from 10 life. Now I will almost certainly block, since I know my opponent has at least two lava axes in his deck. This is again something that we'd have to weight in the AI.
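The blocking scenarios above can be framed as expected value. All the probabilities here are made-up assumptions for illustration; estimating them (from the set, the matchup, and what you saw in game one) is the genuinely hard part.

```python
def win_prob_no_block(p_axe, p_win_if_survive):
    # At 7 life, assume taking the Piker hit and then eating Lava Axe
    # is fatal; otherwise we keep our better board and win with
    # probability p_win_if_survive.
    return (1 - p_axe) * p_win_if_survive

def win_prob_block(p_win_after_trade):
    # Blocking trades the Pegasus away but removes the burn-out line.
    return p_win_after_trade
```

Block exactly when your estimate of `win_prob_block` exceeds `win_prob_no_block`; after losing game one to two Lava Axes, your estimate of `p_axe` jumps, which is why the same board now calls for a block.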
Do you see the problem here? In chess, the value of a piece is relatively static - it is hard to argue that a pawn is worth more than a rook. In Magic, the value of a card changes constantly based on an uncountable number of different variables. In addition, we can't just measure the current value of a card - we also have to consider the future worth of it. Turn 2, do I trade my Llanowar Elf for my opponent's Elite Vanguard? There aren't a lot of creatures that are as strong as a vanguard on turn 2. But the elf goes up in value a lot when I untap, because it lets me play a three-drop a turn early. How do you quantify that?
In addition, you cannot evaluate cards in a vacuum- you HAVE to consider the environment. Let's return to the original scenario - pegasus, goblin piker, turn 2. However, we make one minor change - we change sets. Now we're playing in Magic 2012 land.
This time, blocking no longer seems like such a terrible idea. If I know my opponent is B/R, then there is a non-trivial possibility that taking any damage will result in me needing to face down a pumped up Blood Ogre. Can this be quantified?
Now, keep in mind that we are still only discussing the very most basic type of MtG interaction - every card so far mentioned is a core set common (and one uncommon). I didn't even mention instant-speed combat tricks and bluffing (mostly because I don't really believe bluffing is useful tactically except at the highest levels of play). How much more complicated and difficult to compute do these interactions get when you tie in all the thousands and thousands of cards that have already been printed?
So no, I don't think a good MtG AI is possible, at least without some huge increase in processing power and data capacity.