Fun flavor text. Nix the first line and it's a better card, possibly as a 2-3 CC card.
One thing I don't think you emphasized in the MTGgoldfish discussion was how nixing the public stats on deck win percentages doesn't get rid of them; it only hides them from more casual players. Pro players/teams and dedicated folks will still mine and have that data, giving them a clear advantage. It just takes that knowledge away from the more mid-level players who may be pushing towards being more competitive.
It's kinda like the difference between top-tier poker players, who have all the stats memorized, and those who have some of the percentages down but not 100%. It just widens the gap in win odds between players at those skill levels, which in turn turns people off from trying to push into the game competitively. It also reminds me of the recent controversy with daily fantasy drafting, where the top one percent of players win most of the money, and most of that one percent are data miners. Is it more fun when the data miners are the only winners, or when that data is public and you can choose to use it?
Sure, not all players care about being competitive or doing well even at the local tournament level, but I feel that increasing the knowledge gap between pros and non-pros by privatizing that kind of statistic is a big negative to consider.
I agree with all the discussion about "Design better" being a foolhardy notion in the sense of "make a harder to solve metagame." Maybe I'm too far removed from standard and am too much of a limited player, but I'm curious if the problem is really a problem at all. Why is the metagame being "solvable" a problem? Like you've said and like IcariiFA pointed out, the metagame being solved is inevitable. So Wizards and players should embrace that reality.
I think the problem instead is: when the metagame is solved, there is only one "correct" option. In limited, the format is inherently self-correcting - if one deck is stronger than other options, more players try to go into that deck, spreading the resources thin and leaving the other options stronger since their resources are easier to consolidate. For standard/constructed, I think the solution to strive for involves managing tools for counterplay so that when the format is inevitably solved, there are still multiple correct options (rock/paper/scissors/lizard/spock) so the metagame can thrive and be sustainably healthy.
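The rock/paper/scissors idea can be sketched numerically. Below is a minimal Python sketch with an invented matchup table (the deck names and win percentages are illustrative assumptions, not real data) showing that when each archetype preys on one other, the "correct" deck depends entirely on the current field, so the solved state never collapses to a single option:

```python
# Hypothetical matchup table: win rate of the row deck against the column deck.
# Numbers are invented for illustration; each deck beats one and loses to one.
WIN = {
    "aggro":    {"aggro": 0.50, "midrange": 0.65, "control": 0.35},
    "midrange": {"aggro": 0.35, "midrange": 0.50, "control": 0.65},
    "control":  {"aggro": 0.65, "midrange": 0.35, "control": 0.50},
}

def expected_winrate(deck, field):
    """Expected win rate of `deck` against a field (deck -> share of metagame)."""
    return sum(share * WIN[deck][opp] for opp, share in field.items())

def best_response(field):
    """The deck with the highest expected win rate against the given field."""
    return max(WIN, key=lambda deck: expected_winrate(deck, field))

# If everyone piles onto the current best deck, the best response keeps moving:
# a pure aggro field is countered by control, a pure control field by midrange,
# a pure midrange field by aggro, and the cycle repeats.
field = {"aggro": 1.0, "midrange": 0.0, "control": 0.0}
for _ in range(3):
    counter = best_response(field)
    print(f"field of {max(field, key=field.get)} -> play {counter}")
    field = {deck: (1.0 if deck == counter else 0.0) for deck in WIN}

# In an even 1/3 field every deck sits at exactly a 50% expected win rate,
# so even the fully "solved" metagame has no single correct choice.
```

The point of the sketch is just that a solved-but-balanced matchup triangle stays in motion: knowing the whole table doesn't hand you one deck to play, only a best response to whatever the field currently looks like.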
While "Design better" is a dumb argument, it also isn't the best argument for allowing the data mining. If we go back to the comparison with steroids, the problem with Wizards' decision becomes obvious. In telling MTGgoldfish to stop publishing their metagame stats, what Wizards has effectively done is gone to the steroid makers and asked them to stop advertising. The steroids are still available to the players who want them, as they can just do the data mining themselves. It would be akin to the MLB convincing steroid makers to close up shop without actually banning steroids from the game. Many players would still use steroids, leaving us in the same two-tiered system where players with access to good stats dominate those without. Stopping MTGgoldfish doesn't actually solve the problem; it just makes it so that fewer people talk about it.
The next obvious argument is to question whether or not public metagame statistics lead to a more quickly solved metagame. I would argue that they don't. To say that a metagame is solved is to say that the best deck(s) have been discovered; in other words, that there exist no unexplored deck builds capable of competing with the top-tier decks. The question of whether a metagame is solved is therefore a question about whether there exists a competitive deck that hasn't been discovered yet. To answer that question, one must look at decks that do not already exist. Because metagame statistics only give us information about decks that already exist, they fundamentally tell us nothing about decks that do not yet exist, and so they can have no impact on whether or not the metagame has been solved.
Every time I read a comment about "Well if this card had card draw/trample/haste/indestructible/hexproof/life gain...", I think "You're missing the point." They're armchair developer comments that fail to take into account the card's role in the greater Limited and Standard environment. No, it may not be as good as whatever card you're comparing it to. There's a reason for that. Not every burn spell is Lightning Bolt, nor does it need to be or should be.
- Manite
One thing I don't think you emphasized in the MTGgoldfish discussion was how nixing the public stats on deck win percentages doesn't get rid of them; it only hides them from more casual players. Pro players/teams and dedicated folks will still mine and have that data, giving them a clear advantage. It just takes that knowledge away from the more mid-level players who may be pushing towards being more competitive.
Firstly, there is very little evidence that this is happening currently, as far as I'm aware. Secondly, if it is an issue, it's a separate issue that can be solved at far smaller cost than the alternative, simply by delaying or manipulating the release of MTGO data, or by warning or punishing pros who abuse it.
In the end, the gap between pros and non-pros is another issue with many different aspects and solutions. Sacrificing the potential fun of the entire format for the majority of the standard playerbase, by having it be solved more easily, doesn't seem like the best solution, especially as removing this data from the public most likely makes it unavailable to the majority of pro players as well.
Also, this data is never available before the Pro Tour, so any "pro" using it as a crutch is going to find themselves having difficulty at those tournaments.
I was citing two huge gaming communities (poker and daily fantasy drafts) where the trend exists that data mining (in some form) puts a huge gap between players. Acting like Wizards can effectively warn or punish folks for data mining is ludicrous. Further, acting like pro players and teams either a) aren't already data mining or b) won't start to based on decisions like these seems naive. Pretending the data isn't valuable to them because they can't have it before a Pro Tour is likewise untrue, as analyzing data after events is also valuable for determining trends.
Of course the gap between pros and non-pros is a different issue, though one that I feel decisions such as the goldfish one play into. The fun of the format is integral, and perhaps asking goldfish to stop posting that data is the best solution. All I'm saying is that there is another downside to consider.
@Icarifaa - You obviously can't just tell people to not use datamining. But there are things you can do to control the information's availability. I think Reuben's "warning or punishing pros who abuse this" is meant much more broadly than saying "we'll punish you if you analyze datamining". "Abuse this" can apply to all sorts of different practices of how data is retrieved or published thereafter. I can see why it reads that way though.
Giving widespread data to all players definitely lowers the divide between pros and non-pros, because suddenly there's less of a puzzle to solve. If not having access to this data creates a better format experience, and the pool of people doing the datamining work themselves is extremely small (and they were probably already way better than you), this seems like a net win. Of course there are definitely positives and negatives, like with everything.
@Ber_F - There's currently no amount of money you can spend to have R&D outperform the hivemind. Even if you hired every pro-level player in the world and had them test constantly, they wouldn't be able to build a large enough sample size to counter the MTGO datamining. They COULD hire more people to make and release sets even faster, so there's a new format out before the old one grows stale on this accelerated timescale, but that's extremely inelegant and would lead to bad consumer impact - forcing people to buy even more stuff to stay current.
@Piar - They already do create self-correcting, shifting environments. That's the level they're already pulling, the one that allows them to keep things as interesting as they currently are despite the sheer power of the hivemind. It's unfortunately not enough to outperform it entirely, and deck-tuning to tournament-specific metas doesn't interest as many people as I wish it did. Fine-tuning often doesn't have the exploratory fun they desire.
They COULD hire more people to make and release sets even faster, so there's a new format out before the old one grows stale on this accelerated timescale, but that's extremely inelegant and would lead to bad consumer impact - forcing people to buy even more stuff to stay current.
Wouldn't reducing set size and increasing frequency of set releases prevent stale formats while not forcing people to buy more stuff to stay current?
They COULD hire more people to make and release sets even faster, so there's a new format out before the old one grows stale on this accelerated timescale, but that's extremely inelegant and would lead to bad consumer impact - forcing people to buy even more stuff to stay current.
Wouldn't reducing set size and increasing frequency of set releases prevent stale formats while not forcing people to buy more stuff to stay current?
Probably not. People don't usually buy ALL the cards, just the small number of top tier cards for their decks. If sets had 10,000 cards, the top 100 cards would still be the top 100 cards (roughly speaking) for tier 1 play. If the sets are smaller, the top cards are still the top cards too.
Also, if they DO make the format with a smaller pool of top-tier cards successfully, that just makes things solved even faster.
But in the scenario I mention, the card pool wouldn't necessarily be larger or smaller. It would be changing more rapidly by having smaller numbers of cards poured into the card pool more often - say ~half the number of cards per set released ~twice as frequently.
If you keep the exact same number of tournament-relevant cards, but only change a smaller number more frequently, that doesn't make it harder to solve. Lots of small changes take less time to figure out than a few gigantic ones, due to the smaller number of as-yet-uncharted interactions. It's why we do rapid small iterations in balance development rather than changing a lot of things at once.
If you keep the exact same number of tournament-relevant cards, but only change a smaller number more frequently, that doesn't make it harder to solve. Lots of small changes take less time to figure out than a few gigantic ones, due to the smaller number of uncharted interactions. It's why we do rapid small iterations in balance development rather than changing a lot of things at once.
While it's true that small changes are easier to figure out than larger changes, many small changes may be harder (read: more time-consuming) to figure out than a single large change. I say "may" here because the truth of this claim depends entirely on the way Magic players explore the space of all possible decks. To demonstrate, let's start with some real values. Imagine a current environment (E) of 400 cards, and that Wizards has the option to release either a single set A of 200 cards or two sets (B and C) of 100 cards each. To make the math easier, let's also limit decks to exactly 60 cards and force players to use at most 1 copy of any legal card. These changes to the deckbuilding rules just simplify the discussion; they do not affect the conclusions. You are correct in thinking that the total space of creatable decks is the same regardless of whether Wizards releases set A or releases sets B and C. In either case, the set of all explorable new decks is given by (600 choose 60) - (400 choose 60). If players explored the format by testing all possible decks, the format would take exactly the same amount of time to solve either way. However, players never actually test every possible deck; they use modified search algorithms because those are faster.
So the question isn't how large the space of all possible decks is; the question is how much time it takes players to explore that space using their fast search algorithms. If we imagine for a moment that players explore the space of all possible decks using a binary search, then the set of decks that need to be tested becomes far smaller than the set of all possible decks. If Wizards releases set A, players need to test approximately 278 decks. However, if Wizards releases sets B and C, players need to test approximately 538 decks. By releasing smaller sets more often, players need to do more deckbuilding and testing, and the format remains unsolved for a far larger portion of the time.
To summarize: if Wizards releases smaller sets more often, it will probably result in the format being solved for a smaller percentage of the time, depending on how players move through the set of all possible decks while solving the format.
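The arithmetic above is easy to sanity-check. The following Python sketch uses the same toy assumptions (400-card starting environment, exactly-60-card singleton decks, and log2 of the new-deck space as a stand-in for the number of binary-search test decks); it is a check of the model's numbers, not a claim about how players actually search:

```python
from math import comb, log2, ceil

ENV = 400   # cards already in environment E, per the toy model
DECK = 60   # exact deck size under the singleton rule

def new_decks(total_after, total_before):
    """Count of decks that use at least one newly released card."""
    return comb(total_after, DECK) - comb(total_before, DECK)

# Option 1: one 200-card set A (environment grows 400 -> 600).
# A binary search over the new-deck space needs about log2(space) test decks.
tests_big_set = ceil(log2(new_decks(600, ENV)))

# Option 2: two 100-card sets B then C (400 -> 500 -> 600),
# with each release's new-deck space searched separately.
tests_small_sets = (ceil(log2(new_decks(500, 400)))
                    + ceil(log2(new_decks(600, 500))))

print(tests_big_set, tests_small_sets)
# tests_big_set lands in the high 270s and tests_small_sets well above 500,
# in line with the ~278 vs ~538 figures in the post: under this search model
# the two smaller sets leave players roughly twice as much solving to do.
```

The interesting design point is the second option's total: the second 100-card set's new-deck space is nearly as large as the single 200-card set's, so splitting the release roughly doubles the search work rather than halving it.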
Not seeing how this addresses the point I raised. Raising the total number of unknown interactions exponentially increases how hard it is to solve a format. We do multiple small iterations in testing because it makes it EASIER to figure out the environment. If it made it harder, we'd just do a small number of gigantic changes.
Additionally, shrinking the scale of the changes is a pretty poor way to increase a player's sense of discovery and possibility within the format.
Not seeing how this addresses the point I raised. Raising the total number of unknown interactions exponentially increases how hard it is to solve a format.
Only if you expect players to test all possible interactions. If players are able to move through the space of possible decks without testing every possible interaction (and they don't test them all), then the multiple updates increase the amount of time players spend solving the format.
We do multiple small iterations in testing because it makes it EASIER to figure out the environment. If it made it harder, we'd just do a small number of gigantic changes.
Small changes are easier to figure out than large changes, but a constantly changing set of cards is far harder to figure out than a small number of large changes in a set of cards.
Additionally, shrinking the scale of the changes is a pretty poor way to increase a player's sense of discovery and possibility within the format.
Maybe. Arguably, that is exactly the function that small sets serve.
*shrugs* Not sure how to respond at this point. I'm telling you that balance teams intentionally do the things you're suggesting to make competitive environments easier to solve. Not sure why you think what makes it easier for the developer makes it harder for the hivemind. That's just not how it works in practice. Or in, like, anything. Changing a smaller number of variables at a time in a larger system makes it easier to isolate and solve the changes.
Smaller percentage changes happening more often create a constant state of SOMETHING changing, but make it much harder to create that sense of potential, depth and unknown exploration that Wizards is shooting for. That's why we use this exact format when developing. Cause it makes it easier to quickly get an understanding of how everything works. Not harder.
I expect they'll eventually have to introduce software to prevent data-mining from happening in the first place. Banning bots has always been hard though. Of course, it's completely legitimate to do something simple and easy that impacts the vast majority of players (the ones that just read datamining articles rather than doing it themselves) to help address the issue in the meantime.
DIRECT DOWNLOAD
Podcast archive link
RSS feed
iTunes Channel
MTGcast page
Check out the Remaking Magic blog and ask us questions
Contact details:
remakingmagic.tumblr.com
Reuben Covington
Twitter: @reubencovington
Email: reubencovington@gmail.com
MTGsalvation Account: Doombringer
Dan Felder
Twitter: @DesignerDanF NEW!
Email: minimallyexceptional@gmail.com
MTGsalvation Account: Stairc
Are you designing commons? Check out my primer on NWO.
Interested in making a custom set? Check out my Set skeleton and archetype primer.
I also write articles about getting started with custom card creation.
Go and PLAYTEST your designs; you will learn more in a single playtest than in a dozen discussions.
My custom sets:
Dreamscape
Coins of Mercalis [COMPLETE]
Exodus of Zendikar - ON HOLD