Cool. Wizards keeps making my life harder by not realizing that no matter what they do, the internet will make meta scores; they just keep making me look for alternate sorting methods.
Do you maintain the tiers? It's a thankless job and somewhat arbitrary no matter how you slice it. Having said that, feel free to use any of my work if it helps. I'll be continuing to add to it and will maintain it for as long as I'm still playing Modern. It's very little work to add a new tournament so it's not a time sink for me once all the tools are in place.
Ktken did. I just update the threads now based on others' math, since I'm not statsy enough for it. But with the death of online results (well, "true" online results) I'll likely need to shift to something like this. So thank you.
I made another update including the MTGO Modern Challenges. There are now three tabs:
Paper Meta Analysis
Online Meta Analysis
Combined Meta Analysis (Paper + Online)
There are 10 tournaments in the online analysis. The online tournaments don't seem to be as big as I hoped they would be. I say that because there are a lot of 4-3 records in the Top 32 results, sometimes as many as 15 of the 32. I decided to only include the 5+ win results. Each tournament ends up with anywhere from 17 to 25 results.
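For reference, the filtering rule above is just a threshold on wins; here's a minimal sketch (the deck names and tuple shape are my own illustration, not from the sheet):

```python
# Filter a tournament's published Top 32 down to 5+ win records,
# mirroring the "only include the 5+ win results" rule described above.
def keep_five_plus(results):
    """results: iterable of (archetype, wins, losses) tuples."""
    return [r for r in results if r[1] >= 5]

top32 = [("Grixis Shadow", 6, 1), ("Burn", 5, 2), ("Affinity", 4, 3)]
kept = keep_five_plus(top32)  # the 4-3 record is dropped
```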
There is also a Key tab now, which I may flesh out a bit more, but it's fine for now.
I'll add my thoughts about the meta to the other thread.
Noticed a minor issue and corrected it. B/W Eldrazi is the name I had been using, but I didn't change the "Eldrazi and Taxes" name variant when I ran the online data analysis. B/W Eldrazi got a nice boost from that as it's doing quite well online.
FYI I'll be away for the next week. There isn't a big paper tournament this weekend, but I'll update this weekend's online Modern Challenge toward the end of next week.
First, I fixed a bug with the Top8%. The percents were a lot higher than they should have been. I was double counting if a deck appeared in the Top 8 more than once in the same tournament. That's what I get for not unit testing that code.
Next, I added all the SCG Invitational Qualifier Top 8 data. The IQs are smaller, usually 50-60 people I think, but it's a lot of data and Top 8 in one is still a pretty darn good accomplishment. The data is in a separate tab from the "Big Paper" analysis, but it is also now included in the "Combined" analysis along with Online.
Online was updated with the 7-29 Challenge.
Finally, I added a bunch more data so we can start to see trends and also yearly totals. There is a 2017 total, rolling three month, Q2, and Q1. I'll add the other quarters as they close out.
I'm starting to get more tooling in place to make doing interesting things easier, and I should be able to do some more in-depth trend analysis soon.
I was double counting if a deck appeared in the Top 8 more than once in the same tournament
Isn't that how it should be?
There are two Top8 fields. "Top8" is a total count of appearances in the Top 8, and yes it counts multiple appearances in the Top 8 in a single tournament. "Top8%" is percentage of tournaments where the archetype appeared in the top 8. The % one was bugged.
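To illustrate the difference between the two fields, here's a sketch (the function and data are mine, not from the sheet; only the two definitions above are taken from the post):

```python
# Compute both Top 8 fields as described above: "Top8" counts every
# appearance, while "Top8%" counts each tournament at most once per
# archetype (avoiding the double-count bug mentioned earlier).
def top8_stats(top8_entries, num_tournaments):
    """top8_entries: iterable of (tournament_id, archetype) pairs.
    Returns {archetype: (top8_count, top8_pct)}."""
    appearances = {}
    tournaments = {}
    for tid, deck in top8_entries:
        appearances[deck] = appearances.get(deck, 0) + 1
        tournaments.setdefault(deck, set()).add(tid)
    return {deck: (appearances[deck],
                   100.0 * len(tournaments[deck]) / num_tournaments)
            for deck in appearances}

# Burn puts two copies into tournament 1's Top 8: Top8 counts both,
# but Top8% counts tournament 1 only once.
stats = top8_stats([(1, "Burn"), (1, "Burn"), (2, "Affinity")],
                   num_tournaments=2)
```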
There is now a Simple Tier List on the first tab, with just the Combined meta share data for the last 3 months. It's a simple, clean view to see the tiers without a lot of information overload.
I dropped the cutoff for Tier 1 from 4.5% to 4.0%.
I added a Tier 4, with the cutoff > 0.5% to differentiate a bit more due to the extremely large number of archetypes in the format. Tier 5 are basically the untiered decks.
Tier cutoffs are of course always going to be arbitrary when you just look at numbers, so you will see weird stuff like in the current Big Paper analysis where Tier 2 is only 4 decks, but 5 decks are at 1.96. Obviously a human being would probably consider those 5 also Tier 2, but you have to draw the line somewhere, and there will always be decks a little below the cutoff wherever you set it.
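As a concrete illustration of how a hard-coded meta-share tiering works, here's a sketch using the cutoffs discussed in this thread (4.0 / 2.0 / 1.0 / 0.5; treat the exact thresholds and boundary behavior as illustrative):

```python
# Assign a tier from a fixed meta-share percentage, using the cutoffs
# discussed in this thread (illustrative values, not the sheet's code).
def tier_from_share(share):
    if share >= 4.0:
        return 1
    elif share >= 2.0:
        return 2
    elif share >= 1.0:
        return 3
    elif share > 0.5:
        return 4
    else:
        return 5   # effectively untiered

# A deck at 1.96 lands in tier 3, just under the tier 2 line --
# exactly the "just missed the cut" situation described above.
tiers = [tier_from_share(s) for s in (10.0, 2.5, 1.96, 0.4)]
```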
Three things:
1. This whole thing is awesome and I hope the community uses it as a resource, especially now that the other metagame sites are useless due to MTGO stats.
2. I encourage you to select less arbitrary tier cutoffs. It looks like you are just eyeballing it based on gut instinct, which isn't sustainable in the long run and hurts the overall analysis. There are plenty of ways to do this, but as long as it's based on the dataset and not your intuition, any method would be fine.
3. I also encourage you to allow people to copy the data, which you can't currently do in the sheet. It makes it easier for people to run their own quick analyses on the data, which they can currently do anyway but it would take a lot longer to transcribe numbers directly.
Thanks for the feedback. If I were doing tiers based on "eyeballing and gut instinct", the tiers would be different from what you see, so I think maybe you didn't read the Key. It's just hard-coded by % meta share. Tier cutoffs are going to be arbitrary no matter what; it just seems like fixing them based on meta share % is the only reasonable approach given the community's de facto reliance on meta share.
I'm not allowing people to copy the sheets because I'd rather not have anyone publishing stuff and not attributing me since I have put a bit of work into this. If someone has a different angle and wants access to the data, they can message me to discuss.
Re: tiers
There are plenty of ways to make them less arbitrary than just picking them based on intuition. You can pick meta % cutoffs more strategically based on many different stat tools. Yes, those all have some degree of arbitrariness because stats are secretly a bit arbitrary, but just changing the tier 1 cutoff to 4% because it "feels right" is significantly more arbitrary than using formulas and the numbers to pick tier cutoffs.
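As one example of what a data-driven cutoff could look like (my own illustration, not a method anyone in the thread committed to), cutoffs could be read off quantiles of the observed shares:

```python
# Derive tier cutoffs from quantiles of the observed meta shares
# instead of hand-picked round numbers. Uses a simple nearest-rank
# quantile; the quantile levels themselves are still a judgment call.
def quantile_cutoffs(shares, qs=(0.95, 0.85, 0.70, 0.50)):
    s = sorted(shares)
    n = len(s)
    return [s[min(n - 1, int(q * n))] for q in qs]

shares = [10.0, 6.5, 6.0, 5.5, 4.0, 3.0, 2.0, 1.5, 1.0, 0.5]
cutoffs = quantile_cutoffs(shares)  # one cutoff per tier boundary
```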
Re: copying
As someone who ran an identical project to yours for years and made a website and money from it, I understand your worry but also assure you it isn't worth it. Anyone who wants to steal the data will, people who want to use it legitimately can't, and most people are happy to give credit for good work. If you produce quality work on a consistent basis, you'll get all the credit you want.
Updated with the new data, and also added a new feature. The Simple Tier List now has three levels, for the most recent 3, 2, and 1 months. Under 3 months I think the data is really limited, especially the 1 month, so it's not something we should pivot around. A three-month view is a healthier look at the meta IMHO. But there it is.
I'm not really sweating tier cutoffs. They're just numbers. I view it more as a ranking.
I might do a website, but then I have to monetize it with ads to try and recover some of the compute and bandwidth costs, plus it's a lot more work. But I've been kind of wanting to do a frontend project, so I might take it on. Not in the short term though. Also, I'd like to be able to do customizable queries without having to tweak stuff on the backend. Having a JavaScript frontend would also open up a lot of other visualization types without having to do manual work to keep things updated, which is what's holding me back on some other types of analysis right now.
Have you considered using some coefficient of the standard deviation of the metashare or score fields to determine tier cutoffs?
Off the top of my head (that is, without compiling the information again), your cutoffs (which seem to be 4, 2, 1, and 0.5) could be translated into the following standard-deviation coefficients:

Tier 0 => Mean + 16 * ? * StDev
Tier 1 => Mean + 8 * ? * StDev
Tier 2 => Mean + 4 * ? * StDev
Tier 3 => Mean + 2 * ? * StDev
Tier 4 => Mean + ? * StDev
Tier 5 => 0

Where ? is a constant double at your discretion.
EDIT: After plugging the data you compiled and aggregated into an Excel sheet, I assigned each tier a cutoff coefficient, based partially on your cutoffs and partially on ktkenshinx's from Modern Nexus, such that: Tier 5 consists of decks with a metashare below the sample's mean; Tier 4 of decks above Tier 5 but below the mean + 1 stdev; Tier 3 of decks above Tier 4 but below the mean + 2 stdev; Tier 2 of decks above Tier 3 but below the mean + 4 stdev; Tier 1 of decks above Tier 2 but below the mean + 8 stdev; and Tier 0 of anything above Tier 1.
Currently, according to the most recent rolling month in the chart you provided:

Tier 0: no archetypes, 0% of the meta (one indication of a healthy format).
Tier 1: one archetype (DSG) at 10.02% total and average representation (a debatable indicator of format health; the tier cutoffs may need reevaluating).
Tier 2: four archetypes, 25.91% total representation, ~6.48% average.
Tier 3: four archetypes, 15.56% total representation, ~3.89% average.
Tier 4: fifteen archetypes, 31.02% total representation, ~2.07% average.
Tier 5: a whopping fifty-five archetypes, 17.49% total representation, a meager ~0.32% average.
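For what it's worth, the per-tier totals and averages quoted above reduce to a simple group-by; a sketch with made-up numbers:

```python
# Group deck shares by tier and report (total, average) representation
# per tier -- the aggregation used in the analysis above.
def tier_summary(tiered_shares):
    """tiered_shares: iterable of (tier, share) pairs."""
    buckets = {}
    for tier, share in tiered_shares:
        buckets.setdefault(tier, []).append(share)
    return {t: (round(sum(v), 2), round(sum(v) / len(v), 2))
            for t, v in buckets.items()}

summary = tier_summary([(1, 10.0), (2, 6.0), (2, 7.0), (5, 0.3), (5, 0.5)])
```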
What I believe this data suggests is that the format is currently in a healthy state, although the variety of archetypes in the bottom tiers (3, 4, 5) and the lack thereof in the top tiers (0, 1, 2) does suggest that the format is close to being solved (and thus stagnant).
Anyhow, my apologies for borrowing the data prior to asking for permission, but I look forward to corroboration if desired.
I've looked at doing stuff like that, but there are some problems. First, like you've seen, you end up with tiers that no human being would think look reasonable. There are 5 decks that are clearly, solidly tier 1 without any argument from anyone. If you end up with 1 deck in tier 1, then you need to go back and start tweaking constants until it starts to look reasonable, at which point you've pretty much invalidated the reasoning for doing it in the first place.
Maybe there is a different function that would make more sense? But the tiers are going to be arbitrary no matter how you slice it, because there will always be decks that just make or just miss the cut.
In the end, I guess I just don't think breaking into tiers is that interesting a problem worth spending time on. I'd rather look at other types of analysis.
To put this into the real world, if I was in charge of tiering decks on this site, I'd use the numbers as a guideline but then apply some heuristics to making the final call for the decks near the tier edges.
For people just trying to understand the meta from a competitive standpoint, the tier cutoffs as a data point aren't relevant at all.
Having said all the above, I'm not in love with the hard % cutoffs. It definitely could get weird under certain meta compositions, like if the shares really flattened out. If you came up with a function that I thought clearly made more sense, I'd be happy to code it up and see if it's an improvement.
Currently, according to the most recent rolling month in the chart you provided
I think I missed this the first read. If you are looking at the 1 Month list on the simple tier list, I'd recommend against doing that. It just isn't enough data to make me feel comfortable with it for deeper analysis. I'd use the 3 Month.
I just threw in the shorter term ones to see how they differ and also to get a peek at the "bleeding edge" even if it's not perfect.
An algorithm is requested, eh? *cracks code monkey knuckles* I'll give it a shot.
First, a similar Tier 5 to Tier 0 approach as before, except using ktkenshinx's Tier 0 cutoff from Modern Nexus as the basis for the Tier 0 cutoff (StDev isn't necessarily as strong an indicator when considering an archetype's oppressiveness, or lack thereof).
if( share >= mean + 1.5 * stdev )
    tier = 1;
else if( share >= mean + stdev )
    tier = 2;
else if( share >= mean + 0.5 * stdev )
    tier = 3;
else if( share >= mean )
    tier = 4;
else
    tier = 5;
Note: the StDev coefficients are mostly preferential; a nice linear curve helps me understand this much more efficiently.
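A runnable transcription of the pseudocode above (Python; the thread doesn't say whether the population or sample standard deviation is intended, so the population choice below is an assumption):

```python
import statistics

# Runnable version of the tiering pseudocode above: tiers are assigned
# by how many standard deviations a deck's share sits above the mean.
def assign_tier(share, mean, stdev):
    if share >= mean + 1.5 * stdev:
        return 1
    elif share >= mean + stdev:
        return 2
    elif share >= mean + 0.5 * stdev:
        return 3
    elif share >= mean:
        return 4
    else:
        return 5

shares = [10.0, 6.0, 4.0, 2.0, 1.0, 1.0]    # toy meta shares
mean = statistics.mean(shares)               # 4.0
stdev = statistics.pstdev(shares)            # population st.dev (assumption)
tiers = [assign_tier(s, mean, stdev) for s in shares]
```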
I'm short on time right now, but I'll take a look in more detail later.
My initial thought is that having any deck with a share below the mean be Tier 5 (untiered) doesn't seem right to me. Having so many of the decks in one tier means less differentiation, which makes the tiers less interesting/useful. Your current calculations put 24 decks in Tiers 1-4 and 57 in Tier 5, so basically two-thirds of the decks are untiered. Granted, there is some jank in those lists due to the rogue nature of MTGO, but the balance is tipped too far IMHO.
Decks like these should be differentiated from the true jank at the bottom, but your function puts these in tier 5:
Naya
Amulet Titan
Humans
Esper Control
G/W Hate Bears
Auras
Turns
Skred Red
A potential solution to this issue is to clip the list, taking the mean of the top 60 deck shares and then calculating cutoffs from there. That way the jank will still be Tier 5, but the known decks will have more differentiation. Just a thought. Also, the calculations will still hold no matter how much jank is present. The more one-off jank decks that factor into the mean, the less reliable it is; limiting the mean calculation to something like 60 decks makes it less "fragile", as we might say in software terms.
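The clipping idea might look like this (top_n and the toy numbers are illustrative; 60 is the value suggested above):

```python
import statistics

# Compute mean/st.dev from only the top N shares, so a long tail of
# one-off jank decks can't drag the statistics down (the "clipping"
# idea described above).
def clipped_stats(shares, top_n=60):
    top = sorted(shares, reverse=True)[:top_n]
    return statistics.mean(top), statistics.pstdev(top)

shares = [10.0, 6.0, 4.0, 2.0] + [0.1] * 40   # long jank tail
full_mean = statistics.mean(shares)            # diluted by the tail
clipped_mean, _ = clipped_stats(shares, top_n=4)
```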
I've updated the sheet to account for untiered archetypes, assuming 'untiered' means being in the bottom half of the rankings of represented archetypes. (Top [insert constant integer here] would mean very little in metagames with either a greater or a lesser variety of decks.)
Also, I'm basing my cutoff coefficients on a linear progression from 0 to 2 * StDev for each of the three sheets.
Honestly, most people just care about tiers. It's THE parlance of competitive MTG analysis. I know this because I did it for years. You will be giving people what they want if you put work into a good tiering system; I don't really understand your aversion to it. It doesn't have to be a pure %-based system. There just needs to be some objective system, ideally a dynamic one that accounts for different metagames.
https://docs.google.com/spreadsheets/d/1ZRLhtRToCI2VkJ-xYVwwvYuYtFmCLyMyRH1pbjCdBg8
I also added some new fields.
I have some thoughts on the results, will post them to the state of the meta thread later.
I'm also looking at what I can do with the online meta, will probably have an update later today.
I also added a Rank field.
I'm back and made some fairly significant updates.
https://docs.google.com/spreadsheets/d/1ZRLhtRToCI2VkJ-xYVwwvYuYtFmCLyMyRH1pbjCdBg8
-Adam aka TCH
I've looked at doing stuff like that, but there are some problems. First, like you've seen, you end up with tiers that no human being would think look reasonable. There are 5 decks that are clearly, solidly tier 1 without any argument from anyone. If you end up with 1 deck in tier 1, then you need to go back and start tweaking constants until it starts to look reasonable, at which point you've pretty much invalidated the reasoning for doing it in the first place.
Maybe there is a different function that would make more sense? But the tiers are going to be arbitrary no matter how you slice it, because there will always be decks that just make or just miss the cut.
In the end, I guess I just don't think breaking into tiers is that interesting a problem worth spending time on. I'd rather look at other types of analysis.
To put this into the real world, if I was in charge of tiering decks on this site, I'd use the numbers as a guideline but then apply some heuristics to make the final call for the decks near the tier edges.
For people just trying to understand the meta from a competitive standpoint, the tier cutoffs as a data point aren't relevant at all.
I think I missed this the first read. If you are looking at the 1 Month list on the simple tier list, I'd recommend against doing that. It just isn't enough data to make me feel comfortable with it for deeper analysis. I'd use the 3 Month.
I just threw in the shorter term ones to see how they differ and also to get a peek at the "bleeding edge" even if it's not perfect.
First, a similar Tier 5 through Tier 0 approach as before, except using ktkenshinx's Tier 0 cutoff from Modern Nexus as the basis for Tier 0 (StDev isn't necessarily as strong an indicator of whether an archetype is oppressive).
if( share >= mean + 1.5 * stdev ) // reconstructed: the first condition was lost; 1.5 * stdev follows the linear 0.5-step progression below
    tier = 1;
else if( share >= mean + stdev )
    tier = 2;
else if( share >= mean + 0.5 * stdev )
    tier = 3;
else if( share >= mean )
    tier = 4;
else
    tier = 5;
Note: the StDev coefficients are mostly a matter of preference; a nice linear curve makes this much easier for me to reason about.
if( share >= tier0Cutoff ) // reconstructed: the condition was lost; tier0Cutoff stands for ktkenshinx's Modern Nexus Tier 0 threshold
    tier = 0;
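For reference, the two fragments above can be combined into one runnable Python sketch. The Tier 1 cutoff (mean + 1.5 * stdev) and the `tier0_cutoff` parameter are assumptions on my part, since the first condition of each fragment and the exact Modern Nexus Tier 0 value aren't spelled out here:

```python
def assign_tier(share, mean, stdev, tier0_cutoff):
    """Tiers 1-4 from linear 0.5 * stdev steps above the mean, tier 5
    below it, then a separate Tier 0 override at a fixed metashare
    cutoff (a placeholder for ktkenshinx's Modern Nexus value)."""
    if share >= mean + 1.5 * stdev:   # assumed cutoff; see note above
        tier = 1
    elif share >= mean + 1.0 * stdev:
        tier = 2
    elif share >= mean + 0.5 * stdev:
        tier = 3
    elif share >= mean:
        tier = 4
    else:
        tier = 5
    if share >= tier0_cutoff:         # Tier 0 check applied on top
        tier = 0
    return tier
```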
The aforementioned algorithms provide the following information for the current rolling month.
https://docs.google.com/spreadsheets/d/1r0Lbacw2oS8NChL04eRs66kOI2x3xGcOb4-snFeS-ng/edit?usp=sharing
Also included a sheet to represent the Tier 0 to Tier 4 representation as suggested by Modern Nexus.
My initial thought is that having any deck with share less than the mean be tier 5 (untiered) doesn't seem right to me. Having so many of the decks in one tier means less differentiation, which makes the tiers less interesting/useful. Your current calculations put 24 decks in tiers 1-4 and 57 in tier 5. Basically 2/3 of the decks are untiered. Granted there is some jank in those lists due to the rogue nature of MTGO, but the balance is tipped too far IMHO.
Decks like these should be differentiated from the true jank at the bottom, but your function puts these in tier 5:
Naya
Amulet Titan
Humans
Esper Control
G/W Hate Bears
Auras
Turns
Skred Red
A potential solution to this issue is to clip the list, taking the mean of the top 60 deck shares and then calculating the cutoffs from there. That way, the jank will still be tier 5, but the known decks will have more differentiation. Just a thought. Also, the calculations will still hold no matter how much jank is present. The more one-off jank decks that factor into the mean, the less reliable it is... limiting the mean calculation to something like 60 decks makes it less "fragile", as we might say in software terms.
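The clipping idea can be sketched in a few lines of Python (`clipped_stats` and `top_n` are my own hypothetical names; 60 matches the number suggested above):

```python
from statistics import mean, stdev

def clipped_stats(shares, top_n=60):
    """Compute the mean and standard deviation over only the top_n
    largest deck shares, so one-off jank decks don't drag the mean
    toward zero."""
    top = sorted(shares, reverse=True)[:top_n]
    return mean(top), stdev(top)
```

Feeding the resulting mean and stdev into the tier cutoffs would keep the known decks differentiated while the long tail of jank stays in tier 5.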
Also, I'm basing my cutoff coefficients on a linear progression from 0 to 2 * StDev for each of the three sheets.
Man, no deck above 7% last month? That's crazy. I didn't think a meta this even could ever be engineered.
Honestly, most people just care about tiers. It's THE parlance of competitive MTG analysis. I know this because I did it for years. You will be giving people what they want if you put work into a good tiering system; I don't really understand your aversion to it. It doesn't have to be a pure %-based system. There just needs to be some objective system, ideally a dynamic one that accounts for different metagames.