Ooh, brilliant work yet again! Very tickled to see a natural w^2 construction.
@CaptainMarcia: I would really advise against having multiple bases in your estimates. The average reader is not going to be able to tell which of two Knuth arrow expressions is bigger when they differ in more numbers than just the last one, so they won't know that 2^^^ and 3^^^ are pretty much indistinguishable except at low numbers, or that 2^^^^ and 3^^^^ will differ by at most a small offset. Everyone seems to like using 2 as the base whenever possible, and 3 if 2 doesn't work... because the smallest numbers are the best when pursuing large numbers? I don't really agree, and the thing I was worried about did appear in discussions of Varhey's 3 card challenge, when the record was about 2^^2^^7: a lot of commenters had a hard time understanding how much that was (understandably), and were unsure how it compared even to numbers like googol or googolplex. I'm not saying they would necessarily have figured that out if it were 10^^10^^4 or 10^^10^^5 (not sure if the latter still works as an upper bound for that particular deck), but it's definitely easier to see with 10 instead of 2, and maybe at least one person in the thread would have figured out that those numbers are bigger than a googolplex and could have explained it to the rest.
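For what it's worth, that kind of googolplex comparison is mechanical once you work with iterated logarithms. Here's a quick sketch (ordinary Python, nothing deck-specific) checking that 2^^5 = 2^65536 falls short of a googolplex while 2^^6 = 2^(2^65536) sails past it:

```python
import math

LOG2 = math.log10(2)

# 2^^5 = 2^65536, so log10(2^^5) = 65536 * log10(2)
log_t5 = 65536 * LOG2                 # ~19728.3, i.e. 2^^5 ~ 10^19728
# googolplex = 10^(10^100), so log10(googolplex) = 10^100
assert log_t5 < 10.0 ** 100           # 2^^5 < googolplex

# 2^^6 = 2^(2^65536): compare at the double-log level.
# log10(log10(2^^6)) = log10(2^65536 * log10(2))
#                    = 65536*log10(2) + log10(log10(2))
loglog_t6 = 65536 * LOG2 + math.log10(LOG2)   # ~19727.8
# log10(log10(googolplex)) = 100
assert loglog_t6 > 100                # 2^^6 > googolplex
```

That's exactly the computation I'd hope at least one reader in such a thread would do, and it's a lot more natural with base 10 estimates.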
Of course, replacing "use the smallest number possible as the base" with "use the biggest number possible as the base" doesn't work since you can go very very high, and also using numbers like googolplex or Graham's number as the base might seem a bit gratuitous... so I'm fine with using 10 as the default base, that's the base for scientific notation anyway.
So that's my spiel for using 10 as the base, but really the most important thing is to use a consistent base for all the estimates.
Converting bases is an option, but I'm hesitant to use conversions that would reduce the reported score in situations where it doesn't make it shorter to write. The writeup does make a point of contextualizing the numbers in terms of more familiar ones like googolplex, and I can add examples of how they might translate to other bases, but for this specific deck, the topic of how its score translates to base 3 is particularly relevant due to that being the base used for Graham's Number.
Which doesn't necessarily rule out using a consistent base for the final scoring, but I'm not sure how significant it would be.
While it's true that (3^^^^(3^^^(3^^3^^3^^18))) is larger than (2^^^^(2^^^(2^^2^^3^^18))), the notion that the change between the early 3's and 2's has any significance is far more misleading than just thinking of them as identical. For example, (3^^^^(3^^^(3^^3^^3^^18))) is less than (2^^^^(2^^^(2^^2^^((3^^18)+2)))), and is much closer to (2^^^^(2^^^(2^^2^^3^^18))) than to (2^^^^(2^^^(2^^2^^((3^^18)+2)))). So making those early bases higher in this situation feels fraudulent; if you want a better lower bound, start with the most significant part of the expression, which in this case is the power tower 3^^18. While expanding 3^^18 into an exponential tower is kind of a pain, that's the only way you'll get a legitimately good estimate, in the sense that the best lower bound is somewhere between 3^^18 = 3^3^3^...^3 and 3^^19 = 3^3^3^...^27, so there is some tower 3^3^3^...^x where the choice of x gives the largest value that still makes the full expression a lower bound. Figuring out that x is between, say, 16.454 and 16.455 gives you a significantly better estimate. On the other hand, increasing the earlier bases from 2 to 3, or even from 2 to googolplex, amounts to increasing 3^3^3^...^3 to 3^3^3^...^(3.0000000...), where to reach a nonzero digit in the 3.00000... we would need more digits than could be stored with the computational capacity of the observable universe. So it REALLY doesn't get you anywhere from 3 toward the best value... but if you use an expression with increased bases, people will not know that this is the case; rather, they will expect that it makes some kind of difference if you deliberately put in larger bases.
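To illustrate what pinning down that top exponent x involves, here is the peeling procedure on a toy target - a googol rather than an actual record value (the function name and the example are mine, just for illustration):

```python
import math

def top_exponent(base, height, target_log10):
    """x such that a tower of `height` copies of `base` topped with x
    equals 10**target_log10, found by peeling one level per logarithm."""
    v = target_log10 / math.log10(base)   # first peel: base^v = target
    for _ in range(height - 1):
        v = math.log(v, base)             # peel one more level each time
    return v

# 3^3^x = googol (10^100) at x ~ 4.865; nudging x in the third decimal
# place already moves the tower by many orders of magnitude
x = top_exponent(3, 2, 100)
```

For a real record you'd run the same peeling down a tower of height 17 or 18, which is where the pain comes in, but the principle is the same.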
Anyway, that's my position. The biggest point for me is that readers will be confused by all the different bases and how they affect the size of the estimates. The most important thing to convey is that the values of the bases generally don't make a difference, but probably the best way to handle that is to simply use the same base everywhere.
Edit: Oh and I also argued against having 2 be the preferred base in the previous post. I think 10 is fine, and of course the estimate will be *larger* with 10's than with 3's, if you consider that important.
Okay, yeah, from that perspective I can see it. I've edited to base 2, with an explanation of the logic behind it, as well as the applicability of base 10.
I've continued the 12-card writeup to the hyperstage section, but looking back over what I've written - we fuel the hyperstage with -1/1 Exalted Sunborn tokens, but playing Saw in Half on a -1/1 makes more -1/1s. Doesn't that go infinite?
Oh, I'll also point out that Black Lotus + [c]Ball Lightning[/c] was not possible in Alpha, since Ball Lightning didn't come out until The Dark.
Huh. That's been there since the first version years ago, but I guess the error became a lot more obvious once I switched the image to the Masters 25 printing.
Wow, that 12 card hyperstage is cool! And the new writeup is looking great
I think one small correction is necessary: To repeat the final poison layer we cannot use a leftover 0/X, as those don't have a +1/+1 counter to gain the doubling ability. Instead we have to use the last -1/X we get from the stage above and do the grafted skeleton moving again.
The writeup could make it a bit more clear that we keep our high value tokens with massive negative power around as long as possible (without using poison), so they see as many Sunborns as possible when they are eventually sawed. Same for the other tokens from the hyperstage itself.
Thanks, fixed.
Say, do you have the notes on how to get 440 million tokens out of the small computation in the 13-card deck?
Looking through my old files I didn't find a good explanation for 440 million. Not sure how I got to that number. But I found this:
1x Coat of Arms
--we sacrifice a 1/3 bishop to get 1 damage arcbond ticks.
1x Bishop 1:Ape 2:Ape Control
1x Vanilla Swap
--every tick of arcbond damage, the heartbeat angel dies and is replaced by one of the bishops listening for that.
--the second bishop produces a human every tick. With Coat of Arms that keeps the bishops alive as long as the heartbeat keeps going.
1x Vanilla 1:Angel Arcbond Blast <Heartbeat
1x Bishop 4:Angel Arcbond <Program >Heartbeat
1x Bishop 4:Human Arcbond <Output >Program
I couldn't get my old programs to run on my new pc, so I didn't check simulation and cost again.
With 12 Precursors from the setup, and I think 2 more generated to have the vanillas in this list, we could probably get 14 arcbonds on each bishop. That gets us to ~293,601,280 output golems. (The notes seem to assume all the dragons die at the end and we need to convert them to golems to get ready for the trigger. Not sure if we could use the dragons directly? I guess they would be damaged and might die early.)
To maximize arcbonds we would cast it from hand sometime between precursor triggers on the second Artificial Evolution. We need to be careful not to get it on anything that survives, so finish modifying the types of everything that doesn't get arcbond, cast it, then finish modifying the types of everything else. Probably easier to not use Human for the computation because of that.
We could get another multiplier if we cast arcbond more often. Using a swap for that might be better than using 2 swaps to get another type in the doubling chain. But I'm not sure if the timing with AE works out if we do that.
Edit: got my programs to run and it says this about the cost:
So it seems to be one swap under what we can afford.
It also reminded me why the extra conversion from dragons to golems is there: the dragons are on the opponent's side, so we need to scrambleverse them to us and sacrifice everything to get golems that we "own" to be targeted by swaps.
Edit 2: To get all the Bishops we want to modify we might need to resolve a few swaps on existing precursor golems before we cast the arcbond. So we probably get only ~10 copies of arcbond per bishop. Please double check any numbers I provided here, mistakes are likely
Edit 3: Thought about it some more, and I think we can get full arcbonds by not casting it in between Precursor triggers on AE, but instead casting it before creating the bishops that are not supposed to get it. So the steps become:
- use swaps to create any extra Precursors
- use AE to turn golems that don't get arcbond into dragons, the rest into humans
- create all the bishops that get arcbond
- cast arcbond, everything gets max amount
- create the rest of the bishops
- cast the second AE to give everything the creature types we want
I also updated the list to use the 53rd swap and to include another improvement: turns out a multiplier of 3 is more resource-efficient than 2. This lets us get almost a billion golems.
1x Coat of Arms
--we sacrifice a 1/3 bishop to get 1 damage arcbond ticks.
1x Bishop 1:Ape 2:Ape Control
1x Vanilla Swap
--every tick of arcbond damage, the heartbeat angel dies and is replaced by one of the bishops listening for that.
--the second bishop produces a human every tick. With Coat of Arms that keeps the bishops alive as long as the heartbeat keeps going.
1x Vanilla 1:Angel Arcbond Blast <Heartbeat
1x Bishop 4:Angel Arcbond <Program >Heartbeat
1x Bishop 4:Human Arcbond <Output >Program
--input
8x Vanilla 1:type1 Arcbond <Program
3x Bishop 3:type1 4:type2 Arcbond <Program
3x Bishop 3:type2 4:type3 Arcbond <Program
3x Bishop 3:type3 4:type4 Arcbond <Program
3x Bishop 3:type4 4:type5 Arcbond <Program
3x Bishop 3:type5 4:type6 Arcbond <Program
3x Bishop 3:type6 4:type7 Arcbond <Program
3x Bishop 3:type7 4:type8 Arcbond <Program
3x Bishop 3:type8 4:type9 Arcbond <Program
3x Bishop 3:type9 4:type10 Arcbond <Program
1x Bishop 3:type10 4:Angel Arcbond <Program >Heartstopper
-- Should produce 157464 Angels after a total of >236192 damage ticks
-- There are 30 Bishops with Arcbond around that die now
-- So there were ~15*30*236192 arcbond triggers produced
-- where 15 is the number of arcbonds on each bishop
--profit
4x Vanilla 1:Dragon VIP <Output
3x Bishop 1:Dragon 3:Dragon 4:Golem VIP <Output
-- each dragon we produced turns into 3 golems we own
-- total ~15*30*236192*3*3 = 956577600
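A quick sanity check of the figures in the comments above - if I'm reading the chain right, every type1 through type10 creature dies once, which is what the geometric sum counts:

```python
vanillas = 8
angels = vanillas * 3**9                 # nine x3 stages, then x1 to Angel
ticks = vanillas * (3**10 - 1) // 2      # sum 8*(1 + 3 + ... + 3^9)
golems = 15 * 30 * ticks * 3 * 3         # arcbonds * bishops * ticks * x3 * x3

assert angels == 157464
assert ticks == 236192
assert golems == 956577600
```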
Thanks! I think the extra Swap explains the discrepancy - a third output Bishop should multiply the tokens created by 1.5, hitting 440 million.
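The arithmetic on that does land right where the old number was:

```python
# two output Bishops gave x2 per dragon; a third makes it x3,
# i.e. 1.5x the previous total (exact integer arithmetic)
old_total = 293_601_280
new_total = old_total * 3 // 2
assert new_total == 440_401_920   # ~440 million
```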
I've continued the writeup through the mini-computation, making some minor tweaks to get the Golem output to 956,973,504 - assuming I did everything right.
The rest is the UTM itself, which I'm still a bit fuzzy on. What construction are we using for that?
I've also been thinking about a possible companion writeup for decks that don't fit into the current 1-13 card lineup but are notable for other reasons. It'd be a good place to put things like the Heartless Summoning and Jadzi, Oracle of Arcavios decks - ones with some cool tech that are no longer in any of the current winners. What would be especially fun to include is strategies involving a traditional stage and maybe even a hyperstage, but I'm not entirely sure where to start on trying to fit those into a relatively small number of cards. Are there any existing ones on the more compact end?
Edit: Okay, I calculated termination based on base toughness 4 rather than 1, so that part is definitely wrong. (Although that suggests the extra player-side token could add a tick by being a Human rather than a Dragon.)
One more improvement for the setup computation: split the last 3x of the type chain into a 2x and a 1x. That's better in this situation, where we care about the sum of the created creatures, not just the final count. This way we crack a billion golems.
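A small sketch of why the split helps when the sum is what scores (the function and the equal 28-bishop budget are my reading of the list; it also assumes one extra creature type is available, which should be fine):

```python
def chain_stats(initial, multipliers):
    """Total deaths (each creature in the type chain dies once) and the
    final count that feeds the x1 Angel heartstopper stage."""
    counts = [initial]
    for m in multipliers:
        counts.append(counts[-1] * m)
    return sum(counts), counts[-1]

# original: nine x3 stages (27 bishops) + final x1 Angel stage = 28 bishops
old_ticks, _ = chain_stats(8, [3] * 9)            # (236192, 157464)
# split: eight x3, then x2 and x1 (24+2+1 bishops) + final x1 = 28 bishops
new_ticks, _ = chain_stats(8, [3] * 8 + [2, 1])   # (288680, 104976)
assert new_ticks > old_ticks
```

With the 15*30*ticks*3*3 total from the earlier list, 288680 ticks would come to roughly 1.17 billion golems, consistent with cracking a billion - though double-check against the actual simulation.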
The best known UTM we can build with our conversion methods is the 15 state 2 symbol TM from
T. Neary and D. Woods. Four small universal Turing machines. Fundamenta Informaticae, 91(1):123–144, 2009
Convert that into the Waterfall Model and then again into a flooding waterfall, and we end up requiring 14 million bishops.
Since it would require a huge number of vanillas for this construction to do anything useful, we would first use those 14 million bishops to run a BB champion directly. Converting the BB(15) champion should require roughly the same number of bishops and gets us f_w+1(...) vanillas. Producing more bishops than we need for universal computation wouldn't be worth it.
After that initial computation we swap the bishops over to the UTM configuration and use all those vanillas for the huge input. Repeat 100 million times or so. Actually figuring out exactly which casts we need for repeated computation and narrowing down this number is still open.
Of course any construction we actually give here is certainly suboptimal. The smallest UTM is likely still undiscovered. Our conversion methods are inefficient. It would be better to have a native universal computation in the MTG language. The known champion for BB(15) will be outclassed by the true champion. So I don't think going into too much detail about all that is worth it.
Hmm. Even if the known methods are suboptimal, I want to be able to explain exactly what we can do with 14 million Bishops and 296 creature types to know that those are sufficient for our purposes, to continue trying to keep things as grounded as possible. And yeah, identifying a number of times we can actually run the computation is what tells us the current lower bound on our score.
That BB champion chart does look useful, though also challenging to substantiate.
Explaining the details of the compilation methods has nothing to do with MTG anymore, so I don't know how relevant that would be. Also, it's hard.
But yeah, some justification that 14 million bishops and 300 types are sufficient should be there, so I hope you manage to explain it a bit. Do you have any specific questions?
My TM to TWM compiler can be found on GitHub; information on the flooding waterfall model lives on esolangs, with the TWM to FWM compiler linked in the external resources. The small UTMs can be found in computer science papers.
The best source for explanations or more information on the recent BB champions is probably the BBChallenge Discord. IIRC they were constructed using efficient TM mechanisms that were found in the wild and are not very clean, so they are extremely hard to understand. We also just assume we "know" the true champion for all of the millions of repetitions after the first, so I would just stick with that and say we reach BB(15), pointing to the chart to show that it is easily big enough to get started with the UTM.
The first thing I'm trying to follow is what exactly is the math that gets us to 296 types as the requirement - how many get tied up in each kind of program component and how many of each component we need. Then from there, how many of the 14 million Bishops needed are for each of those components.
From looking around the other resources, my sticking point has been figuring out how to translate them into the terms of the MTG cards we're using.
Of course, we have to assume we "know" the champions, or else we can't claim to beat BB(n) for any n greater than 5 - and maybe never will be able to.
I was wondering if it was worth considering comparing the flooding waterfall model to some other Turing-complete language or computational model besides Turing machines, but I suppose we would need to find a better bound using that model for it to be worth anything, and it seems likely that we have better bounds for the Turing machines Busy Beavers than for anything else.
Iijil, do you have a general bound comparing the deck computation at a given size to Busy Beavers? When I analyzed this, I figured we could get a Busy Beaver lower bound that lost only an exponentiation, i.e. if the highest number we can get starting a computation with n monsters is MTG(n), then we can make a bound like 2^MTG(cn) > BB(n) for some constant c. But I was pretty handwavy about it, so for example I don't really know the size of c, although I'm pretty sure it doesn't need to be huge; 1000 should work easily. Did you have any results for lower bounding by BB?
So we use a 15 state 2 symbol UTM. To compile that to the waterfall model we use 30 clocks that each encode one of the TM transitions. Every time one of those empties, it corresponds to the TM taking a step with that transition. 1 clock encodes the left half tape in base 2. We need 7 clocks to help manipulate the tape: multiply the half tape by 2 when we move right and write the correct symbol onto the tape afterwards, or divide with remainder when we move left and read the current symbol. Those 8 clocks are also needed for the right half tape, giving a total of 46 clocks. In general we need 8+4m+nm clocks for n state m symbol TMs.
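The clock accounting above as a sanity check - the 4 + 2m per-half-tape split is my reading of how the 8+4m+nm formula decomposes:

```python
def twm_clocks(n_states, m_symbols):
    transitions = n_states * m_symbols      # one clock per TM transition
    per_half_tape = 4 + 2 * m_symbols       # tape clock + manipulation helpers
    return transitions + 2 * per_half_tape  # equals 8 + 4m + nm

assert twm_clocks(15, 2) == 46   # the 15-state 2-symbol UTM
```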
Then the conversion to the flooding waterfall model happens. There are comments in ais523's compiler that explain how it works; to be honest I don't fully understand the magic happening in there. It looks like for the bulk of the simulation each TWM clock is represented by 4 FWM clocks, split into two clusters that alternate emptying. Both clusters also have a fixed number of helper clocks. When those clocks can possibly empty is strictly managed, and the clocks balance each other out in a delicate construction to manage useful comparisons despite the inherently exponential growth of the numbers in the model. We also need additional flooding clocks to get the FWM program into the correct starting position. Depending on which clocks need input this might double the number of required clocks. (Maybe one of the clusters can always start empty? That would give the roughly 6 flooding clocks per original clock I see in practice.)
Running the TWM program for the 15x2 UTM through the compiler gives me a required 300 clocks. This is different from the 296 I got before, since I included a sizeable input to the right half tape clock. It looks like without that input some startup clocks could be skipped, but for our operations we definitely need that input.
In MTG terms we need 1 creature type per flooding clock; Wall is not usable. We also need 1 additional type that always stays alive so we don't fizzle our swaps, which is best accomplished by using 1 more type to add a heartbeat. It looks like the creature type count has grown to 315 by now, so using a couple more types than we thought won't be a problem. The number of bishops required stays at just under 14 million (13,944,268).
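So the creature-type budget, under my reading of the requirements (counting the always-alive type and the heartbeat as two extras; if the heartbeat type itself serves as the always-alive one it's even looser):

```python
fwm_clocks = 300     # from the compiler run, input to the right half tape included
extra_types = 2      # always-alive type + heartbeat type (possibly just 1)
unusable = 1         # Wall can't be used
available = 315 - unusable

assert fwm_clocks + extra_types <= available   # fits with room to spare
```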
Apparently the 15x2 UTM simulates bi-tag systems, so I can't say how we would encode a normal TM into input for that. I don't remember any bounds for comparing MTG(n) with BB(n). I just assume the input conversions are doable with a few exponentiations, and are outscaled by the repetitions anyway.
Edit: Minor correction for the writeup: We don't need the opponent to control a creature for the last Soulblast before the final precursor trigger. We can just target any of our creatures when declaring the targets of the spell. Only after that do we pay the additional cost and sacrifice all creatures. We sacrifice everything, Soulblast goes on the stack, we get all our golems. Then Soulblast fizzles because we sacrificed the target, but we don't care about that.
Also, the paragraph about us controlling a dragon is wrong now, after you changed the types of our initial sacrificed bishop.
In the writeup, you have "Spend the last mana to have Exalted Sunborn activate Crypt Rats’ ability, dealing 1 damage to each of the other 29 creatures and both players, and putting us at 31 life.", but that ability does not self-exclude.
Well, we had the previous incarnation of the writeup, which went to 20+ cards and up to a bunch of stages (I know you guys already know this, just bringing it up to talk about it), but I presume we can improve on this, maybe by a bunch? I guess the 12 card hyperstage deck will pretty much obsolete anything we've done before. We could probably build off of that efficient hyperstage to get a new record megastage, and even the gigastage that we never got quite working? And we could also address the w^w construction that plopfill described earlier, and try to come up with both the smallest deck that can make it work and the highest damage 60 card deck that uses it.
In terms of the category where these would become relevant, the obvious one would be to use the same rules and just forbid anything Busy Beaver-like. For more natural rules that don't just forbid Busy Beaver specifically, we could require not just the deck but also the specific play sequence. (Well, we could still do this by precisely describing the conversion from a Turing machine to an exact creature setup, then referring to a specific Turing machine to convert - I think there's something like a 45 state, 2 symbol machine that gets up to epsilon_0, which is going to be very hard to beat with our current computable methods, and also a 150-ish state machine that can run the full Bashicu Matrix System, which is supposed to go incredibly far, although it hasn't been proven exactly how strong it is. So maybe not this one.) We could also say the deck must be "computable" in nature, although this isn't formally defined; computability refers to functions, not numbers, so while we can say the entire setup of having a Flooding Waterfall Machine implemented and considering all possible starting permanents is uncomputable, a deck that can only run the FWM under a finite collection of starting permanents will just be finitely many numbers assigned to a finite collection of starting permanents, so uncomputability doesn't enter into it. There is the obvious issue that we don't actually know what the Busy Beaver setups are, and there is no general method for figuring them out... but that depends on our state of knowledge, not on the actual deck, and I don't think there is a way to limit which Busy Beaver numbers we can figure out, other than that each arithmetically sound axiom system has a maximum Busy Beaver number it can prove - so, for instance, there is a maximum n for which ZFC can decide the value of BB(n)...
however, #1: we'll probably never learn that number, #2: ZFC is just one axiom system of infinitely many possibilities, #3: dealing with the formal system of mathematics that we assume seems to be going too far away from the initial challenge :D. Dang, maybe the best way to define this challenge is just to rule out Busy Beaver methods.
Whoops, I kinda just rambled off topic there! But I'll leave it since it's about the non-Busy Beaver challenge which I find interesting, and is now more interesting with plopfill's w^w construction. Also still interested in the possibility of higher-order Busy Beavers, if we're ready to tackle that one!
Which doesn't necessarily rule out using a consistent base for the final scoring, but I'm not sure how significant it would be.
Anyway, that's my position. The biggest point for me is that readers will be confused by all the different bases and by how they affect the size of the estimates. The most important thing to understand is that the value of the base generally doesn't make a difference, but probably the best way to convey that is to simply use the same base everywhere.
Edit: Oh and I also argued against having 2 be the preferred base in the previous post. I think 10 is fine, and of course the estimate will be *larger* with 10's than with 3's, if you consider that important.
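To make the base comparison concrete, here's a minimal Python sketch of Knuth's up-arrow notation (the function name `knuth` and its recursion are my own hypothetical illustration, not anything from the writeup). Only tiny inputs are feasible to evaluate, which is sort of the point: at small heights the base visibly matters, while at the scales in these estimates one extra arrow dwarfs any change of base.

```python
def knuth(a, n, b):
    """Compute a ↑^n b (Knuth's up-arrow notation with n arrows).

    n == 1 is plain exponentiation; higher n recurses via
    a ↑^n b = a ↑^(n-1) (a ↑^n (b - 1)).
    """
    if n == 1:
        return a ** b
    if b == 0:
        return 1  # by convention, a ↑^n 0 = 1
    return knuth(a, n - 1, knuth(a, n, b - 1))

# At small heights the base is very visible:
print(knuth(2, 2, 3))   # 2^^3 = 2^(2^2) = 16
print(knuth(3, 2, 3))   # 3^^3 = 3^27 = 7625597484987
print(knuth(10, 2, 2))  # 10^^2 = 10^10 = 10000000000
```

Anything like 2^^^3 already overflows any computer, so readers comparing, say, 2^^2^^7 against a googolplex get no help from direct evaluation; a consistent base at least makes the towers comparable by eye.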
I spotted several errors in the writeup:
Thanks, fixed.
I think one small correction is necessary: To repeat the final poison layer we cannot use a leftover 0/X, as those don't have a +1/+1 counter to gain the doubling ability. Instead we have to use the last -1/X we get from the stage above and do the grafted skeleton moving again.
The writeup could make it a bit more clear that we keep our high value tokens with massive negative power around as long as possible (without using poison), so they see as many Sunborns as possible when they are eventually sawed. Same for the other tokens from the hyperstage itself.
Say, do you have the notes on how to get 440 million tokens out of the small computation in the 13-card deck?
1x Coat of Arms
--we sacrifice a 1/3 bishop to get 1-damage arcbond ticks.
1x Bishop 1:Ape 2:Ape Control
1x Vanilla Swap
--every tick of arcbond damage the heartbeat angel will die and be replaced by one of the bishops listening for that.
--the second bishop produces a human every tick. With Coat of Arms that keeps the bishops alive as long as the heartbeat keeps going.
1x Vanilla 1:Angel Arcbond Blast <Heartbeat
1x Bishop 4:Angel Arcbond <Program >Heartbeat
1x Bishop 4:Human Arcbond <Output >Program
1x Vanilla 1:Bug <OutputHeartbeat
1x Bishop 1:Dragon 3:Bug 4:Bug <Output >OutputHeartbeat
2x Bishop 1:Dragon 3:Bug 4:Dragon <Output >Output
-- produces 2 dragons for every resolved arcbond
-- keeps the dragons alive forever
--input
5x Vanilla 1:type1 Arcbond <Program
2x Bishop 3:type1 4:type2 Arcbond <Program
2x Bishop 3:type2 4:type3 Arcbond <Program
2x Bishop 3:type3 4:type4 Arcbond <Program
2x Bishop 3:type4 4:type5 Arcbond <Program
2x Bishop 3:type5 4:type6 Arcbond <Program
2x Bishop 3:type6 4:type7 Arcbond <Program
2x Bishop 3:type7 4:type8 Arcbond <Program
2x Bishop 3:type8 4:type9 Arcbond <Program
2x Bishop 3:type9 4:type10 Arcbond <Program
2x Bishop 3:type10 4:type11 Arcbond <Program
2x Bishop 3:type11 4:type12 Arcbond <Program
2x Bishop 3:type12 4:type13 Arcbond <Program
2x Bishop 3:type13 4:type14 Arcbond <Program
2x Bishop 3:type14 4:type15 Arcbond <Program
2x Bishop 3:type15 4:type16 Arcbond <Program
1x Bishop 3:type16 4:Angel Arcbond <Program >Heartstopper
-- Should produce 327680 Angels in about that many damage ticks
-- There are 33 Bishops with Arcbond around that die now
-- So there are ~n*32*327680 > n*10000000 arcbonds on the stack
-- where n is the number of arcbonds on each bishop
--profit
4x Vanilla 1:Dragon VIP <Output
1x Bishop 1:Dragon 3:Dragon 4:Golem VIP <Output
With 12 Precursors from the setup, and I think 2 more generated to have the vanillas in this list, we could probably get 14 arcbonds on each bishop. That gets us to ~293,601,280 output golems. (The notes seem to assume all the dragons die at the end and we need to convert them to golems then to get ready for the trigger. Not sure if we could use the dragons directly? I guess they would be damaged and might die early.)
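As a sanity check on that arithmetic, here is a small Python sketch of the counting in the notes above, assuming 14 arcbonds per bishop, the ~32 dying arcbond bishops from the note, 2 dragons per resolved arcbond, and 1 golem per dragon (the variable names are mine):

```python
# Doubling chain: 5 seed vanillas of type1 pushed through 16 doubling steps.
angels = 5 * 2 ** 16
print(angels)  # 327680

# ~32 dying Arcbond bishops, each carrying n = 14 copies of arcbond.
n = 14
arcbonds = n * 32 * angels
print(arcbonds)                # 146800640
print(arcbonds > 10_000_000)   # True, matching the "> n*10000000" note

# 2 dragons per resolved arcbond, 1 golem per dragon at the end.
golems = arcbonds * 2
print(golems)  # 293601280, i.e. the ~293,601,280 above
```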
To maximize Arcbonds we would cast it from hand sometime between precursor triggers on the second artificial evolution. We need to be careful to not get it on anything that survives, so finish modifying the types of everything that doesn't get arcbond, cast it, then finish modifying the types of everything else. Probably easier to not use human for the computation because of that.
We could get another multiplier if we cast arcbond more often. Using a swap for that might be better than using 2 swaps to get another type in the doubling chain. But I'm not sure if the timing with AE works out if we do that.
Edit: got my programs to run and it says this about the cost:
So it seems to be one swap under what we can afford.
It also reminded me why the extra conversion from dragons to golems is there: dragons are on the opponent's side, so we need to scrambleverse them to us and sacrifice everything to get golems that we "own" to be targeted by swaps.
Edit 2: To get all the Bishops we want to modify we might need to resolve a few swaps on existing precursor golems before we cast the arcbond. So we probably get only ~10 copies of arcbond per bishop. Please double-check any numbers I provided here, mistakes are likely.
Edit 3: Thought about it some more, and I think we can get full arcbonds by not casting it in between Precursor triggers on AE, but instead casting it before creating the bishops that are not supposed to get it. So the steps become:
- use swaps to create any extra Precursors
- use AE to turn golems that don't get arcbond into dragons, the rest into humans
- create all the bishops that get arcbond
- cast arcbond, everything gets max amount
- create the rest of the bishops
- cast second ae to give everything the creature types we want
I also updated the list to use the 53rd swap and include another improvement: it turns out using a multiplier of 3 is more resource-efficient than 2. This lets us get almost a billion golems.
1x Coat of Arms
--we sacrifice a 1/3 bishop to get 1-damage arcbond ticks.
1x Bishop 1:Ape 2:Ape Control
1x Vanilla Swap
--every tick of arcbond damage the heartbeat angel will die and be replaced by one of the bishops listening for that.
--the second bishop produces a human every tick. With Coat of Arms that keeps the bishops alive as long as the heartbeat keeps going.
1x Vanilla 1:Angel Arcbond Blast <Heartbeat
1x Bishop 4:Angel Arcbond <Program >Heartbeat
1x Bishop 4:Human Arcbond <Output >Program
1x Vanilla 1:Bug <OutputHeartbeat
1x Bishop 1:Dragon 3:Bug 4:Bug <Output >OutputHeartbeat
3x Bishop 1:Dragon 3:Bug 4:Dragon <Output >Output
-- produces 3 dragons for every resolved arcbond
-- keeps the dragons alive forever
--input
8x Vanilla 1:type1 Arcbond <Program
3x Bishop 3:type1 4:type2 Arcbond <Program
3x Bishop 3:type2 4:type3 Arcbond <Program
3x Bishop 3:type3 4:type4 Arcbond <Program
3x Bishop 3:type4 4:type5 Arcbond <Program
3x Bishop 3:type5 4:type6 Arcbond <Program
3x Bishop 3:type6 4:type7 Arcbond <Program
3x Bishop 3:type7 4:type8 Arcbond <Program
3x Bishop 3:type8 4:type9 Arcbond <Program
3x Bishop 3:type9 4:type10 Arcbond <Program
1x Bishop 3:type10 4:Angel Arcbond <Program >Heartstopper
-- Should produce 157464 Angels after a total of >236192 damage ticks
-- There are 30 Bishops with Arcbond around that die now
-- So there were ~15*30*236192 arcbond triggers produced
-- where 15 is the number of arcbonds on each bishop
--profit
4x Vanilla 1:Dragon VIP <Output
3x Bishop 1:Dragon 3:Dragon 4:Golem VIP <Output
-- each dragon we produced turns into 3 golems we own
-- total ~15*30*236192*3*3 = 956577600
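For what it's worth, the updated list's numbers check out in a quick Python sketch (my own variable names; 15 arcbonds per bishop and 30 dying bishops as stated above):

```python
# Tripling chain: 8 seed vanillas of type1 pushed through 9 tripling steps.
angels = 8 * 3 ** 9
print(angels)  # 157464

# Deaths along the chain alone: 8 * (3^0 + 3^1 + ... + 3^9) = 8*(3^10 - 1)/2,
# consistent with the ">236192 damage ticks" note.
ticks = 8 * (3 ** 10 - 1) // 2
print(ticks)  # 236192

# 15 arcbonds on each of the 30 dying bishops, then 3 dragons per
# resolved arcbond and 3 golems per dragon.
golems = 15 * 30 * ticks * 3 * 3
print(golems)  # 956577600
```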
Chancellor of the Forge + Blazing Shoal for 8 damage. Idk if it was already discussed here.
The rest is the UTM itself, which I'm still a bit fuzzy on. What construction are we using for that?
I've also been thinking about a possible companion writeup for decks that don't fit into the current 1-13 card lineup but are notable for other reasons. It'd be a good place to put things like the Heartless Summoning and Jadzi, Oracle of Arcavios decks - ones with some cool tech that are no longer in any of the current winners. What would be especially fun to include is strategies involving a traditional stage and maybe even a hyperstage, but I'm not entirely sure where to start on trying to fit those into a relatively small number of cards. Are there any existing ones on the more compact end?
Edit: Okay, I calculated termination based on base toughness 4 rather than 1, so that part is definitely wrong. (Although that suggests the extra player-side token could add a tick by being a Human rather than a Dragon.)
The best known UTM we can build with our conversion methods is the 15 state 2 symbol TM from
Convert that into the Waterfall Model and then again into a flooding waterfall and we end up requiring 14 million bishops.
Since it would require a huge number of vanillas for this construction to do anything useful, we would first use those 14 million bishops to run a BB champion directly. Converting the BB(15) champion should require roughly the same number of bishops and gets us f_w+1(...) vanillas. Producing more bishops than we need for universal computation wouldn't be worth it.
After that initial computation we swap the bishops over to the UTM configuration and use all those vanillas for the huge input. Repeat 100 million times or so. Actually figuring out exactly which casts we need for repeated computation and narrowing down this number is still open.
Of course any construction we actually give here is certainly suboptimal. The smallest UTM is likely still undiscovered. Our conversion methods are inefficient. It would be better to have a native universal computation in the MTG language. The known champion for BB(15) will be outclassed by the true champion. So I don't think going into too much detail about all that is worth it.
That BB champion chart does look useful, though also challenging to substantiate.
But yeah, some justification that 14 million bishops and 300 types are sufficient should be there, so I hope you manage to explain it a bit. Do you have any specific questions?
My TM to TWM compiler can be found on Github, information on the flooding waterfall model lives on esolangs, with the TWM to FWM compiler linked in the external resources. The small UTMs can be found in computer science papers.
The best source for explanations or more information on the recent BB champions is probably the BBChallenge Discord. IIRC they were constructed using efficient TM mechanisms that were found in the wild and are not very clean, so they are extremely hard to understand. We also just assume we "know" the true champion for all of the millions of repetitions after the first, so I would just stick with that and say we reach BB(15), pointing to the chart to show that it is easily big enough to get started with the UTM.
From some looking around the other resources, my sticking point has been trying to figure out how to translate them into the terms of the MTG cards we're using.
I was wondering if it was worth considering comparing the flooding waterfall model to some other Turing-complete language or computational model besides Turing machines, but I suppose we would need to find a better bound using that model for it to be worth anything, and it seems likely that we have better bounds for the Turing machine Busy Beavers than for anything else.
Iijil, do you have a general bound comparing the deck computation at a given size to Busy Beavers? When I analyzed this, I figured out that we could get a Busy Beaver lower bound that loses only an exponentiation, i.e. if the highest number we can get starting a computation with n monsters is MTG(n), then we can make a bound like 2^MTG(cn) > BB(n) for some constant c. But I was pretty handwavey about it, so for example I don't really know the size of c, although I'm pretty sure it doesn't need to be huge; for example 1000 should work easily. Did you have any results for lower bounding by BB?
Then the conversion to the flooding waterfall model happens. There are comments in ais523's compiler that explain how it works. To be honest I don't fully understand the magic that is happening in there. It looks like for the bulk of the simulation each TWM clock is represented by 4 FWM clocks, split into two clusters that alternate emptying. Both clusters also have a fixed number of helper clocks. When those clocks can possibly empty is strictly managed and the clocks balance each other out in a delicate construction to manage useful comparisons despite the inherently exponential growth of the numbers in the model. We also need additional flooding clocks to get the FWM program into the correct starting position. Depending on which clocks need input this might double the number of required clocks. (Maybe one of the clusters can always start empty? That would give the roughly 6 flooding clocks per original clock I see in practice.)
Running the TWM program for the 15x2 UTM through the compiler gives me a required 300 clocks. This is different from the 296 I got before, since I included a sizeable input to the right half tape clock. It looks like without that input some startup clocks could be skipped, but for our operations we definitely need that input.
In MTG terms we need 1 creature type per flooding clock. Wall is not usable. We also need 1 additional type that always stays alive so we don't fizzle our swaps. This is best accomplished by using 1 more type to add a heartbeat. It looks like the creature type count has grown to 315 by now, so using a couple more types than we thought won't be a problem. The number of bishops required stays at just under 14 million (13944268).
Apparently the 15x2 UTM simulates bi-tag systems, so I can't say how we would encode a normal TM into input for that. I don't remember any bounds for comparing MTG(n) with BB(n). I just assume the input conversions are doable with a few exponentiations, and are outscaled by the repetitions anyway.
Edit: Minor correction for the writeup: We don't need the opponent to control a creature for the last Soulblast before the final precursor trigger. We can just target any of our creatures when declaring the targets of the spell. Only after that do we pay the additional cost and sacrifice all creatures. We sacrifice everything, Soulblast goes on the stack, we get all our golems. Then Soulblast fizzles because we sacrificed the target, but we don't care about that.
Also the paragraph about us controlling a dragon is wrong now after you changed the types of our initial sacrificed bishop.
I came up with Black Lotus, Channel, Chromatic Orrery, Thousand-Year Storm, Krark, the Thumbless, Renounce, Orcish Medicine; I haven't yet figured out the best way to complete this.