My suggestion is to use "^^BB" instead. That way, colorless mana takes up two characters, just like colored mana.
Correct. Right now we're actually using just one caret for a colorless mana symbol.
This being said, recurrent nets are capable of counting, and as far as I can tell, the network interprets a lettered mana symbol as ~= 0.5 CMC. Some indirect evidence of this is that the network produces creature cards with a power to CMC ratio that approximates that of real cards.
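For concreteness, here's how CMC falls out of that encoding - a quick sketch assuming generic mana is carets ("^" = 1) and colored symbols are doubled letters ("GG" = one green mana). The `cmc` helper is my own illustration, not part of the actual encoder:

```python
def cmc(cost: str) -> int:
    """Converted mana cost of an encoded cost like "{^^GG}".

    Assumes the doubled-letter encoding discussed above: each caret is
    one generic mana, and each colored symbol is a doubled letter (so
    "GG" is a single green mana, hence the divide-by-two).
    """
    body = cost.strip("{}")
    generic = body.count("^")
    colored = sum(body.count(c) for c in "WUBRG") // 2
    return generic + colored
```

This also makes the "lettered symbol ~= 0.5 CMC" observation literal: each individual letter contributes half a colored symbol.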
So, I'm noticing that loss on the test set (which the network is forbidden to train on, so every time it sees the cards it's as if it's seeing them for the first time) is increasing after epoch 20. As in, its predictive powers get worse as it trains more, which is indicative of overfitting. Sure enough, at epoch 50 I'm definitely seeing some signs of overfitting.
I was thinking about this and the stack issue and had this random idea, based on an old neuroscience paper I read a long time ago. The paper proposed that dreaming is the brain feeding itself white noise and then trying to "make sense" of it, with the intent of exhausting short-term memory and weakening any partially formed memory - a cleanup.
Is it possible that feeding the NN statistically correct isotropic noise from time to time could help it generalize better, avoid overfitting, and maybe create some incentive for it to use the stack? It will probably make training longer, though.
Fun fact: I read the paper while I was taking a medicine that was known to interfere with dreaming patterns - i.e. I went about 5 years pretty much without dreaming - and that is now believed to vastly increase synaptic plasticity. It's possible that my brain was essentially overwriting itself with new information over and over and over during that time. Crazy stuff.
Ooh, clever idea, I like! A lot, actually. I hadn't thought of that.
LSTM networks have a habit of getting distracted by shiny objects and forgetting whatever it was they were thinking. We could use that to our advantage. Injecting noise during training could encourage the network to write things down because the noising would put a strain on its short-term memory. It wouldn't be difficult to implement either, it'd just be some extra forward passes on the network thrown in randomly.
Here's the problem though, and it's more of a technical issue than anything else: if the network sees the noise as being real or relevant, then it would be tempted to use the stack to record information about it... but I think I have a fix for that. So we feed it a character at time T-1, and then we have the current state...
<LSTM cell contents at time T, stack contents at time T>
We feed the network noise and get
<LSTM cell contents at time T + noise, stack contents at time T + noise>
but then we roll back the stack contents, so at time T+1 it starts with
<LSTM cell contents at time T + noise, stack contents at time T>
The stack now provides a reliable frame of reference that isn't subject to the hallucinations. It's possible that the network would learn to trust the data structure(s) and develop a policy for using it correctly. I suppose it all comes down to how much noise we'd need to add and how frequently we'd need to add it. Too much and we'd ruin everything, and too little would have no effect at all.
There are several different versions of this plan that I could see working. I'd have to think more on it, but it's worth looking into.
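A toy sketch of the rollback trick, with a stand-in `step` function playing the forward pass (the hash-style update and the whole interface are made up for illustration): the state absorbs the noise step, but the stack snapshot is restored, so the external memory stays hallucination-free.

```python
import random

def step(state, stack, char):
    # Fake "forward pass": mixes the input into the state and pushes
    # the character onto the stack. Purely illustrative.
    new_state = (state * 31 + ord(char)) % 10007
    return new_state, stack + [char]

def train_step(state, stack, char, noise_prob, rng):
    state, stack = step(state, stack, char)        # real character
    if rng.random() < noise_prob:
        snapshot = list(stack)                     # stack at time T
        noise = chr(rng.randrange(32, 127))        # feed it noise...
        state, stack = step(state, stack, noise)   # ...state keeps it,
        stack = snapshot                           # ...stack rolls back
    return state, stack

state, stack = 0, []
rng = random.Random(0)
for ch in "creature":
    state, stack = train_step(state, stack, ch, noise_prob=1.0, rng=rng)
# Even with noise on every step, the stack holds only the real input.
```

The point of the sketch is just the bookkeeping: one snapshot before the noise pass, one restore after, and the stack becomes the stable frame of reference.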
If you're still having problems forcing it to use the stack, how about using crippling dropout early on (50%, maybe) and then gradually easing back until reaching the desired rate? For example, if the desired dropout is 10%, bring the dropout down by 2%/epoch until reaching 10% on epoch 21. That might force it to use the long-term storage early on and not shift to using the neurons for storage as soon as they're reliable. If that doesn't force it to use the stack, there are a few other ways you suggested (IIRC) which would force it to use the stack. If we could just start under those constraints, it shouldn't abandon the stack when it shifts to a more capable system.
A good idea! Earlier I had tried a fixed dropout rate, but I was having issues getting good results. I didn't think to decay the dropout over time. That's definitely a possibility.
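For reference, the schedule suggested above is easy to pin down in code (the function and parameter names are mine):

```python
def dropout_schedule(epoch, start=0.50, target=0.10, step=0.02):
    """Dropout rate for a given 1-indexed epoch.

    With the defaults: 50% at epoch 1, easing back by 2% per epoch,
    reaching 10% at epoch 21 and holding there afterwards.
    """
    return max(target, start - step * (epoch - 1))
```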
---
By the way, I did a small test last night and found that GRU networks, which are designed to be similar to LSTM networks but are simpler in terms of their architecture, can give comparable results when it comes to generative tasks like ours. Just an interesting observation.
Also, I'm running another test with a small amount of dropout and multiple small stacks to get some more numbers. The dropout should help prevent the kind of overfitting that we saw last time. I'll check back in on that later today.
EDIT: Yes, the text-clone/noise problem went away with the dropout this time, so that's good. Card names go from being things like "Eschreaching Barr" to "Cathedral of the World Tree" and "Deadly Prayer", and the text is consistently reasonable, which implies that it's actually learning and not just mimicking. On the other hand, it doesn't want to use the stacks. But I'm going to rerun things now that I know I have a stable baseline with some tweaks and see what I can get. By the way, a few results:
Elvish Paladin WW
Creature - Human Knight (Uncommon)
First strike
When Elvish Paladin enters the battlefield, you may search your library for an Elf card, reveal it, then shuffle your library and put that card on top of it.
2/2
#It's not an elf. But it is an "elf-ish" card, lol.
Fires of Athrey 3
Artifact (Mythic Rare)
1, T, put a blood counter on Fires of Athrey: Draw a card, then you win the game.
Tidal Caller 2W
Creature - Kor Soldier (Rare)
Landfall - Whenever a land enters the battlefield under your control, you may have Ally creatures you control gain "T: This creature fights another target creature." until end of turn.
2/2
#How very fitting for Zendikar. A Kor creature with a landfall ability that cares about allies. Pity it's not an ally itself.
Temporal in a Tide 1RR
Enchantment (Rare)
At the beginning of your upkeep, target opponent reveals his or her hand. You choose a nonland card from it. You may play that card this turn.
#Has this effect been done before in this way?
Ah, Prolog, my old friend. We are reunited at last.
Thank you for the update, and for sharing your code. I'll definitely take a look at it and let you know what I think!
EDIT: It works! Now to tinker with it.
EDIT(2): It's looking very good so far! I will warn that if the input is too large, you can easily exhaust the capacity of the global stack, though I suspect there's a way of increasing the limit. I was doing some stress tests to see how much it would take to break your code, lol.
I'll need to do more evaluations on corner cases (and there are many corner cases), but the results I am seeing look very solid, so that's great.
Quick question: do you break up text like "nonblack", or do you plan to? Instances of possessive markers like "'s"? Or do you leave that sort of stuff untouched? I was wondering what sorts of normalization you plan on doing to the text during pre-processing. I ask because it's a common trick in NLP tasks, when you're working with word-level representations, to break up text like that; it helps sidestep morphology-blindness.
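For illustration, a crude pre-tokenizer of the sort being asked about might look like this. The prefix list and rules are hypothetical and deliberately naive (it would happily split "until" into "un-" + "til"); a real pipeline would want a curated rule set:

```python
import re

# Illustrative prefixes only, not an exhaustive list.
PREFIXES = ("non", "un")

def tokenize(text):
    """Split words, peel off possessive "'s", and break productive
    prefixes apart so e.g. "nonblack" shares a token with "black"."""
    tokens = []
    for word in re.findall(r"[a-z]+(?:'s)?", text.lower()):
        if word.endswith("'s"):
            word, tail = word[:-2], ["'s"]
        else:
            tail = []
        for p in PREFIXES:
            # Only split when a plausible stem (3+ letters) remains.
            if word.startswith(p) and len(word) > len(p) + 2:
                tokens.extend([p + "-", word[len(p):]])
                break
        else:
            tokens.append(word)
        tokens.extend(tail)
    return tokens
```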
By the way, if graphviz is dying on you because the graphs are too large, consider rendering the graphs in SVG format.
EDIT(3): I was also doing some baseline tests to get a feel for how things functioned. For example,
So I did random initialization of the weights so that the network, from the very start, would already be reading from the stack. That appears to have done some good. I attempted to train a 2-layer LSTM network with 512 cells per layer, connected to 4 stacks, each with a maximum depth of 50. By epoch 120, training loss had gone down to 0.038038848085562 (that's an improvement by a factor of 10 compared to our plain LSTM approach), but unlike the time that we had overfitting and noise, the loss on the test set steadily decreased over that time rather than exploding, so that means the gains are real. Now, this doesn't mean that the network doesn't make mistakes. Believe me, it does. And I do see some overfitting on names, like a wizard named "Venser, Shaper Savant", but the card is completely different.
At epoch 10, the network only uses stacks #2 and #3. Wiping the contents of stacks #1 and #4 at every timestep has no effect, which means that while it writes to these stacks, it never reads from them. By epoch 50, the network has decided that it only needs one stack, stack #2. From then on, the network just uses stack #2 (I did tests all the way out to epoch 120). Here's an example of how the output changes depending on which stack I enable or disable (given the same random seed):
All stacks enabled:
|escaped nell||enchantment|||O||{^BB}|sacrifice @: return target black card from your graveyard to your hand.|
Stack #1 Disabled:
|escaped nell||enchantment|||O||{^BB}|sacrifice @: return target black card from your graveyard to your hand.|
Stack #2 Disabled:
|escaped nell||enchantment||aura|O||{RR^RR}|enchant creature\enchanted creature gets +&^^^/+& as long as it's attacking. otherwise, it gets -&^^/-&^.|
Stack #3 Disabled:
|escaped nell||enchantment|||O||{^BB}|sacrifice @: return target black card from your graveyard to your hand.|
Stack #4 Disabled:
|escaped nell||enchantment|||O||{^BB}|sacrifice @: return target black card from your graveyard to your hand.|
Notice that the name, the type, and the rarity of the card stay the same, so stack #2 is used for some parts of the card but not for others. That implies that the network learns a policy for when and when not to use the stack. That's a good sign; data is only written to or read from the stack when it's relevant to some specific purpose.
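The wipe-a-stack test reads naturally as a loop; here's a hypothetical sketch (the `generate` interface is invented for illustration): regenerate with the same seed while wiping one stack at a time, and any stack whose wiping leaves the output unchanged is write-only.

```python
def find_unused_stacks(generate, n_stacks, seed):
    """Return indices of stacks that are never read: wiping them at
    every timestep does not change the generated output."""
    baseline = generate(seed, wiped=frozenset())
    return [i for i in range(n_stacks)
            if generate(seed, wiped=frozenset({i})) == baseline]
```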
Now, like I said, the loss is low, but the network still loses track of what it's doing from time to time. The good news is that it recovers better from its mistakes. For example, if I prime the start of the body text with kicker, I think I get more "intervening if" clauses. It doesn't always happen, but I'm getting the impression that it's working more often than it did before. It's not always "if kicked", though.
So I can get a card like...
|steelflame wurm||creature||wurm|A|&^^^^^^^/&^^^^^^^|{^GG^^^^GG}|kicker {RR} \when @ enters the battlefield, if it was kicked, destroy target noncreature artifact.|
But I can also get output like...
|war~trace force||creature||human|O|&^/&^|{GG}|kicker {^^GG} \flying\when @ enters the battlefield, if its prowl cost was paid, draw a card.|
And sometimes I get stuff like...
|serenity||enchantment|||A||{^WW}|kicker {WW} \when you cycle @, you may discard your hand.|
What's most impressive though is when I get results like this:
|grim disciple||creature||zombie wizard|N|&^^/&^|{^BB^}|kicker {^BB^} and/or {BB} \when @ enters the battlefield, if it was kicked with its {WW} kicker, you gain &^^^ life.\when @ enters the battlefield, if it was kicked with its {^^UU} kicker, target player discards two cards.|
I have never, ever seen the network do this before. Notice that the kicker costs are completely wrong. But it remembered that there were two of them!
So that leads me to speculate that the stack played a role in that. I think that a special "marker" is being pushed onto the stack when the network wants to remember that it has a clause that it needs to add to the card, and it relies on its short-term memory to recall exactly what kind of clause it wants. When it wants to add in the clause, it removes the marker from the stack. I'd have to dig deeper to prove that, but I'm fairly certain that that is what's happening. I think that cycling, prowl, monstrosity, kicker, et cetera all trigger the creation of the same stack "marker", which is why we're priming with kicker but getting stuff like "if its prowl cost was paid".
Now, the question remains: why just one stack? Why not use all four? Early on, the network decided that having four stacks was more trouble than it was worth, so it shut the connections off so that it only had to worry about one stack. I think we could get better results from the network if it had better command of the data structures. I just need to figure out how to make that happen.
I'm going to generate a dump of cards so I can get a feel for the quality of the output. I'll provide y'all with a link whenever that finishes.
TL;DR: Progress! I knew it could work. Now we just need to find out how to make it work even better.
And so on - which are not valid Prolog terms. I'm doing some sanity-checking before printing rules out, but there are still a few bits I haven't caught yet. I'm pretty sure they'll crop up when you work with larger example sets. Sorry about that :/
Naw, you're fine! It's a work in progress, after all.
Steelflame Wurm 5GG
Creature - Wurm (Rare)
Kicker R
When Steelflame Wurm enters the battlefield, if it was kicked, destroy target noncreature artifact.
7/7
This is an oddly consistent and satisfying design. Even the name fits perfectly with the effect. By the way, the kicker cost/effect is a Crush.
At moderate temperatures, I am seeing some near-clones. For example,
Lim-Dul's High Guard 1BB
Creature - Skeleton (Common)
First strike
When Lim-Dul's High Guard enters the battlefield, you may reveal X black cards in your hand. If you do, target creature gets -X/-X until end of turn.
Madness 1B
2/1
This is a merger of Nightshade Assassin and Lim-Dul's High Guard. So while the dropout of 15% discouraged it from copying everything word for word, it does take liberally from cards now and then. I could probably force it to give more original results if I turned up the dropout slightly more. Mind you, at higher temperatures, this happens less often anyway.
And of course there's still... "creative" cards like
Ixh of the New Dawn 3W
Enchantment (Rare)
All lands have shadow as long as you control three or more artifacts.
And the high temperature dump has some really weird cards like
Sib1 Manolith 3G
Creature - Elemental (Rare)
2: Sib1 Manolith loses this ability and becomes an aura enchantment with enchant creature. Attach it to target creature. You may pay U to end this effect.
You control enchanted creature.
3/3
Others in the high temperature dump are unusual but not totally implausible, such as
Drullent Healer 3B
Creature - Zombie Treefolk (Uncommon)
Defender
2B, T: Exile all creatures with power 1 or less.
Caravans travelling through Kaldheim's lowlands know to avoid the dark groves untouched by snow. To rest upon the warm soil is to anger the ancient guardians.
1/5
Desert that Artificer 2W
Creature - Human Wizard (Uncommon)
T: Tap target creature. It doesn't untap during its controller's next untap step.
"I have slowed him down, now is your chance to run! Leave him! You deserve a better man!"
2/1
Necrogen Specter 2B
Creature - Specter (Uncommon)
Flying
Threshold - As long as seven or more cards are in your graveyard, Necrogen Specter gets +3/+3 and has "when this creature dies, you lose 4 life."
2/2
Anyway, more to be done, but I think this is a step in the right direction. Some more tweaking of the parameters should take care of the clone issues, and I'll be thinking about ways to get more use out of the stacks.
EDIT: I think the clone issue started showing up sometime after epoch 50 or 60; that's where the test loss starts trending upwards (with some ups and downs). I think the training loss would be healthier if it were slightly higher, but I can fix that.
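That upward trend in test loss is exactly what early stopping keys on. A minimal sketch of picking the epoch to roll back to (my own illustration, not anything we're actually running):

```python
def best_epoch(test_losses, patience=5):
    """Return the 1-indexed epoch with the lowest test loss, scanning
    until `patience` epochs pass without any improvement."""
    best_i, best = 0, float("inf")
    for i, loss in enumerate(test_losses):
        if loss < best:
            best_i, best = i, loss
        elif i - best_i >= patience:
            break  # loss has been trending up for `patience` epochs
    return best_i + 1
```

In practice you'd checkpoint every epoch and reload the checkpoint this picks.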
I'm taking a look through the 0.6-temperature dump and I am really loving the results from the new Stacked RNN. For one, most cards with Evoke have a corresponding ETB ability. Some don't, but the success rate is over 50%. Kicker is looking alright as well; definitely better than stackless. My favourite result has to be 'choose a color', which previously was always an abysmal failure. Now it succeeds almost every time. Check some of these out:
Ion Sliver 4W
Creature - Sliver (Uncommon)
As @ enters the battlefield, choose a color.
Whenever a source of the chosen color deals damage to you, you may put that many % counters on @.
At the beginning of your upkeep, if @ has ten or more % counters on it, you win the game.
2/2
###Coherent, and a staggeringly powerful ability too. Drop this against a mono-red deck and watch them weep.
Archfennet's Ring 1
Artifact (Uncommon)
As @ enters the battlefield, choose a color. T: add one mana of the chosen color to your mana pool.
###A very flexible mana rock.
Shower if the Minds 3UU
Enchantment (Mythic Rare)
As @ enters the battlefield, choose a color.
Whenever a player casts a spell of the chosen color, you draw a card.
###This effect came up on quite a few 'choose a colour' cards, exactly duplicated. Another often-found one was 'Whenever a player casts a spell of the chosen colour, you may pay 2. If you do, gain 2 life.'
Marantic Clusher 5
Artifact (Rare)
As @ enters the battlefield, choose a color.
Creatures of the chosen color get +1/+1.
Whenever a basic land is tapped for mana of the chosen color, sacrifice @.
###Fascinating design. I really like how it checks only basic lands for this, and it clearly 'knows' that basic lands produce coloured mana.
Faerie Mask 2
Artifact - Equipment (Uncommon)
Equip 3
Equipped creature gets +1/+0 and has flying.
#Overcosted, but simple and neat.
Glint Elemental 3GG
Creature - Elemental (Uncommon)
At the beginning of your upkeep, sacrifice a creature.
Sacrifice a creature: destroy target artifact.
4/4
#Oozes flavor. It's an elemental that shines with things it smashed.
Guardian of the Gates 4B
Creature - Demon (Rare)
Devour 1
Flying
When Guardian of the Gates enters the battlefield, draw a card for each creature it devoured.
5/5
#Lacked the Devour instance, but it's a pretty neat design nonetheless.
Sphinx of Lost Truths 3UU
Creature - Sphinx (Rare)
Kicker 1U
Flying
When Sphinx of Lost Truths enters the battlefield, if it was kicked, return all other creatures to their owners' hands and you skip your next turn.
3/5
#Name, costs, P/T are cloned, but the kicker ability is entirely new and... interesting.
Mana Scioners G
Creature - Human Soldier (Common)
Mana Scioners enters the battlefield with two +1/+1 counters on it.
XX, T: remove all +1/+1 counters from Mana Scioners and put X +1/+1 counters on it.
0/0
#That's actually pretty clever. Wouldn't ever appear at common, though.
By the way, a word of caution about the learning-repetition part: try the set of all sequences of the letter A of odd length. I found that I had trouble getting the right grammar from a small amount of input; I ended up with a grammar for {A^n : n >= 1}. For regular and context-free languages where there is an alternation of symbols, I had a much easier time getting the right grammar.
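The trap is easy to state in code: on positive examples alone, the target language and the overgeneral one are indistinguishable, so nothing pushes the learner toward the odd-length grammar without even-length counterexamples.

```python
def in_odd_lang(s):
    """Membership in {A^n : n odd}."""
    return len(s) >= 1 and set(s) == {"A"} and len(s) % 2 == 1

def in_a_plus(s):
    """Membership in the overgeneral hypothesis {A^n : n >= 1}."""
    return len(s) >= 1 and set(s) == {"A"}

positives = ["A" * n for n in range(1, 20, 2)]   # the training data
evens     = ["A" * n for n in range(2, 20, 2)]   # the missing evidence

# Both hypotheses fit every positive example...
both_fit = all(in_odd_lang(s) and in_a_plus(s) for s in positives)
# ...and only the even-length strings tell them apart.
separated_by = [s for s in evens if in_a_plus(s) and not in_odd_lang(s)]
```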
Thanks- that's invaluable feedback. I need to test the algorithm against various formal problems like that, but I'm not quite sure where to go looking for them- possibly the Abbadingo archives.
Automata/formal language/computational complexity theories are my bread and butter. I have several textbooks on the subject that I like to lecture from that are filled with good examples if you're interested.
I think I can sort of grok why it's hard to derive the correct grammar from non-alternating symbols; it has to do with production construction and my current restricted Greibach Normal Form rules. I'll have to experiment a bit before I can explain my intuition. Again, thanks for pointing this out, seriously, that's great stuff. I wish I had ten Talcoses (Talcoi?) kicking my toy 'till it breaks.
Talcoi if it's a Greek second declension noun. But I'm more of a Latin man myself, so Talcoses or Talces will do. lol
And no problem! Feel free to hit me up if you need me to take a look at anything or check any proofs, that sort of thing.
About the small amount of input: now that I understand my algorithm a little better, I'm afraid I need to retract my claim about how well it does with little data (and I'd better remove it from the README on my GitHub, too!). I guess I mistook good recall for good overall accuracy - my bad, and that should teach me to Never Validate On the Training Set (not even "just for development"). Damn, but I'm a n00b. :/
Naw! This is just part of the learning experience, lol.
One thing I would like to try is to train on sequences of characters, the way you train your networks. I'm curious to see what that will turn up.
My first guess is that I'd imagine you'd get nested trie-like structures, but who knows?
---
I promised maplesmall that I'd upload some MSE versions of card dumps so they'd be more easily searched. And I will... but wow, I'm tired, lol. Must be all the constant deadlines, haha. I'll look into that tomorrow morning.
That stuff is not super impressive, being actual cards, just hyped up a bit more.
Right. To reiterate my earlier word of caution, this last network ended up overfitting somewhat. It rarely copies any card completely, but it's definitely borrowing heavily now and then. For instance, this card...
Arcanis the Omnipotent 3UUU
Legendary Creature - Wizard (Rare)
T: Draw a card. Activate this ability only during your turn, before attackers are declared.
3/4
is a nearly complete clone of Arcanis the Omnipotent. Funnily enough, the mistake it made shows that the network understands the text much better than it lets on. It remembered that Arcanis had an activated ability that tapped to draw cards, but it was fuzzy on the details, so it substituted in different card-drawing text. You see the same thing happening with other near-clones like Pyromancer's Gauntlet, where "a red instant or sorcery spell you control or a red planeswalker you control" gets substituted with "a red source".
One way of filtering out near clones would be to compute the vector representations for the card texts and to compare their distances to the real cards. However, I expect that the next network I train won't have this problem.
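A sketch of that filtering idea, using bag-of-words cosine similarity as a stand-in for a proper learned vector representation (the 0.9 threshold is an arbitrary illustration):

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def near_clones(generated, real, threshold=0.9):
    """Flag generated card texts that sit too close to any real card."""
    real_vecs = [Counter(t.lower().split()) for t in real]
    flagged = []
    for text in generated:
        v = Counter(text.lower().split())
        if any(cosine(v, r) >= threshold for r in real_vecs):
            flagged.append(text)
    return flagged
```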
A training loss of 0.03 is pretty much the floor of the loss (in other words, the ceiling on the accuracy). It's funny to say this, because we've been waging a war to get the losses down, but we actually don't want to hit the floor, because near perfect accuracy implies that the network is doing memorization rather than real rule learning. Now, it's not just doing pure memorization if it can do correct substitutions because that takes real knowledge, but it does suggest that this last network is a little too conservative.
That's not to say that this latest network isn't a step above the previous ones. It most definitely is. But now our concern is about getting test loss down. Fortunately, we have lots of ways to go about that. The ideal network, I think, will have a higher training loss than this one, at least in some areas. If you want creativity, you have to allow for mistakes. Fortunately we have ways of doing syntactic and semantic filtering/correction now, so I'm not too worried about that.
---
As promised, here are links to the mse-set versions of the latest dumps.
I've also attached a render of Azure Mage as done in the style of a child's crayon drawing, and I'm waiting for a Rebecca Guay render of Liliana to come off the presses before I leave for work.
EDIT: Hmm.. Liliana will need some more tweaking, but we'll get there.
EDIT(2): By the way, on the side, I've been looking into image generation techniques. There was a paper that came out a few months ago on language-driven image generation, and another last month on using spatial LSTM networks to generate images. Extracting a text summary of a card is trivial. Then all you have to do is find a suitable mapping from text descriptions to decent image representations. Pass the result through the style transfer network, and then we're in business, lol.
EDIT(3): I took a closer look at that second paper during my lunch break. I have a suspicion that I'm going to end up with misshapen horrors, but the authors of the work have provided source code and I'm tempted to take a look at that in the near future. The lead author works in the same lab as the people who brought us the neural style transfer stuff.
EDIT(4): Oh, one more thing. Earlier I saw a fun video lecture on generative neural network models. Here's a link. I've skipped ahead to a part you might find fun: a network trained to play Atari games dreaming about playing Atari games. The whole talk is interesting, though.
Naturally, the two cards I do like are reprints. Ehh, you win some and you lose some. Anyway, this stuff here gets cooler by the minute. Also, I really like the crayon stuff. Could we have some Beats in cave-painting style?
Sure. Here's a cave painting version of Deathmist Raptor. Oh, and while we're at it, a randomized pop art version of Liliana.
Oh, pop art Lili is really cool, if a bit random. Is the version of the network you're using here the one that is able to hybridize style sources? If so, how many sources can it accept? Also, is the style-extraction process more like training - i.e. some form of weight building, decay etc - or is it more like building a huge feature map library for content classification and each subsequent image adds to the library?
Since we talked about Lili and pop art, could you do a Mike Mignola Lili (using these images 12) if and when you get the chance :D?
First, yes, we can hybridize style sources. So far I've tried up to three, though I'm sure I could do more.
As for your question, it's actually training. What happens is that an image recognition network studies the content image and gets an impression of what it should look like, where all the pieces are. It says "There's a woman standing with her arms stretched out in the middle of the frame."
The image generation network then pours in a mess of features from the style images. This is the unworked block of stone.
It reshapes the image at each iteration, and then studies its own creation. It asks, "Does this image give me the same impression as the content image? Can I see the woman?" It then reweights itself so that the impression it gets will be closer to the target impression next time around. In this way, it's a sculptor, trying to reveal the woman hidden within the stone (well, sorta - it's also pouring in more style as it goes, but you get the idea). I let it run for about a thousand iterations this way, and then you get the results you see. I've uploaded intermediate results from iterations 100 and 400, and the final image at iteration 1000.
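That loop can be caricatured in a few lines: a frozen linear map plays the recognition network, and gradient descent reshapes x until the map's "impression" of x matches its impression of the content image. Real style transfer uses a deep network and adds a style term; this toy only shows the reweight-toward-the-target-impression loop, and every number in it is made up.

```python
import numpy as np

rng = np.random.default_rng(0)
F = rng.standard_normal((4, 8))   # frozen "recognition" feature map
c = rng.standard_normal(8)        # the content image
x = rng.standard_normal(8)        # start from a mess of features

target = F @ c                    # the impression to reproduce
# Step size kept below 1/L so each iteration provably reduces the error.
lr = 0.9 / (2 * np.linalg.eigvalsh(F.T @ F).max())

initial_err = np.linalg.norm(F @ x - target)
for _ in range(1000):
    x -= lr * 2 * F.T @ (F @ x - target)   # gradient of ||F x - F c||^2
final_err = np.linalg.norm(F @ x - target)
```

After the loop, F "sees the woman" in x even though x itself need not equal c; that slack is where the style lives in the real thing.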
Now, the results, like pop art Liliana, can be highly random. Sometimes, depending on what I give the network to work with style-wise, it might drop or obscure some elements from the content. Features are preserved from the content image only if they are recognized and emphasized by the image recognition network. For whatever reason, the recognition network didn't react very strongly to Liliana's legs, so the image generation network didn't make a concerted effort to preserve them in the pop art image.
However, just as I can whisper to the card-generating network, I can prime the image-generating network in much the same way: I give it a brief glimpse of the target image as it starts working, an outline of what I'm looking for. I find that this helps to preserve fine details better.
I ran a low-res version just so I could show you the process and the intermediates. I'm letting it generate a high-res version for you now. I'll upload it when it finishes. Oh, and by the way, the images you see borrow 60% of their style from the first Mike Mignola Lili art and 40% from the second. I wanted to make sure the bright colors dominated as the original image was already very dark.
EDIT: I've attached a larger version.
Anyway, as you can see, all we really need at this point is a way of hallucinating the content. For example, if we could take a text description, and have a system churn out a rough sketch of what we want, we can use approaches like style transfer to fill in the details. There's a lot of ongoing work in that area, and I'm excited to see what comes out in the near future.
EDIT(2): By the way, if you want more Mike Mignola Lili in the image and less Aleksi Briclot, I can change the weights so that the style dominates more strongly. The images you see were generated with the default settings, which try to balance the incorporation of new style with preservation of the original content.
Oh, I think I get it. The samples you posted were very illustrative. So, instead of organizing a noise field, you lay out a bunch of random feature maps from the style sources and then iterate, changing stuff around and adding new things as you go (I'm guessing you go from larger features to smaller features? That's how stuff like 3D rendering / procedural terrain generation is usually done, in passes that increase in detail frequency/resolution), trying to find an error minimum when compared to the target. Pretty clever.
Heh, that just gave me ideas. You could get height maps from Earth, train a NN on them and then "style" a crudely generated procedural terrain to get much more interesting terrain. Or you could train it on a bunch of leaves (say, a few hundred) and then use it to generate infinite leaf textures, all different, all done during runtime, procedurally. You could also probably use it to supersample textures by creating detail out of similar stuff - a neat way to get SD gaming up to HD standards.
By the way, thanks for the picture. It looks amazing, exactly how I was expecting it to turn out. I think Mignola's style is a pretty good fit for this stuff, because it's very distinctive in its use of stark contrast and flat-ish colors. It's very bold, just like Gogh's brush strokes or Afremov's knife strokes.
No problem! Thank you for offering the suggestion. I'll have to look into doing more comic book style artworks in the future, lol.
Actually, terrain and such you get with things like Perlin noise, "diamond-square" algorithms, and rule-based systems if you want to get fancy with plate tectonics. This stuff is governed by "simple" math and is much less computing-intensive. Even leaves etc. are ruled by up to a few dozen variables, and creating an "imperfect factory" for those doesn't need a computing-intensive network when a mere Mersenne Twister and a few lines of code work wonders.
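For reference, here's roughly what "a few lines of code" means: a minimal pure-Python sketch of the diamond-square midpoint-displacement algorithm mentioned above (parameter names are my own).

```python
import random

def diamond_square(n, roughness=0.6, seed=0):
    """Generate a (2**n + 1) x (2**n + 1) heightmap via diamond-square."""
    random.seed(seed)
    size = 2 ** n + 1
    h = [[0.0] * size for _ in range(size)]
    # seed the four corners
    for y in (0, size - 1):
        for x in (0, size - 1):
            h[y][x] = random.uniform(-1, 1)
    step, scale = size - 1, 1.0
    while step > 1:
        half = step // 2
        # diamond step: displace the centre of each square
        for y in range(half, size, step):
            for x in range(half, size, step):
                avg = (h[y - half][x - half] + h[y - half][x + half] +
                       h[y + half][x - half] + h[y + half][x + half]) / 4
                h[y][x] = avg + random.uniform(-scale, scale)
        # square step: displace the midpoint of each edge
        for y in range(0, size, half):
            for x in range((y + half) % step, size, step):
                total, count = 0.0, 0
                for dy, dx in ((-half, 0), (half, 0), (0, -half), (0, half)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < size and 0 <= nx < size:
                        total += h[ny][nx]
                        count += 1
                h[y][x] = total / count + random.uniform(-scale, scale)
        step, scale = half, scale * roughness
    return h
```

Lower the roughness and you get rolling hills; raise it and you get jagged peaks, all from one random seed.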
Not too long ago, a friend/colleague of mine was consulting with me about a terrain generation algorithm, and we discussed those very topics. He was able to come up with a good solution while working along those lines. That project ended up being part of a portfolio that landed him a job at a video game company.
Where a neural network could make a difference is thought with "Aesthetics", were do i put a ruin to look "pleasing"? What style has the carving on this bow from "The elder scroll 14"? A handdrawn picture from the Eyes of an NPC in a particular style. Evolving buildingdesigns for alien worlds to fake a suffiecently long history with influences and such. How did Turian Gothic influence Mon calamarie "Neo Jugendstil"?
I agree completely.
Neural network approaches are great for doing tasks that a human can do but cannot articulate. For example, we know what artistic style is and we can, for the most part, distinguish the style of a piece of art from its content. But if I had to formally explain the process by which I understand art, I couldn't do it. If I can't do that, then I can't write a program to do that for me. That is, of course, unless I set serious restrictions on what I mean by art, style, content, etc. But then the solution would be frail and brittle.
That's what makes machine learning so exciting for me. I'm a formalist by training: I break everything down into clean, ornate logic. I like to think that I'm good at what I do. But I've come to realize that not all problems can be solved that way; I can't draw a circle with a finite number of straight lines. That's why I dove into the subject of machine learning in the past year and a half.
---
For the record, I have a number of deadlines coming up that'll keep me pretty busy (conference submissions, peer review work, letters of recommendation to write, and I might be giving a guest lecture some time soon), but I promise that I'll look into rolling another network, as well as releasing my modified version of the sampling script and other related code that you'll need to work with the data-structure augmented networks. In the meantime, if anyone else has any special requests for art or anything like that, just let me know. Right now there's a lot of downtime for the big machine in the evenings and early mornings, which gives me an opportunity to run the style transfer code to my heart's content.
Off the topic of Magic, how feasible is it to train a NN to compose music? Take something like the Star Wars soundtracks (perfect because they use extensive leitmotif and are written exclusively in the key of A major), expressed in a virtual orchestra program, then train a network on it with some kind of input as to the emotional context of the scene and the characters, organizations, and concepts in the scene and how they're interacting with one another, so it can figure out that, for instance, an ascending motif on brass indicates triumph while a slow strong piece usually indicates sadness.
Has anyone done work on this? I figure folks in this thread would know.
Though it's just "listen to this and generate something similar" apparently, nothing as advanced as "write something for a fast-paced action scene involving Han Solo, Luke Skywalker, and Princess Leia vs. the Empire, with a brief romantic interlude between Han and Leia."
I guess what I'm looking for from y'all is less pointing me at this stuff and more explaining it in layman's terms.
What parameters are you using on the style network? Just the defaults?
Sometimes. But if I'm really keen on preserving the fine details, I do "-init image" to prime with the original image. Sometimes I also lower the content weight to 2.0-2.5 or so (about half the default) if I want to bring in more detail from the style image. And when blending multiple style images, choosing the right weights can be important to get the look I want. Usually I do a low res render and then, if I like what I see, I go for a high resolution one.
Off the topic of Magic, how feasible is it to train a NN to compose music? Take something like the Star Wars soundtracks (perfect because they use extensive leitmotif and are written exclusively in the key of A major), expressed in a virtual orchestra program, then train a network on it with some kind of input as to the emotional context of the scene and the characters, organizations, and concepts in the scene and how they're interacting with one another, so it can figure out that, for instance, an ascending motif on brass indicates triumph while a slow strong piece usually indicates sadness.
Has anyone done work on this? I figure folks in this thread would know.
Yes! Someone used the char-rnn implementation and fed it scores of Irish folk tunes.
There has been a decent amount of ongoing research on the topic as well. I've seen a ton of different papers while doing my research. Some came out in just the past few months, so it's a topic that's being actively researched.
A theme I'm noticing amongst the approaches you see above is the inability to maintain clear themes in the music over long periods of time. A data-structure augmented approach like what I've been investigating might actually be very helpful there because the network could use external memory to record stylistic elements. Or so I would imagine. I would look into that myself if I had the time, but I do not, haha.
Though it's just "listen to this and generate something similar" apparently, nothing as advanced as "write something for a fast-paced action scene involving Han Solo, Luke Skywalker, and Princess Leia vs. the Empire, with a brief romantic interlude between Han and Leia."
I guess what I'm looking for from y'all is less pointing me at this stuff and more explaining it in layman's terms.
Just saw this bit. Yes, well, it depends on the kind of representation we're working with. Let's say we have a text-based representation of the score (one instrument, just a piano, and for simplicity's sake let's assume that only one finger can be touching the piano at one point in time).
The prediction problem is this: I hit a key. Based on that key I struck, and those that I struck before it, which key do I hit next?
Now, if we're only concerned with one piece of music, then you can just memorize the score. But if the training set contains thousands of pieces, there's no way that you could memorize all of them by heart. So you have to come up with rules. What elements of the immediate past do you commit to memory and what elements do you ignore? What notes would "sound right" in the given context? Can you determine the chord structure of the piece? And so on, and so on...
Well, that's what the network is doing when you're having it predict the music. It's figuring out thousands of little rules that it can use to predict the note that "sounds right" given the notes it has heard before.
When we use a predictive model to do generation, what we're doing is having the network come up with a note, and then we feed that note back in as input. So in effect it's hallucinating an entire piece of music, one note at a time, making predictions upon its own predictions. It's not unlike how you and I dream, with one idea effortlessly flowing into the next.
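To make that feedback loop concrete, here's a toy sketch. I'm standing in for the trained network with a simple first-order transition model (one note predicts the next), which is far weaker than a real recurrent net but has the exact same generate-then-feed-back shape:

```python
import random
from collections import Counter, defaultdict

def train_transitions(pieces):
    """Count, for each note, which notes followed it in the training set."""
    counts = defaultdict(Counter)
    for piece in pieces:
        for prev, nxt in zip(piece, piece[1:]):
            counts[prev][nxt] += 1
    return counts

def generate(model, start, length, seed=0):
    """Hallucinate a piece one note at a time, feeding each prediction back in."""
    rng = random.Random(seed)
    piece = [start]
    for _ in range(length - 1):
        dist = model[piece[-1]]
        if not dist:          # dead end: nothing ever followed this note
            break
        notes = list(dist)
        weights = [dist[n] for n in notes]
        # the prediction becomes the next input -- that's the whole trick
        piece.append(rng.choices(notes, weights=weights)[0])
    return piece

corpus = [list("CDEFGEC"), list("CEGEC"), list("DEFED")]
model = train_transitions(corpus)
print("".join(generate(model, "C", 10)))
```

A real network does the same loop, just with a far richer notion of "given the notes it has heard before."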
Does that help?
---
Also slightly off-topic: I've been using the style transfer for purposes other than just Magic. Self-portraits, for instance. As you can see, I can take a selfie after shaving and make it look like I have artistic talent (I can assure you I do not).
But can a network be trained to know the difference between "happy", "sad", "fighty", "sexy" and "tense" music?
Could it be taught about harmonies and use that information to generate multiple single-instrument pieces that fit together to create proper orchestration?
I'm especially thinking of the applications for online gaming; imagine an MMO of some sort where the score is dynamically generated in real time, based on what's happening in your vicinity. You'd have a soundtrack tailor-made for your play experience.
Just finished going through a 64-page card-dump, and found some really interesting cards. For those paying attention at home, I've sorted them down below for future reference in the RNN Cube Draft.
Stronghold Rats [B1]
Creature - Rat
T: Add B to your mana pool.
B Sacrifice Stronghold Rats: place two -1/-1 counters on target creature.
1/1
Karina's Needle [1B]
Sorcery
Tap target permanent. That permanent's controller may choose to untap it, if they do, you gain life equal to that card's converted mana cost.
Wolf Specter [1B]
Wolf Specter cannot be blocked.
When Wolf Specter would deal combat damage to a player, prevent that damage, Wolf Specter instead deals 3 damage to target creature that player controls.
1/1
Waste of Mastery [B]
Sorcery
Return target creature to its owners hand.
Baloth of Corruption [2BB]
Creature - Elemental
Baloth of Corruption gets +1/+0 for each creature card in your graveyard.
4 / 1
Ghostly Sanctum [1BB]
Enchantment
Whenever a player taps a land for mana, it deals 1 damage to that player, and you gain 1 life. If it's a mountain, exile it and Ghostly Sanctum.
Blood Prowler [BBB]
Creature - Serpent
At the beginning of each player's upkeep, that player sacrifices a land or a creature.
2 / 2
Snarefang Familiar [1B]
Creature - Rat
Whenever Snarefang Familiar deals damage to a player, each player discards a card, then you may draw a card.
2 / 2
Wight of Inquisitor Island [1BB]
Legendary Enchantment Creature - Spirit
Fear, Protection from White
When Wight of Inquisitor Island deals combat damage to a player, that player sacrifices a creature.
Whenever a player cast a white spell, that player sacrifices a creature.
3 / 3
Shadow-Water Minion [1U]
Creature - Fish
Flash
T: Tap target creature, return Shadow-Water Minion to its owner's hand.
1/3
Mindshield Bandit [1UU]
Creature - Human Rogue
Mindshield Bandit cannot be blocked except by walls.
2: Target creature you control cannot be blocked until end of turn. That creature does not untap during your next untap step.
1 / 2
Fiery Wanderer [1U]
Creature - Elemental
Flash
At the beginning of your upkeep, place a +1/+1 counter on Fiery Wanderer.
1 / 1
Astral Shadow [1U]
Sorcery
Reveal the top three cards of target opponent's library and put them back in any order.
Willful Presence [XUU]
Sorcery
Tap X target creatures, target creature you control gets +X/+X until end of turn.
Manifest Secrets [2U]
Enchantment - Enchant Creature
Enchanted creature becomes a 4/4 red elemental shaman creature with flying.
Warrior's Folly [1UU]
Enchantment
2: Tap target creature.
Deep Shade [1G]
Creature - Beast
1GG: When Deep Shade attacks it gets +1/+0 until end of turn. Target other creature gets +2/+0 until end of turn.
2 / 1
Keeper of Turn [2GG]
Creature - Elf Druid
Whenever another non-token creature enters the battlefield under your control, if an opponent controls more creatures than you, you may place a 3/3 green beast creature token into play under your control.
1/3
Walking Spirit [1G]
Sorcery
Target land becomes a 2/2 beast creature, it is still a land. Untap that creature.
Savage Hisokan [3G]
Legendary Creature - Human Shaman
When Savage Hisokan becomes blocked, it gains double-strike until end of turn.
R: Target creature blocks this turn, if able.
4/3
Oozing Hate [2G]
Enchantment - Aura
Whenever enchanted creature deals combat damage to a player, its controller sacrifices that many creatures.
Reaving Essence [1GG]
You gain life equal to target creature's toughness, it deals damage equal to its toughness to target creature.
Wandering Growth [4GG]
Creature - Beast
Wandering Growth enters the battlefield with X +1/+1 counters on it where X is the number of creatures you control.
4 / 2
Blackroot Guardian [1GG]
Creature - Plant
Wither - (It deals damage to creatures in the form of -1/-1 counters.)
2/7
Avenging Ritual [GG]
Sorcery
As an additional cost to cast Avenging Ritual, Discard a card.
Destroy target creature.
Returned Watcher [2G]
Creature - Spirit Warrior
When Returned Watcher enters the battlefield, reveal the top card of your library. Target opponent loses life equal to that card's converted mana cost.
2 / 2
Tindersoul Wanderer [6G]
Creature - Treefolk
Tindersoul Wanderer's power and toughness are each equal to the number of permanents you control.
*/*
Rhash, Blood of Oaks [3G]
Legendary Creature - Elf Soldier
Whenever Rhash, Blood of Oaks attacks, place a +1/+1 counter on it.
Whenever Rhash, Blood of Oaks becomes blocked, you may remove a +1/+1 from it. If you do, put four 1/1 green saporling creature tokens onto the battlefield.
3 / 2
Slayerweaver [GG]
Creature - Elf Monk
Bushido 1
When Slayerweaver enters the battlefield, sacrifice it unless you return a creature card to your hand.
2 / 2
Skok the Countless [4GG]
Legendary Creature - Elf Shaman
Whenever another creature enters the battlefield under your control, each creature gains lifelink until end of turn.
2 / 4
Earthly Purge [3GG]
Destroy all creatures. You may search your library for up to two basic land cards and then put them into play tapped, then shuffle your library.
Pyrominator [3R]
Creature - Cleric
Each time you cast an instant or sorcery spell, target player loses 1 life for each charge counter on Pyrominator.
3R: Place a charge counter on Pyrominator.
2 / 2
Barashin Elemental [R]
Creature - Elemental
Flying
Whenever an opponent draws a card, discard a card.
When you have three or fewer cards in your hand, sacrifice Barashin Elemental.
4 / 2
Sage Assassin [R]
Creature - Goblin Rogue
At the beginning of your upkeep, target opponent discards a card.
1 / 1
Snaring Starfish [R]
Creature - Fish
T: Tap target creature, it remains tapped as long as Snaring Starfish is tapped.
1 / 1
Goblin Bladehunter [1R]
Creature - Goblin Warrior
When Goblin Bladehunter is blocked by a creature with power greater than Goblin Bladehunter's, destroy that creature, it can't be regenerated.
2 / 1
Crushing Challenge [1R]
Instant
Creatures your opponents control get -1/-1 until end of turn. Scry 2.
Hell Squid [3R]
Creature - Fish
Fear (This creature can't be blocked except by artifact creatures and/or creatures that share a color with it.)
When Hell Squid attacks, defending player sacrifices an Island.
3 / 4
Emberblood Ritual [2RR]
Enchantment
Whenever you cast a red spell, creatures you control get +2/+2 until end of turn.
Archshot [1R]
Instant
Archshot deals 2 damage to each creature and each player.
Spinstrike [RR]
Instant
Until end of turn, whenever a creature you control becomes blocked by two or more creatures this turn, destroy those creatures. If you do, Draw a card.
Brass Flamesinger [2R]
Creature - Elemental Warrior
First strike
When Brass Flamesinger attacks or blocks, target creature gets +1/+0 and first strike until end of turn.
2 / 1
Charging Border Gang [2RR]
Creature - Human Bandit
Haste
Charging Border Gang's power and toughness are equal to the number of creatures opponents control.
*/*
Overhead Archers [1R]
Creature - Goblin Soldier
T: Overhead Archers deals 1 damage to each attacking and blocking creature.
1 / 2
Banner Runners [1RR]
Creature - Goblin Warrior
Banner Runners can't attack or block.
Whenever a creature you control becomes blocked it gets +1/+2 until end of turn.
2 / 2
Ice Djinn Mastery [5R]
Enchantment - Aura
Whenever enchanted creature deals damage to a player, prevent that damage. Its controller puts an X/X red elemental creature token onto the battlefield, where X is equal to the damage prevented this way.
Deadshot Ray [XRR]
Instant
Deadshot Ray deals X damage to target blocking creature and that creature's controller.
Catapult Plague [RR]
Sorcery
Target creature gets -5/-5 until end of turn.
Angelic Warding [3W]
Enchantment- Aura
Totem armor (If enchanted creature would be destroyed, instead remove all damage from it and destroy this Aura.)
Enchanted creature has first-strike, flying.
Fury of the Harvest [2W]
Enchantment - Enchant Land
At the beginning of your upkeep, place a 0/1 colorless plant creature token into play under your control, it has "Sacrifice this creature: tap target creature an opponent controls."
Pest Shaman [W]
Creature - Human Shaman
W: Look at the top two cards of your library, then put them back in any order.
G: Exile the top card of your library, then place a 1/1 green insect creature token into play under your control.
1 / 1
Torgan Dawnwalker [4W]
Creature - Human Warrior
Double Strike
Torgan Dawnwalker enters the battlefield with a -1/-1 counter for each other creature you control.
5 / 5
Pulse of the Damned [2WW]
Instant
Until end of turn, whenever a player taps a non-white permanent, destroy it.
Sand Spinner [2WW]
Creature - Elemental
Flying, Lifelink
Whenever Sand Spinner attacks or blocks, you may flip a coin. If you win the flip, you may put target artifact, creature, or land on the bottom of its owner's library.
2/3
Heroic Betrayal [2W]
Instant
Attacking creatures get -3/-0 until end of turn and do not untap on their controller's next untap step.
Honorguard of Kauki [4W]
Legendary Creature - Human Samurai
Bushido 2
Whenever Honorguard of Kauki blocks, you may have it deal 2 damage to each creature blocking it.
4 / 5
Angelic Burden [3WW]
Sorcery
Target player puts the top ten cards of his or her library into his or her graveyard. Then each player reveals a card at random from his or her hand and may choose to place them onto the battlefield.
Set Earth [4W]
Enchantment
As long as seven or more cards are in your graveyard, each land you control is a 2/2 creature, it is still a land.
Tap an untapped creature you control: add G, B, or W to your mana pool.
Attendant Tutor [1W]
Enchantment
Whenever a player casts a spell, you may put a Lore counter on Attendant Tutor.
Remove X Lore counters from Attendant Tutor: Sacrifice Attendant Tutor, you may search your library for a card with converted mana cost X or less and put it into your hand, then shuffle your library.
Soulflame Chanter [2W]
Creature - Human Shaman
T: @ deals damage equal to its power to target creature, then put that many 1/1 white spirit creature tokens with haste onto the battlefield
2/2
Sandsoul Mentor [1WW]
Creature - Human Soldier
Whenever a land enters the battlefield under your control, you may search your library for a creature card with converted mana cost 2 or less and put it into your hand, then shuffle your library.
2 / 2
Call of the Wood [W]
Sorcery
Search your library for a land card, reveal it, put it into your hand, then shuffle your library.
Winged Ending [1W]
Instant
Creatures opponents control lose flying until end of turn. Prevent all combat damage that would be dealt by creatures your opponents control this turn.
Shimmering Bladecraft [1WW]
Sorcery
Place a +1/+1 counter on each creature you control.
Worm Scrappers [1W]
Creature - Human
T: Add G to your mana pool for each creature card in your graveyard.
1/1
Mindshiver Hunter [3W]
Creature - Human Samurai
Hexproof
Heroic ~ whenever you cast a spell that targets Mindshiver Hunter, it deals damage equal to its power to each other blue creature are on the battlefield.
2 / 2
Shadow Hammer [2]
Artifact - Equipment
Equip [2]
Whenever an opponent casts a black spell, that player discards a card.
Whenever you cast a black spell, equipped creature gets +1/+0 and lifelink until end of turn.
Dismal Underling [2]
Artifact Creature - Golem
Dismal Underling does not untap during your untap step.
T: Choose an artifact, creature, or land you control, then choose an artifact, creature, or land target opponent controls, then exchange control of those permanents.
0/4
Stronghold Stranger [3]
Artifact Creature - Cat
As long as a player has 7 or 3 cards in their hand, Stronghold Stranger is indestructible and has deathtouch.
2 / 1
Augur Servant [3]
Artifact Creature - Golem
Whenever you draw a card, each player may put a 1/1 black zombie creature token onto the battlefield.
2/2
Nillabostor [3]
Artifact Creature - Basilisk
Whenever a creature is dealt damage by Nillabostor, it deals that much damage to that creature's controller.
1 / 3
Wand of Affliction [4]
Artifact
[6] T, Sacrifice Wand of Affliction: destroy target nonblack creature, it can't be regenerated. Wand of Affliction deals damage equal to its power to target creature or player.
Toran's Musicbox [2]
Artifact
[3]: Put the top card of your library into your graveyard, then return a card at random from your graveyard to your hand.
Flamewood Cavalier [1RG]
Creature - Elf Warrior
Whenever Flamewood Cavalier becomes blocked, it deals 2 damage to each creature blocking it.
2/2
Season of Leaches [1WB]
Enchantment
Whenever a creature attacks you, you may put a Leach counter on it. Whenever a land enters the battlefield under you control, you may put a -1/-1 counter on each creature with a Leach counter.
Chaotic Bloodshed [1WB]
Instant
Attacking creatures your opponents control each deal damage equal to their power to target creature.
Earnest Blessing [XRW]
Sorcery
Destroy X target creatures.
Sunikar, Dreaded Soul [5RB]
Legendary Creature - Vampire Dragon
Flying, deathtouch
Whenever Sunikar, Dreaded Soul deals combat damage, you may search your library for a permanent and exile it. You may put that card onto the battlefield at the beginning of the next end step.
6 / 5
Bloodlost Soldier [RW]
Creature - Cat Rebel
Whenever Bloodlost Soldier attacks, it gets +2/+4 until end of turn.
1 / 1
Honorable Mentions:
Tuikle's Briber [2R]
Creature - Ass
Sacrifice Tuikle's Briber: deal 1 damage to target creature or player.
2 / 2
Spiked Warbread [3]
Artifact
When Spiked Warbread enters the battlefield, creatures your opponents control block this turn if able.
But can a network be trained to know the difference between "happy", "sad", "fighty", "sexy" and "tense" music?
Well, an unsupervised approach, like what I just suggested to you, would definitely arrive at some nebulous concept of mood, insofar as a tense song should never suddenly become a happy one, but a supervised approach might work best here. That is, we have tagged pieces of music and we develop a network that deconstructs that music to predict the tag. That then gives us a way of critiquing or guiding the work of a generative model, sort of like what we're doing with the style transfer algorithm.
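The supervised shape of the idea looks something like this toy sketch. I'm using two crude hand-picked features (average pitch and average interval size) and a nearest-centroid classifier; a real system would learn its features from the audio or score, but the tag-prediction structure is the same. All names and the tiny "dataset" here are invented for illustration.

```python
def features(piece):
    """Reduce a note sequence (MIDI-style pitch numbers) to two crude features."""
    intervals = [b - a for a, b in zip(piece, piece[1:])]
    return (sum(piece) / len(piece),                           # average pitch
            sum(abs(i) for i in intervals) / max(len(intervals), 1))

def fit_centroids(tagged):
    """For each mood tag, average the features of its training pieces."""
    centroids = {}
    for tag, pieces in tagged.items():
        feats = [features(p) for p in pieces]
        centroids[tag] = tuple(sum(f[i] for f in feats) / len(feats)
                               for i in range(2))
    return centroids

def classify(centroids, piece):
    """Tag a new piece by its nearest mood centroid."""
    f = features(piece)
    return min(centroids,
               key=lambda t: sum((a - b) ** 2 for a, b in zip(f, centroids[t])))

tagged = {"happy": [[72, 76, 79, 76, 72], [74, 78, 81, 78]],
          "sad":   [[48, 47, 45, 43], [50, 48, 47, 45, 43]]}
model = fit_centroids(tagged)
print(classify(model, [71, 75, 78, 75]))  # prints "happy"
```

The useful part for generation is the `classify` function: it's exactly the kind of critic you can point at a generative model's output, like the style network critiques the generated image.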
A search on Google Scholar reveals that there were quite a number of papers on the topic of music classification from the 1990s and 2000s, and a paper from 2011, "A Survey of Audio-Based Music Classification and Annotation" by Fu et al., has been cited 150 times since its publication. Audio-based music information retrieval is useful for any music-streaming service, so I'd imagine that's driving a lot of the research.
Google even released a (probably unenforceable) patent on the idea itself entitled "Artificial Neural Network Based System for Classification of the Emotional Content of Digital Music" (link here: https://www.google.com/patents/US20140058735 ) and that patent contains a fairly detailed design of how such a system would work.
Could it be taught about harmonies and use that information to generate multiple single-instrument pieces that fit together to create proper orchestration?
I'd imagine so, yes. If we generate the single-instrument pieces independently, then a trained system could bend and reshape them to make them harmonious. Having a system that generates each of the single-instrument pieces with the goal of harmony in mind would be trickier, but I'm sure that's also do-able.
I'm especially thinking of the applications for online gaming; imagine an MMO of some sort where the score is dynamically generated in real time, based on what's happening in your vicinity. You'd have a soundtrack tailor-made for your play experience.
There was one short paper that came out in 2014 entitled "Procedural Generation of Music-guided Weapons" by Cachia et al. which presents a proof of concept implementation of a neural-network-directed music generation for a game. It cites a few recent papers on the topic; I'll let you take a look at the names:
David Plans and Davide Morelli. Experience-driven procedural music generation for games. IEEE Transactions on Computational Intelligence and AI in Games, 4(3):192-198, 2012.
Annika Jordan, Dimitri Scheftelowitsch, Jan Lahni, Jannic Hartwecker, Matthias Kuchem, Mirko Walter-Huber, Nils Vortmeier, T. Delbrugger, U. Guler, Igor Vatolkin, et al. BeatTheBeat: music-based procedural content generation in a mobile game. In Computational Intelligence and Games (CIG), 2012 IEEE Conference on, pages 320-327. IEEE, 2012.
Nils Iver Holtar, Mark J. Nelson, and Julian Togelius. Audioverdrive: Exploring bidirectional communication between music and gameplay. In Proceedings of the 2013 International Computer Music Conference, 2013.
What would probably work best is a kind of blend between scripted, procedural generation and a neural-network-based generative model. A hand-crafted system would track and report key actions that take place, creating a reduced, real-time scene representation that could then be interpreted and acted upon by the generative model. The generative model could draw upon a pre-existing repertoire of musical themes and motifs, and use those as a backdrop for its own creations that are synchronized with the actions in the game.
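The interface between the two halves might look something like this sketch. Everything here is hypothetical and invented for illustration: the scripted layer folds game events into a small scene vector, and a stub stands in for the generative model, choosing which pre-existing motif to elaborate on.

```python
# Hypothetical glue between a scripted game layer and a generative model.
SCENE_KEYS = ("combat", "danger", "romance", "tempo")

def reduce_scene(events):
    """Scripted layer: fold discrete game events into a reduced scene vector."""
    scene = dict.fromkeys(SCENE_KEYS, 0.0)
    for kind, weight in events:
        if kind in scene:
            scene[kind] = min(1.0, scene[kind] + weight)
    return scene

def pick_motif(scene):
    """Stand-in for the generative model: pick a backdrop motif to build on."""
    if scene["combat"] > 0.5:
        return "battle-theme" if scene["danger"] > 0.5 else "skirmish-theme"
    if scene["romance"] > 0.5:
        return "love-theme"
    return "ambient-theme"

events = [("combat", 0.4), ("combat", 0.3), ("danger", 0.7)]
print(pick_motif(reduce_scene(events)))  # prints "battle-theme"
```

In the real thing, `pick_motif` would be the trained model and the motif would be a seed it improvises over in sync with the action, but the reduced scene vector is the key design choice: the network never has to see raw game state.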
Give me three months and adequate funding and it could be done, lol.
Just finished going through a 64-page card-dump, and found some really interesting cards. For those paying attention at home, I've sorted them down below for future reference in the RNN Cube Draft.
LASture, these cards are awesome! Thank you for sharing, and for your dedication.
Two that caught my eye:
Pulse of the Damned [2WW]
Instant
Until end of turn, whenever a player taps a non-white permanent, destroy it.
It eliminates the impure whenever they try to take action against you. I'd probably add "non-land" to that, but the idea is fun all the same.
Slayerweaver [GG]
Creature - Elf Monk
Bushido 1
When Slayerweaver enters the battlefield, sacrifice it unless you return a creature card to your hand.
2 / 2
Correct. Right now we're actually using just one caret for a colorless mana symbol.
This being said, recurrent nets are capable of counting, and as far as I can tell, the network interprets a lettered mana symbol as ~= 0.5 CMC. Some indirect evidence of this is that the network produces creature cards with a power to CMC ratio that approximates that of real cards.
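As a sanity check of that "half a symbol per letter" reading, here's a tiny parser for the encoded format, assuming the conventions visible in the dumps: '^' is one generic/colorless mana, colored symbols are doubled letters ('GG' = one green symbol), and power/toughness are unary.

```python
# Hedged sketch: CMC and power from the encoded card fields seen in the dumps.
def cmc(mana: str) -> float:
    """CMC of a mana string like '^GG^^^^GG' (Steelflame Wurm's 5GG)."""
    carets = mana.count('^')                    # generic mana, one per caret
    letters = sum(c.isalpha() for c in mana)    # colored symbols come doubled
    return carets + letters / 2                 # so each letter is ~0.5 CMC

def power(pt: str) -> int:
    """Power from a unary P/T field like '&^^/&^' (-> 2)."""
    return pt.split('/')[0].count('^')

assert cmc('^GG^^^^GG') == 7   # 5 generic + GG
assert power('&^^/&^') == 2
```

With this, one could dump generated creatures and real ones through the same functions and compare the power-to-CMC distributions directly.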
Ooh, clever idea, I like! A lot, actually. I hadn't thought of that.
LSTM networks have a habit of getting distracted by shiny objects and forgetting whatever it was they were thinking. We could use that to our advantage. Injecting noise during training could encourage the network to write things down because the noising would put a strain on its short-term memory. It wouldn't be difficult to implement either, it'd just be some extra forward passes on the network thrown in randomly.
Here's the problem though, and it's more of a technical issue than anything else: if the network sees the noise as being real or relevant, then it would be tempted to use the stack to record information about it... but I think I have a fix for that. So we feed it a character at time T-1, and then we have the current state...
<LSTM cell contents at time T, stack contents at time T>
We feed the network noise and get
<LSTM cell contents at time T + noise, stack contents at time T + noise>
but then we roll back the stack contents, so at time T+1 it starts with
<LSTM cell contents at time T + noise, stack contents at time T>
The stack now provides a reliable frame of reference that isn't subject to the hallucinations. It's possible that the network would learn to trust the data structure(s) and develop a policy for using it correctly. I suppose it all comes down to how much noise we'd need to add and how frequently we'd need to add it. Too much and we'd ruin everything, and too little would have no effect at all.
There are several different versions of this plan that I could see working. I'd have to think more on it, but it's worth looking into.
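Here's the rollback idea in miniature. `TinyStackRNN` is a toy stand-in I made up for the stack-augmented network (the real state updates are obviously nothing like this); the only point is the order of operations: snapshot the stack, run the noise passes (which perturb the cells), then restore the stack before the next real character.

```python
import copy
import random

class TinyStackRNN:
    """Toy stand-in: 'cells' is short-term state, 'stack' is external memory."""
    def __init__(self, alphabet):
        self.alphabet = alphabet
        self.cells = 0.0
        self.stack = []
    def forward(self, ch):
        self.cells += ord(ch) * 0.01   # fake cell update
        self.stack.append(ch)          # fake stack write

def noisy_step(net, char, rng, noise_prob=0.05, noise_len=3):
    net.forward(char)                        # real input: <cells T, stack T>
    if rng.random() < noise_prob:
        saved = copy.deepcopy(net.stack)     # snapshot stack at time T
        for _ in range(noise_len):           # "dreaming": extra noise passes
            net.forward(rng.choice(net.alphabet))
        net.stack = saved                    # roll back: <cells T+noise, stack T>

rng = random.Random(0)
net = TinyStackRNN('ab')
for ch in 'aba':
    noisy_step(net, ch, rng, noise_prob=1.0)
assert net.stack == ['a', 'b', 'a']   # noise never survives on the stack
```

The cells end up polluted by the noise while the stack only ever records the real sequence, which is exactly the asymmetry that should make the stack the trustworthy place to write things down.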
A good idea! Earlier I had tried to do a fixed dropout rate, but I was having issues getting good results. I didn't think to decay the dropout over time. That's definitely a possibility.
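For reference, the suggested schedule is trivial to implement: start at 50% and ease off by 2 points per epoch until the target rate is reached.

```python
# Decaying-dropout schedule: 50% at epoch 1, down 2%/epoch, floor at 10%.
def dropout_at(epoch, start=0.50, target=0.10, decay=0.02):
    return max(target, start - decay * (epoch - 1))

assert dropout_at(1) == 0.50
assert dropout_at(21) == 0.10   # reaches the 10% target on epoch 21
assert dropout_at(50) == 0.10   # and stays there
```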
---
By the way, I did a small test last night and found that GRU networks, which are designed to be similar to LSTM networks but are simpler in terms of their architecture, can give comparable results when it comes to generative tasks like ours. Just an interesting observation.
Also, I'm running another test with a small amount of dropout and multiple small stacks to get some more numbers. The dropout should help prevent the kind of overfitting that we saw last time. I'll check back in on that later today.
EDIT: Yes, the text-clone/noise problem went away with the dropout this time, so that's good. Card names go from being things like "Eschreaching Barr" to "Cathedral of the World Tree" and "Deadly Prayer", and the text is consistently reasonable, which implies that it's actually learning and not just mimicking. On the other hand, it doesn't want to use the stacks. But I'm going to rerun things now that I know I have a stable baseline with some tweaks and see what I can get. By the way, a few results:
Elvish Paladin
WW
Creature - Human Knight (Uncommon)
First strike
When Elvish Paladin enters the battlefield, you may search your library for an Elf card, reveal it, then shuffle your library and put that card on top of it.
2/2
#It's not an elf. But it is an "elf-ish" card, lol.
Fires of Athrey
3
Artifact (Mythic Rare)
1,T, put a blood counter on Fires of Athrey: Draw a card, then you win the game.
Tidal Caller
2W
Creature - Kor Soldier (Rare)
Landfall - Whenever a land enters the battlefield under your control, you may have Ally creatures you control gain "T: This creature fights another target creature." until end of turn.
2/2
#How very fitting for Zendikar. A Kor creature with a landfall ability that cares about allies. Pity it's not an ally itself.
Temporal in a Tide
1RR
Enchantment (Rare)
At the beginning of your upkeep, target opponent reveals his or her hand. You choose a nonland card from it. You may play that card this turn.
#Has this effect been done before in this way?
My LinkedIn profile... thing (I have one of those now!).
My research team's webpage.
The mtg-rnn repo and the mtg-encode repo.
Thank you for the update, and for sharing your code. I'll definitely take a look it and I'll let you know what I think!
EDIT: It works! Now to tinker with it.
EDIT(2): It's looking very good so far! I will warn that if the input is too large, you can easily exhaust the capacity of the global stack, though I suspect there's a way of increasing the limit. I was doing some stress tests to see how much it would take to break your code, lol.
I'll need to do more evaluations on corner cases (and there are many corner cases), but the results I am seeing look very solid, so that's great.
Quick question: do you or do you plan to break up text like "nonblack"? Instances of possessive markers like "'s"? Or do you leave that sort of stuff untouched? I was wondering what sorts of normalization you plan on doing to the text during pre-processing. I say this because it's a common trick in NLP tasks when you're working with word-level representations to break up text like that, because it helps to sidestep morphology-blindness.
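For what it's worth, the kind of normalization I mean can be a couple of regex passes. This is just an illustrative sketch (the exact prefix list and edge cases, like "none", would need care in a real pipeline):

```python
import re

def normalize(text: str) -> str:
    """Split off the 'non' prefix and the possessive clitic 's."""
    text = re.sub(r"\bnon(\w)", r"non \1", text)   # nonblack -> non black
    text = re.sub(r"'s\b", " 's", text)            # controller's -> controller 's
    return text

assert normalize("destroy target nonblack creature") == \
       "destroy target non black creature"
assert normalize("its controller's hand") == "its controller 's hand"
```

That way a word-level model sees "non" + "black" and can generalize across nonblack/nonred/nongreen instead of treating each as an opaque token.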
By the way, if graphviz is dying on you because the graphs are too large, consider rendering the graphs in SVG format.
EDIT(3): I was also doing some baseline tests to get a feel for how things functioned. For example,
example_string([a]).
example_string([a,+,a]).
example_string([a,-,a]).
example_string([a,+,a,+,a]).
example_string([a,+,a,-,a]).
example_string([a,-,a,+,a]).
example_string([a,-,a,-,a]).
example_string([a,+,a,+,a,+,a]).
example_string([a,+,a,+,a,-,a]).
example_string([a,+,a,-,a,-,a]).
example_string([a,-,a,-,a,-,a]).
example_string([a,-,a,-,a,+,a]).
example_string([a,-,a,+,a,+,a]).
example_string([a,+,a,+,a,+,a,+,a]).
gives me...
(+)-->[+], a.
(-)-->[-], a.
a-->[a], (+).
a-->[a], (-).
a-->[a].
ability-->a.
The resulting grammar can handle strings of arbitrary length and correctly generates the language. Excellent!
At epoch 10, the network only uses stacks #2 and #3. Wiping the contents of stacks #1 and #4 at every timestep has no effect, which means that while it writes to these stacks, it never reads from them. By epoch 50, the network has decided that it only needs one stack, stack #2. From then on, the network uses stack #2 alone (I ran tests all the way out to epoch 120). Here's an example of how the output changes depending on which stack I enable or disable (given the same random seed):
All stacks enabled:
|escaped nell||enchantment|||O||{^BB}|sacrifice @: return target black card from your graveyard to your hand.|
Stack #1 Disabled:
|escaped nell||enchantment|||O||{^BB}|sacrifice @: return target black card from your graveyard to your hand.|
Stack #2 Disabled:
|escaped nell||enchantment||aura|O||{RR^RR}|enchant creature\enchanted creature gets +&^^^/+& as long as it's attacking. otherwise, it gets -&^^/-&^.|
Stack #3 Disabled:
|escaped nell||enchantment|||O||{^BB}|sacrifice @: return target black card from your graveyard to your hand.|
Stack #4 Disabled:
|escaped nell||enchantment|||O||{^BB}|sacrifice @: return target black card from your graveyard to your hand.|
Notice that the name, the type, and the rarity of the card stay the same, so stack #2 is used for some parts of the card but not for others. That implies that the network learns a policy for when and when not to use the stack. That's a good sign; data is only written to and read from the stack when it's relevant to some specific purpose.
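The wipe test itself can be sketched in miniature. This toy "generator" is made up purely to show the logic: it reads from stack 0 and only writes to stack 1, so ablating the write-only stack leaves the output unchanged while ablating the read stack corrupts it, which is exactly the signature used above to tell the two apart.

```python
def generate(clear, seed_text='ab'):
    """Toy generator with two external stacks; `clear` lists the stacks
    wiped at every timestep (the ablation)."""
    stacks = [[], []]
    out = []
    for ch in seed_text:
        stacks[0].append(ch)
        stacks[1].append(ch)                      # written but never read
        for i in clear:
            stacks[i].clear()                     # ablation: wipe this stack
        out.append(stacks[0][-1] if stacks[0] else '?')  # read from stack 0
    return ''.join(out)

# Wiping the write-only stack changes nothing; wiping the read stack does.
assert generate(clear=[1]) == generate(clear=[])
assert generate(clear=[0]) != generate(clear=[])
```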
Now, like I said, the loss is low, but the network still loses track of what it's doing from time to time. The good news is that it recovers better from its mistakes. For example, if I prime the start of the body text with kicker, I think I get more "intervening if" clauses. It doesn't always happen, but I'm getting the impression that it's working more often than it did before. It's not always "if kicked", though.
So I can get a card like...
|steelflame wurm||creature||wurm|A|&^^^^^^^/&^^^^^^^|{^GG^^^^GG}|kicker {RR} \when @ enters the battlefield, if it was kicked, destroy target noncreature artifact.|
But I can also get output like...
|war~trace force||creature||human|O|&^/&^|{GG}|kicker {^^GG} \flying\when @ enters the battlefield, if its prowl cost was paid, draw a card.|
And sometimes I get stuff like...
|serenity||enchantment|||A||{^WW}|kicker {WW} \when you cycle @, you may discard your hand.|
What's most impressive though is when I get results like this:
|grim disciple||creature||zombie wizard|N|&^^/&^|{^BB^}|kicker {^BB^} and/or {BB} \when @ enters the battlefield, if it was kicked with its {WW} kicker, you gain &^^^ life.\when @ enters the battlefield, if it was kicked with its {^^UU} kicker, target player discards two cards.|
I have never, ever seen the network do this before. Notice that the kicker costs are completely wrong. But it remembered that there were two of them!
So that leads me to speculate that the stack played a role in that. I think that a special "marker" is being pushed onto the stack when the network wants to remember that it has a clause that it needs to add to the card, and it relies on its short-term memory to recall exactly what kind of clause it wants. When it wants to add in the clause, it removes the marker from the stack. I'd have to dig deeper to prove that, but I'm fairly certain that that is what's happening. I think that cycling, prowl, monstrosity, kicker, et cetera all trigger the creation of the same stack "marker", which is why we're priming with kicker but getting stuff like "if its prowl cost was paid".
Now, the question remains: why just one stack? Why not use all four? Early on, the network decided that having four stacks was more trouble than it was worth, so it shut off the connections so that it only had to worry about one stack. I think we could get better results from the network if it had better command of the data structures. I just need to figure out how to make that happen.
I'm going to generate a dump of cards so I can get a feel for the quality of the output. I'll provide y'all with a link whenever that finishes.
TL;DR: Progress! I knew it could work. Now we just need to find out how to make it work even better.
Naw, you're fine! It's a work in progress, after all.
By the way, a word of caution about the learning-repetition part: try the set of all sequences of the letter A of odd length. I found that I had trouble getting the right grammar from a small amount of input; I ended up with a grammar for {A^n : n >= 1} instead. For regular and context-free languages where there is an alternation of symbols, I had a much easier time getting the right grammar.
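To see why that language is a harder target: the correct grammar has to carry a parity bit (something like S -> a | a a S), whereas {A^n : n >= 1} collapses the two states into one. A quick check of the intended language against that two-step derivation:

```python
# Target language: all odd-length strings over {'a'}.
def in_language(s: str) -> bool:
    return set(s) <= {'a'} and len(s) % 2 == 1

def derives(n: int) -> bool:
    """Can the grammar S -> 'a' | 'a' 'a' S derive a^n? (n odd, n >= 1)"""
    return n >= 1 and (n == 1 or derives(n - 2))

# The grammar and the language agree on every length up to 19.
assert all(in_language('a' * n) == derives(n) for n in range(1, 20))
```

Nothing in a short sample of odd-length strings *locally* rules out the one-state grammar, which is presumably why the induction kept landing on {A^n : n >= 1}.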
Steelflame Wurm
5GG
Creature - Wurm (Rare)
Kicker R
When Steelflame Wurm enters the battlefield, if it was kicked, destroy target noncreature artifact.
7/7
This is an oddly consistent and satisfying design. Even the name fits perfectly with the effect. By the way, the kicker cost/effect is a Crush.
Low: https://drive.google.com/file/d/0BxF7G2b8kigCNzZqdDVCbXZEcnc/view?usp=sharing
High: https://drive.google.com/file/d/0BxF7G2b8kigCVWdYQkxYRS1HeVU/view?usp=sharing
At moderate temperatures, I am seeing some near-clones. For example,
Lim-Dul's High Guard
1BB
Creature - Skeleton (Common)
First strike
When Lim-Dul's High Guard enters the battlefield, you may reveal X black cards in your hand. If you do, target creature gets -X/-X until end of turn.
Madness 1B
2/1
This is a merger of Nightshade Assassin and Lim-Dul's High Guard. So while the dropout of 15% discouraged it from copying everything word for word, it does take liberally from cards now and then. I could probably force it to give more original results if I turned up the dropout slightly more. Mind you, at higher temperatures, this happens less often anyway.
And of course there's still... "creative" cards like
Ixh of the New Dawn
3W
Enchantment (Rare)
All lands have shadow as long as you control three or more artifacts.
And the high temperature dump has some really weird cards like
Sib1 Manolith
3G
Creature - Elemental (Rare)
2: Sib1 Manolith loses this ability and becomes an aura enchantment with enchant creature. Attach it to target creature. You may pay U to end this effect.
You control enchanted creature.
3/3
Others in the high temperature dump are unusual but not totally implausible, such as
Drullent Healer
3B
Creature - Zombie Treefolk (Uncommon)
Defender
2B, T: Exile all creatures with power 1 or less.
Caravans travelling through Kaldheim's lowlands know to avoid the dark groves untouched by snow. To rest upon the warm soil is to anger the ancient guardians.
1/5
Desert that Artificer
2W
Creature - Human Wizard (Uncommon)
T: Tap target creature. It doesn't untap during its controller's next untap step.
"I have slowed him down, now is your chance to run! Leave him! You deserve a better man!"
2/1
Necrogen Specter
2B
Creature - Specter (Uncommon)
Flying
Threshold - As long as seven or more cards are in your graveyard, Necrogen Specter gets +3/+3 and has "when this creature dies, you lose 4 life."
2/2
Anyway, more to be done, but I think this is a step in the right direction. Some more tweaking of the parameters should take care of the clone issues, and I'll be thinking about ways to get more use out of the stacks.
EDIT: I think the clone issue started showing up sometime after epoch 50 or 60, because the test loss starts trending upwards (with some ups and downs). I think the training loss would be healthier if it were slightly higher, but I can fix that.
Ion Sliver 4W
Creature - Sliver (Uncommon)
As @ enters the battlefield, choose a color.
Whenever a source of the chosen color deals damage to you, you may put that many % counters on @.
At the beginning of your upkeep, if @ has ten or more % counters on it, you win the game.
2/2
###Coherent, and a staggeringly powerful ability too. Drop this against a mono-red deck and watch them weep.
Archfennet's Ring 1
Artifact (Uncommon)
As @ enters the battlefield, choose a color.
T: add one mana of the chosen color to your mana pool.
###A very flexible mana rock.
Shower if the Minds 3UU
Enchantment (Mythic Rare)
As @ enters the battlefield, choose a color.
Whenever a player casts a spell of the chosen color, you draw a card.
###This effect came up on quite a few 'choose a colour' cards, exactly duplicated. Another often-found one was 'Whenever a player casts a spell of the chosen colour, you may pay 2. If you do, gain 2 life.'
Marantic Clusher 5
Artifact (Rare)
As @ enters the battlefield, choose a color.
Creatures of the chosen color get +1/+1.
Whenever a basic land is tapped for mana of the chosen color, sacrifice @.
###Fascinating design. I really like how it checks only basic lands for this, and it clearly 'knows' that basic lands produce coloured mana.
Faerie Mask
2
Artifact - Equipment (Uncommon)
Equip 3
Equipped creature gets +1/+0 and has flying.
#Overcosted, but simple and neat.
Glint Elemental
3GG
Creature - Elemental (Uncommon)
At the beginning of your upkeep, sacrifice a creature.
Sacrifice a creature: destroy target artifact.
4/4
#Oozes flavor. It's an elemental that shines with things it smashed.
Guardian of the Gates
4B
Creature - Demon (Rare)
Devour 1
Flying
When Guardian of the Gates enters the battlefield, draw a card for each creature it devoured.
5/5
#Lacked the Devour instance, but it's a pretty neat design nonetheless.
Sphinx of Lost Truths
3UU
Creature - Sphinx (Rare)
Kicker 1U
Flying
When Sphinx of Lost Truths enters the battlefield, if it was kicked, return all other creatures to their owners' hands and you skip your next turn.
3/5
#Name, costs, P/T are cloned, but the kicker ability is entirely new and... interesting.
Mana Scioners
G
Creature - Human Soldier (Common)
Mana Scioners enters the battlefield with two +1/+1 counters on it.
XX, T: remove all +1/+1 counters from Mana Scioners and put X +1/+1 counters on it.
0/0
#That's actually pretty clever. Wouldn't ever appear at common, though.
Automata/formal language/computational complexity theories are my bread and butter. I have several textbooks on the subject that I like to lecture from that are filled with good examples if you're interested.
Talcoi, if it's a Greek second-declension noun. But I'm more of a Latin man myself, so Talcoses or Talces will do. lol
And no problem! Feel free to hit me up if you need me to take a look at anything or check any proofs, that sort of thing.
Naw! This is just part of the learning experience, lol.
My first guess is that I'd imagine you'd get nested trie-like structures, but who knows?
---
I promised maplesmall that I'd upload some MSE versions of card dumps so they'd be more easily searched. And I will... but wow, I'm tired, lol. Must be all the constant deadlines, haha. I'll look into that tomorrow morning.
Brain Freeze
That stuff is not super impressive, being actual cards, just hyped up a bit more.
Right. To reiterate my earlier word of caution, this last network ended up overfitting somewhat. It rarely copies any card completely, but it's definitely borrowing heavily now and then. For instance, this card...
Arcanis the Omnipotent
3UUU
Legendary Creature - Wizard (Rare)
T: Draw a card. Activate this ability only during your turn, before attackers are declared.
3/4
is a nearly complete clone of Arcanis the Omnipotent. Now, the mistake it made actually demonstrates that the network understands the text much better than it lets on. It remembered that Arcanis had an activated ability that tapped to draw cards, but was fuzzy on the details, so it substituted in different card-drawing text. You see the same thing happening with other near-clones like Pyromancer's Gauntlet, where "a red instant or sorcery spell you control or a red planeswalker you control" gets substituted with "a red source".
One way of filtering out near clones would be to compute the vector representations for the card texts and to compare their distances to the real cards. However, I expect that the next network I train won't have this problem.
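A hedged sketch of that filter: embed each card text as a vector and flag generated cards that sit too close to a real one. Bag-of-words counting vectors here are just a stand-in for whatever representation we'd actually use; the threshold is likewise arbitrary.

```python
from collections import Counter
from math import sqrt

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def near_clones(card: str, real_cards, threshold=0.9):
    """Return the real cards this generated text is suspiciously close to."""
    v = Counter(card.lower().split())
    return [r for r in real_cards
            if cosine(v, Counter(r.lower().split())) >= threshold]

real = ["t: draw a card.", "flying, first strike"]
assert near_clones("T: draw a card.", real) == ["t: draw a card."]
```

Anything that comes back non-empty gets held for manual review rather than dumped.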
A training loss of 0.03 is pretty much the floor of the loss (in other words, the ceiling on the accuracy). It's funny to say this, because we've been waging a war to get the losses down, but we actually don't want to hit the floor, because near perfect accuracy implies that the network is doing memorization rather than real rule learning. Now, it's not just doing pure memorization if it can do correct substitutions because that takes real knowledge, but it does suggest that this last network is a little too conservative.
That's not to say that this latest network isn't a step above the previous ones. It most definitely is. But now our concern is about getting test loss down. Fortunately, we have lots of ways to go about that. The ideal network, I think, will have a higher training loss than this one, at least in some areas. If you want creativity, you have to allow for mistakes. Fortunately we have ways of doing syntactic and semantic filtering/correction now, so I'm not too worried about that.
---
As promised, here are links to the mse-set versions of the latest dumps.
Low temp mse-set: https://drive.google.com/file/d/0BxF7G2b8kigCTG14UVRZTmdsM1k/view?usp=sharing
High temp mse-set: https://drive.google.com/file/d/0BxF7G2b8kigCbTFMSkNVVFhiQ1E/view?usp=sharing
I've also attached a render of Azure Mage as done in the style of a child's crayon drawing, and I'm waiting for a Rebecca Guay render of Liliana to come off the presses before I leave for work.
EDIT: Hmm.. Liliana will need some more tweaking, but we'll get there.
EDIT(2): By the way, on the side, I've been looking into image generation techniques. There was a paper that came out a few months ago on language-driven image generation, and another last month on using spatial LSTM networks to generate images. Extracting a text summary of a card is trivial. Then all you have to do is find a suitable mapping from text descriptions to decent image representations. Pass the result through the style transfer network, and then we're in business, lol.
EDIT(3): I took a closer look at that second paper during my lunch break. I have a suspicion that I'm going to end up with misshapen horrors, but the authors of the work have provided source code and I'm tempted to take a look at that in the near future. The lead author works in the same lab as the people who brought us the neural style transfer stuff.
EDIT(4): Oh, one more thing. Earlier I saw a fun video lecture on generative neural network models. Here's a link.. I've skipped ahead to a part you might find fun: a network trained to play Atari games dreaming about playing Atari games. The whole talk is interesting though.
Sure. Here's a cave painting version of Deathmist Raptor. Oh, and while we're at it, a randomized pop art version of Liliana.
Oh, pop art Lili is really cool, if a bit random. Is the version of the network you're using here the one that is able to hybridize style sources? If so, how many sources can it accept? Also, is the style-extraction process more like training - i.e. some form of weight building, decay etc - or is it more like building a huge feature map library for content classification and each subsequent image adds to the library?
Since we talked about Lili and pop art, could you do a Mike Mignola Lili (using these images 1 2) if and when you get the chance :D?
First, yes, we can hybridize style sources. So far I've tried up to three, though I'm sure I could do more.
As for your question, it's actually training. What happens is that an image recognition network studies the content image and gets an impression of what it should look like, where all the pieces are. It says "There's a woman standing with her arms stretched out in the middle of the frame."
The image generation network then pours in a mess of features from the style images. This is the unworked block of stone.
It reshapes the image at each iteration, and then studies its own creation. It asks, "Does this image give me the same impression as the content image? Can I see the woman?". It then reweights itself so that the impression it got will be closer to the target impression next time around. In this way, it's a sculptor, trying to reveal the woman hidden within the stone (well, sorta, it's also pouring in more style as it goes, but you get the idea). I let it run for about a thousand iterations this way, and then you get the results you see. I've uploaded intermediate results of iteration 100 and 400 and the final image at iteration 1000.
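The sculptor loop above, in skeletal form: the generated image itself is the thing being optimized, nudged each iteration so its "impression" moves toward the content target. This toy collapses the image to a single number and the feature extractor to a made-up linear function; it only shows the shape of the iteration, not the real networks.

```python
def style_transfer(target_impression, image=0.0, lr=0.1, iters=1000):
    extract = lambda img: img * 2.0      # stand-in for the recognition network
    for _ in range(iters):
        error = extract(image) - target_impression   # "can I see the woman?"
        image -= lr * error              # reweight the image toward the target
    return image

# After ~1000 iterations the image's impression matches the content's.
img = style_transfer(target_impression=4.0)
assert abs(img - 2.0) < 1e-6
```

In the real thing the loss also includes style terms from the style images, which is the "pouring in more style as it goes" part.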
Now, the results, like pop art Liliana, can be highly random. Sometimes, depending on what I give the network to work with style-wise, it might drop or obscure some elements from the content. Features are preserved from the content image only if they are recognized and emphasized by the image recognition network. For whatever reason, the recognition network didn't react very strongly to Liliana's legs, so the image generation network didn't make a concerted effort to preserve them in the pop art image.
However, just as I can whisper to the card-generating network, I can prime the image-generating network in much the same way: I give it a brief glimpse of the target image as it starts working, an outline of what I'm looking for. I find that this helps to preserve fine details better.
I ran a low-res version just so I could show you the process and the intermediates. I'm letting it generate a high-res version for you now. I'll upload it when it finishes. Oh, and by the way, the images you see borrow 60% of their style from the first Mike Mignola Lili art and 40% from the second. I wanted to make sure the bright colors dominated as the original image was already very dark.
EDIT: I've attached a larger version.
Anyway, as you can see, all we really need at this point is a way of hallucinating the content. For example, if we could take a text description, and have a system churn out a rough sketch of what we want, we can use approaches like style transfer to fill in the details. There's a lot of ongoing work in that area, and I'm excited to see what comes out in the near future.
EDIT(2): By the way, if you want more Mike Mignola Lili in the image and less Aleksi Briclot, I can change the weights so that the style dominates more strongly. The images you see were generated with the default settings, which try to balance the incorporation of new style with the preservation of the original content.
Heh, that just gave me ideas. You could get height maps from Earth, train a NN on them and then "style" a crudely generated procedural terrain to get much more interesting terrain. Or you could train it on a bunch of leaves (say, a few hundred) and then use it to generate infinite leaf textures, all different, all done during runtime, procedurally. You could also probably use it to supersample textures by creating detail out of similar stuff - a neat way to get SD gaming up to HD standards.
By the way, thanks for the picture. It looks amazing, exactly how I was expecting it to turn out. I think Mignola's style is a pretty good fit for this stuff, because it's very distinctive in its use of stark contrast and flat-ish colors. It's very bold, just like Gogh's brush strokes or Afremov's knife strokes.
No problem! Thank you for offering the suggestion. I'll have to look into doing more comic book style artworks in the future, lol.
Not too long ago, a friend/colleague of mine was consulting with me about a terrain generation algorithm, and we discussed those very topics. He was able to come up with a good solution while working along those lines. That project ended up being part of a portfolio that landed him a job at a video game company.
I agree completely.
Neural network approaches are great for doing tasks that a human can do but cannot articulate. For example, we know what artistic style is and we can, for the most part, distinguish the style of an art from its content. But if I had to formally explain the process by which I understand art, I couldn't do it. If I can't do that, then I can't write a program to do that for me. That is, of course, unless I set serious restrictions on what I mean by art, style, content, etc. But then the solution would be frail and brittle.
That's what makes machine learning so exciting for me. I'm a formalist by training: I break everything down into clean, ornate logic. I like to think that I'm good at what I do. But I've come to realize that not all problems can be solved that way; I can't draw a circle with a finite number of straight lines. That's why I dove into the subject of machine learning in the past year and a half.
---
For the record, I have a number of deadlines coming up that'll keep me pretty busy (conference submissions, peer review work, letters of recommendation to write, and I might be giving a guest lecture some time soon), but I promise that I'll look into rolling another network, as well as releasing my modified version of the sampling script and other related code that you'll need to have to work with the data-structure augmented networks. In the meantime, if anyone else has any special requests for art or anything like that, just let me know. Right now there's a lot of downtime for the big machine in the evenings and early mornings which gives me an opportunity to run the style transfer code to my heart's content.
Has anyone done work on this? I figure folks in this thread would know.
Edit:
Of course they have.
There's this:
http://www.hexahedria.com/2015/08/03/composing-music-with-recurrent-neural-networks/
And a study from Stanford:
https://cs224d.stanford.edu/reports/NayebiAran.pdf
Same guy's Google site:
https://sites.google.com/site/anayebihomepage/cs224dfinalproject
Though it's just "listen to this and generate something similar" apparently, nothing as advanced as "write something for a fast-paced action scene involving Han Solo, Luke Skywalker, and Princess Leia vs. the Empire, with a brief romantic interlude between Han and Leia."
I guess what I'm looking for from y'all is less pointing me at this stuff and more explaining it in layman's terms.
Sometimes. But if I'm really keen on preserving the fine details, I do "-init image" to prime with the original image. Sometimes I also lower the content weight to 2.0-2.5 or so (about half the default) if I want to bring in more detail from the style image. And when blending multiple style images, choosing the right weights can be important to get the look I want. Usually I do a low res render and then, if I like what I see, I go for a high resolution one.
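Putting those settings together, an invocation might look like the sketch below. The flag names are from the jcjohnson/neural-style repo as I recall them (check against the repo before relying on them), and the filenames are placeholders.

```python
# Sketch of a neural-style run with the settings described above.
cmd = [
    "th", "neural_style.lua",
    "-content_image", "liliana.jpg",              # placeholder filename
    "-style_image", "mignola1.jpg,mignola2.jpg",  # comma-separated styles
    "-style_blend_weights", "6,4",                # 60% first style, 40% second
    "-init", "image",                             # prime with the content image
    "-content_weight", "2.5",                     # ~half the default of 5.0
]
# import subprocess; subprocess.run(cmd)  # uncomment to actually render
```

I do a low-res pass first (add something like `-image_size 256`) and only rerun at high resolution if I like the result.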
Yes! Someone used the char-rnn implementation and fed it scores of Irish folk tunes.
Another person took a similar approach here.
And yet another in a similar vein as the previous two.
And then another.
There has been a decent amount of ongoing research on the topic as well. I've seen a ton of different papers while doing my research. Some came out in just the past few months, so it's a topic that's being actively researched.
A theme I'm noticing amongst the approaches you see above is the inability to maintain clear themes in the music over long periods of time. A data-structure augmented approach like what I've been investigating might actually be very helpful there because the network could use external memory to record stylistic elements. Or so I would imagine. I would look into that myself if I had the time, but I do not, haha.
Just saw this bit. Yes, well, it depends on the kind of representation we're working with. Let's say we have a text-based representation of the score (one instrument, just a piano, and for simplicity's sake let's assume that only one finger can be touching the piano at one point in time).
The prediction problem is this: I hit a key. Based on the key I just struck, and those I struck before it, which key do I hit next?
Now, if we're only concerned with one piece of music, then you can just memorize the score. But if the training set contains thousands of pieces, there's no way you could memorize all of them by heart. So you have to come up with rules. What elements of the immediate past do you commit to memory and what elements do you ignore? What notes would "sound right" in the given context? Can you determine the chord structure of the piece? And so on.
Well, that's what the network is doing when you're having it predict the music. It's figuring out thousands of little rules that it can use to predict the note that "sounds right" given the notes it has heard before.
When we use a predictive model to do generation, what we're doing is having the network come up with a note, and then we feed that note back in as input. So in effect it's hallucinating an entire piece of music, one note at a time, making predictions upon its own predictions. It's not unlike how you and I dream, with one idea effortlessly flowing into the next.
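Here's a toy sketch of that predict-and-feed-back loop in Python. The note sequences and the simple "what follows what" counting are illustrative stand-ins, of course; a real char-rnn learns far richer rules, but the generation mechanism is the same:

```python
import random
from collections import Counter, defaultdict

# Toy "scores": each piece is a sequence of piano keys.
# These training pieces are invented for illustration.
pieces = [
    ["C", "E", "G", "E", "C", "E", "G", "C"],
    ["C", "D", "E", "F", "G", "F", "E", "D", "C"],
    ["E", "G", "C", "G", "E", "C"],
]

# "Training": count how often each note follows each note.  These counts
# are the crude equivalent of the network's learned prediction rules.
follows = defaultdict(Counter)
for piece in pieces:
    for prev, nxt in zip(piece, piece[1:]):
        follows[prev][nxt] += 1

def predict_next(note, rng):
    """Sample the next note in proportion to how often it followed `note`."""
    counts = follows[note]
    return rng.choices(list(counts), weights=list(counts.values()))[0]

# Generation: feed each prediction back in as the next input,
# hallucinating a piece one note at a time.
rng = random.Random(0)
note, generated = "C", ["C"]
for _ in range(7):
    note = predict_next(note, rng)
    generated.append(note)

print(generated)
```

The network version differs only in what sits inside `predict_next`: thousands of learned rules instead of a frequency table.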
Does that help?
---
Also slightly off-topic: I've been using the style transfer for purposes other than just Magic. Self-portraits, for instance. As you can see, I can take a selfie after shaving and make it look like I have artistic talent (I can assure you I do not).
My LinkedIn profile... thing (I have one of those now!).
My research team's webpage.
The mtg-rnn repo and the mtg-encode repo.
But can a network be trained to know the difference between "happy", "sad", "fighty", "sexy" and "tense" music?
Could it be taught about harmonies and use that information to generate multiple single-instrument pieces that fit together to create proper orchestration?
I'm especially thinking of the applications for online gaming; imagine an MMO of some sort where the score is dynamically generated in real time, based on what's happening in your vicinity. You'd have a soundtrack tailor-made for your play experience.
Stronghold Rats [B1]
Creature - Rat
T: Add B to your mana pool.
B Sacrifice Stronghold Rats: place two -1/-1 counters on target creature.
1/1
Karina's Needle [1B]
Sorcery
Tap target permanent. That permanent's controller may choose to untap it, if they do, you gain life equal to that card's converted mana cost.
Puresilver Blade [1B]
Creature - Vampire Warrior
First Strike
Persist
1/1
Wolf Specter [1B]
Wolf Specter cannot be blocked.
When Wolf Specter would deal combat damage to a player, prevent that damage, Wolf Specter instead deals 3 damage to target creature that player controls.
1/1
Waste of Mastery [B]
Sorcery
Return target creature to its owners hand.
Baloth of Corruption [2BB]
Creature - Elemental
Baloth of Corruption gets +1/+0 for each creature card in your graveyard.
4 / 1
Ghostly Sanctum [1BB]
Enchantment
Whenever a player taps a land for mana, it deals 1 damage to that player, and you gain 1 life. If it's a mountain, exile it and Ghostly Sanctum.
Blood Prowler [BBB]
Creature - Serpent
At the beginning of each player's upkeep, that player sacrifices a land or a creature.
2 / 2
Snarefang Familiar [1B]
Creature - Rat
Whenever Snarefang Familiar deals damage to a player, each player discards a card, then you may draw a card.
2 / 2
Wight of Inquisitor Island [1BB]
Legendary Enchantment Creature - Spirit
Fear, Protection from White
When Wight of Inquisitor Island deals combat damage to a player, that player sacrifices a creature.
Whenever a player cast a white spell, that player sacrifices a creature.
3 / 3
Shadow-Water Minion [1U]
Creature - Fish
Flash
T: Tap target creature, return Shadow-Water Minion to its owner's hand.
1/3
Mindshield Bandit [1UU]
Creature - Human Rogue
Mindshield Bandit cannot be blocked except by walls.
2: Target creature you control cannot be blocked until end of turn. That creature does not untap during your next untap step.
1 / 2
Fiery Wanderer [1U]
Creature - Elemental
Flash
At the beginning of your upkeep, place a +1/+1 counter on Fiery Wanderer.
1 / 1
Astral Shadow [1U]
Sorcery
Reveal the top three cards of target opponent's library and put them back in any order.
Willful Presence [XUU]
Sorcery
Tap X target creatures, target creature you control gets +X/+X until end of turn.
Manifest Secrets [2U]
Enchantment - Enchant Creature
Enchanted creature becomes a 4/4 red elemental shaman creature with flying.
Warrior's Folly [1UU]
Enchantment
2: Tap target creature.
Deep Shade [1G]
Creature - Beast
1GG: When Deep Shade attacks it gets +1/+0 until end of turn. Target other creature gets +2/+0 until end of turn.
2 / 1
Keeper of Turn [2GG]
Creature - Elf Druid
Whenever another non-token creature enters the battlefield under your control, if an opponent controls more creatures than you, you may place a 3/3 green beast creature token into play under your control.
1/3
Walking Spirit [1G]
Sorcery
Target land becomes a 2/2 beast creature, it is still a land. Untap that creature.
Savage Hisokan [3G]
Legendary Creature - Human Shaman
When Savage Hisokan becomes blocked, it gains double-strike until end of turn.
R: Target creature blocks this turn, if able.
4/3
Oozing Hate [2G]
Enchantment - Aura
Whenever enchanted creature deals combat damage to a player, its controller sacrifices that many creatures.
Reaving Essence [1GG]
You gain life equal to target creature's toughness, it deals damage equal to its toughness to target creature.
Wandering Growth [4GG]
Creature - Beast
Wandering Growth enters the battlefield with X +1/+1 counters on it where X is the number of creatures you control.
4 / 2
Blackroot Guardian [1GG]
Creature - Plant
Wither - (It deals damage to creatures in the form of -1/-1 counters.)
2/7
Avenging Ritual [GG]
Sorcery
As an additional cost to cast Avenging Ritual, Discard a card.
Destroy target creature.
Returned Watcher [2G]
Creature - Spirit Warrior
When Returned Watcher enters the battlefield, reveal the top card of your library. Target opponent loses life equal to that card's converted mana cost.
2 / 2
Tindersoul Wanderer [6G]
Creature - Treefolk
Tindersoul Wanderer's power and toughness are each equal to the number of permanents you control.
*/*
Rhash, Blood of Oaks [3G]
Legendary Creature - Elf Soldier
Whenever Rhash, Blood of Oaks attacks, place a +1/+1 counter on it.
Whenever Rhash, Blood of Oaks becomes blocked, you may remove a +1/+1 from it. If you do, put four 1/1 green saporling creature tokens onto the battlefield.
3 / 2
Slayerweaver [GG]
Creature - Elf Monk
Bushido 1
When Slayerweaver enters the battlefield, sacrifice it unless you return a creature card to your hand.
2 / 2
Skok the Countless [4GG]
Legendary Creature - Elf Shaman
Whenever another creature enters the battlefield under your control, each creature gains lifelink until end of turn.
2 / 4
Earthly Purge [3GG]
Destroy all creatures. You may search your library for up to two basic land cards and then put them into play tapped, then shuffle your library.
Pyrominator [3R]
Creature - Cleric
Each time you cast an instant or sorcery spell, target player loses 1 life for each charge counter on Pyrominator.
3R: Place a charge counter on Pyrominator.
2 / 2
Barashin Elemental [R]
Creature - Elemental
Flying
Whenever an opponent draws a card, discard a card.
When you have three or fewer cards in your hand, sacrifice Barashin Elemental.
4 / 2
Sage Assassin [R]
Creature - Goblin Rogue
At the beginning of your upkeep, target opponent discards a card.
1 / 1
Snaring Starfish [R]
Creature - Fish
T: Tap target creature, it remains tapped as long as Snaring Starfish is tapped.
1 / 1
Goblin Bladehunter [1R]
Creature - Goblin Warrior
When Goblin Bladehunter is blocked by a creature with power greater than Goblin Bladehunter's, destroy that creature, it can't be regenerated.
2 / 1
Crushing Challenge [1R]
Instant
Creatures your opponents control get -1/-1 until end of turn. Scry 2.
Hell Squid [3R]
Creature - Fish
Fear (This creature can't be blocked except by artifact creatures and/or creatures that share a color with it.)
When Hell Squid attacks, defending player sacrifices an Island.
3 / 4
Emberblood Ritual [2RR]
Enchantment
Whenever you cast a red spell, creatures you control get +2/+2 until end of turn.
Archshot [1R]
Instant
Archshot deals 2 damage to each creature and each player.
Spinstrike [RR]
Instant
Until end of turn, whenever a creature you control becomes blocked by two or more creatures this turn, destroy those creatures. If you do, Draw a card.
Brass Flamesinger [2R]
Creature - Elemental Warrior
First strike
When Brass Flamesinger attacks or blocks, target creature gets +1/+0 and first strike until end of turn.
2 / 1
Charging Border Gang [2RR]
Creature - Human Bandit
Haste
Charging Border Gang's power and toughness are equal to the number of creatures opponents control.
*/*
Overhead Archers [1R]
Creature - Goblin Soldier
T: Overhead Archers deals 1 damage to each attacking and blocking creature.
1 / 2
Banner Runners [1RR]
Creature - Goblin Warrior
Banner Runners can't attack or block.
Whenever a creature you control becomes blocked it gets +1/+2 until end of turn.
2 / 2
Ice Djinn Mastery [5R]
Enchantment - Aura
Whenever enchanted creature deals damage to a player, prevent that damage. Its controller puts an X/X red elemental creature token onto the battlefield, where X is equal to the damage prevented this way.
Deadshot Ray [XRR]
Instant
Deadshot Ray deals X damage to target blocking creature and that creature's controller.
Catapult Plague [RR]
Sorcery
Target creature gets -5/-5 until end of turn.
Angelic Warding [3W]
Enchantment- Aura
Totem armor (If enchanted creature would be destroyed, instead remove all damage from it and destroy this Aura.)
Enchanted creature has first-strike, flying.
Fury of the Harvest [2W]
Enchantment - Enchant Land
At the beginning of your upkeep, place a 0/1 colorless plant creature token into play under your control, it has "Sacrifice this creature: tap target creature an opponent controls."
Pest Shaman [W]
Creature - Human Shaman
W: Look at the top two cards of your library, then put them back in any order.
G: Exile the top card of your library, then place a 1/1 green insect creature token into play under your control.
1 / 1
Torgan Dawnwalker [4W]
Creature - Human Warrior
Double Strike
Torgan Dawnwalker enters the battlefield with a -1/-1 counter for each other creature you control.
5 / 5
Pulse of the Damned [2WW]
Instant
Until end of turn, whenever a player taps a non-white permanent, destroy it.
Sand Spinner [2WW]
Creature - Elemental
Flying, Lifelink
Whenever Sand Spinner attacks or blocks, you may flip a coin. If you win the flip, you may put target artifact, creature, or land on the bottom of its owner's library.
2/3
Heroic Betrayal [2W]
Instant
Attacking creatures get -3/-0 until end of turn and do not untap on their controller's next untap step.
Honorguard of Kauki [4W]
Legendary Creature - Human Samurai
Bushido 2
Whenever Honorguard of Kauki blocks, you may have it deal 2 damage to each creature blocking it.
4 / 5
Angelic Burden [3WW]
Sorcery
Target player puts the top ten cards of his or her library into his or her graveyard. Then each player reveals a card at random from his or her hand and may choose to place them onto the battlefield.
Set Earth [4W]
Enchantment
As long as seven or more cards are in your graveyard, each land you control is a 2/2 creature, it is still a land.
Tap an untapped creature you control: add G, B, or W to your mana pool.
Attendant Tutor [1W]
Enchantment
Whenever a player casts a spell, you may put a Lore counter on Attendant Tutor.
Remove X Lore counters from Attendant Tutor: Sacrifice Attendant Tutor, you may search your library for a card with converted mana cost X or less and put it into your hand, then shuffle your library.
Soulflame Chanter [2W]
Creature - Human Shaman
T: @ deals damage equal to its power to target creature, then put that many 1/1 white spirit creature tokens with haste onto the battlefield
2/2
Sandsoul Mentor [1WW]
Creature - Human Soldier
Whenever a land enters the battlefield under your control, you may search your library for a creature card with converted mana cost 2 or less and put it into your hand, then shuffle your library.
2 / 2
Call of the Wood [W]
Sorcery
Search your library for a land card, reveal it, put it into your hand, then shuffle your library.
Winged Ending [1W]
Instant
Creatures opponents control lose flying until end of turn. Prevent all combat damage that would be dealt by creatures your opponents control this turn.
Shimmering Bladecraft [1WW]
Sorcery
Place a +1/+1 counter on each creature you control.
Worm Scrappers [1W]
Creature - Human
T: Add G to your mana pool for each creature card in your graveyard.
1/1
Mindshiver Hunter [3W]
Creature - Human Samurai
Hexproof
Heroic ~ whenever you cast a spell that targets Mindshiver Hunter, it deals damage equal to its power to each other blue creature are on the battlefield.
2 / 2
Shadow Hammer [2]
Artifact - Equipment
Equip [2]
Whenever an opponent casts a black spell, that player discards a card.
Whenever you cast a black spell, equipped creature gets +1/+0 and lifelink until end of turn.
Dismal Underling [2]
Artifact Creature - Golem
Dismal Underling does not untap during your untap step.
T: Choose an artifact, creature, or land you control, then choose an artifact, creature, or land target opponent controls, then exchange control of those permanents.
0/4
Stronghold Stranger [3]
Artifact Creature - Cat
As long as a player has 7 or 3 cards in their hand, Stronghold Stranger is indestructible and has deathtouch.
2 / 1
Augur Servant [3]
Artifact Creature - Golem
Whenever you draw a card, each player may put a 1/1 black zombie creature token onto the battlefield.
2/2
Nillabostor [3]
Artifact Creature - Basilisk
Whenever a creature is dealt damage by Nillabostor, it deals that much damage to that creature's controller.
1 / 3
Wand of Affliction [4]
Artifact
[6] T, Sacrifice Wand of Affliction: destroy target nonblack creature, it can't be regenerated. Wand of Affliction deals damage equal to its power to target creature or player.
Cleansing Ice [1]
Artifact
[2], T, Sacrifice @: regenerate target permanent.
Toran's Musicbox [2]
Artifact
[3]: Put the top card of your library into your graveyard, then return a card at random from your graveyard to your hand.
Flamewood Cavalier [1RG]
Creature - Elf Warrior
Whenever Flamewood Cavalier becomes blocked, it deals 2 damage to each creature blocking it.
2/2
Season of Leaches [1WB]
Enchantment
Whenever a creature attacks you, you may put a Leach counter on it. Whenever a land enters the battlefield under you control, you may put a -1/-1 counter on each creature with a Leach counter.
Chaotic Bloodshed [1WB]
Instant
Attacking creatures your opponents control each deal damage equal to their power to target creature.
Earnest Blessing [XRW]
Sorcery
Destroy X target creatures.
Sunikar, Dreaded Soul [5RB]
Legendary Creature - Vampire Dragon
Flying, deathtouch
Whenever Sunikar, Dreaded Soul deals combat damage, you may search your library for a permanent and exile it. You may put that card onto the battlefield at the beginning of the next end step.
6 / 5
Bloodlost Soldier [RW]
Creature - Cat Rebel
Whenever Bloodlost Soldier attacks, it gets +2/+4 until end of turn.
1 / 1
Honorable Mentions:
Tuikle's Briber [2R]
Creature - Ass
Sacrifice Tuikle's Briber: deal 1 damage to target creature or player.
2 / 2
Spiked Warbread [3]
Artifact
When Spiked Warbread enters the battlefield, creatures your opponents control block this turn if able.
That's amazing.
Well, an unsupervised approach, like what I just suggested to you, would definitely arrive at some nebulous concept of mood, insofar as a tense song should never suddenly become a happy one, but a supervised approach might work best here. That is, we have tagged pieces of music and we develop a network that deconstructs that music to predict the tag. That then gives us a way of critiquing or guiding the work of a generative model, sort of like what we're doing with the style transfer algorithm.
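To make the supervised idea concrete, here's a minimal sketch: tagged pieces in, mood out. The two features (tempo in BPM and a 0-1 "majorness" of the harmony) and all the numbers are invented for illustration; a real system would learn its features from audio or a score representation rather than use a hand-built nearest-centroid rule:

```python
# Hypothetical labeled training data: (tempo_bpm, majorness) per piece.
LABELED = {
    "happy": [(140, 0.9), (128, 0.8), (150, 0.85)],
    "sad":   [(60, 0.1), (72, 0.2), (66, 0.15)],
    "tense": [(120, 0.2), (132, 0.3), (126, 0.25)],
}

def centroid(points):
    """Average each feature across a mood's training pieces."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

CENTROIDS = {mood: centroid(pts) for mood, pts in LABELED.items()}

def classify(features, centroids=CENTROIDS):
    """Tag a piece with the mood whose centroid is nearest."""
    def dist(a, b):
        # Scale tempo down so both features contribute comparably.
        return ((a[0] - b[0]) / 100) ** 2 + (a[1] - b[1]) ** 2
    return min(centroids, key=lambda m: dist(features, centroids[m]))

print(classify((145, 0.9)))  # fast and major
print(classify((64, 0.1)))   # slow and minor
```

A trained classifier like this is exactly the "critic" piece: run it over a generative model's output and you can reject or steer anything that drifts out of the mood you asked for.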
A search on Google Scholar reveals that there were quite a number of papers on the topic of music classification from the 1990s and 2000s, and a paper from 2011, "A Survey of Audio-Based Music Classification and Annotation" by Fu et al., has been cited 150 times since its publication. Audio-based music information retrieval is useful for any music-streaming service, so I'd imagine that's driving a lot of the research.
Google even filed a (probably unenforceable) patent on the idea itself entitled "Artificial Neural Network Based System for Classification of the Emotional Content of Digital Music" (link here: https://www.google.com/patents/US20140058735 ) and that patent contains a fairly detailed design of how such a system would work.
In short, yes, it's definitely possible.
I'd imagine so, yes. If we generate the single-instrument pieces independently, then a trained system could bend and reshape them to make them harmonious. Having a system that generates each of the single-instrument pieces with the goal of harmony in mind would be trickier, but I'm sure that's also do-able.
There was one short paper that came out in 2014 entitled "Procedural Generation of Music-guided Weapons" by Cachia et al. which presents a proof-of-concept implementation of neural-network-directed music generation for a game. It cites a few recent papers on the topic; I'll let you take a look at the names:
David Plans and Davide Morelli. Experience-driven procedural music generation for games. IEEE Transactions on Computational Intelligence and AI in Games, 4(3):192–198, 2012.
Annika Jordan, Dimitri Scheftelowitsch, Jan Lahni, Jannic Hartwecker, Matthias Kuchem, Mirko Walter-Huber, Nils Vortmeier, T. Delbrugger, U. Guler, Igor Vatolkin, et al. BeatTheBeat: music-based procedural content generation in a mobile game. In Computational Intelligence and Games (CIG), 2012 IEEE Conference on, pages 320–327. IEEE, 2012.
Nils Iver Holtar, Mark J. Nelson, and Julian Togelius. Audioverdrive: exploring bidirectional communication between music and gameplay. In Proceedings of the 2013 International Computer Music Conference, 2013.
What would probably work best is a kind of blend between scripted, procedural generation and a neural-network-based generative model. A hand-crafted system would track and report key actions that take place, creating a reduced, real-time scene representation that could then be interpreted and acted upon by the generative model. The generative model could draw upon a pre-existing repertoire of musical themes and motifs, and use those as a backdrop for its own creations that are synchronized with the actions in the game.
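A rough sketch of that hand-off: the scripted layer reduces the game state to a few tagged events, and the generative layer picks a motif from the repertoire to build on. Every event name, motif, and priority below is hypothetical, and the lookup table stands in for what would really be a trained model:

```python
# Scripted layer: priority-ordered mapping from scene events to moods.
EVENT_MOOD = [
    ("boss_encounter", "tense"),
    ("combat", "fighty"),
    ("romance_cutscene", "sexy"),
    ("exploring", "happy"),
]

# Pre-existing repertoire of motifs, indexed by mood.
MOTIFS = {
    "tense":  ["low drone", "tritone ostinato"],
    "fighty": ["brass stabs", "driving percussion"],
    "sexy":   ["muted trumpet line"],
    "happy":  ["major-key flute theme"],
}

def scene_to_mood(active_events):
    """Reduced scene representation -> the highest-priority mood."""
    for event, mood in EVENT_MOOD:
        if event in active_events:
            return mood
    return "happy"  # default ambience when nothing is happening

def pick_motif(active_events, bar_number):
    """Choose a backdrop motif; a generative model would improvise over it."""
    mood = scene_to_mood(active_events)
    options = MOTIFS[mood]
    return mood, options[bar_number % len(options)]

print(pick_motif({"combat", "exploring"}, bar_number=3))
```

The priority ordering is what keeps the score coherent: a fight breaking out mid-exploration switches the mood, and the model only ever has to improvise against one motif per bar.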
Give me three months and adequate funding and it could be done, lol.
LASture, these cards are awesome! Thank you for sharing, and for your dedication.
Two that caught my eye:
It eliminates the impure whenever they try to take action against you. I'd probably add "non-land" to that, but the idea is fun all the same.
That's a very interesting design.