Plus, setting aside perhaps a bit of wording/templating trouble, it suggests that permanent spells like creatures and artifacts will never make it to the battlefield.
Hey, I've been thinking: how difficult, on a scale of one arbitrary number to another, would it be to use these or a similar neural network and associated algorithm to generate Yu-Gi-Oh! cards?
I'm going to take a stab in the dark and say reasonably difficult.
People have already adapted it to do Hearthstone cards, so while it may be tricky depending on your level of coding expertise, it's definitely within the realm of possibility.
I just recently got cltorch working on my laptop, and messed around with a style transfer network that was last updated a few months ago. Iterations take between 4 and 6 seconds on my machine, at default settings.
This has me wanting a dedicated machine for this, but first there are a bunch of questions I have, now that I've actually played with it.
There are some undocumented features hanging around in the source code. How well do they work?
The code relies somewhat on colorspaces besides RGB for certain features. Does anything interesting happen if the whole thing is done in, say, CIELAB?
Is there any potential for modularity and reuse of the trained networks? (That is, could I train a set of style and content networks separately, and combine them cheaply? Would it be possible to manipulate style and content weights after-the-fact?) EDIT: Yes, sort of, and I'm still not sure.
What would it take to apply this to animated content, looping or non-looping? EDIT: Step 1 is to find a proper network architecture. Step 2 (the hard one, probably) is to integrate that with a static style network.
It appears that the random initialization option uses white noise. What are the results from using a different form of noise, or from perturbing the input image with noise, rather than using it unaltered?
Some of my tests show persistent artifacts. I can't tell if this is a result of downscaling the input image, properties of the style that aren't apparent to me, or something else. One thing it's not is the choice of initialization; the artifacts appear even if I initialize with the input image. EDIT: I suspect this is either a consequence of the reconstruction itself, or because my content images differ somewhat from the training data. EDIT: The former option does not actually make sense, so I'm going to suppose that the content images are just too different.
I'm sure some of this has papers on it already, plus I need to really thoroughly read over the source, but suddenly I actually have a working network that does interesting things, and I'd just like to look into tweaking it some.
EDIT: Okay, I'm finally getting some important insights into the basic details of how this works. The system detailed in A Neural Algorithm of Artistic Style is a classifier hooked into several levels of a convolutional neural network, and it has to be pre-trained separately, I think. The image is generated by separating the error (I call it error, I think the paper calls it loss) function into terms relating to different levels of the network, then feeding each part a vector from a different image. The output of the system is the input to the classifier, and the image must be trained using the error function.
Also, it looks like I'll only get so much insight from looking at the source of the net I found. If I understand right, it's extracting layers from a pre-trained net, and doesn't train the classifier at all.
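To make that concrete for myself, here's a minimal sketch of that optimize-the-image loop. It's PyTorch rather than the Torch7 this repo uses, and features() is just a toy stand-in for the pre-trained network's activations, so treat it as an illustration rather than the actual method:

import torch

# The image itself is the trainable parameter; the fixed, pre-trained
# network only supplies features. features() here is a toy placeholder.
def features(img):
    return [img.mean(dim=0, keepdim=True), img]

content_targets = [f.detach() for f in features(torch.rand(3, 64, 64))]
style_targets = [f.detach() for f in features(torch.rand(3, 64, 64))]

img = torch.rand(3, 64, 64, requires_grad=True)
opt = torch.optim.LBFGS([img])

def closure():
    opt.zero_grad()
    feats = features(img)
    loss = sum((f - c).pow(2).mean() for f, c in zip(feats, content_targets))
    loss = loss + sum((f - s).pow(2).mean() for f, s in zip(feats, style_targets))
    loss.backward()
    return loss

opt.step(closure)  # one L-BFGS step; repeat until the image converges

(The real style term compares Gram matrices of the activations rather than the activations directly, but the structure is the same.)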
EDIT: Eesh, self-teaching this stuff is a great way to end up with a patchwork of knowledge. Was looking into autoencoders, and just now noticed that tied weights are a thing. Just now. Eesh.
Okay, so, I've been messing around with some ideas involving convolutional autoencoders, and I'm trying to figure out if there's any sensible way forward.
It seems to me that a convolutional autoencoder is cool because you don't need to run a training-style optimization to get output; a single forward pass suffices. Furthermore, because that makes it quick, you can pipeline it into other things.
Now, I've read up on convolutional autoencoders, and it seems to me that it would be possible to implement them using, instead of pooling layers, a convolutional layer with a stride greater than 1. Because, for a specific image size, there's an equivalent fully-connected layer, it's possible to transpose a downsampling convolution, which I believe produces several smaller convolutions with the same number of parameters as the original layer.
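For instance, here's a sketch in PyTorch (not the Torch7 used elsewhere in this thread; the sizes are arbitrary) of a stride-2 convolution standing in for conv+pool, plus its transposed counterpart:

import torch
import torch.nn as nn

# Downsampling convolution with stride 2 instead of a pooling layer,
# and a transposed convolution that maps back to the original size.
down = nn.Conv2d(3, 16, kernel_size=4, stride=2, padding=1)         # 64x64 -> 32x32
up = nn.ConvTranspose2d(16, 3, kernel_size=4, stride=2, padding=1)  # 32x32 -> 64x64

x = torch.randn(1, 3, 64, 64)
z = down(x)
print(z.shape, up(z).shape)  # (1, 16, 32, 32) then (1, 3, 64, 64)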
That's a neat idea, I think, but I was originally thinking about style transfer; style transfer plus high-level feature editing sounds cool. The papers I've found (one is linked above) express style in terms of the Gram matrix of the layers before the pool layer. Now, if you're getting images through training, then it's not a problem to add more score metrics and weight them appropriately, but I think it'd be really cool to be able to get this stuff "in one step" as it were, and I'm really struggling to figure out how to constrain the deconvolution to match an arbitrary positive semidefinite matrix.
I mean, it's clear to me that the fully-connected matrix we can imagine for a downsampling convolution at a specific image size will have a large kernel (null space); a stride of 2 discards around 3/4 of the degrees of freedom! Adding a vector from the kernel space won't change the downsampled value, but it can alter the Gram matrix. So "all" I need to do is find a vector in the kernel that, when added to the deconvolution result, alters its Gram matrix to match an existing one.
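For reference, the Gram matrix in question is cheap to compute; here's a NumPy sketch for a feature map with C channels:

import numpy as np

# Gram matrix of a (C, H, W) feature map: flatten each channel over the
# spatial positions and take all pairwise inner products. The result is
# a (C, C) positive semidefinite matrix.
def gram(features):
    C, H, W = features.shape
    F = features.reshape(C, H * W)
    return F @ F.T

G = gram(np.random.rand(64, 32, 32))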
Does anyone know about doing stuff like this, or should I just focus on messing with the autoencoder setup I thought of, actually putting it into code and testing it out?
Hi! Great to see some activity on this thread -- I have Tab Snooze set to pull it up once a week. I'm not the professional (or even one of the skilled amateurs) here, but I can answer a couple of your questions.
Strided convolutions instead of pooling is definitely a thing. The DCGAN paper uses this for both the generator and discriminator networks.
There is definitely work on applying style transfer to moving images, and I might have one of those papers saved on arxiv-sanity. The big challenge is in defining a loss function that maintains continuity between frames while still preserving the relative correlations between image elements. The easy one to remember, though, is https://arxiv.org/abs/1701.04928v1, on account of the famous co-author.
Style transfer has had some interesting progress since the initial paper, and there are methods that parameterize style instead of training one network per style. Again, I don't have those *right* at hand, but if you're interested I can look back.
Here are some highlights from my first run:
Aliust Dassyr 3U
Creature - Elf Shaman
T: Draw a card
2/3
[i]Other than the creature types not fitting the ability/color, I could see this as a real card...[/i]
Orchose Gaster 4
Artifact
T: Put a -1-1 counter on target creature.
[i]Ouch[/i]
Marss of Mestakerings U
Creature - Human Shathan
H: You may put a +1/+1 counter on $this.
1/1
[i]If that was black, I'd say it was perfect...[/i]
Bead the Jessance 2B
Instant
Target player discards a card, then shuffle your library.
[i]Unnecessary shuffling strikes again![/i]
Starct to Rucbe 1GG
Creature - Giant Warrior
Whenever another creature dies, exile it instead.
3/3
[i]Screw you, graveyard retrieval decks![/i]
Bloodhift Rake 2G
Instant - Arcane
Return $this to its owner's hand.
[i]Can be interesting with splicing.[/i]
Gift's Trintsmit B/G
Creature - Human Samurai
Bushido 3
When $then enters the battlefield, look at the top two cards of your library.
1/1
[i]Other than being a bit on the cheap side and the wrong colors, nice.[/i]
Hexanflegion Sliek 4G
Creature - Spirit
1, Sacrifice $this: Destroy target artifact or enchantment.
2/2
[i]A disenchant on a stick...[/i]
Swichvange Gure 5GG
Creature - Faeli Ascridil
Whenever $THIS blocks, return $THIS to its owner's hand. Target player shuffles his or her library into his or her graveyard.
5/6
[i]So... I block, it comes back to my hand and then super mills target player? Ouch.[/i]
Hurbmant Hoonn 2B
Instant
Exile target nonland creature.
[i]Rather specific requirement, but okay.[/i]
Thout Prath 5G
Creature - Elf Whiture Eldrazi
Whenever $THIS deals combat damage to a player, that player discards a card.
5/4
[i]This one is mean.[/i]
Blust's Gliver 3WW
Sorcery
Look at the top five cards of your library.
[i]Okay.[/i]
Tencholl Boodstycrown 1B
Creature - Human Wizard
W: $this gets -3/-0 until end of turn.
2/1
[i]Why would I *ever* use this ability? Legal though.[/i]
Oh, here's a shell script I wrote to help combine the results when done:
#!/bin/bash
# Sample every checkpoint in cv/ and collect the output into all.txt.
rm -f all.txt
for f in cv/*.t7
do
echo "Exporting $f..."
# Sample on CPU (-gpuid -1); capture stdout and stderr to a per-checkpoint file.
th sample.lua "$f" -gpuid -1 &> "$f.txt"
cat "$f.txt" >> all.txt
done
Marss of Mestakerings U
Creature - Human Shathan
H: You may put a +1/+1 counter on $this.
1/1
[i]If that was black, I'd say it was perfect...[/i]
Congratulations on getting the network up. I look forward to seeing more cards from you! May the network produce functional planeswalkers.
What's H? This card stuck out to me as I'm not sure which symbol this is.
Good to see people still coming back to this thread.
H = Phyrexian mana.
Nightos, Kruedhen U/UU/U
Creature Spirit
T: Target player draws a card. $this gains First Strike until end of turn.
2/2
[i]Yes, the casting cost is blue or blue and another blue or blue...[/i]
Skillkition Dragons 1W
Creature - Human Shaman
Flash
2/2
[i]Boring but practical Human Shaman that is Dragons.[/i]
Ogel Lightcearcereatiop 4RG
Creature - Elemental
Wither
T: Add R to your mana pool.
1/1
[i]If this was beefier or cheaper, I'd see it as a real, but off color, card.[/i]
Spoomic Ghread 1R
Creature - Shapeshifter
1U: $THIS gets +2/+2 until end of turn.
1/2
[i]A bit off color, but this can be a monstrously huge creature with enough mana.[/i]
Lumheston of Kituher 5BB
Creature - Rat
4/6
[i]That's... a big rat. Spendy too.[/i]
Death Quage 2BB
Sorcery
Until end of turn, return a land you control to its owner's hand.
[i]And then it comes back?[/i]
Nat of the Escilf
Land
T: Add C t your mana pool.
T: Tap target artifact or enchantment.
[i]That last part would be good against enchantment creatures I guess.[/i]
Saegari Sphith 2WW
Legendary Creature - Plain Hogue
2/3
[i]Plain is right...[/i]
Griffloun Rage
Land
T, Pay 1 life: Draw a card for each basic land type.
[i]Doesn't specify in play so, I guess it's always for five unless WotC changes the number of basics...(Or, do snow lands count?)[/i]
Touth 6RR
Sorcery
Destroy all permanent.
[i]But only the one. With the obvious fix, I like this.[/i]
Crean-Shounder of Geisite B
Creature - Goblin Rogue
$this's power and toughness are equal to its power.
1/1
[i]What?[/i]
Ithel with Dispop 4BB
Sorcery
Creatures you control get +1/+1 and have indy.
[i]This card belongs in a museum![/i]
Rrowled Eguge 2U
Instant
Target opponent reveals his or her hand. Draw a card.
[i]Good job, rnn![/i]
Cliud's Fergeal G
Creature - Human Artificer
Whenever $this attacks, draw a card.
1/1
[i]I'll take four![/i]
That's enough for now.
So I've been doing some reading on computer vision and came across this. There was some discussion about trying to encode a card into a latent vector space that can also be used to generate an image fitting the card, and this seems like a great way to do it: replace the encoder part of the generator with an RNN that simply reads the card text and encodes it into a vector, which the generator then upscales to create an image. If we can get card text associated with images, it shouldn't be *too* hard to train end-to-end, and the paper showed pretty good results with small sample sets (the architecture dataset). Folks with more experience than myself: does this seem reasonable? I might try to get this idea running in a few weeks if I can get the data, but no guarantees.
[Edit]
Also relevant: https://arxiv.org/abs/1511.02793 https://arxiv.org/abs/1605.05396
[Edit 2]
Also https://junyanz.github.io/CycleGAN/
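For what it's worth, here is roughly the shape of the pipeline I mean, as a PyTorch sketch; every name and size in it is illustrative rather than taken from those papers:

import torch
import torch.nn as nn

# An RNN encodes the card text into a latent vector; a DCGAN-style stack
# of transposed convolutions upsamples that vector into (toy) card art.
class TextToArt(nn.Module):
    def __init__(self, vocab_size, embed=128, latent=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed)
        self.encoder = nn.GRU(embed, latent, batch_first=True)
        self.generator = nn.Sequential(
            nn.ConvTranspose2d(latent, 128, 4, 1, 0), nn.ReLU(),  # 1x1 -> 4x4
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(),      # -> 8x8
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),       # -> 16x16
            nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Tanh(),        # -> 32x32 RGB
        )

    def forward(self, tokens):  # tokens: (batch, seq) of character/word ids
        _, h = self.encoder(self.embed(tokens))
        z = h[-1].unsqueeze(-1).unsqueeze(-1)  # (batch, latent, 1, 1)
        return self.generator(z)

Training it end-to-end would then be a matter of pairing this with a discriminator (or a reconstruction loss) over card-text/art pairs.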
I noticed that the sample_hs_v3.lua script in mtg-rnn did not have OpenCL support in it. So I added it and submitted a pull request.
A couple of other things I noticed in the sample_hs_v3.lua script (I might get to fixing these if I have time, or someone else can intervene and fix them):
-- The primetext positions are all wonky (for example, if I prime with -bodytext_prepend, it stuffs the primetext into the name). This has to do with how the fields were reordered in mtgencode but evidently weren't updated here.
-- Additionally, the primetext overwrites the number following the "|" that indicates which field it would have been.
I need to look into this a touch more to see if I can handle the columns without too much code rearranging. If I can get it to use the column indexes after the pipes as indicators, then it will be the most flexible (as long as the index numbers don't change for the fields they represent, it would be agnostic with respect to column order).
EDIT: I just had to share this card from my first run. It's so beautiful:
Scraing Angee 4U (Mythic Rare)
Instant
You may sacrifice a land.
Destroy all players.
Day 1RW
Enchantment (rare)
Whenever a creature attacks, Day deals 1 damage to that creature and 4 damage to you.
If you would leak, Day deals 4 damage to its controller.
https://arxiv.org/pdf/1603.06744.pdf
https://github.com/deepmind/card2code
Can anyone point me to a database of generated cards? I got some from RoboRosewater's Twitter, but I need more.
I'm trying to make an EDH deck using only computer generated cards...
Well, if we just want to have anime girls (and the rare anime guy), this can supply game-grade art pretty much indefinitely. Smooth interpolations, too, if you try that.
Here's their paper: https://makegirlsmoe.github.io/assets/pdf/technical_report.pdf
Note that the demo runs locally - apparently Macs can use GPUs with Safari, which massively speeds it up.
https://forums.spacebattles.com/threads/make-girls-moe-ais-make-anime-girls.557299/page-2#post-38340394
Looks like the make.girls.moe site has been updated - now hair colors and hairstyles are all individual sliders, and there's a third-party script to upgrade it to produce batches of a hundred or a thousand.
I guess you could do something like have mana costs converted into hair and eye colors to make the art match... use the rules text as part of the random noise vector to try to make similar cards look similar, too.
I'm not super well-versed on this specific thread, but I'm in Machine Learning right now and this is definitely something I could see myself working on for my open-ended final project this semester.
What I've seen for a LOT, LOT of people's networks is how frequently they generate rules text that's incorrect for the card type, plurality, etc.
If I were able to generate a table (perhaps a .csv) that showed specific phrases in 1 column, and the ONLY possible <correct> following rules in another column, would someone be able to use that to improve their network?
EXAMPLE:
Independent Phrase:
"Destroy all..."
Dependent Follow-Up / Phrase Completion:
#optional linkers
["non-token", "token"]
#mandatory
["permanents", "creatures", "enchantments", "artifacts", "planeswalkers", "lands", "tokens", "non-token permanents"]
#optional closures
["with %keyword%", "with converted mana cost %X% or [more/less]"]
EXAMPLE 2:
Independent Phrase:
"Exile target..."
Dependent Follow-Up:
["permanent", "creature", "enchantment", "artifact", "planeswalker", "land", "spell"]
Well, you get the idea. If I made a .csv or excel table with phrase dependencies like this, does anyone think they could incorporate it well into their existing network?
Perhaps I will generate it by iterating through the dataset, perhaps by just using my knowledge of the game.
At the moment, we don't have a grading system to feed back to the RNNs. It's pure character-by-character training. There's a sample set pulled out to give us training loss data, but a CSV wouldn't improve anything as is.
So what we would basically want is a program that grades the network's output, and if a card is above a certain threshold in terms of correctness, we convert it into a "correct" card, then add the new card to the training set? Then, over time, the network receives a more diverse training set and will be more inclined to <eventually> hash out the grammar and conditions correctly.
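A bare-bones Python sketch of that loop; grade() and fix_formatting() are placeholders for whatever the grading program would actually compute, and the corpus filename is made up:

THRESHOLD = 0.8

def grade(card_text):
    # Placeholder: a real grader would score grammar, type-line consistency, etc.
    return 1.0 if card_text.count("|") >= 2 else 0.0

def fix_formatting(card_text):
    # Placeholder for the "make it a correct card" step.
    return card_text.strip()

def recycle(samples, corpus_path="input.txt"):
    # Append every card that grades above the threshold back onto the corpus.
    good = [fix_formatting(c) for c in samples if grade(c) >= THRESHOLD]
    with open(corpus_path, "a") as out:
        out.write("\n".join(good) + "\n")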
So would a standalone grading program help anyone? Changing 1 line at the end could easily append the new, correctly-formatted cards to either their own list or the existing card data.
It certainly sounds worth a try! Based on the quietness of this thread, I don't think there's anyone actively developing anything to improve card generation right now, so an innovation like that could be quite a step forward.
I poked around with char-rnn and it's super fun. I have a question on the custom version made for the Magic cards. I can intuit why the encoding method of turning numbers into symbols helps: the network doesn't need to learn what every number represents from a tiny sample set. But I don't understand how the separated fields change things.
In the original char-rnn there's no text|text|text encoding on the input; it just takes any text at all, in any format. But it's pretty good with it, and if you give it JSON or XML, for example, it generally will spit out valid output. In the mtg-rnn version each card's fields sit on one line, separated by | symbols.
|5enchantment|4|6|7|8|9whenever a player casts a green or white spell, that player discards a card.|3{^^^^BB}|0N|1putrefaction|
So I'm guessing the vertical bars change the system from the original char-rnn plain-text version in a more significant way. The train.lua file itself is looking for them; they aren't just being fed to the training network, right? So this must produce a different output than if the Magic cards had been in XML or whatever format and run through the original char-rnn. I'm guessing the bars do something like hard-code into the network the idea that these are different fields, so it doesn't have to learn to distinguish the different sections?
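If I'm reading the format right and the first character after each "|" is a field index, then decoding the fields is trivial; a toy Python illustration (my reading of it, not mtgencode's actual decoder):

encoded = "|5enchantment|4|6|7|8|9whenever a player casts a green or white spell, that player discards a card.|3{^^^^BB}|0N|1putrefaction|"

fields = {}
for chunk in encoded.strip("|").split("|"):
    fields[int(chunk[0])] = chunk[1:]  # leading digit names the field

print(fields[1])  # putrefaction: the name field, wherever it appeared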
So would a standalone grading program help anyone? Changing 1 line at the end could easily append the new, correctly-formatted cards to either their own list or the existing card data.
It would certainly help me. I'm doing exploration in this space. There is a lot of information about MTG that informs whether cards are "correct" or not that's not written on the cards.
One of the things the network is pretty awful at is matching color pie. We have MaRo's article which tells us exactly which mechanics are in each color, https://magic.wizards.com/en/articles/archive/making-magic/mechanical-color-pie-2017-2017-06-05. So my first stab at this will be to try to classify cards as in-pie or out-of-pie (and if we want to feed the results back in to the network as an experiment, I'm all for experiments).
The following data sets would be useful for doing this.
1. For each card, tag it with all the abilities listed in Maro's article that are on that card.
We could use this to train a network that can tag newly-generated cards with their abilities in the same way.
2. For each card, provide some measure of whether it's in-pie or out-of-pie.
Even having one person read MaRo's article and give us a bunch of grades for random cards (doesn't have to be the whole corpus to start) could get us going here.
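As a strawman for item 1, even a plain keyword lookup in Python would produce a rough first set of tags to bootstrap from; the ability names and phrases below are made up for illustration, not pulled from MaRo's article:

ABILITY_KEYWORDS = {
    "card_draw": ["draw a card", "draws a card"],
    "discard": ["discards a card"],
    "creature_removal": ["destroy target creature", "exile target creature"],
}

def tag_card(rules_text):
    # Return every ability whose trigger phrase appears in the rules text.
    text = rules_text.lower()
    return [name for name, phrases in ABILITY_KEYWORDS.items()
            if any(p in text for p in phrases)]

print(tag_card("Whenever $this attacks, draw a card."))  # ['card_draw']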
Hi everyone, this is my first time posting here. It looks like I've had the same idea as you using GANs for the roborosewater card generator. Honestly I'm not up to date on what the status of this is but maybe this is of use.
Training took about 24h on a GTX 1070. They all have a resolution of 400x300, which is close to the aspect ratio of the actual artwork. I used a high dropout rate, meaning that the generator didn't overfit, so none of the results should look similar to any existing cards. The one drawback is that most of them won't make any sense at all.
I'll try a slightly different network architecture with a lower dropout rate next. Just to see what that would look like.
Let me know if anyone is interested in them. I can send you the model to generate more, or just upload as many pictures as you want.
ATTACHMENTS: epoch0575, epoch0549
I've run the network and have everything now, including my checkpoints.
In order to work from my Windows Machine, I piped the sample cards from each checkpoint to their own .txt file.
I see the decoder was written in Python 2.x, which I have installed alongside 3.x, but it's still complaining about the difference in syntax.
What command would I give my CMD on Windows to have Python 2.7 run decode.py on, say, "test1.txt" and have it output the decoded cards to "output.txt"?
"python decode.py test1.txt output.txt" makes Python 3 try to run it, and it complains instead of doing its job.
Here are some more pictures: https://imgur.com/a/o0mWf