I sampled the latest network that I trained. The norm stabilization might have helped some, but with the parameters I chose I had the stabilization effect turned down substantially. It's helpful, but I think that the lack of dropout is playing a bigger role right now with the results that we see.
Without dropout, there are more opportunities for memorization, which leads to occasional clones of cards like this one:
Demon's Herald B
Creature - Human Wizard (Uncommon) 2B, T, sacrifice a blue creature, a black creature, and a red creature: search your library for a card named Prince of Thralls and put it onto the battlefield. Then shuffle your library.
1/1
#This is Demon's Herald exactly.
Now, not using dropout does have certain advantages: it appears we get fewer outright garbage cards, and if the network produces an exact clone of a card, then we can filter out the result automatically. The same goes for near-perfect clones like these:
Rathcask Tithe 2BB
Creature - Human Cleric (Rare) 3B, T, sacrifice a blue creature, a black creature, and a red creature: search your library for a card named Scion of Darkness and put it onto the battlefield. Then shuffle your library.
1/1
#This is the lovechild of Demon's Herald and Dark Supplicant. Nice try, RoboRosewater.
Karonahi, World Render 3G
Creature - Human Wizard (Rare) 2B, T, sacrifice a blue creature, a black creature, and a red creature: search your library for a card named Prince of the Spirit of the Claws of the Convert of Bolase and put it onto the battlefield. Then shuffle your library.
1/1
#Again, nice try, but you'll have to do better. And I'd love to see what "Prince of the Spirit of the Claws of the Convert of Bolase" is supposed to be.
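For the curious, the automatic clone filter can be as simple as a string-similarity threshold against the real card corpus. Here's a minimal sketch; difflib is just a stand-in for whatever matcher a real pipeline would use, and the one-card corpus is obviously a toy:

```python
import difflib

# Hypothetical sketch of the automatic clone filter: compare a generated
# card's rules text against the real corpus and flag anything too similar.
def find_near_clone(generated_text, corpus, threshold=0.9):
    """Return the best-matching real card name if similarity >= threshold."""
    best_name, best_score = None, 0.0
    for name, text in corpus.items():
        score = difflib.SequenceMatcher(None, generated_text, text).ratio()
        if score > best_score:
            best_name, best_score = name, score
    return best_name if best_score >= threshold else None

corpus = {
    "Demon's Herald": ("2B, T, sacrifice a blue creature, a black creature, "
                       "and a red creature: search your library for a card named "
                       "Prince of Thralls and put it onto the battlefield."),
}
exact_clone = corpus["Demon's Herald"]
assert find_near_clone(exact_clone, corpus) == "Demon's Herald"
assert find_near_clone("Draw a card.", corpus) is None
```

Exact clones score 1.0, pseudo-clones land somewhere in the middle, and that's exactly why a hard threshold only catches the easy cases.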
But here's the thing: the cards that get generated aren't neatly divided into clones and original works; it's more of a sliding scale.
For example, you can have cards that are pseudo-clones, cards that strongly resemble existing cards but that go off in a different direction:
Sarkhan's Stage 2BR
Enchantment (Uncommon) 1B, exile one or more creature cards from your graveyard: put an X/X black Zombie Horror creature token onto the battlefield.
Sacrifice a creature: regenerate Sarkhan's Stage.
#This is an attempt at a better Corpseweft, because it is also a sac outlet and it is virtually immune to enchantment destruction.
And then you can have cards that are hybrids of cards that have certain similarities in their design:
Blight Herder 6
Creature - Elemental
Flash
Evoke 1G
When Blight Herder enters the battlefield, put two 1/1 colorless Eldrazi Scion creature tokens onto the battlefield. They have "sacrifice this creature: add 1 to your mana pool."
3/4
#Hybrid of Blight Herder with Briarhorn, joined by the fact that both Lorwyn elementals and Eldrazi like ETB abilities.
On the creative side of the spectrum, we have cards that draw inspiration from many sources but whose origins are hard to suss out. That, I guess, is pretty close to what we want to get from the network.
Spawning Grounds 3G
Sorcery (Rare)
Until end of turn, creatures you control gain "whenever this creature deals damage to an opponent, put a token that's a copy of that creature onto the battlefield.
Dreads Hermit W
Creature - Spirit (Uncommon)
Whenever you cast a spirit or arcane spell, you may put a storage counter on Dreads Hermit. 1, T, remove X storage counters from Dreads Hermit: add X mana in any combination of B or W to your mana pool.
2/1
Talas Carapacer UUU
Creature - Elemental (Rare)
Flying
Talas Carapacer gets +2/+0 as long as you control another creature on the battlefield.
All Spirits have "when this permanent enters the battlefield under your control, put a +1/+1 counter on Talas Carapacer." T: Target creature can't be blocked this turn.
1/1
And on the far end of creativity, we start to see some fraying of the logic because the network is going way off the beaten path:
Kirag, Bloodfire Justice R
Planeswalker - Kirag (Uncommon)
+1: Up to one target creature can't be regenerated this turn.
5
#What an adorable mini-planeswalker!
Shadowfeed 3BB
Instant (Rare)
Search your library for an instant card or a Rabler card, reveal that card, and put it into your hand. Then shuffle your library.
Flashback 7B
Ether Apprentice 4WU
Creature - Angel
Flying
Protection from white 7: Return those cards to the battlefield under their owners' control at the beginning that dealt damage during the untap step.
3/3
Jilomar, the Tender Gamer 5UU
Legendary Creature - Beast (Mythic Rare)
Flying
Your maximum hand size flips an enchantment card.
3/3
Remember, from the perspective of the machine, everything that it generates is representative of Magic: The Gathering. Even when it's spitting out nonsense, those are Magic cards according to the learned model. We tend to think of Magic as a formal system of fixed rules, but for the neural network it's more of a series of fuzzy propositions about what cards can be, such as:
* "Bars in the text mark the beginning and end of fields of a card." (confidence 1)
* "Creatures have power and toughness." (confidence 0.9999999)
* "Elves can be green." (confidence 0.98)
* "Dragons have flying." (confidence 0.97)
And then you have more outlandish propositions like "planeswalkers can be of uncommon rarity", "Rabler is a card type", "creatures can deal damage during the untap step", and even "a player's maximum hand size can flip enchantments". They're unlikely to be true, but the likelihood is non-zero. It's a necessary consequence of using an algorithm that is empowered to make educated guesses about things it has never observed: some hypotheses end up being wrong.
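If it helps to make those confidence numbers concrete: they can be read as the probability mass the network puts on a continuation. A toy sketch, with logit values made up purely to mimic the "Dragons have flying" case:

```python
import math

# Toy illustration (not the actual model): "confidence" as the softmax
# probability assigned to a continuation. The logits are invented.
def softmax(logits):
    m = max(logits.values())
    exps = {k: math.exp(v - m) for k, v in logits.items()}
    total = sum(exps.values())
    return {k: v / total for k, v in exps.items()}

# Hypothetical next-keyword scores after priming with a Dragon's type line:
probs = softmax({"flying": 5.0, "haste": 1.5, "islandwalk": 0.0})
assert max(probs, key=probs.get) == "flying"  # strong, but not absolute
```

Even the outlandish hypotheses keep a sliver of that probability mass, which is why they occasionally surface in samples.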
-----
Anyway, I get the feeling that we've been setting the dropout rate too high - we could probably make do with a value closer to zero (if not zero). I also need to do more testing with the norm stabilization, to get a better feel for its effects.
EDIT: ... Sigh ... I can't do GPU training on this computer. Perhaps on some other computer, but not this one. This isn't an "Oh, it's really hard" thing; this is "I skipped checking that it actually satisfied the system requirements". CUDA can't run on this. Maybe OpenCL, but I don't feel like climbing out of one rabbit hole only to immediately plunge down another. (Okay, fine, it's definitely compatible with OpenCL 1.2. But I go no further for now.)
Aww, I'm sorry to hear it.
EDIT: As a side note, for those of you who may not be familiar with the process, parameter tuning is a much easier problem when dealing with tasks like classification. For a neural network that detects cats, its effectiveness can be determined objectively (it sees the cat or it does not); you can automate a routine that tests lots of different versions of the network and optimizes the parameters to maximize cat detection potential. Here, however, we're trying to measure "creativity"... that's not so easy. lol.
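To make the contrast concrete, here's roughly what automated tuning looks like when you do have an objective metric. train_and_score is a made-up stand-in for "train the cat detector and measure accuracy on a held-out set":

```python
import itertools

# Sketch of automated tuning with an objective metric: sweep the parameter
# grid, keep whatever maximizes validation accuracy.
def train_and_score(dropout, lr):
    # Stand-in for a real train/evaluate routine; a toy smooth landscape.
    return 0.9 - (dropout - 0.3) ** 2 - (lr - 0.01) ** 2

grid = itertools.product([0.0, 0.3, 0.5], [0.001, 0.01, 0.1])
best = max(grid, key=lambda params: train_and_score(*params))
assert best == (0.3, 0.01)  # the toy optimum
```

There's no analogous scoring function for "creativity", which is why this loop can't just be pointed at the card generator.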
EDIT(2): An article came out 72 hours ago entitled "Attribute2Image: Conditional Image Generation from Visual Attributes" (link here) by authors working at the University of Michigan, Adobe, and NEC Labs. Fun stuff (see image). On that note, I'm keeping an eye out for source code to be released for these new papers (seeing as I don't have the time to implement stuff from scratch).
In the end, I decided against trying to work with the unitary stuff for now, until I can come up with an activation function that's easier for me to have my code analyze (or just, you know, use an existing framework...). The basic issue is that I designed this stuff to work with addition, subtraction, multiplication, and non-linear functions. Their rectifier involves division, and piecewise functions on top of that. If I want something that my code can work with, I need something else.
Besides that, I've been messing with OpenCL stuff. I've got simple demos from Apple working, so that seems okay. Running the tests in pygpu just segfaults at the moment, though.
For the record, I'm getting exploding losses whenever I try to apply the stabilization stuff to the memory cells of the LSTM. I'll need to look into why that might be. Perhaps I'm messing up a calculation somewhere, or I'm not weighting the stabilization loss appropriately with regards to the correctness loss.
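For reference, the stabilization penalty I'm talking about looks roughly like this (after Krueger and Memisevic's activation-stabilization idea); the beta value and toy states below are purely illustrative:

```python
import numpy as np

# Rough sketch of the norm-stabilization penalty: penalize step-to-step
# changes in the hidden-state norm. Weighting beta badly against the
# correctness loss (or applying this to the unbounded LSTM cell state)
# is my current suspect for the blowups.
def norm_stabilizer(states, beta=50.0):
    """states: (T, hidden_dim) array of activations; returns scalar penalty."""
    norms = np.linalg.norm(states, axis=1)        # ||h_t|| for each step
    return beta * np.mean((norms[1:] - norms[:-1]) ** 2)

assert norm_stabilizer(np.ones((4, 8))) == 0.0    # constant norm: no cost
assert norm_stabilizer(np.array([[1., 0.], [2., 0.], [3., 0.]])) == 50.0
```

Note that LSTM memory cells can grow without bound over a sequence, so a penalty on their norm differences could plausibly dominate the total loss if it isn't scaled down.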
Also, I'm currently looking into using illustration2vec for fun and profit. We know we can make vectors that represent cards. Thanks to illustration2vec, we also have a way of getting vectors that represent card art, as well as a system that will take those art vectors and give us tags that describe the content of the image (see image). It won't be perfect; for example, illustration2vec thinks the goblin is a little boy (a reasonable mistake). But if we can learn a mapping from the card vectors to the art vectors, then we should be able to get descriptions of artwork for novel cards.
I have all the Magic art, and all the cards, and a way of getting all the vectors. I'm working on bringing that all together using Caffe+Python. Should be fun.
And from there, assuming that something like Attribute2Vec comes to fruition (and has source code available), then we'd have a pipeline from cards to crappy images. Slap a coat of Van Gogh or Wayne Reynolds on the result and we should have something passable for art.
But that comes later. For now I'll be happy just to have a vague description of what the art would look like.
EDIT: Step one complete! I have a file full of (card vector, art vector) pairs. Next comes the translation bit, and then finally a tool wrapped around that to take cards and suggest artwork.
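To give a flavor of the translation bit, here's a minimal sketch on synthetic data, with a linear least-squares map standing in for the real feed-forward network (all the vectors here are random stand-ins, not real card data):

```python
import numpy as np

# Learn a map from card vectors to art-tag vectors from saved pairs.
rng = np.random.default_rng(0)
card_vecs = rng.normal(size=(500, 64))   # stand-in card vectors
true_map = rng.normal(size=(64, 16))
art_vecs = card_vecs @ true_map          # stand-in art-tag vectors

# Fit the translation: minimize ||card_vecs @ W - art_vecs||^2.
W, *_ = np.linalg.lstsq(card_vecs, art_vecs, rcond=None)

# Given a novel card's vector, predict its art description vector:
novel_card = rng.normal(size=(1, 64))
predicted = novel_card @ W
assert np.allclose(predicted, novel_card @ true_map, atol=1e-6)
```

The real version swaps the linear fit for a small feed-forward net, since the card-to-art relationship is presumably not linear.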
Talcos, I am very intrigued by the network coming up with clauses that activate if you win. Could you seed the network with "If you win" and see if it comes up with anything interesting please?
That's a bit tricky to do, insofar as the phrase "if you win" only ever occurs in the middle of a line of text. Example cards in previous dumps that contain the phrase include:
Spire Bolt 2R
Instant (Uncommon)
Spire Bolt deals 3 damage to target creature or player. Clash with an opponent. If you win, return Spire Bolt to its owner's hand.
Boroal Recluse 1G
Creature - Elf Shaman (Rare)
Sacrifice Boroal Recluse: Flip a coin. If you win the flip, put a 5/5 colorless Djinn artifact creature token with flying onto the battlefield.
Whenever you cast a spell, you may put two +1/+1 counters on Boroal Recluse
2/1
We can, of course, seed the network with text of our own to describe something that can be won or lost, and then let the network take it from there. For example,
Nyxborn Veil 1UR
Instant (Uncommon)
Challenge target opponent to a game of fisticuffs. If you win, draw three cards.
#There's only one fair way to settle this...
Render's Mage-Sails xxRR
Sorcery (Rare)
Enter a lightning round with target opponent. If you win, Render's Mage-Sails deals 3 damage to that player for each creature tapped this way.
#Variable amounts of mana! Tapping of creatures! The lightning round has it all!
Honors' Raid 1R
Enchantment (Rare)
Whenever you enter a lightning round and win, you may pay 3. If you do, put a colorless artifact token named Metallitan onto the battlefield.
#I have no idea what Metallitans are supposed to do. I imagine they're redeemable for prizes. I re-primed the network with the text for this card up to the name just to see what the alternatives were. Other names for the token include "Gold", "Advice", and "Lightshade".
But yeah, the network usually follows the phrase "if you win" with something good for you or bad for your opponent. Was there something specific that you were hoping to see?
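For anyone wondering how priming works mechanically: you push the seed text through the RNN to set its hidden state, then let it sample the continuation. A sketch follows; the model interface (initial_state/step/sample) is hypothetical, not the actual Torch code:

```python
# Hypothetical sketch of priming a character-level sampler.
def sample_primed(model, seed, length=200):
    state = model.initial_state()
    for ch in seed:                        # condition on the seed text
        state, _ = model.step(state, ch)
    out, ch = [], seed[-1]
    for _ in range(length):                # then sample freely
        state, dist = model.step(state, ch)
        ch = model.sample(dist)
        out.append(ch)
    return seed + "".join(out)

class ToyModel:
    """Stand-in model that always emits 'x', just to exercise the loop."""
    def initial_state(self): return None
    def step(self, state, ch): return state, {"x": 1.0}
    def sample(self, dist): return "x"

primed = sample_primed(ToyModel(), "if you win the game, ", length=5)
assert primed == "if you win the game, xxxxx"
```

This is also why mid-line phrases are awkward seeds: the sampler has only ever seen them preceded by a card's earlier fields, so seeding from a cold start puts it in unfamiliar territory.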
There's also a paper due in two weeks. I'll post some more detailed discussion of my discoveries as I work on it.
I like the poster, and I look forward to seeing what you do with the paper.
-----
I sketched out the process for training the card2artvec translation network. Tonight, I just need to figure out the right way to implement it using Caffe. It's a bit different from Torch, but that's okay. Once that's done, I'll be able to take the trained network, pair it with the illustration2vec tag predictor, and then we'll be in business.
Was there something specific that you were hoping to see?
I was asking under the assumption that the network meant "if you win the game"; I didn't even consider the Clash mechanic. Since that's the case, could you seed it with "If you win the game" instead?
Aha, I see. Here are some results:
Dungeon Shade 3U
Creature - Shade Wizard (Uncommon)
Flying
Pay 4 life: Draw a card.
If you win the game, return Dungeon Shade to its owner's hand.
2/2
#Errata: Cards that you control but do not own become yours at the end of the game if you win. Dungeon Shade is an exception to the rule.
Angelic Savage 1R
Creature - Human Barbarian Warrior
If you win the game, Angelic Savage can't be countered.
Level up R
Level 3 (4/3): 3U: target player draws a card for each tapped artifact creature you control.
2/2
#Also true. If the game state doesn't exist, it's impossible for Angelic Savage to be countered. Or played, for that matter, but that's beside the point.
Mistform Serpent 6U
Creature - Serpent
Mistform Serpent can't attack unless defending player controls an Island.
If you win the game, destroy all permanents you control.
4/5
#Sweep the board clean! The game is over.
Down of Damnation WW
Enchantment (Rare)
At the beginning of your upkeep, if there are two or more instant and/or sorcery cards in your graveyard, you may search your library for a white card, reveal that card, and put it into your hand.
If you lose the game, Down of Damnation deals damage equal to its power to itself.
Soratami Crag
Land (Uncommon)
Sunburst (This enters the battlefield with a charge counter on it for each color of mana spent to cast it.)
At the beginning of each player's upkeep, Soratami Crag deals X damage to that player, where X is the number of charge counters on Soratami Crag.
If you lose the game, take an extra turn after the Bastlepie.
#No idea there. Interesting design though. Pity that it's slapped on a land, where it does nothing.
Skirk Preventer 2W
Creature - Human Soldier (Uncommon)
Skirk Preventer gets +1/+1 for each aura attached to it.
If you would lose the game, you may gain 1 life instead.
2/3
#Seems perfectly fair.
---
By the way, caffe seems to dislike me. All I wanted was a simple, feed-forward network, but it wants to stymie me for unusual reasons. There's a bit of a learning curve it seems. But I'll get past it.
EDIT: Solved the problem that I was having. The error message was just very cryptic.
EDIT: Training error is... NaN. But hey, data is coming in and data is going out, so that's something. I'm getting closer.
Skirk Preventer 2W
Creature - Human Soldier (Uncommon)
Skirk Preventer gets +1/+1 for each aura attached to it.
If you would lose the game, you may gain 1 life instead.
2/3
#Seems perfectly fair.
Well how else can you expect to beat a deck with the Kobayashi Maru?
Out of curiosity, what would happen if you primed a card with "Vraska"? She only has one card, so would the system output an exact copy, or would it know that she should be a black green planeswalker and come up with appropriate abilities, or would it just spit out something largely unrelated?
Mostly unrelated, though a higher proportion of cards named Vraska are legendary creatures than average. Perhaps because it sounds like a name. Example outputs:
Vraska the Ox 2U
Legendary Creature - Human Wizard
Whenever a source deals damage to you, destroy it.
2/2
Vraska the Goblin Gatekeeper 1RR
Legendary Creature - Goblin Warrior (Rare)
At the beginning of your upkeep, if a player controls more creatures than you do, Vraska the Goblin Gatekeeper deals 3 damage to that player.
3/3
Now and then, it tries to fit Vraska into another word or to treat it like something other than a name:
Vraskacate 3W
Sorcery (Rare)
Each player exiles all land cards in his or her hand.
Vraska the Earth 1B
Sorcery (Uncommon)
Return target creature card from your graveyard to your hand.
Draw a card.
This unrelatedness, I think, is healthy. It would be overfitting for the network to infer from one example that the name Vraska has to belong to a green/black planeswalker. The network definitely does learn associations with names, of course.
EDIT: I should note, of course, that if we're dealing with a network that is experiencing severe overfitting, then this can happen. For example, when I sample a network that I know has overfitting issues, priming with "Vraska" (and especially "Vraska the Unseen") gives me a disproportionate number of planeswalkers.
----
By the way, I'll see about going back and finishing that caffe code later today. Been busy, but I'll get it done.
EDIT: I'm close. Getting an exploding loss issue when trying to train, but I know that I'm loading all the data in correctly and that it's doing all the right computations. I just need to look at my network setup and the training configuration. Right now training loss goes up to 8000, then to 22000, then to 512419, then to infinity, then to not-a-number. Whoops. Good news about caffe is that you don't have to write your own training script. You just feed all your data in and say "solve!" and it does everything for you. That's nice.
Skirk Preventer 2W
Creature - Human Soldier (Uncommon)
Skirk Preventer gets +1/+1 for each aura attached to it.
If you would lose the game, you may gain 1 life instead.
2/3
#Seems perfectly fair.
I think it could work if it just made you unable to die and kept you at one health for as long as Skirk preventer is on the field, maybe make it unable to be enchanted too but that's about it
I think I found out why I was getting the exploding losses. I completed the rest of the pipeline while pondering on the issue, and I saw there was a size mismatch between the vectors I was producing and the vectors I need, and that's when it hit me.
I had extracted the wrong vectors from the illustration2vec network. The vectors I was trying to train on were embedded representations of the actual images, rather than the probabilities of the tags.
That created an impossible problem to solve. For example, how are you supposed to predict from the text of Caged Sun that the green sun is caged? And so on.
I'm rerunning my data-gathering process to get the correct vectors for training. I'll come back in a few hours and try the training process again, hopefully with more luck this time.
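To spell out the bug and the fix (everything here, names and sizes included, is a hypothetical stand-in):

```python
import numpy as np

# The training targets must be the tag probabilities at the END of the
# tagging network, not an intermediate embedding of the raw image, which
# card text can't be expected to predict.
def art_vector(layer_outputs):
    # Wrong (what I had): layer_outputs['embedding'], a representation of
    # the pixels themselves. Right: the per-tag probabilities.
    return layer_outputs['tag_probs']

outputs = {'embedding': np.zeros(4096),               # image embedding
           'tag_probs': np.array([0.9, 0.05, 0.6])}   # e.g. solo, 1boy, armor
assert art_vector(outputs).shape == (3,)
```

The embedding encodes things like "the green sun is caged", which is visual, not textual; the tag probabilities describe content the card text can at least gesture at.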
Found myself looking through a dump (probably from a month ago) just to see what I could find. Random highlights:
abyssal splittal (mythic rare) BURRW
creature ~ elemental
flying
whenever @ attacks, defending player puts the top two cards of his or her library into his or her graveyard. B, T: search your library for a multicolored creature card with converted mana cost 5 or less and put it onto the battlefield. then shuffle your library.
(4/3)
### Coherent! Weird mana cost, but at least blue is represented in the mill part, and white is represented in the flying part. Possibly the red modified the p/t? And not sure about the black. The ability makes sense too, which is wonderful; it has searching for a valid target, does something valid with it, and remembers to shuffle the library after searching.
abzan embars (uncommon) 1(U/R)WB
creature ~ elemental assassin
whenever @ blocks or becomes blocked by a creature, destroy that creature at end of combat.
(3/3)
### An Esper/Mardu elemental assassin? Cool! Nice ability too, could be argued in any of the colours represented (I think).
korozda limarid (rare) 2BGG
legendary creature ~ vampire
flying
whenever a card is put into an opponent's graveyard from anywhere, return @ from your graveyard to your hand, then discard a card.
(5/5)
### A very recurring vampire with a coherent graveyard-matters theme! I like it. A 5/5 with flying for 5cmc is also spot on.
earth~torted defenses (uncommon) 3
artifact creature ~ salamander 1, T: put a % counter on @. T: each opponent sacrifices a creature. if that player doesn't, destroy the creature creature with power less than or equal to the number of % counters on @.
countertype % charge
(2/2)
### Fascinating design, I really like it. Just pretend the last sentence starts with "If that player doesn't, destroy target creature" instead of 'destroy the creature creature'. It's a small enough slip that I'm willing to let it slide (usually I only post flawless cards here). This also seems like it could be a power level issue card; forcing a sacrifice each turn is nothing to sneeze at. Then again, it's balanced against the fact that you also have to sacrifice something, and also the fact that the opponent can choose not to. I think the wording would need some adjustment to make it playable, but such wonderful design.
myios's shoreshaper (rare) 5
land
@ enters the battlefield tapped.
at the beginning of your upkeep, put a % counter on @.
remove a % counter from @: add one mana of any color to your mana pool. activate this ability only if you control no one creatures with "this creature's power and toughness are each equal to the number of % counters on @.
countertype % verse
### Ok, we can ignore 'ETB tapped' since this land doesn't tap for anything, yet it still works. You get a verse counter every upkeep, and you can remove a verse counter for any colour of mana, without losing life or anything? OP, I hear you say. Not so fast! This is possibly the best limiting condition I've seen on an RNN card; you can't activate this ability if you control a creature with p/t each equal to the number of verse counters. So, sure, playing a deck with only unequal p/t creatures makes this land work. This is pretty awesome design.
Also, it costs 5 to cast this land, apparently. Not sure if that works within the rules; maybe this should be an enchantment.
Losses are going down! Mind you, the feature vector is very sparse. For example, most Magic cards do not have characters with red hair in them, so the red hair entry in the vector is usually zero. It's easy to get losses low initially if you just predict that the art is featureless, but to make further improvements, you have to start making more intelligent decisions. I only ran it for one epoch, so the network is actually very undertrained at this point... but oh God, I actually think I can get results with this.
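A quick toy demonstration of why sparse targets make early losses look deceptively good (the 2% activation rate below is made up):

```python
import numpy as np

# When almost every tag entry is zero ("no red hair" on most cards),
# predicting a featureless, all-zero art vector already earns a low loss.
rng = np.random.default_rng(1)
targets = (rng.random((1000, 512)) < 0.02).astype(float)  # ~2% of tags active

baseline = np.zeros_like(targets)            # "the art contains nothing"
mse = np.mean((baseline - targets) ** 2)     # roughly 0.02: already small
assert 0.0 < mse < 0.05
```

So a falling loss curve only starts to mean something once it drops below what the all-zeros baseline would score.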
I gave it...
Vigilant Drake 2U
Creature - Drake (Uncommon)
Flying, infect
2/3
Top tags: Solo, no humans, cloud, sky, armor. Character looks most like: No strong results here. Copyright: Pokemon? Dark Souls?
#The illustration network was primarily trained on humans (and humanoids), so the best it can say is that the artwork is not of a human subject. But it says "solo", so there IS a subject in the artwork. Interesting that it didn't note the wings, but that's okay. More training might fix that.
Sylvan Escort 2G
Creature - Elf Rogue (Uncommon)
Creatures with power less than Sylvan Escort's power can't block it.
2/2
Top tags: Solo, 1boy, weapon, armor, tree, nature. Character looks most like: Link. Copyright: Probably an original work, though Final Fantasy comes in pretty close.
#So a lone (male) wanderer in the forest bearing a weapon and wearing armor, standing near a tree. Bonus points if the elf is a moody, complex character. Hahaha!! :-D
If you go past the top tags, there's some noise creeping in. I need to sit down and train the network fully. But hey, at least it's somewhat working. Once I've had the opportunity to do that, I'll be sure to post more in-depth results.
From here, all we need to do is map the art description vectors to some kind of art generating network like those papers I showed you earlier. That would be sufficient to get some (really low-quality) artwork. That comes later, of course, haven't figured that step out yet, lol.
Hello all! I meant to post some more results last night, but I had a long day at the lab and ended up falling asleep not long after I got home.
For viewers at home who are just tuning in, here's what's going on:
* We have a neural network that is churning out Magic cards.
* We have a mapping from Magic cards to vectors describing the semantics of Magic cards, learned by another neural network. That is, the vectors categorize the cards such that cards that are semantically similar end up with similar vectors. Llanowar Elves and Fyndhorn Elder have highly similar vectors. The card vectors are compositions of individual word vectors, so "Add G to your mana pool" is vector(add) + vector(G) + vector(to) + vector(your) + vector(mana) + vector(pool). It works really well, actually.
* We have a third neural network (illustration2vec) that can take artwork and tag it according to its content, producing a vector that describes what it is seeing in the scene. So artwork of Llanowar Elves -> artvector(man wielding sword with tattoos)
* In principle, sufficiently detailed descriptions can be used to generate novel artwork. But in order to do that, we'd need a way to get from cards to art descriptions. That's sort of what we're experimenting with here.
* Along those lines, I have a fourth neural network (still with me?) that maps card vectors to art description vectors, trained on all the cards and artworks in Magic.
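The additive composition described in the second bullet, sketched with a toy random embedding table (the vectors are made up purely for illustration):

```python
import numpy as np

# A card vector is just the sum of its word vectors, so texts sharing
# words land near each other in the vector space.
rng = np.random.default_rng(2)
vocab = ["add", "G", "to", "your", "mana", "pool", "draw", "a", "card"]
embed = {w: rng.normal(size=32) for w in vocab}

def card_vector(text):
    """Compose a card vector by summing the vectors of its words."""
    return sum(embed[w] for w in text.split())

v = card_vector("add G to your mana pool")
expected = (embed["add"] + embed["G"] + embed["to"]
            + embed["your"] + embed["mana"] + embed["pool"])
assert np.allclose(v, expected)
```

Because the composition is a plain sum, any linear-ish map on top of it (like the card-to-art translation network, at least weakly) carries that additivity through.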
So, now I have that shiny new network, and it's time to take it for a test run. The first thing to note is that this transformation from one kind of vector to another (at least weakly) preserves the additive compositionality of card vectors. So word vectors can be turned into art description vectors that convey how their use influences the artwork.
For example, if we just feed a vectorization of the different symbols of mana, we can see what associations the network makes between those symbols and features in the artwork. Note that the number indicates the confidence of the network in that feature, that is, how strongly it feels about that feature.
Since it only has one word vector to go on, the associations are fairly weak, but we can clearly see that it's picking up on something. The artwork for green cards has the strongest measurable thematic consistency, followed by white. Black is the weakest color in this respect. Now, that's not to say that black doesn't have a clear direction with its art, but it's not one that corresponds to any tags that illustration2vec is tracking. After all, Magic's swamps and bogs are mostly just dead trees, so I can see the connection there.
Flying: no humans, solo, sky, cloud, wings, water
Trample: no humans, solo, tree, nature, weapon
And the compositionality seems to hold for small creatures. And I say creatures specifically because the illustration2vec network was trained on living subjects, so it tends to miss most things on artifact, enchantment, and land cards. Of course, the wordier a card is, the more noise starts to become an issue. For example:
Vital Ghost 1GG
Creature - Spirit (Uncommon)
Amplify 1 (As this creature enters the battlefield, put a +1/+1 counter on it for each Spirit card you reveal in your hand.) T: Add G to your mana pool. The kami of House Rithfranter were merely the brains of Gaia, and were croched by their own ill-seeds for its debation.
2/2
#Yes, brave House Rithfranter, loyal mana dorks of Gaia! Or something. I'm not sure where it was going with that one. I do like the idea of amplify being a mechanic for spirit tribal though.
* Vital Ghost stands alone.
* Vital Ghost is not a human.
* Vital Ghost is more likely a female figure rather than a male figure (ever so slightly, it's hard to tell).
* Vital Ghost's hair (and she has hair) is more likely long than short.
* Vital Ghost wields... something. Might also be wearing a hat, maybe a helmet.
* Vital Ghost is surrounded by nature.
Which is funny, because that actually describes a green spirit that the network was trained on, Ghost-Lit Nourisher. Long hair, a hat, female, standing alone. The associations are hazy and less certain than what illustration2vec would have chosen, however. Here are the actual features that illustration2vec tags when presented with the image for that card.
Notice how illustration2vec, seeing actual artwork, notes things like her dress (rather than just the more generic tag 'armor'), as well as a clear indication of the backdrop like the night and the moon (as opposed to just 'nature'). These minor details tend to fall away because they're difficult to predict. This also means that an art generating network would have an immense amount of latitude if attempting to conjure art based on these imaginary art descriptions.
Of course, there are limits to what we can do with illustration2vec. Illustration2vec was trained on character artwork, so it performs best on creatures and very weakly on cards like sorceries and instants. For example,
Stunting Blast R
Instant (Common)
Stunting Blast deals 2 damage to target creature or player. "The spirit of the stone is the spark of chaos as the other waves." - Rakka Mar
Strong values for fire and glowingness, but beyond that the details aren't very clear. How 'facial hair' and 'polearm' even made it into the list is beyond me. It is interesting to note that the first two entries tend to be "solo" and "weapon", as those tags generally apply to the majority of Magic cards. When in doubt, the translation network just slaps them on there because they happen to be true the majority of the time.
But this is just a trial run. One thing that's missing from the scene is composition information. You can have a picture of two figures, a man and a woman, the man wielding the sword and the woman wielding the spear, but all we get out of the art description vector is (man, woman, sword, spear). Who was holding what? On top of that come all of the problems of placement of figures in the image, rotation, scaling, background, etc.
Now, an image analyzing network can easily figure all that information out by studying the image in greater detail - that's not our problem. Our problem is that a predictor for art descriptions isn't going to give us that information (without terrible overfitting). We'd rather just have the barebones description like we have right now and let an art generator come up with the rest. But there are a lot of unresolved problems with making decisions about composition.
If we ignore composition completely, you end up with pictures like I've attached below. That's where I used that texture generating network to come up with an image based on my face that provokes the same reaction (for the neural network) and has the same semantic content. Interestingly, in the result you will see that there are only two eyes, two ears, one chin, one forehead, and one nose in the image. They're just not... where they need to be. According to the network, both of these images provoke the reaction of "it's some white dude in a purple shirt" when flashed before its eyes.
So we have a constraint-setting problem to be solved when it comes to image generation. Fortunately, there is work being done on that very problem (like I showed in previous posts). We'll get it solved.
Anyway, if what I have shared with you interests you, I can see about polishing the code I wrote and making it all available on Github.
Also, maplesmall, I love the cards you shared. I meant to comment on them earlier. Earth-torted Defenses in particular has a very fascinating design. Good find!
-------
EDIT: By the way, on a completely unrelated note, there was a fun article in the Washington Post about Google's quantum artificial intelligence research, coming on the heels of Google's claim that, for certain classes of optimization problems, their quantum computer can solve those problems 100 million times faster than is possible with conventional computing technology (link to research article here). Last year, Google researchers put out an article on a quantum machine learning algorithm for handwritten character recognition (link), and around the same time there was already talk of "quantum neural networks" (link). Don't be fooled, the QNN would still be as dumb (that is, "conservatively smart") as a jellyfish, but it'd be dumb at a speed heretofore unknown to mankind.
I'm waiting for the dust to settle before making any judgement on all this, of course. More independent verification is needed, and that's difficult. The hardware is expensive, and you have to run it in a room that's colder than deep space; the slightest disturbance and all you're left with is a room full of dead cats. No one I know has the kind of money needed to pull that off. But if all this pans out, the future is going to be very, very weird.
So for my senior indie study project to graduate, I decided to run this RNN, and I came across an error.
That is, I got everything to work as far as I know, but when I run the training, the code comes up and processes, but then the screen flashes and I just go back to the start screen.
It's a Chromebook; I used chroot to put XFCE Ubuntu on as a secondary OS and ran all the tutorial info in the terminal. Is this a CPU issue? I will post my system specs in my next post.
Thank you.
This is pretty severe. I have passed every class but this one, and if I don't have something to show for it, I will, hilariously enough, flunk my own graduation even though I passed all the requirements, because I procrastinated on MtG card making.
Thanks, all, for the help.
Well, you've come to the right place. I'm about to leave for a meeting, but here's my e-mail:
rmmilewi (at) gmail (dot) com
send me an e-mail and we can talk further about your problem and your options this evening. I have a feeling this will be a long sequence of back-and-forth "did you do this?"/"did you try that?" messages until we can get to the bottom of your problem. One way or another we'll get it resolved.
Looks like the encoding script will need an update; the colourless mana symbol <> is now an official part of MtG casting costs! I wonder what defining characteristics it will have, and what will set it apart from artifacts... if anything.
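For anyone curious what that update looks like mechanically, here's a sketch. The symbol table and `encode_cost` function are hypothetical stand-ins, not the actual mtg-encode internals:

```python
# Hypothetical symbol table for a cost tokenizer -- the real mtg-encode
# mapping differs; this only shows the shape of the change.
MANA_SYMBOLS = {"W": "white", "U": "blue", "B": "black",
                "R": "red", "G": "green",
                "<>": "colorless"}  # the newly official colourless symbol

def encode_cost(cost):
    # Split a cost like "2<>U" into known symbols, longest match first.
    symbols, i = [], 0
    while i < len(cost):
        if cost[i:i + 2] in MANA_SYMBOLS:
            symbols.append(cost[i:i + 2])
            i += 2
        elif cost[i] in MANA_SYMBOLS or cost[i].isdigit():
            symbols.append(cost[i])
            i += 1
        else:
            raise ValueError("unknown symbol at position %d" % i)
    return symbols

print(encode_cost("2<>U"))  # ['2', '<>', 'U']
```

Matching two characters before one is what keeps `<>` from colliding with anything else in the cost string.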
Is there a way we (as in people not running the neural networks, but capable of assigning tags) can help train the neural network by teaching it tags for nonhuman things? The only booru I can name where MTG official can be clearly separated from artwork is http://e926.net/post?tags=mtg+official_art&searchDefault=Search, which, since it happens to be a furry-focused board, has a much lesser proportion of humans, elves and goblins.
Good question! I appreciate the offers to help, but I don't want to send you on what could be a wild goose chase or an unnecessary endeavour. Here's why (the spoilers hide elaborations).
First, what we don't yet have is a generative model suited to our purposes, and so we don't really know what the fidelity of the results will be with respect to the tags. I'm certain that we'll get there soon, given recent advancements.
As I've shown y'all before, there was a paper by Mansimov et al. last month showing that you can map plain-English descriptions of images to novel (if blurry) images, and Radford/Metz/Chintala (who brought us that eyescream project) have given us a way of generating complex images with adversarial networks. In the picture I've attached, those are all completely original bedrooms imagined by the network. But upon close inspection, you'll notice "defects" in the scenes:
* Sometimes you're seeing things at angles that are physically impossible.
* Sometimes beds melt into the floors.
* In one example, the fire has spread from the fireplace and has engulfed the entire room in flames.
And that's after studying tens of thousands of bedrooms. Fantasy art is harder to come by, and we have fewer representative images for different classes of objects like monsters.
So until I get my hands on a working model for art generation, it's hard to say just how helpful those additional tags will be.
Second, visual object recognition/classification is getting really, really good.
Look at the error rate for the top performing networks in the Imagenet competitions (which started in 2010):
* In 2010.. 28%
* In 2011.. 26%
* In 2012.. 15%
* In 2013.. 11%
* In 2014.. 6.66%
* In 2015 (results announced two days ago).. 3.57%
And we just keep getting better!
Now, that's with real-world photographs, not drawings. But I'm convinced that we'll get better at bridging the gap between recognizing real things and recognizing artistic representations of things, and with that will come very rich and detailed tag information.
TL;DR: The problem might solve itself soon, and we don't yet know about the quality of the image generation. But we'll see! We might end up coming back to the idea you've suggested.
EDIT: I am currently shopping around for the right generative models, by the way.
EDIT(2): Okay, so I've been looking at the code for the deep convolutional generative adversarial networks, and I think that could suit our needs. They seem to get good results with relatively small data sets, like the example they give for album covers. I might try training on just creature art and see where that takes us, and then we can go from there. I'll have to find the time to sit down and rework the code for our purposes, of course.
EDIT(3): Still working on it. They haven't released their version of the dataset that they used for the face generation (I'm modifying that code), so I'm having to reverse engineer the encoding. If I can get it right, everything should run without issue. Right now I'm reading through the documentation for the Fuel library to make sense of some of the finer details. :-D
The Google quantum results are a bit of a red herring - they're essentially comparing problems that are set up optimally for their equipment against the computation time on hardware using suboptimal techniques that hit that particular problem as a worst case. Small changes in the classical hardware reduce the runtime from months to under a second.
Without dropout, there are more opportunities for memorization, which leads to occasional clones of cards like this one:
Demon's Herald
B
Creature - Human Wizard (Uncommon)
2B, T, sacrifice a blue creature, a black creature, and a red creature: search your library for a card named Prince of Thralls and put it onto the battlefield. Then shuffle your library.
1/1
#This is Demon's Herald exactly.
Now, not using dropout does have certain advantages: it appears that we get fewer outright garbage cards, and if the network produces an exact clone of a card, then we can filter out the result automatically. The same goes for near-perfect clones like these:
Rathcask Tithe
2BB
Creature - Human Cleric (Rare)
3B, T, sacrifice a blue creature, a black creature, and a red creature: search your library for a card named Scion of Darkness and put it onto the battlefield. Then shuffle your library.
1/1
#This is the lovechild of Demon's Herald and Dark Supplicant. Nice try, RoboRosewater.
Karonahi, World Render
3G
Creature - Human Wizard (Rare)
2B, T, sacrifice a blue creature, a black creature, and a red creature: search your library for a card named Prince of the Spirit of the Claws of the Convert of Bolase and put it onto the battlefield. Then shuffle your library.
1/1
#Again, nice try, but you'll have to do better. And I'd love to see what "Prince of the Spirit of the Claws of the Convert of Bolase" is supposed to be.
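That automatic filtering can be done with plain string similarity. A minimal sketch, using difflib as a stand-in for whatever matcher the real pipeline would use:

```python
import difflib

def clone_score(card_text, corpus):
    # Highest similarity ratio between a generated card and any real card.
    return max(difflib.SequenceMatcher(None, card_text, real).ratio()
               for real in corpus)

def is_clone(card_text, corpus, threshold=0.9):
    # Exact clones score 1.0; near-perfect clones land just under it.
    return clone_score(card_text, corpus) >= threshold

# One real card's rules text standing in for the whole card database.
corpus = ["2B, T, sacrifice a blue creature, a black creature, and a red "
          "creature: search your library for a card named Prince of Thralls "
          "and put it onto the battlefield. Then shuffle your library."]

print(is_clone(corpus[0], corpus))       # True: exact clone, filter it
print(is_clone("Draw a card.", corpus))  # False: original enough to keep
```

The threshold is the interesting knob: set it too low and you'd throw away legitimate variations on existing designs.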
But here's the thing: the cards that get generated aren't neatly divided into clones and original works; it's more of a sliding scale.
For example, you can have cards that are pseudo-clones, cards that strongly resemble existing cards but that go off in a different direction:
Sarkhan's Stage
2BR
Enchantment (Uncommon)
1B, exile one or more creature cards from your graveyard: put an X/X black Zombie Horror creature token onto the battlefield.
Sacrifice a creature: regenerate Sarkhan's Stage.
#This is an attempt at a better Corpseweft, because it is also a sac outlet and it is virtually immune to enchantment destruction.
And then you can have cards that are hybrids of cards that have certain similarities in their design:
Blight Herder
6
Creature - Elemental
Flash
Evoke 1G
When Blight Herder enters the battlefield, put two 1/1 colorless Eldrazi Scion creature tokens onto the battlefield. They have "sacrifice this creature: add 1 to your mana pool."
3/4
#Hybrid of Blight Herder with Briarhorn, joined by the fact that both Lorwyn elementals and Eldrazi like ETB abilities.
On the creative side of the spectrum, we have cards that draw inspiration from many sources but whose origins are hard to suss out. That, I guess, is pretty close to what we want to get from the network.
Spawning Grounds
3G
Sorcery (Rare)
Until end of turn, creatures you control gain "whenever this creature deals damage to an opponent, put a token that's a copy of that creature onto the battlefield."
Dreads Hermit
W
Creature - Spirit (Uncommon)
Whenever you cast a spirit or arcane spell, you may put a storage counter on Dreads Hermit.
1, T, remove X storage counters from Dreads Hermit: add X mana in any combination of B or W to your mana pool.
2/1
Talas Carapacer
UUU
Creature - Elemental (Rare)
Flying
Talas Carapacer gets +2/+0 as long as you control another creature on the battlefield.
All Spirits have "when this permanent enters the battlefield under your control, put a +1/+1 counter on Talas Carapacer."
T: Target creature can't be blocked this turn.
1/1
And on the far end of creativity, we start to see some fraying of the logic because the network is going way off the beaten path:
Kirag, Bloodfire Justice
R
Planeswalker - Kirag (Uncommon)
+1: Up to one target creature can't be regenerated this turn.
5
#What an adorable mini-planeswalker!
Shadowfeed
3BB
Instant (Rare)
Search your library for an instant card or a Rabler card, reveal that card, and put it into your hand. Then shuffle your library.
Flashback 7B
Ether Apprentice
4WU
Creature - Angel
Flying
Protection from white
7: Return those cards to the battlefield under their owners' control at the beginning that dealt damage during the untap step.
3/3
Jilomar, the Tender Gamer
5UU
Legendary Creature - Beast (Mythic Rare)
Flying
Your maximum hand size flips an enchantment card.
3/3
Remember, from the perspective of the machine, everything that it generates is representative of Magic the Gathering. Even when it's spitting out nonsense, those are Magic cards according to the learned model. We tend to think of Magic as a formal system of fixed rules, but for the neural network it's more of a series of fuzzy propositions about what cards can be, such as:
* "Bars in the text mark the beginning and end of fields of a card." (confidence 1)
* "Creatures have power and toughness." (confidence 0.9999999)
* "Elves can be green." (confidence 0.98)
* "Dragons have flying." (confidence 0.97)
And then you have more outlandish propositions like "planeswalkers can be of uncommon rarity", "Rabler is a card type", "creatures can deal damage during the untap step", and even "a player's maximum hand size can flip enchantments". They're unlikely to be true, but the likelihood is non-zero. It's a necessary consequence of using an algorithm that is empowered to make educated guesses about things it has never observed: some hypotheses end up being wrong.
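That "non-zero likelihood for things never observed" behaviour is the same idea as smoothing in classical language models. A toy illustration with add-one (Laplace) smoothing, using made-up counts:

```python
from collections import Counter

def laplace_prob(counts, event, vocab_size):
    # Add-one smoothing: even never-observed events get a small,
    # non-zero probability instead of an outright zero.
    total = sum(counts.values())
    return (counts[event] + 1) / (total + vocab_size)

# Made-up observation counts for a couple of propositions.
counts = Counter({"dragons have flying": 97, "elves can be green": 98})
vocab_size = 1000  # hypothetical number of distinct propositions

p_seen = laplace_prob(counts, "dragons have flying", vocab_size)
p_unseen = laplace_prob(counts, "rabler is a card type", vocab_size)
print(p_unseen > 0, p_seen > p_unseen)  # True True
```

A neural network's smoothing is implicit and far more sophisticated, but the effect is the same: unseen hypotheses keep a sliver of probability mass.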
-----
Anyway, I get the feeling that we've been setting the dropout rate too high - we could probably make do with a value closer to zero (if not zero). I also need to do more testing with the norm stabilization, to get a better feel for its effects.
Aww, I'm sorry to hear it.
EDIT: As a side note, for those of you who may not be familiar with the process, parameter tuning is a much easier problem when dealing with tasks like classification. For a neural network that detects cats, its effectiveness can be determined objectively (it sees the cat or it does not); you can automate a routine that tests lots of different versions of the network and optimizes the parameters to maximize cat detection potential. Here, however, we're trying to measure "creativity"... that's not so easy. lol.
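For the classification case, that automated routine can be as simple as an exhaustive grid search. A sketch, with a toy scoring function standing in for "train the network and measure accuracy":

```python
import itertools

def grid_search(score_fn, grid):
    # Evaluate every combination of hyperparameters; keep the best scorer.
    best_params, best_score = None, float("-inf")
    keys = sorted(grid)
    for values in itertools.product(*(grid[k] for k in keys)):
        params = dict(zip(keys, values))
        score = score_fn(params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

# Toy stand-in for validation accuracy as a function of hyperparameters;
# a real run would train and evaluate the network here.
def toy_accuracy(params):
    return -abs(params["dropout"] - 0.2) - abs(params["lr"] - 0.01)

grid = {"dropout": [0.0, 0.2, 0.5], "lr": [0.001, 0.01, 0.1]}
best, _ = grid_search(toy_accuracy, grid)
print(best)  # {'dropout': 0.2, 'lr': 0.01}
```

With a generative model there's no objective score_fn to plug in, which is the whole problem.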
EDIT(2): An article came out 72 hours ago entitled "Attribute2Image: Conditional Image Generation from Visual Attributes" (link here) by authors working at the University of Michigan, Adobe and NEC Labs. Fun stuff (see image). On that note, I'm keeping an eye out for source code releases for these new papers (seeing as I don't have the time to implement stuff from scratch).
My LinkedIn profile... thing (I have one of those now!).
My research team's webpage.
The mtg-rnn repo and the mtg-encode repo.
Besides that, I've been messing with OpenCL stuff. I've got simple demos from Apple working, so that seems okay. Running the tests in pygpu just segfaults at the moment, though.
Technology is fun! Yaaaaay!
Also, I'm currently looking into using illustration2vec for fun and profit. We know we can make vectors that represent cards. Thanks to illustration2vec, we also have a way of getting vectors that represent card art, as well as a system that will take those art vectors and give us tags that describe the content of the image (see image). It won't be perfect; for example, illustration2vec thinks the goblin is a little boy (a reasonable mistake). But if we can learn a mapping from the card vectors to the art vectors, then we should be able to get descriptions for artwork for novel cards.
I have all the Magic art, and all the cards, and a way of getting all the vectors. I'm working on bringing that all together using Caffe+Python. Should be fun.
And from there, assuming that something like Attribute2Image comes to fruition (and has source code available), then we'd have a pipeline from cards to crappy images. Slap a coat of Van Gogh or Wayne Reynolds on the result and we should have something passable for art.
But that comes later. For now I'll be happy just to have a vague description of what the art would look like.
EDIT: Step one complete! I have a file full of (card vector, art vector) pairs. Next comes the translation bit, and then finally a tool wrapped around that to take cards and suggest artwork.
There's also a paper due in two weeks. I'll post some more detailed discussion of my discoveries as I work on it.
That's a bit tricky to do, insofar as the phrase "if you win" only ever occurs in the middle of a line of text. Example cards in previous dumps that contain the phrase include:
Spire Bolt
2R
Instant (Uncommon)
Spire Bolt deals 3 damage to target creature or player. Clash with an opponent. If you win, return Spire Bolt to its owner's hand.
Boroal Recluse
1G
Creature - Elf Shaman (Rare)
Sacrifice Boroal Recluse: Flip a coin. If you win the flip, put a 5/5 colorless Djinn artifact creature token with flying onto the battlefield.
Whenever you cast a spell, you may put two +1/+1 counters on Boroal Recluse
2/1
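That claim about the phrase only occurring mid-line is easy to check against a dump. A sketch, with two lines from the cards above standing in for the full corpus:

```python
def phrase_hits(lines, phrase="if you win"):
    # Return (starts_line, line) for every line containing the phrase.
    return [(line.lower().startswith(phrase), line)
            for line in lines if phrase in line.lower()]

dump = [
    "Spire Bolt deals 3 damage to target creature or player. Clash with an "
    "opponent. If you win, return Spire Bolt to its owner's hand.",
    "Sacrifice Boroal Recluse: Flip a coin. If you win the flip, put a 5/5 "
    "colorless Djinn artifact creature token with flying onto the battlefield.",
    "Whenever you cast a spell, you may put two +1/+1 counters on Boroal Recluse",
]
hits = phrase_hits(dump)
print(len(hits), any(starts for starts, _ in hits))  # 2 False
```

Since the character-level model only ever saw the phrase preceded by other text, priming with it cold puts the sampler somewhere it has never been.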
We can, of course, seed the network with text of our own to describe something that can be won or lost, and then let the network take it from there. For example,
Nyxborn Veil
1UR
Instant (Uncommon)
Challenge target opponent to a game of fisticuffs. If you win, draw three cards.
#There's only one fair way to settle this...
Render's Mage-Sails
xxRR
Sorcery (Rare)
Enter a lightning round with target opponent. If you win, Render's Mage-Sails deals 3 damage to that player for each creature tapped this way.
#Variable amounts of mana! Tapping of creatures! The lightning round has it all!
Honors' Raid
1R
Enchantment (Rare)
Whenever you enter a lightning round and win, you may pay 3. If you do, put a colorless artifact token named Metallitan onto the battlefield.
#I have no idea what Metallitans are supposed to do. I imagine they're redeemable for prizes. I re-primed the network with the text for this card up to the name just to see what the alternatives were. Other names for the token include "Gold", "Advice", and "Lightshade".
But yeah, the network usually follows the phrase "if you win" with something good for you or bad for your opponent. Was there something specific that you were hoping to see?
I like the poster, and I look forward to seeing what you do with the paper.
-----
I sketched out the process for training the card2artvec translation network. Tonight, I just need to figure out the right way to implement it using Caffe. It's a bit different from Torch, but that's okay. Once that's done, I'll be able to take the trained network, pair it with the illustration2vec tag predictor, and then we'll be in business.
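The actual translation network is being built in Caffe, but the shape of the problem is just fitting a map from card vectors to art vectors. A toy numpy sketch with random stand-in data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-ins for the (card vector, art vector) training pairs:
# 200 pairs related by a hidden linear map we then try to recover.
true_map = rng.normal(size=(8, 5))  # 8-dim card vecs -> 5-dim art vecs
cards = rng.normal(size=(200, 8))
arts = cards @ true_map

# Least-squares fit of a single linear layer (no hidden layers here).
learned, *_ = np.linalg.lstsq(cards, arts, rcond=None)
pred = cards @ learned

print(float(np.abs(pred - arts).max()) < 1e-6)  # True: the map is recovered
```

The real mapping won't be linear, of course, which is why it's worth reaching for a proper feed-forward network instead of a single matrix.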
leveler.cs.washington.edu:8080
I was asking under the assumption that the network meant "if you win the game." I didn't even consider the Clash mechanic. Since that's the case, could you seed it with "If you win the game" instead?
Aha, I see. Here are some results:
Dungeon Shade
3U
Creature - Shade Wizard (Uncommon)
Flying
Pay 4 life: Draw a card.
If you win the game, return Dungeon Shade to its owner's hand.
2/2
#Errata: Cards that you control but do not own become yours at the end of the game if you win. Dungeon Shade is an exception to the rule.
Angelic Savage
1R
Creature - Human Barbarian Warrior
If you win the game, Angelic Savage can't be countered.
Level up R
Level 3 (4/3): 3U: target player draws a card for each tapped artifact creature you control.
2/2
#Also true. If the game state doesn't exist, it's impossible for Angelic Savage to be countered. Or played, for that matter, but that's beside the point.
Mistform Serpent
6U
Creature - Serpent
Mistform Serpent can't attack unless defending player controls an Island.
If you win the game, destroy all permanents you control.
4/5
#Sweep the board clean! The game is over.
Down of Damnation
WW
Enchantment (Rare)
At the beginning of your upkeep, if there are two or more instant and/or sorcery cards in your graveyard, you may search your library for a white card, reveal that card, and put it into your hand.
If you lose the game, Down of Damnation deals damage equal to its power to itself.
Soratami Crag
Land (Uncommon)
Sunburst (This enters the battlefield with a charge counter on it for each color of mana spent to cast it.)
At the beginning of each player's upkeep, Soratami Crag deals X damage to that player, where X is the number of charge counters on Soratami Crag.
If you lose the game, take an extra turn after the Bastlepie.
#No idea there. Interesting design though. Pity that it's slapped on a land, where it does nothing.
Skirk Preventer
2W
Creature - Human Soldier (Uncommon)
Skirk Preventer gets +1/+1 for each aura attached to it.
If you would lose the game, you may gain 1 life instead.
2/3
#Seems perfectly fair.
---
By the way, caffe seems to dislike me. All I wanted was a simple, feed-forward network, but it wants to stymie me for unusual reasons. There's a bit of a learning curve it seems. But I'll get past it.
EDIT: Solved the problem that I was having. The error message was just very cryptic.
EDIT: Training error is.. NaN. But hey, data is coming in and data is going out, so that's something. I'm getting closer.
Well how else can you expect to beat a deck with the skobayashi maru?
Mostly unrelated, though a higher proportion of cards named Vraska are legendary creatures than average. Perhaps that's because it sounds like a name. Example outputs:
Vraska the Ox
2U
Legendary Creature - Human Wizard
Whenever a source deals damage to you, destroy it.
2/2
Vraska the Goblin Gatekeeper
1RR
Legendary Creature - Goblin Warrior (Rare)
At the beginning of your upkeep, if a player controls more creatures than you do, Vraska the Goblin Gatekeeper deals 3 damage to that player.
3/3
Now and then, it tries to fit Vraska into another word or to treat it like something other than a name:
Vraskacate
3W
Sorcery (Rare)
Each player exiles all land cards in his or her hand.
Vraska the Earth
1B
Sorcery (Uncommon)
Return target creature card from your graveyard to your hand.
Draw a card.
This unrelatedness, I think, is healthy. It would be overfitting for the network to infer from one example that the name Vraska has to belong to a green/black planeswalker. The network definitely does learn associations with names, of course.
EDIT: I should note, of course, that if we're dealing with a network that is experiencing severe overfitting, then this can happen. For example, when I sample a network that I know has overfitting issues, priming with "Vraska" (and especially "Vraska the Unseen") gives me a disproportionate number of planeswalkers.
----
By the way, I'll see about going back and finishing that caffe code later today. Been busy, but I'll get it done.
EDIT: I'm close. Getting an exploding loss issue when trying to train, but I know that I'm loading all the data in correctly and that it's doing all the right computations. I just need to look at my network setup and the training configuration. Right now training loss goes up to 8000, then to 22000, then to 512419, then to infinity, then to not-a-number. Whoops. Good news about caffe is that you don't have to write your own training script. You just feed all your data in and say "solve!" and it does everything for you. That's nice.
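For anyone following along, that exploding-loss failure mode usually comes down to an oversized learning rate (or missing gradient clipping/normalization). A minimal illustration on a quadratic loss:

```python
def final_loss(lr, steps=20):
    # Minimize f(w) = w**2 by plain gradient descent; the gradient is 2*w.
    w = 1.0
    for _ in range(steps):
        w -= lr * 2 * w
    return w * w

print(final_loss(0.01))  # shrinks toward zero
print(final_loss(1.5))   # |w| doubles every step and the loss blows up
```

Same arithmetic, but one learning rate contracts toward the minimum and the other overshoots it further every step until you hit infinity and then NaN.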
2W
Creature - Human Soldier (Uncommon)
Skirk Preventer gets +1/+1 for each aura attached to it.
If you would lose the game, you may gain 1 life instead.
2/3
#Seems perfectly fair.
I think it could work if it just made you unable to die and kept you at one life for as long as Skirk Preventer is on the field. Maybe make it unable to be enchanted too, but that's about it.
I had extracted the wrong vectors from the illustration2vec network. The vectors I was trying to train on were embedded representations of the actual images, rather than the probabilities of the tags.
That created an impossible problem to solve. For example, how are you supposed to predict from the text of Caged Sun that the green sun is caged? And so on.
I'm rerunning my data gathering process to get the correct vectors for training. I'll come back in a few hours and try the training process again, hopefully with more luck this time.
abyssal splittal (mythic rare)
BURRW
creature ~ elemental
flying
whenever @ attacks, defending player puts the top two cards of his or her library into his or her graveyard.
B, T: search your library for a multicolored creature card with converted mana cost 5 or less and put it onto the battlefield. then shuffle your library.
(4/3)
### Coherent! Weird mana cost, but at least blue is represented in the mill part, and white is represented in the flying part. Possibly the red modified the p/t? And not sure about the black. The ability makes sense too, which is wonderful; it has searching for a valid target, does something valid with it, and remembers to shuffle the library after searching.
abzan embars (uncommon)
1(U/R)WB
creature ~ elemental assassin
whenever @ blocks or becomes blocked by a creature, destroy that creature at end of combat.
(3/3)
### An Esper/Mardu elemental assassin? Cool! Nice ability too, could be argued in any of the colours represented (I think).
korozda limarid (rare)
2BGG
legendary creature ~ vampire
flying
whenever a card is put into an opponent's graveyard from anywhere, return @ from your graveyard to your hand, then discard a card.
(5/5)
### A very recurring vampire with a coherent graveyard-matters theme! I like it. A 5/5 with flying for 5cmc is also spot on.
earth~torted defenses (uncommon)
3
artifact creature ~ salamander
1, T: put a % counter on @.
T: each opponent sacrifices a creature. if that player doesn't, destroy the creature creature with power less than or equal to the number of % counters on @.
countertype % charge
(2/2)
### Fascinating design, I really like it. Just pretend the last sentence starts with "If that player doesn't, destroy target creature" instead of 'destroy the creature creature'. It's a small enough slip that I'm willing to let it slide (usually I only post flawless cards here). This also seems like it could be a power level issue card; forcing a sacrifice each turn is nothing to sneeze at. Then again, it's balanced against the fact that you also have to sacrifice something, and also the fact that the opponent can choose not to. I think the wording would need some adjustment to make it playable, but such wonderful design.
myios's shoreshaper (rare)
5
land
@ enters the battlefield tapped.
at the beginning of your upkeep, put a % counter on @.
remove a % counter from @: add one mana of any color to your mana pool. activate this ability only if you control no one creatures with "this creature's power and toughness are each equal to the number of % counters on @.
countertype % verse
### Ok, we can ignore 'ETB tapped' since this land doesn't tap for anything, yet it still works. You get a verse counter every upkeep, and you can remove a verse counter for any colour of mana, without losing life or anything? OP, I hear you say. Not so fast! This is possibly the best limiting condition I've seen on an RNN card; you can't activate this ability if you control a creature with p/t each equal to the number of verse counters. So, sure, playing a deck with only unequal p/t creatures makes this land work. This is pretty awesome design.
Also, it costs 5 to cast this land, apparently. Not sure if that works within the rules; maybe this should be an enchantment.
I gave it...
Vigilant Drake
2U
Creature - Drake (Uncommon)
Flying, infect
2/3
Top tags: Solo, no humans, cloud, sky, armor.
Character looks most like: No strong results here.
Copyright: Pokemon? Dark Souls?
#The illustration network was primarily trained on humans (and humanoids), so the best it can say is that the artwork is not of a human subject. But it says "solo", so there IS a subject in the artwork. Interesting that it didn't note the wings, but that's okay. More training might fix that.
Sylvan Escort
2G
Creature - Elf Rogue (Uncommon)
Creatures with power less than Sylvan Escort's power can't block it.
2/2
Top tags: Solo, 1boy, weapon, armor, tree, nature.
Character looks most like: Link
Copyright: Probably an original work, though Final Fantasy comes in pretty close.
#So a lone (male) wanderer in the forest bearing a weapon and wearing armor, standing near a tree. Bonus points if the elf is a moody, complex character. Hahaha!! :-D
If you go past the top tags, there's some noise creeping in. I need to sit down and train the network fully. But hey, at least it's somewhat working. Once I've had the opportunity to do that, I'll be sure to post more in-depth results.
From here, all we need to do is map the art description vectors to some kind of art generating network like those papers I showed you earlier. That would be sufficient to get some (really low-quality) artwork. That comes later, of course, haven't figured that step out yet, lol.
For viewers at home who are just tuning in, here's what's going on:
* We have a neural network that is churning out Magic cards.
* We have a mapping from Magic cards to vectors describing the semantics of Magic cards, learned by another neural network. That is, the vectors categorize the cards such that cards that are semantically similar end up with similar vectors. Llanowar Elves and Fyndhorn Elder have highly similar vectors. The card vectors are compositions of individual word vectors, so "Add G to your mana pool" is vector(add) + vector(G) + vector(to) + vector(your) + vector(mana) + vector(pool). It works really well, actually.
* We have a third neural network (illustration2vec) that can take artwork and tag it according to its content, producing a vector that describes what it is seeing in the scene. So the artwork of Llanowar Elves -> artvector(man wielding sword with tattoos).
* In principle, sufficiently detailed descriptions can be used to generate novel artwork. But in order to do that, we'd need a way to get from cards to art descriptions. That's sort of what we're experimenting with here.
* Along those lines, I have a fourth neural network (still with me?) that maps card vectors to art description vectors, trained on all the cards and artworks in Magic.
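The additive compositionality of the card vectors can be sketched in a few lines, with random toy vectors standing in for the learned embeddings:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy word vectors; the real ones come out of the trained embedding network.
vocab = ["add", "g", "to", "your", "mana", "pool", "flying"]
wordvec = {w: rng.normal(size=8) for w in vocab}

def card_vector(text):
    # A card vector is just the sum of its word vectors.
    return sum(wordvec[w] for w in text.lower().split())

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

v1 = card_vector("Add G to your mana pool")
v2 = card_vector("Add G to your mana pool flying")
print(cosine(v1, v2) > 0.5)  # one extra word barely moves the sum
```

Because the representation is a sum, adding or swapping a single word nudges the card vector by one word vector's worth, which is what makes similar texts land near each other.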
So, now I have that shiny new network, and it's time to take it for a test run. The first thing to note is that this transformation from one kind of vector to another (at least weakly) preserves the additive compositionality of card vectors. So word vectors can be turned into art description vectors that convey how their use influences the artwork.
For example, if we just feed a vectorization of the different symbols of mana, we can see what associations the network makes between those symbols and features in the artwork. Note that the number indicates the confidence of the network in that feature, that is, how strongly it feels about that feature.
White(W): Sky(16.83%),Cloud(16.61%),Tree(10.02%),Water(8.3%),Nature(7.10%)
Blue(U): Sky(12.83%),Cloud(12.31%),Water(10.97%),Tree(9.56%),Nature(8.91%)
Black(B): Tree(9.27%),Nature(8.50%),Glowing(7.13%),Sky(6.32%),Cloud(6.24%)
Red(R): Fire(15.21%),Cloud(12.49%),Sky(12.37%),Glowing(8.97%),Tree(7.95%)
Green(G): Nature(22.75%), Tree(22.51%), Water(10.93%), Grass(8.65%),Sky(7.54%)
Since it only has one word vector to go on, the associations are fairly weak, but we can clearly see that it's picking up on something. The artwork for green cards has the strongest measurable thematic consistency, followed by white. Black is the weakest color in this respect. Now, that's not to say that black doesn't have a clear direction with its art, but it's not one that corresponds to any tags that illustration2vec is tracking. After all, Magic's swamps and bogs are mostly just dead trees, so I can see the connection there.
Types also influence the description:
Dragon: Fire, cloud, sky, wings, glowing, glowing eyes, horns, red eyes, bird
Angel: armor, sword, long hair, blond hair, sky, cloud, wings
As well as body text, like abilities:
Flying: no humans, solo, sky, cloud, wings, water
Trample: no humans, solo, tree, nature, weapon
And the compositionality seems to hold for small creatures. And I say creatures specifically because the illustration2vec network was trained on living subjects, so it tends to miss most things on artifact, enchantment, and land cards. Of course, the wordier a card gets, the more noise starts to become an issue. For example:
Vital Ghost
1GG
Creature - Spirit (Uncommon)
Amplify 1 (As this creature enters the battlefield, put a +1/+1 counter on it for each Spirit card you reveal in your hand.)
T: Add G to your mana pool.
The kami of House Rithfranter were merely the brains of Gaia, and were croched by their own ill-seeds for its debation.
2/2
#Yes, brave House Rithfranter, loyal mana dorks of Gaia! Or something. I'm not sure where it was going with that one. I do like the idea of amplify being a mechanic for spirit tribal though.
Which gives us the following tags:
[('solo', 0.40203285217285156), ('no humans', 0.26452407240867615), ('1girl', 0.24378596246242523), ('1boy', 0.2385169416666031), ('weapon', 0.2235129177570343), ('nature', 0.1928301453590393), ('tree', 0.1854991316795349), ('male', 0.13536454737186432), ('long hair', 0.12691476941108704), ('armor', 0.12118040770292282), ('blonde hair', 0.11792608350515366), ('sword', 0.11700654774904251), ('water', 0.08586515486240387), ('hat', 0.08258433640003204), ('cloud', 0.0751294493675232), ('bird', 0.07319808006286621), ('sky', 0.07176922261714935), ('grass', 0.06953449547290802), ('short hair', 0.0666426569223404), ('helmet', 0.06322870403528214)]
Which could be interpreted as...
* Vital Ghost stands alone.
* Vital Ghost is not a human.
* Vital Ghost is more likely a female figure than a male figure (ever so slightly; it's hard to tell).
* Vital Ghost's hair (and she has hair) is more likely long than short.
* Vital Ghost wields... something. Might also be wearing a hat, maybe a helmet.
* Vital Ghost is surrounded by nature.
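That kind of reading can be mechanized with nothing more than a confidence cutoff and a few head-to-head comparisons. A quick sketch, where the `describe` helper is made up for illustration and the confidences are a handful rounded from the tag list above:

```python
# Hypothetical helper: turn illustration2vec-style (tag, confidence)
# pairs into a rough reading. Confidences rounded from the list above.
tags = {
    'solo': 0.402, 'no humans': 0.265, '1girl': 0.244, '1boy': 0.239,
    'weapon': 0.224, 'nature': 0.193, 'tree': 0.185, 'long hair': 0.127,
    'hat': 0.083, 'short hair': 0.067, 'helmet': 0.063,
}

def describe(tags, cutoff=0.15):
    # Keep only the tags the network is reasonably sure about...
    strong = [t for t, c in tags.items() if c >= cutoff]
    # ...and settle competing pairs by comparing confidences directly.
    gender = '1girl' if tags.get('1girl', 0) >= tags.get('1boy', 0) else '1boy'
    hair = 'long hair' if tags.get('long hair', 0) >= tags.get('short hair', 0) else 'short hair'
    return strong, gender, hair

strong, gender, hair = describe(tags)
print(strong)        # ['solo', 'no humans', '1girl', '1boy', 'weapon', 'nature', 'tree']
print(gender, hair)  # 1girl long hair
```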
Which is funny, because that actually describes a green spirit that the network was trained on, Ghost-Lit Nourisher. Long hair, a hat, female, standing alone. The associations are hazy and less certain than what illustration2vec would have chosen, however. Here are the actual features that illustration2vec tags when presented with the image for that card.
[(u'solo', 0.6030265688896179), (u'1girl', 0.5145107507705688), (u'long hair', 0.4362989068031311), (u'1boy', 0.23388443887233734), (u'dress', 0.22583308815956116), (u'blonde hair', 0.1838424652814865), (u'hat', 0.18213021755218506), (u'nature', 0.16683992743492126), (u'sitting', 0.16623134911060333), (u'male', 0.15497440099716187), (u'tree', 0.15181675553321838), (u'black hair', 0.1310776025056839), (u'flower', 0.1288415938615799), (u'no humans', 0.11220281571149826), (u'night', 0.09653007239103317), (u'moon', 0.08932667225599289), (u'very long hair', 0.08754298090934753), (u'closed eyes', 0.07860249280929565), (u'bird', 0.07578708976507187), (u'water', 0.07263866811990738), (u'weapon', 0.06453625112771988), (u'grass', 0.06369359791278839), (u'barefoot', 0.05921982601284981), (u'from behind', 0.05837477743625641), (u'signature', 0.05533984676003456)]
Notice how illustration2vec, seeing actual artwork, notes things like her dress (rather than just the more generic tag 'armor'), as well as a clear indication of the backdrop like the night and the moon (as opposed to just 'nature'). These minor details tend to fall away because they're difficult to predict. This also means that an art generating network would have an immense amount of latitude if attempting to conjure art based on these imaginary art descriptions.
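One way to quantify how hazy the predicted description is compared to the real tags is a simple top-k overlap. A sketch with a handful of confidences copied from the two lists above; the metric itself is my own ad hoc choice, not something illustration2vec provides:

```python
# A few (tag, confidence) pairs copied from the predicted and actual lists.
predicted = {'solo': 0.402, '1girl': 0.244, 'weapon': 0.224,
             'nature': 0.193, 'long hair': 0.127}
actual = {'solo': 0.603, '1girl': 0.515, 'long hair': 0.436,
          'dress': 0.226, 'nature': 0.167}

def topk_overlap(a, b, k=3):
    # Fraction of the top-k tags that the two descriptions share.
    top = lambda d: {t for t, _ in sorted(d.items(), key=lambda kv: -kv[1])[:k]}
    return len(top(a) & top(b)) / k

print(topk_overlap(predicted, actual))   # 2 of the top 3 tags agree -> 0.666...
```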
Of course, there are limits to what we can do with illustration2vec. Illustration2vec was trained on character artwork, so it performs best on creatures and very weakly on cards like sorceries and instants. For example,
Stunting Blast
R
Instant (Common)
Stunting Blast deals 2 damage to target creature or player.
"The spirit of the stone is the spark of chaos as the other waves." - Rakka Mar
which gives us...
Strong values for fire and glowingness, but beyond that the details aren't very clear. How 'facial hair' and 'polearm' even made it into the list is beyond me. It is interesting to note that the first two entries tend to be 'solo' and 'weapon', as those tags apply to the majority of Magic cards. When in doubt, the translation network just slaps them on there because they happen to be true most of the time.
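That "slap on the likely tags" behavior is just base-rate prediction: with nothing else to go on, a predictor falls back on whatever tags are most common overall. A toy sketch with made-up per-card tag lists:

```python
from collections import Counter

# Toy, made-up tag lists for a few cards. 'solo' and 'weapon' dominate,
# so a predictor with no real signal defaults to them.
corpus = [
    ['solo', 'weapon', 'fire'],
    ['solo', 'weapon', 'nature'],
    ['solo', 'sky', 'cloud'],
    ['solo', 'weapon', 'water'],
]

counts = Counter(tag for tags in corpus for tag in tags)
print(counts.most_common(2))   # [('solo', 4), ('weapon', 3)]
```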
But this is just a trial run. One thing that's missing is composition information. You can have a picture of two figures, a man and a woman, one wielding a sword and the other a spear, but all we get out of the art description vector is (man, woman, sword, spear). Who was holding what? On top of that come all the problems of figure placement, rotation, scaling, background, and so on.
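For comparison, here's what a minimal structured scene description might look like, keeping the bindings a flat tag vector throws away. Everything here (the `Figure` class and its fields) is hypothetical:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

# Hypothetical structured description: who is holding what, and where.
@dataclass
class Figure:
    kind: str                      # 'man', 'woman', 'dragon', ...
    holding: Optional[str]         # item in hand, if any
    position: Tuple[float, float]  # normalized (x, y) in the frame

scene = [
    Figure('man', 'sword', (0.3, 0.5)),
    Figure('woman', 'spear', (0.7, 0.5)),
]

# Flattening back down to a bag of tags discards the bindings:
flat = sorted({f.kind for f in scene} | {f.holding for f in scene if f.holding})
print(flat)   # ['man', 'spear', 'sword', 'woman'] -- who held what is gone
```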
Now, an image analyzing network can easily figure all that information out by studying the image in greater detail - that's not our problem. Our problem is that a predictor for art descriptions isn't going to give us that information (without terrible overfitting). We'd rather just have the barebones description like we have right now and let an art generator come up with the rest. But there are a lot of unresolved problems with making decisions about composition.
If we ignore composition completely, we end up with pictures like the ones I've attached below. That's where I used that texture-generating network to come up with an image, based on my face, that provokes the same reaction from the neural network and has the same semantic content. Interestingly, the result contains exactly two eyes, two ears, one chin, one forehead, and one nose. They're just not... where they need to be. According to the network, both of these images provoke the reaction of "it's some white dude in a purple shirt" when flashed before its eyes.
So we have a constraint-setting problem to be solved when it comes to image generation. Fortunately, there is work being done on that very problem (like I showed in previous posts). We'll get it solved.
Anyway, if what I have shared with you interests you, I can see about polishing the code I wrote and making it all available on Github.
Also, maplesmall, I love the cards you shared. I meant to comment on them earlier. Earth-torted Defenses in particular has a very fascinating design. Good find!
-------
EDIT: By the way, on a completely unrelated note, there was a fun article in the Washington Post about Google's quantum artificial intelligence research, coming on the heels of Google's claim that, for certain classes of optimization problems, their quantum computer can solve those problems 100 million times faster than is possible with conventional computing technology (link to research article here). Last year, Google researchers put out an article on a quantum machine learning algorithm for handwritten character recognition (link), and around the same time there was already talk of "quantum neural networks" (link). Don't be fooled, the QNN would still be as dumb (that is, "conservatively smart") as a jellyfish, but it'd be dumb at a speed heretofore unknown to mankind.
I'm waiting for the dust to settle before making any judgement on all this, of course. More independent verification is needed, and that's difficult. The hardware is expensive, and you have to run it in a room that's colder than deep space; the slightest disturbance and all you're left with is a room full of dead cats. No one I know has the kind of money needed to pull that off. But if all this pans out, the future is going to be very, very weird.
My LinkedIn profile... thing (I have one of those now!).
My research team's webpage.
The mtg-rnn repo and the mtg-encode repo.
That is, I got everything to work as far as I know, but when I run the training, the code comes up and processes, but then the screen flashes and I just go back to the start screen.
It's a Chromebook; I used chroot to put the xfce Ubuntu on as a secondary OS and ran all the tutorial info in the terminal. Is this a CPU issue? I will post my system in my next post.
thank you
This is pretty severe: I have passed every class but this one, and if I don't have something to show for it, I will, hilariously enough, flunk out of my own graduation even though I passed all the requirements, because I procrastinated on MTG card making.
Thanks all for the help
used 1.17 GB
available 945 MB
Chrome OS
Well, you've come to the right place. I'm about to leave for a meeting, but here's my e-mail:
rmmilewi (at) gmail (dot) com
send me an e-mail and we can talk further about your problem and your options this evening. I have a feeling this will be a long sequence of back-and-forth "did you do this?"/"did you try that?" messages until we can get to the bottom of your problem. One way or another we'll get it resolved.
EDIT: The issues were resolved.
Good question! I appreciate the offers to help, but I don't want to send you on what could be a wild goose chase or an unnecessary endeavour. Here's why (the spoilers hide elaborations).
First, what we don't yet have is a generative model suited to our purposes, and so we don't really know what the fidelity of the results will be with respect to the tags. I'm certain that we'll get there soon, given recent advancements.
As I've shown y'all before, there was a paper by Mansimov et al. last month showing that you could map plain-English descriptions of images to novel (if blurry) images, and Radford/Metz/Chintala (who brought us that eyescream project) have given us a way of generating complex images with adversarial networks. In the picture I've attached, those are all completely original bedrooms imagined by the network. But upon close inspection, you'll notice "defects" in the scenes:
* Sometimes you're seeing things at angles that are physically impossible.
* Sometimes beds melt into the floors.
* In one example, the fire has spread from the fireplace and has engulfed the entire room in flames.
And that's after studying tens of thousands of bedrooms. Fantasy art is harder to come by, and we have fewer representative images for different classes of objects like monsters.
So until I get my hands on a working model for art generation, it's hard to say just how helpful those additional tags will be.
Second, visual object recognition/classification is getting really, really good.
Look at the error rate for the top performing networks in the Imagenet competitions (which started in 2010):
* In 2010: 28%
* In 2011: 26%
* In 2012: 15%
* In 2013: 11%
* In 2014: 6.66%
* In 2015 (results announced two days ago): 3.57%
And we just keep getting better!
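In relative terms, the improvement is even starker; each year lops off a big fraction of the remaining error. A quick check on the numbers above:

```python
# Error rates quoted above (percent).
errors = {2010: 28.0, 2011: 26.0, 2012: 15.0, 2013: 11.0, 2014: 6.66, 2015: 3.57}

years = sorted(errors)
for prev, cur in zip(years, years[1:]):
    drop = 1 - errors[cur] / errors[prev]
    print(f"{cur}: {drop:.0%} of the previous year's error eliminated")
```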
Now, that's with real-world photographs, not drawings. But I'm convinced that we'll get better at bridging the gap between recognizing real things and recognizing artistic representations of things, and with that will come very rich and detailed tag information.
TL;DR: The problem might solve itself soon, and we don't yet know about the quality of the image generation. But we'll see! We might end up coming back to the idea you've suggested.
EDIT: I am currently shopping around for the right generative models, by the way.
EDIT(2): Okay, so I've been looking at the code for the deep convolutional generative adversarial networks, and I think that could suit our needs. They seem to get good results with relatively small data sets, like the example they give for album covers. I might try training on just creature art and see where that takes us, and then we can go from there. I'll have to find the time to sit down and rework the code for our purposes, of course.
EDIT(3): Still working on it. They haven't released the version of the dataset they used for the face generation (I'm modifying that code), so I'm having to reverse engineer the encoding. If I can get it right, everything should run without issue. Right now I'm reading through the documentation for the Fuel library to make sense of some of the finer details. :-D
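In the meantime, the adversarial idea itself is simple enough to sketch. This is not the DCGAN code I'm adapting, just a toy numpy version on 1-D data standing in for encoded art; every name here is made up:

```python
import numpy as np

rng = np.random.default_rng(0)

def real_batch(n):
    # Toy stand-in for real training data: 1-D Gaussian samples around 4.0.
    return rng.normal(4.0, 0.5, size=(n, 1))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator and discriminator are both tiny affine maps.
Wg, bg = rng.normal(size=(1, 1)), np.zeros(1)
Wd, bd = rng.normal(size=(1, 1)), np.zeros(1)

lr = 0.05
for step in range(500):
    z = rng.normal(size=(32, 1))
    fake = z @ Wg + bg
    real = real_batch(32)

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    for x, label in ((real, 1.0), (fake, 0.0)):
        p = sigmoid(x @ Wd + bd)
        g = p - label                          # cross-entropy gradient w.r.t. the logit
        Wd -= lr * (x * g).mean(axis=0, keepdims=True)
        bd -= lr * g.mean(axis=0)

    # Generator step: push D(fake) toward 1 by moving fakes where D says "real".
    p = sigmoid(fake @ Wd + bd)
    g = (p - 1.0) * Wd[0, 0]                   # gradient back through D into the samples
    Wg -= lr * (z * g).mean(axis=0, keepdims=True)
    bg -= lr * g.mean(axis=0)

samples = rng.normal(size=(256, 1)) @ Wg + bg
print(float(samples.mean()))   # should drift toward the real mean of 4.0
```

The DCGAN version replaces those affine maps with deep convolutional networks and the 1-D samples with images, but the two-player training loop is the same.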
http://www.scottaaronson.com/troyer.pdf