- Talcos
- Registered User
-
Member for 15 years, 5 months, and 16 days
Last active Fri, Dec 29, 2017 19:22:28
- 1 Follower
- 1,502 Total Posts
- 970 Thanks
-
4
maplesmall posted a message on Generating Magic cards using deep, recurrent neural networks
Posted in: Custom Card Creation
I just found this: an interactive, browser-based neural network designed for playing around with. It's pretty fun.
1
LASture posted a message on Generating Magic cards using deep, recurrent neural networks
Posted in: Custom Card Creation
Alright, so this is a work in progress of my full cube draft, and in retrospect I really need to resize all of these to be smaller images. But for now, for anybody who wants to take a peek at the creatures I've done, here's the set so far:
https://www.dropbox.com/s/z1mvufjx3fesl2x/MagicboxRNN Creatures Beta.rar?dl=0
I'll be able to upload a set of the non-creature spells by the end of the night.
Edit: Non-creature spells are in the download below:
https://www.dropbox.com/s/j82sxdusov9cq6o/MagicboxRNN Non-Creatures Beta.rar?dl=0
I'm probably going to reduce the stroke around the text from 3 pts to 2 pts so that it's a little more crisp. I've printed them out on card stock using a pretty high quality toner printer. They turn out a little darker than I'd want them to. I'll fiddle around with them before I go final-version and bring it to some printshop to actually give them the full-on treatment.
That being said, if you don't mind them being a little dark, they do print off as completely playable cards. I just use clear plastic sleeves and slip them in front of a junk common/land.
3
MaximumC posted a message on Generating Magic cards using deep, recurrent neural networks
Posted in: Custom Card Creation
So, a fellow MTGSalvationer and I managed to get Roborosewater up and running on Cockatrice, and we did a Battle Box game with it. (Use all cards from the set as your deck; make a token basic land of your choice each turn.) It worked better than you might expect!
I started with a Noxlo Greater followed up by Vraska, the Ox that stopped my opponent in his tracks for awhile. Opponent landed that red 3RR enchantment that let him sacrifice a creature to deal 2 damage to another creature, which later proved critical. I built up my board while he tried to cast Onoch Wall, only to get stymied by Thra (turns out Onoch Wall has the same name as Onoch Wall). I landed that 1cc black guy who makes the opponent reveal their hand and then sacrifice three dudes; but he used his red enchantment (infinite winds?) to kill it off. Then, he got a copy of the Goblin that destroys both players when it dies, and we boardstalled.
I built up to cast that green enchantment that makes the opponent "skip" their library, and followed it up with a Human Wizard that I could pay G to kill everything and have us draw 7 cards (seemed good!). But... he got his own copy of the 1cc black reveal-hand-sac-3 dork, and by combining with Noxlo, was able to wipe my board clean before I could go off! He then cast a spell that gave his one flying creature +3/+3 for each creature he controlled and swung for lethal...
...but he sacrificed his suicide goblin before I died to kill everyone.
Set is silly and broken and fun.
4
MaximumC posted a message on Generating Magic cards using deep, recurrent neural networks
Posted in: Custom Card Creation
Okay, I think I've done all I care to, finally. Went back and balanced the colors and rarities, fixed some artwork that was in use elsewhere, got more fitting artwork, and it's looking pretty fun now.
I'll PM the new set.
Some examples:
Took me awhile to find something that looked like a land or building AND something that could shoot at you.
Zombie elephants are easier to find than you'd think.
Green removal that reminds you that it's still a land...? Yeah, sounds like Curse of the Dryad to me.
Try to find an image of a wolf. That is an Ooze. That you might name "But Wolf." Quite a doozy...
That one.
A snake lord that is not a snake... something elemental... well, the biblical theme works, right?
It's an elf... warrior... wall... thing? Okay, it's an egg that hatches into an evil elf warrior mutant, let's say.
Soldier gets waterbreathing, so sure, that makes sense. Surf's up!
Bird themed re-animator... bingo!
So, what kind of human would just choose a creature for its mana cost? An artist!
Since artist contacts will be a big job, and it might be easier to play online, I'll look into MSE2->Cockatrice converters for the time being. That might be the best way to do this unless the artists were all cool. Also, I tried working on a card back, and it's HARD. Did you know Roborosewater is a LONG WORD and it's hard to make it look good on a card horizontally? Yeesh.
EDIT: Hah! This utility https://www.dropbox.com/s/ouzk0av49q3jy4d/magic-cockatrice.mse-export-template.zip totally works! I got Roborosewater up and running in Cockatrice. However, it does involve a little bit of mucking around with the directories and an XML file, so it's not plug and play. Anyway, if someone wants to jam this, download it and PM me for the set.
2
LASture posted a message on Generating Magic cards using deep, recurrent neural networks
Posted in: Custom Card Creation
Quote from psycrow11 » Not sure if lands can have +1/+1 counters
I had wondered about it too until I saw the Awaken mechanic from Battle for Zendikar. Awaken has you put +1/+1 counters on a land; THEN it becomes a creature (that is still a land). I suppose this works the same way, except here it's an activated ability.
@Talcos, aw, you are the man. This is just what I was hoping for. And thanks for the congratulations; I'm looking forward to getting married. To help celebrate, I intend to release the draft for everybody else to enjoy as well. Over the next few days I'll host an up-to-date version of my current pile of cards. Although incomplete, it gives you a good idea of what my RNN churns out.
2
LASture posted a message on Generating Magic cards using deep, recurrent neural networks
Posted in: Custom Card Creation
Well, for those of you wanting to do a cube draft: I've still been working a lot at setting mine up; I'm currently up to 227 cards, not including lands/artifacts.
I've been busy with a bunch of side projects, but I'm trying to set this all up in time for my bachelor party. I'm trying to get an updated version of the corpus encoded in hardcast6drop's format, but for the life of me I can't seem to get it to output the file (my unfamiliarity with Linux commands makes this tedious as all hell). If anybody has the encoded corpus that includes up to the new Innistrad set, please let me know. Aside from that, I don't have a complete collection of the cards posted anywhere, but I've included a snapshot of the Excel document I'm using to keep track of card counts.
2
mwchase posted a message on Generating Magic cards using deep, recurrent neural networks
Posted in: Custom Card Creation
I found myself wondering if it might work to have a network try to generate an entire deck, from the cards up, in one go. On the one hand, the individual entries become huge. On the other hand, there are many more possible decks than extant cards. I'm not sure what kind of power it would take to run a character-level network (or some kind of multi-set-of-parse-trees structure) that goes all the way up to decks, but I feel like it would offer another angle on the design space. Like, the cards in a deck have to have positive interactions with each other. So, a deck-fabricating AI would need to somehow encode knowledge of card interactions. Prime it with a card or set of cards, and it can try to figure out what kinds of cards would work with that card, while also having a sensible power level.
2
maplesmall posted a message on Generating Magic cards using deep, recurrent neural networks
Posted in: Custom Card Creation
I know that if I edited a card from the original RNN text, I mentioned it in my description. @Anyone who's making a cube of RNN cards: I'd happily throw money at you guys if you got them printed on decent-quality cardstock and shipped to Canada. I'd love to bring these to an LGS and play them.
Off topic, but my search engine Hunter got a major update today. I made a reddit post here to clarify what got fixed/improved. Next stop: comments/ratings/prices! Might take a while though. If anyone plays around with it, let me know what works/doesn't work.
1
Mustard_Fountain posted a message on Generating Magic cards using deep, recurrent neural networks
Posted in: Custom Card Creation
I always thought Tromple and Mointainspalk were meant to be Slidshocking Krow's downsides, to try and make it balanced.
2
Circeus posted a message on Generating Magic cards using deep, recurrent neural networks
Posted in: Custom Card Creation
2
I guarantee you that that is evidence of the opposite. If you have a phrase in the corpus that is almost always used in a particular context or in a particular way, the network can overfit and assume that it must always be that way, and will memorize the words verbatim. For example, if I try...
I get...
It happens for the same reason that dragon cards usually have flying. It's not that the network knows that dragons are winged creatures and it would make sense for them to fly, it's that the characters "f l y i n g" in the body text often follow the characters "d r a g o n" in the subtype line.
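To make that concrete, here is a toy illustration of my own (the corpus and helper below are hypothetical, not the actual model or data): even plain co-occurrence counting over card text exposes the surface correlation that a character-level network latches onto.

```python
# Toy corpus of (type line, rules text) pairs. In the real corpus,
# "flying" follows "dragon" so reliably that a character-level model
# can memorize the association without any notion of wings.
toy_corpus = [
    ("creature - dragon", "flying"),
    ("creature - dragon", "flying, haste"),
    ("creature - dragon", "flying"),
    ("creature - wall",   "defender"),
    ("creature - goblin", "haste"),
]

def p_keyword_given_subtype(corpus, subtype, keyword):
    """Estimate P(keyword in rules text | subtype in type line)."""
    matching = [body for types, body in corpus if subtype in types]
    if not matching:
        return 0.0
    return sum(keyword in body for body in matching) / len(matching)

print(p_keyword_given_subtype(toy_corpus, "dragon", "flying"))  # 1.0
print(p_keyword_given_subtype(toy_corpus, "goblin", "flying"))  # 0.0
```

The network isn't doing literal table lookups like this, of course, but the statistical signal it exploits is the same.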
I assure you that, as a rule, I don't touch the cards. Infecting the results with my own ideas would defeat the purpose of the whole experiment, lol. For the same reason, I'm still working out the graphics thing, so I can get art for cards without having to assign art myself. Ran into a small bug earlier in the week with the model, something's not getting loaded properly when I try and restore the model for sampling. I think it's an issue of my own making though. I'll get it resolved.
I should also say that I'm really excited about how far the Hunter project has come along.
EDIT: If you wanted a Turing test for authenticity of cards, suspicious for me would be a card that is significantly longer than about 160 characters or so, because the network hates run-ons. For the same reason, it can't generate complete planeswalkers or sensible flip cards. That and it's unlikely (though not impossible) for the network to produce cards with multiple interconnected or highly related abilities (aside from "add a charge counter"/"remove a charge counter", etc.). For example, Brimaz, King of Oreskos is unlikely to be produced (unless it's overfitting), simply because the network's attention span and its limited comprehension of the semantics of cards pose significant challenges when it comes to sustained, meaningful creation.
5
Speaking of art, I'm rerunning the training process to condition the generator on the card vectors. Well, part of the card vectors. I left off the body text because that just muddies the waters too much (there's a correlation between "flying" in the bodytext and the artwork, but not so much "enters the battlefield tapped", etc.). I have to make sure I did everything right, do some tests, etc. But I'll let y'all know how it turns out.
1
Yeah, I can look into that later and let y'all know. And yeah, the distributions that the machine learns and the actual distributions can vary. I mean, within cards, it does a good job, like Dragons and Angels should almost always have flying, but in terms of the distribution of card types, colors, etc. there can be some deviations.
For example, there are more green creature cards than creature cards of any other color, but, when you analyze RoboRosewater's output, you tend to find that all colors are equally likely. RoboRosewater also has a stronger bias towards making 3 CMC creatures than you see in the set of real Magic cards.
Like you said, I think that some creature types get subsumed into others. Like merfolk and vedalken are functionally similar, but there are more merfolk than vedalken, so if it wants to create a Caller of Gales-esque utility creature, it might be biased towards merfolk even when a vedalken would have worked just as well. Upping the temperature can fix that, but it also causes it to make more mistakes as a result of its frantic excitement.
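For anyone curious what "upping the temperature" means mechanically, here's a minimal sketch (the logits and token names are made up for illustration): the temperature rescales the scores before the softmax, trading off the dominant choice against rarer ones.

```python
import numpy as np

def softmax_with_temperature(logits, temperature):
    """Rescale logits by a temperature before the softmax.

    Low temperature sharpens the distribution (the common choice, e.g.
    merfolk, dominates); high temperature flattens it, so rarer choices
    like vedalken surface more often, at the cost of more mistakes.
    """
    scaled = np.asarray(logits, dtype=float) / temperature
    scaled -= scaled.max()            # subtract max for numerical stability
    exp = np.exp(scaled)
    return exp / exp.sum()

# Hypothetical next-token scores: index 0 = "merfolk" (common in the
# corpus), index 1 = "vedalken" (rarer but functionally similar).
logits = [2.0, 0.5]
cold = softmax_with_temperature(logits, 0.5)   # conservative sampling
hot = softmax_with_temperature(logits, 2.0)    # "frantic excitement"
# cold heavily favors merfolk; hot gives vedalken a much larger share.
```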
-------
As an aside, I just got done fielding questions over Skype regarding population projections using neural networks, for the benefit of folks at a US Census Bureau conference. To paraphrase myself: "I know that our techniques must seem like witchcraft to you... and you would be right! It *is* witchcraft. And it's incredibly effective."
2
On a related note, I retooled my scripts so that I could feed in animations like GIF files.
Forest waterfall -> reconstruction
Scarecrow model from World of Warcraft -> reconstruction
A peaceful scene -> reconstruction
A peaceful scene + vector representation of an image of fire -> the clouds are now a fiery tempest eradicating everything
Simulated rotation of Girl with a Pearl Earring -> Simulated rotation of the Lampshade Lady.
Still no face on the Lampshade Lady, but as I pointed out before, I messed up the bounding boxes when we trained the last network, and faces got cut out so often that the network seems to strategically place objects in the way of faces in order to avoid having to draw them. That'll get fixed in the next iteration, and we'll also have increased resolution.
Again, the idea here isn't to try and perfectly reconstruct the input, but to use the input as a means of situating ourselves inside the vector space in a region that structurally/semantically resembles the input. The waterfall, for example, isn't the original waterfall, but happens to be another, similar waterfall inside our miniature MTG-art-inspired universe. You can think of these animations as if you were looking through a tiny pinhole into this bizarre little world.
It's fun to play with, because we never trained the system on animations, and it's nice to see that the concept of motion carries over well. That and it's fun to set people, places, and things on fire.
More to come in the near future. Sorry for the wait.
Oh, and I saw a fun paper by Li Fei-Fei's people at Stanford University on real-time neural style transfer and super-resolution. Like, a speed-up by a factor of 1000. Like I said several months ago, these improvements would come along :D. At this point, I'm just waiting for someone to come out with a Google Cardboard app so I can strap it on and see everything as a Van Gogh painting.
@nyrt: Fascinating find! Thank you for sharing!
@Elseleth: I'm excited to see your robo-cube come to fruition. I can try and pull out all the good non-creature cards that I can find later for you.
2
I could have / should have done that. As written, the code has a 32x32 network configuration and a 64x64 network configuration. I changed it so that I could create an NxN configuration for some value of N that I passed. Easy enough. But yes, recursive deconvolution is a thing, and I should look into it. My first goal right now is to condition the network using tags for cards, so that it learns to relate goblin/sorcery/green card artworks with each other. That way, I can control the output by asking it to make something extra "goblin-y", for example. That increases the odds that the art will make sense in the context that I want to use it. I just about have that ready to go; I'm just waiting on the opportunity to run the code again, and that should be soon. Then I can see about restructuring things to get higher resolution images out.
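For the curious, a rough sketch of what that NxN parameterization might look like (the helper name and channel schedule below are mine, not the actual code): start from a small feature map and double the spatial size per deconvolution layer until reaching N.

```python
def generator_plan(n, base_channels=512, start=4):
    """Plan the upsampling steps of a DCGAN-style generator for an NxN
    output: begin at a start x start feature map, then double the
    spatial size (halving channels, floored at 64) until reaching n.

    Returns a list of (in_size, out_size, in_channels, out_channels)
    tuples, one per deconvolution layer. Illustrative only."""
    assert n >= start and (n & (n - 1)) == 0, "n must be a power of two"
    plan, size, ch = [], start, base_channels
    while size < n:
        plan.append((size, 2 * size, ch, max(ch // 2, 64)))
        size, ch = 2 * size, max(ch // 2, 64)
    return plan

# generator_plan(64) yields the steps 4->8->16->32->64;
# generator_plan(128) simply adds one more doubling step.
```

The point is that once the configuration is computed from n rather than hard-coded for 32x32 and 64x64, trying larger outputs is just a parameter change (memory permitting).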
1
Not me, that's for sure. I recognize some of the names though, like Grefenstette. It looks like the Google Deepmind folks found a way to improve upon our work. I had recommended the use of a bidirectional LSTM, but I've been too busy to get around to testing that. The use of code compression is novel, however. I'm happy that I was able to attract people with adequate funding/time to investigate this topic in more detail.
I'm gonna have to take a look at their implementation whenever that becomes available. I might e-mail them about that later. The numbers look really good.
Thank you for sharing this! This is most helpful.
EDIT(1): I sent an e-mail to the lead authors with my congratulations and inquiries.
EDIT(2): I had a lovely conversation with the folks at DeepMind. Their work has a lot of fun tricks that I could co-opt for our purposes. They're not sure about a timeline for releasing their source code though, as there's a bureaucratic process for all that. I'd have to re-implement it. Which is fine.
EDIT(3): By the way, when I get the art renders working at higher resolutions (and one way or another, I will eventually), I'm going to have so much fun with it. I've found that, when I choose to guide the art generation process with an example, if it doesn't recognize an object in the scene, it tends to replace it with something else that conforms to the geometry. But it's not an exact match, it has artistic freedom (e.g. adding in feet, tail feathers, a mouth, and a dot for an eye). I think this process closely resembles what surrealist André Breton called "pure psychic automatism".
Oh, and I also attached a sample output that I made for one Thraximundar_ of reddit. The art was created by interpolation/inspiration from the art of semantically similar cards. Bit hard to make out, but like I said, that's one of the things I'm working on. The flavor text I kinda cheated on simply because I produced ten of them and hand-picked the best one; I'll need to work out an automated system for that so I don't contaminate RoboRosewater's cards with my ideas, lol.
1
Took me longer to reply than I had intended, but hello and welcome! I had a look at your source code. I'm most impressed! I admire very deep and nested procedural generations; I can tell there's a lot of attention paid to detail.
I concur with everything that our resident expert (he's modest) maplesmall has told you. There are challenges with using character-level generative models, and it's not always the best fit for every situation. At the very least, I'm not sure that I'd recommend it as an end-to-end solution for you. However, there are plenty of ways that you could incorporate ML tools and techniques into your generation process (piece-wise).
For example, a lot of the stats can be drawn from learned distributions if needed. That being said, you've spent a lot of time and energy calibrating your stat-purchasing model, and I see no reason to throw that out if it works well for you.
Abilities could be generated in a way similar to what we do, though I might recommend a word-level model rather than our character-level one. Or perhaps even some kind of clause-level model, so it'd be like generating an abstract syntax tree of sorts... but I don't know whether there are any good implementations of that sort available online.
Now, description text: that's something you're going to be hard-pressed to do in a purely procedural way (unless you want it to come off as very artificial-sounding). A generative architecture like ours could churn out monster description text just fine (and you have plenty of data for that), but it wouldn't deliver what you wanted, because what it generated would not be conditioned upon or keyed to the monsters, so you'd get fascinating but arbitrary text attached to your monsters. Instead, you'd want to use a conditional neural language model, like the one used here. But instead of picture in -> text out, it'd be monster in -> text out. Same sort of idea. Of course, the technology for that is still maturing, so the results will probably be something of a mixed bag.
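A shape-level sketch of the "monster in -> text out" conditioning (random weights, not a trained model; all sizes and names below are hypothetical): the monster's attribute vector is fed into the text generator at every step, which is what keeps the generated description keyed to that particular monster.

```python
import numpy as np

# Hypothetical dimensions: monster attribute vector, character
# embedding, recurrent hidden state, and character vocabulary.
COND, EMB, HID, VOCAB = 16, 32, 64, 50
rng = np.random.default_rng(0)
Wx = rng.normal(size=(HID, EMB + COND)) * 0.1
Wh = rng.normal(size=(HID, HID)) * 0.1
Wo = rng.normal(size=(VOCAB, HID)) * 0.1

def step(h, char_emb, monster_vec):
    # Concatenating the condition vector with each character embedding
    # is the simplest way to condition a recurrent language model.
    x = np.concatenate([char_emb, monster_vec])
    h = np.tanh(Wx @ x + Wh @ h)
    return h, Wo @ h  # new hidden state, next-character logits

monster_vec = rng.normal(size=COND)   # stand-in for encoded monster stats
h, logits = step(np.zeros(HID), rng.normal(size=EMB), monster_vec)
```

With trained weights, sampling from the logits step by step would produce text that varies with the monster vector instead of being arbitrary.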
If you'd like, you can PM me or send me an email at rmmilewi (at) gmail (dot) com and we can talk more about this in detail. I can see about directing you towards the resources that you'd need.
By the way, while we're on the subject, Alec Radford just put out a lecture entitled "Deep Advances in Generative Modeling". It's a 40 minute presentation that covers virtually all the algorithms that we've been talking about in this thread and then some. If you skip ahead to 34:50, you can see some unpublished results where they condition an image generator on text.
-----
As for me, I've just about got the art-generation-conditioned-on-card-vectors thing coded, but I'm having to wait to run it. Right now I'm training a bunch of LSTM networks to do population forecasting for my state on behalf of some economists and census folks. But once that's done with, I'll be sure to train a new image generation model. Once I get that working, I can see about writing some scripts that'll integrate the card and art generation processes.
EDIT: On a related note, I just got some new hardware in, so that may mean that I'll be able to run Magic experiments in parallel with others. That'll speed things up, lol.
4
Do those directories exist? The cv subdirectory, followed by the custom_format-256 directory? If not, you could create them with the mkdir command. That might be why you are having an issue.
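Assuming the working directory is the repository root, creating the nested path takes one command; -p creates any missing parents and is a no-op if the path already exists:

```shell
# Create cv/ and cv/custom_format-256/ in one go if they are missing:
mkdir -p cv/custom_format-256
```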
---
Earlier in the week, I said I'd release a Magic-art-image-generation model and script for playing with it. Then I got hit by an avalanche of work. On the bright side, it's Spring break next week, so that frees up my schedule somewhat. I'm going to look into retraining the system to run with tags (images are passed to the network along with tags like "red" or "goblin"; that way, the tags give us a gentler way of controlling the output image than operating on the latent space directly).
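The tag-conditioning idea can be sketched in a few lines (the tag set and helper below are made up for illustration, not the actual training code): a multi-hot tag vector is appended to the latent code, and weighting a tag's entry is the "extra goblin-y" knob.

```python
import numpy as np

TAGS = ["red", "goblin", "sorcery", "green"]  # illustrative tag vocabulary

def conditioned_latent(z, active_tags, tag_weight=1.0):
    """Append a multi-hot tag vector to the latent code z.

    Scaling the active entries (e.g. extra weight on "goblin") nudges
    the generator toward that concept without editing z directly."""
    tag_vec = np.array([tag_weight if t in active_tags else 0.0
                        for t in TAGS])
    return np.concatenate([z, tag_vec])

z = np.random.default_rng(1).normal(size=100)   # hypothetical latent code
latent = conditioned_latent(z, {"goblin"}, tag_weight=2.0)
```

During training, each image would be paired with its card's tags; at sampling time, you pick the tags (and weights) by hand.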
Oh, and, as usual, the techniques that I've been using are already obsolete (probably). I just read a fascinating paper by Wang and Gupta entitled "Generative Image Modeling using Style and Structure Adversarial Networks". The novel aspect of their work is that they split the generator network into two parts. The first is a structure generator that creates the geometry of the image (surface normals). Then that geometry is passed to a style generator, which handles textures, lighting, appearance, etc.
Right now, with the setup we've got, style and structure are intermixed. That's why when I change the lighting of a scene, the geometry warps and shifts. Wang and Gupta are able to control these two aspects independently.
For the training data, you'd have to have surface normals for the artwork, but that's actually quite feasible. There are trained models out there for predicting the geometry of 2D images (example). Those systems are usually trained on photographs, but I suspect that they could work decently on artwork. Murk Dwellers no, Rafiq of the Many yes.
Not that I'll mess with that at the moment, but it's nice to know that these options exist.
EDIT: Oh, almost forgot! A team from Google made yet another interesting breakthrough in a paper entitled "One-Shot Generalization in Deep Generative Models". One-shot learning is where you have one or two images of a thing and you then have to arrive at an understanding of what that thing is (or in this case, be able to generate new and interesting versions of it). It's sort of a "holy grail" in computer vision. They're one step closer to achieving that. I've attached an example where the system is presented with a strange alphabet that it has never been trained on and has to learn to write letters in that alphabet from just those single examples. They did another similar test with faces. Just a few faces, make new faces, that sort of thing. And it does decently, actually. Not bad. There's a lot more that needs to be done in this area, but the pay-off could be great. An example application would be to show a picture of an apple to a robot that has never seen apples before and say "This is an apple. Go find me more of these things. Bring them back to me. Kthxbai."
4
Thank you for summarizing the commentary! I was too busy to follow the games very closely; what you've shared is very informative for me.
I think that's why Demis Hassabis of DeepMind said that their next target could be something like Starcraft. Mind you, beating the Koreans at all of their favorite games isn't the mission of DeepMind, but a game like Starcraft would be an interesting testbed for metagame analysis. And even if DeepMind doesn't pursue that right away, I assure you that others are already thinking along similar lines, just as Facebook and Google's DeepMind have been concurrently working on mastering Go.
I think there are two parts to that analysis. The first is being able to identify a metagame, and the other part is being able to independently come up with possible metagames. These two reinforce each other, of course - generative models and predictive models being two sides of the same coin. I'm convinced that if we can overcome the challenge of metagame analysis for a game with fixed elements like Starcraft, we can do it for a game with evolving elements like Magic.
---
I'm in the process of figuring out how to organize all the image-generating scripts for release. I'd like to set everything up so you can just install the necessary packages, and play with it right away. Some things could be streamlined. I'll see about having all that ready in a day or two.
5
Oh absolutely. Now, to get a bot that can play well, that's another story, and it depends heavily on how we approach the problem and the data that we have. Same goes for deck-building. In many ways, these are highly related topics, but I'll focus on just the gameplay aspect for now.
As we've seen with bots like AlphaGo and Giraffe, it's possible to integrate a neural-network-style approach into a conventional framework. You can have a "game tree" of sorts, where player A makes a move, player B responds to A, A responds to B responding to A, and so on, branching out into all the different possible realities (taking into account randomness and unknown information). A bot looks at each state and says "what is the value of getting to that game state?", and then tries to steer the game towards the path that is most likely to lead to victory. For this, you need an evaluation function, a way of comparing game states, and oftentimes, these are hard-coded.
Take Forge: if you go into its source code, you'll find all kinds of evaluation rules that the AI uses. For example,
So let's say that the AI has a Doom Blade in hand that it wants to cast, and it sees the opponent has a Storm Crow and a token that is a copy of Tidal Kraken. The bot would like to take out the biggest, most pressing threat.
According to Forge's creature evaluation function, Storm Crow is worth 155 points, and the Tidal Kraken token is worth 270 points. Killing the token would maximize the opponent's losses and minimize the bot's, so the bot kills the Kraken.
In this case, the choice was very clear cut (though some experts might argue that the Storm Crow was inherently more threatening than the Tidal Kraken), but there are limitations to this kind of approach. First, it's not a very empirical approach; we just arbitrarily decided that each point of power was worth 15 points and each point of toughness was worth 10. Second, hand-crafted solutions tend to be very brittle and render the AI unable to respond well to unforeseen situations.
For example, if the AI has an unanswerable Moat on the board, that reduces the Kraken's value to 176 points (I counted), which is still greater than the Crow's. Now, it's possible that there are guards in the code that I didn't see that would prevent the bad decision from being made, but that adds more layers of complexity, and more opportunities for failure.
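A toy reimplementation of that kind of hand-crafted evaluation function may help make the brittleness concrete. The base value and keyword bonuses below are chosen so the scores match the numbers quoted above (Storm Crow at 155, the Tidal Kraken token at 270); they are illustrative, not Forge's actual constants.

```python
# Hand-tuned constants: each point of power is worth 15, each point of
# toughness 10, plus a flat base and flat keyword bonuses. Illustrative.
BASE, PER_POWER, PER_TOUGHNESS = 100, 15, 10
KEYWORD_BONUS = {"flying": 20, "unblockable": 20}

def evaluate_creature(power, toughness, keywords=()):
    """Score a creature the way a hard-coded game AI might."""
    score = BASE + PER_POWER * power + PER_TOUGHNESS * toughness
    score += sum(KEYWORD_BONUS.get(k, 0) for k in keywords)
    return score

print(evaluate_creature(1, 2, ["flying"]))       # Storm Crow -> 155
print(evaluate_creature(6, 6, ["unblockable"]))  # Tidal Kraken -> 270
```

Notice that nothing in this function knows about the rest of the board, which is exactly why a Moat on the battlefield doesn't change the ranking the way a human player would want it to.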
One way of incorporating machine learning into this process is to replace parts of the evaluation function with learned models (e.g. neural networks). What's nice about these sorts of systems is that their responses have a stronger empirical basis and, in the case of neural networks, can be very flexible in the face of totally unforeseen situations. Of course, adding in magical black boxes can create other issues, like a lack of interpretability. A common example I bring up is when you have a visual question answering system like [url=http://cloudcv.org/vqa/]this one[/url], operating on [url=http://cloudcv.org/app/media/pictures/vqaDemo/COCO_test2014_0000004518761453763791_36.jpg]this image[/url]. We ask the system the following questions:
Q: "What game is being played?" A: Tennis (confidence 99%)
Q: "What color is the man's shirt?" A: White (confidence 25%, blue in second place with confidence 18%)
Q: "What is the sex of the person?" A: Female (confidence 65%)
Q: "Is his wife cheating on him?" A: No (confidence 96%)
In the first two cases, we get back answers that are correct, and that we can verify using the image. We can even understand why the bot thought the shirt might be blue. In the third case, the bot makes a mistake, but we can forgive it because humans are one of the least sexually dimorphic species on the planet.
But why is the bot so sure that the tennis player's wife is faithful to him? He's always away at tournaments, all he thinks about is improving his game and not his relationship, and she feels oppressed always standing in the shadow of his success. We don't know why the bot is so convinced that she'd be happy in such a terrible relationship. On the bright side, at least it was able to respond with an answer to a question that it was never asked before. That shows flexibility... even if sometimes we don't get interpretability.
---
I need to go back and finish fixing that memory issue with the image generator training process, but it might have to wait. I have a 10 AM meeting tomorrow with someone who has come to give a lecture, a 1:30 PM meeting with an economist interested in doing population projections for our state, and a 3 PM meeting with my team. I have some preparations to make. Oh, and I just got word that another one of my papers was accepted for publication in an international journal, so I have to go celebrate.
But I promise I will return to it! And I'll release all the code needed to do the image generation on your own (including the most recent trained model).
P.S. I hear that AlphaGo lost a round against its human opponent. Interesting!