  • posted a message on Generating Magic cards using deep, recurrent neural networks
    Quote from helpiminabox »
    Are there any speed improvements to the new framework?
    Yes, absolutely. The new language model library is based on this code on github. It's a direct successor to char-rnn, and if you scroll down to the bottom of the readme there's some pretty detailed benchmark information showing how much better it is.

    My clone of the code can be found here. Be sure to look at the 'dev' branch, master is currently tied up with a pull request. The neural net code is the same, but I've developed additional code to allow much more sophisticated training input and sampling output on Linux. I'll add a tutorial explaining how the new functionality works once I'm ready for a semi-official release together with mtgencode, hopefully this weekend.
    Posted in: Custom Card Creation
  • posted a message on Generating Magic cards using deep, recurrent neural networks
    Quote from Elseleth »
    That's correct, sample_hs_v3.lua expects a format that isn't used any more. You can force the encoder to produce it (I think, but I'm not sure which one it is specifically), but you'd have to retrain or find the right legacy checkpoint to sample.
    There's an --encoding option called "old"; I have a suspicion that'd be it! But why regress to the past when we can look to the future, eh?
    Indeed!
    Quote from Elseleth »
    Anyway, I just paid the month's AWS bill a couple days ago, so I'm trying a 512 x 3 network using field and mana randomization. I don't remember if random mana did any notable good, but it sounded like random fields at least were a worthy way to go.
    Randomizing mana is usually a good idea. Talcos did some work showing that if you don't, the network tends to be lazy and refuse to look at much of the mana cost field, which means it has a harder time getting color right.

    Randomizing fields is fun. I think there's a higher ceiling on how much we can teach the network there, since there are far more possible randomized cards we could show it (almost infinite, really), but it also makes the learning problem harder, so with the finite amount of training we can do it tends to give less consistent results.

    What I want to do with the new training process is implement some form of curriculum learning, where we give it a mixture of ordered cards and randomized ones. That way it has to learn the meanings of the field labels independently of order, but when we generate cards we can tell it to give us nicely ordered ones and it will be more consistent. This is all part of a grander hypothesis I have about adding ambiguity to your encoding scheme to increase the effective amount of training you can do: you want to make the data just ambiguous enough that, when you present it in all of the different ways you can, you end up with a big enough dataset to train the largest model your hardware can handle.
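    Here's a minimal sketch of the kind of corpus preparation I mean. The separator and the helper names are placeholders, not the real mtgencode format:

    import random

    FIELD_SEP = '|'  # placeholder; the real encoding has its own separator

    def randomize_fields(card, p_shuffle=0.5):
        # card: one encoded card, labeled fields joined by FIELD_SEP.
        # With probability p_shuffle, present the fields in random order;
        # otherwise keep the canonical order. The mixture is the curriculum:
        # the net is forced to learn the field labels, but still sees enough
        # ordered examples to produce tidy cards at sampling time.
        fields = card.split(FIELD_SEP)
        if random.random() < p_shuffle:
            random.shuffle(fields)
        return FIELD_SEP.join(fields)

    def build_epoch(cards, p_shuffle=0.5):
        # One epoch of training text: each card independently ordered or
        # shuffled. Regenerating this every epoch multiplies the effective
        # dataset size without adding any new cards.
        return '\n\n'.join(randomize_fields(c, p_shuffle)
                           for c in random.sample(cards, len(cards)))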
    Posted in: Custom Card Creation
  • posted a message on Generating Magic cards using deep, recurrent neural networks
    Quote from maplesmall »
    Awesome!

    Would that scale up as well? So if I could do a 3-layer 512-cell network on my 6gb GPU previously, would I be able to do 3-layer 1024 or 2048-cell now?
    Yes, absolutely. Last time I tried to scale up, I ran into a lot of issues with CUDA crashing and halting training. Part of my current toolchain is a script that watches the training process and can both restart it when it fails and change the training parameters periodically based on outside input, like some custom accuracy measure. It's really ugly, but the version I'm using for my research has administered the training curriculum from this paper for over a day now, and actually produces better accuracy than what the authors published.
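    Conceptually the watchdog is just a restart loop. A stripped-down sketch (not my actual script, which also rewrites the training parameters between restarts):

    import subprocess
    import time

    def watch_training(cmd, max_restarts=20):
        # Relaunch the training process whenever it dies, e.g. from a
        # CUDA crash. A real version would also watch the logs, compute
        # an accuracy measure, and adjust cmd before restarting.
        for attempt in range(max_restarts):
            proc = subprocess.Popen(cmd)
            if proc.wait() == 0:
                return  # training finished normally
            time.sleep(10)  # let the GPU recover before retrying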
    Quote from Elseleth »
    Quote from moronstudios »
    What comes out when you prime it for 4 color legendary creatures?
    Alas, priming using specific fields doesn't seem to work for me. The inserted text goes in the wrong places. I'm guessing that sample_hs_v3.lua (which has all the priming options) isn't up to date for the format the network was trained on.
    That's correct, sample_hs_v3.lua expects a format that isn't used any more. You can force the encoder to produce it (I think, but I'm not sure which one it is specifically), but you'd have to retrain or find the right legacy checkpoint to sample. Going forward I think all formats will have standardized field labels just to remove that headache. Part of the new toolchain will also include a massively reworked version of the targeted sampling script, with a full python api and probably a good command line tool as well that will at least reproduce the original functionality.

    Hacking on this starts now, and will continue through the weekend. I hope to have new checkpoints / card dumps soon, and a new tutorial soon after.
    Posted in: Custom Card Creation
  • posted a message on Generating Magic cards using deep, recurrent neural networks
    Very small update: I brought mtgencode up to date with the latest cards and fixed handling of the new C symbol.

    Currently working on training and evaluating with my shiny new torch-rnn framework. I'll update the tutorial once everything is working.

    torch-rnn tends to use a lot less memory than char-rnn, so it should be easier for people with mid-range graphics cards to train big networks. I don't have precise numbers, but I wouldn't be surprised if the traditional 3-layer, 512-cell networks that have produced most of the cards on here would fit comfortably on even a 1 GB GPU.
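    A quick back-of-the-envelope estimate suggests the weights themselves are tiny; it's the activations unrolled over seq_length (which torch-rnn handles much more frugally) that eat the memory. Assuming a vocabulary of about 100 characters:

    # Rough parameter count for a 3-layer, 512-cell character LSTM.
    # V = 100 is a guess at the size of the encoded alphabet.
    H, V, layers = 512, 100, 3
    first = 4 * H * (V + H + 1)        # gates of the input layer
    later = 4 * H * (H + H + 1)        # gates of each deeper layer
    decoder = H * V + V                # final softmax layer
    params = first + (layers - 1) * later + decoder  # about 5.5 million
    print('%.1f MB of float32 weights' % (params * 4.0 / 2**20))  # ~21 MB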
    Posted in: Custom Card Creation
  • posted a message on Generating Magic cards using deep, recurrent neural networks
    Quote from Elseleth »
    Sounds like an excellent idea!

    How fuzzy can the evaluation logic be? If you're looking only for exact hits, then mana cost will work great, but it's going to be pretty limited beyond that. If on the other hand there's a score on how close the NN got, you could do things like:

    Prime a planeswalker up to the subtype line; do you get three loyalty abilities?
    Similar for levelers, do you get more than one level tier?
    If you prime a subtype with clear themes, do you get appropriate keywords? E.g. a Bird should fly, a Wurm should not, an Ally scores better if it has a Rally or Cohort effect...

    If the NN can score well for making appropriate decisions in examples like that, even if they're not the exact ones in the test data, that'd be cool.
    That's the beauty of this method of evaluation: the logic can be as fuzzy as you want. All you have to do with the actual neural network is feed in the input and let it output whatever it thinks should come next; then you can evaluate that output however you want. Even for mana costs it would probably make sense to compute some kind of accuracy based on how many symbols from the real cost it got right, with a penalty if the output wasn't correctly formatted or wasn't a cost at all.
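    For example, a scorer along these lines (the {X} symbol syntax here is just for illustration; the real encoding writes costs differently):

    import re
    from collections import Counter

    SYMBOL = re.compile(r'\{.*?\}')

    def cost_accuracy(predicted, actual):
        # Fraction of the real cost's mana symbols that the network
        # reproduced, counted as multisets, with a crude penalty if the
        # output contained anything that isn't a mana symbol.
        pred = Counter(SYMBOL.findall(predicted))
        real = Counter(SYMBOL.findall(actual))
        if not real:
            return 1.0 if not pred else 0.0
        score = sum((pred & real).values()) / float(sum(real.values()))
        if SYMBOL.sub('', predicted).strip():
            score *= 0.5  # output was not cleanly formatted as a cost
        return score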
    Quote from maplesmall »
    My only concern with this method is that if we're encouraging it to generate cards that resemble real cards, we could end up stifling its 'natural creativity' to a certain extent. Or would the temperature parameter take care of that for us?
    What we do for evaluation would have absolutely no impact on the neural network's training; it would just tell us extra things about how successful that training had been. We could still sample from the network as usual and get the full natural creativity if we wanted to, and we probably would for tasks like generating a set. We could also do targeted sampling exactly like Talcos's sample_hs script, but with more flexibility.
    Posted in: Custom Card Creation
  • posted a message on Generating Magic cards using deep, recurrent neural networks
    Based on my current research on generating programs with neural networks, I've had a new idea about evaluating networks that generate cards. Instead of looking at the loss (which is kind of meaningless) or bringing in outside metrics (which are hard to write), we could use a sequence-to-sequence learning task: prime the network with part of a card, then see how accurately it generates the rest.

    For example, if we had a format where the mana cost is last (which is easy enough to arrange), we could give the neural network everything about some real card up to the mana cost, then see what mana cost it spits out. You still have to figure out how to define an accuracy metric, but it's a whole lot easier and more meaningful than trying to use the validation loss or something like that.
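    In pseudocode-ish Python, the whole evaluation loop is just this; sample() is a stand-in for whatever priming interface the sampling script ends up exposing, and accuracy() is whatever metric you settle on:

    def evaluate_cost_prediction(encoded_cards, sample, accuracy):
        # encoded_cards: real cards in a format with the mana cost last.
        # sample: primes the network with text, returns its continuation.
        # accuracy: scores a predicted cost against the real one.
        total = 0.0
        for card in encoded_cards:
            prime, real_cost = card.rsplit('|', 1)  # '|' is a placeholder separator
            predicted = sample(prime + '|')
            total += accuracy(predicted, real_cost)
        return total / len(encoded_cards)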

    It doesn't have to be cost: this sort of evaluation would work for any part of the card that is determined by other parts. For example, it makes less sense from an evaluation standpoint to prime with the cost R and then compare accuracy against Lightning Bolt, because there are a lot of cards that cost R. But if we were to give it everything except the type, there are many cards for which only a few types make sense - if the card has a trigger on entering the battlefield, then clearly it can't be an instant. I think these sorts of metrics will be particularly useful because they can identify networks that know things and generate plausible cards independently of their ability to memorize lots of cards exactly. And we can sort out the overfitters separately by looking at word2vec distances.

    Anyone have any ideas about tasks like this that would make sense? I think predicting costs is going to be the big one. There are more abilities that resemble costs besides the mana cost, though: I can think of kicker, suspend, echo, evoke; any others? How about prowl and ninjutsu? The thing is, some of them, like kicker, are strongly determined by the text: if the rest of the text mentions being kicked, then there had better be a kicker ability. Ideally I'd like to separate all of these out and put them at the end as special 'cost-like' abilities, to make the dependencies as clear as possible, and then evaluate how good the networks are at learning those dependencies.
    Posted in: Custom Card Creation
  • posted a message on Generating Magic cards using deep, recurrent neural networks
    Hi everybody, I figured I should stick my head up again since I've started to work on more things related to this. I've been very busy, but one of my current research projects is converging towards some technology that I think will be able to improve card generation.
    Quote from Elseleth »
    Is the tutorial at https://github.com/billzorn/mtgencode still the state of the art for card generation, or are there tweaks worth incorporating for the latest greatest output?
    The short answer is yes, the tutorial still describes the most up to date generation process that I know of. There are a few options you might want to play around with, like the neural network size and the dropout, but that's easy to change in the commands. I don't know if Talcos or anyone else has come up with new generation techniques, but I haven't seen any issues / pull requests on github.

    That said, the library is getting out of date. I haven't updated it to include the latest sets, so the encoding algorithm probably breaks with the new colorless-only mana symbol. Maplesmall has assured me this is easy to fix, and I'd like to push a major overhaul when I have some time, but I don't know when that will be.

    The biggest change is that char-rnn is no longer the state of the art as a character-level language model backend. There is a full rewrite of it called torch-rnn that offers significant performance improvements and greatly reduced memory usage, which should make it possible to train much larger networks. I've worked extensively with the codebase, and I'm currently adding much more powerful training and sampling interfaces to my own fork. It's a terrible mess of Lua and Python, but once I've got it working for my research project it should be easy to come back and do some clever things to improve Magic card generation, like experimenting with curriculum learning and other techniques that mix up the format during training, to get as much as we can out of the limited data with a larger neural net.

    You could try using torch-rnn as it is right now by following their tutorial and just substituting a card corpus encoded with mtgencode for their example data file, but I suspect you'll run into issues with not having enough data, because by default they do very silly things to split the data up into training epochs. I intend to solve those problems; I'll update the tutorial when I do and share any new results.
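    From memory, the stock torch-rnn invocation is something like this (file names are placeholders; check their readme for the exact flags):

    python scripts/preprocess.py --input_txt cards.txt --output_h5 cards.h5 --output_json cards.json
    th train.lua -input_h5 cards.h5 -input_json cards.json -rnn_size 512 -num_layers 3 -dropout 0.5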
    Posted in: Custom Card Creation
  • posted a message on Generating Magic cards using deep, recurrent neural networks
    Quote from Melted_Rabbit »
    The increase in training loss occurs to a smaller degree the batch after saving a checkpoint in the run and during the other run that aborted. I assume this is related to saving the checkpoint.
    Interesting.

    I've noticed a similar problem in a few of my networks, where training will experience a sudden spike in the loss. Sometimes it dies down immediately; other times it ends up crippling the training, though it recovers somewhat after a few epochs. I hadn't noticed a link between when this happened and when the checkpoints were saved; I'll have to pay more attention when I train a new network.

    1024 is a pretty ambitious size. I was able to do it by scaling back the batch size a bit (otherwise it wouldn't fit on my 6GB Titan), but for anything above size 768 I kept getting memory errors in the Torch framework that would break my training after a few thousand batches. Your problem seems unrelated, but it's entirely possible that some tiny bug or hardware error could cause a massive failure and throw everything off.

    What happens if you try to resume training from the checkpoint right before the explosion, say with a different seed?
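    With char-rnn, that experiment would look something like this; -init_from and -seed are existing train.lua options, but the checkpoint name here is made up:

    th train.lua -data_dir data/mtgencode -init_from cv/checkpoint_before_spike.t7 -seed 456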
    Posted in: Custom Card Creation
  • posted a message on Generating Magic cards using deep, recurrent neural networks
    I trained lots of checkpoints to do my hyperparameter sweep; I have about 80 of various sizes that all use the most recent version of the encoding. I'll try to get them organized and post at least the good ones to Google Drive so others can use them. They're all GPU checkpoints, which are significantly smaller than CPU checkpoints because apparently the CUDA libraries do some compression (or just use float32 or something). I think the best solution is to have people use the gpu-to-cpu conversion feature that now comes with char-rnn; I'll post some documentation about how to use it in the mtgencode readme.
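    Until then, the short version: char-rnn ships a conversion script you point at a GPU checkpoint, and it writes out a CPU copy you can sample from without CUDA (the checkpoint name below is a placeholder):

    th convert_gpu_cpu_checkpoint.lua cv/some_checkpoint.t7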

    I also have a bunch of disorganized scripts, including an IPython notebook for plotting data, and a huuuuuuuuuuuge amount of data that I couldn't fit into the paper I wrote. I'd like to make that available as well, as some of it is certain to be interesting. For example, I just produced a bunch of dumps to compare what happens if we put the name in the first field of the encoding as opposed to the last. I just have to run my analysis on it and then fiddle with the graphs until they're readable.

    So yeah, whether you want to see more cards or know more about the best hyperparameters to use for training the networks, stay tuned over the next few days. I'll try to provide as much as I can, and document / automate my techniques so that others can reproduce my work and expand on it.
    Posted in: Custom Card Creation
  • posted a message on Generating Magic cards using deep, recurrent neural networks
    Quote from neotelesocio »
    Came across this thread a few days ago, after a couple weeks following the RoboRosewater twitter! Loving everything I've seen so far, and had moderate success using the RNN myself.
    Welcome aboard! Always good to see someone new enjoying this technology.
    Quote from neotelesocio »
    I'm currently 33 epochs into a size-512, 2-layer network (running on a VM so that's about as much as it'll handle) and the network is spitting out an alarming amount of cards with names that already exist, and in a few cases carbon copies of existing cards down to rarity. The training loss and validation loss are both between .15 and .2, but hardcast's tutorial (which I followed to get started) says training_loss generally starts to hover around .5. I assume this is related to this over(under?)fitting issue. How can I fix it? Does it have something to do with dropout? I haven't experimented with that parameter yet. What's the difference between, say, 0, .5, and 1.0 dropout?
    So, as far as choosing parameters goes, I have very little experience with 2-layer networks, but I'm just finishing up a bunch of work trying to optimize the training parameters for 3-layer networks. As in, I wrote an 8-page research paper for a class, and I'll post a link to the final version when I hand it in in a few hours.

    You're massively overfitting if the network is spitting out copies of real cards. In general the way to prevent this is to increase the dropout. Essentially, dropout turns off a randomly selected fraction of neurons for each training batch, so the network can't become too reliant on particular connections. Dropout 0.25 turns off a quarter of the neurons, dropout 0.5 turns off half, and so on; dropout 1.0 would be a bad idea.

    I'm actually pretty curious to see what the best parameters for 2-layer networks are, as I haven't used them much at all. With 3 layers, I've had the best success with size 768, dropout 0.5, and seq_length 200. Training loss is actually worse if you increase the dropout above 0, but other metrics indicate that this is very, very misleading about the quality of the output. This is all using the latest encoding format (explicit labels, name field last) as currently on GitHub.
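    For reference, with stock char-rnn those settings translate to a command along these lines (the data directory is wherever your encoded corpus lives):

    th train.lua -data_dir data/mtgencode -rnn_size 768 -num_layers 3 -dropout 0.5 -seq_length 200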
    Quote from neotelesocio »
    Sorry if these questions have been answered earlier in the thread, I'm not much for reading dozens of forum pages!
    Thanks in advance!
    No worries! It's a beast of a thread. I have some Chrome windows I have to remember not to close because they have tabs open to important pages, lol.

    I think I'm the first person to do a hyperparameter optimization sweep, so if you want to know what the best training parameters are, you've come at exactly the right time. And if you just want to sample from existing checkpoints, I've trained something like 100 different networks in the past month. I'll try to organize and post as much of my work as possible.

    EDIT:
    Link to paper and to the poster I made a week ago on google drive.
    Posted in: Custom Card Creation
  • posted a message on Generating Magic cards using deep, recurrent neural networks
    So I created a poster for my class project: google drive link.

    There's also a paper due in two weeks. I'll post some more detailed discussion of my discoveries as I work on it.
    Posted in: Custom Card Creation
  • posted a message on Generating Magic cards using deep, recurrent neural networks
    Quote from maplesmall »
    That is fantastic! Interesting how all the checkpoints you've tested have no dropout; any test results from networks with the default dropout value? It seems based on this that a network size of 512 is optimal; 640 seems to have slightly worse results.

    I believe 0 is the default dropout value? Anyway, yes, I do have the same data (all in duplicate) from models trained with dropouts of 0.25 and 0.50; I just didn't produce the hideous text tables to show off the statistics. I'm working on scripts to do that more automatically.

    If you can run mtgencode, you can compute the same statistics from any of the dumps on my google drive - they're labeled with the dropout.
    ./scripts/validate.py name_of_dump.txt


    EDIT: OK, here are some numbers with dropout.

    real cards               s512, d0, v0.1952       s512, d0.25, v0.2663    s512, d0.50, v0.3269
    
    -- overall --            -- overall --           -- overall --           -- overall --
      total: 15065             total: 5777             total: 6032             total: 5913
      good : 15061 (99.97%)    good : 5153 (89.19%)    good : 5367 (88.97%)    good : 5304 (89.70%)
      bad  : 4 (0.026%)        bad  : 624 (10.80%)     bad  : 665 (11.02%)     bad  : 609 (10.29%)
    ----                     ----                    ----                    ----
    types:                   types:                  types:                  types:
      total: 15065 (100.0%)    total: 5777 (100.0%)    total: 6032 (100.0%)    total: 5913 (100.0%)
      good : 15065 (100.0%)    good : 5774 (99.94%)    good : 6028 (99.93%)    good : 5910 (99.94%)
      bad  : 0 (0.0%)          bad  : 3 (0.051%)       bad  : 4 (0.066%)       bad  : 3 (0.050%)
    pt:                      pt:                     pt:                     pt:
      total: 8007 (53.14%)     total: 3527 (61.05%)    total: 3419 (56.68%)    total: 3330 (56.31%)
      good : 8007 (53.14%)     good : 3519 (60.91%)    good : 3413 (56.58%)    good : 3329 (56.29%)
      bad  : 0 (0.0%)          bad  : 8 (0.138%)       bad  : 6 (0.099%)       bad  : 1 (0.016%)
    lands:                   lands:                  lands:                  lands:
      total: 533 (3.538%)      total: 110 (1.904%)     total: 138 (2.287%)     total: 127 (2.147%)
      good : 533 (3.538%)      good : 68 (1.177%)      good : 133 (2.204%)     good : 114 (1.927%)
      bad  : 0 (0.0%)          bad  : 42 (0.727%)      bad  : 5 (0.082%)       bad  : 13 (0.219%)
    X:                       X:                      X:                      X:
      total: 757 (5.024%)      total: 484 (8.378%)     total: 501 (8.305%)     total: 500 (8.455%)
      good : 756 (5.018%)      good : 168 (2.908%)     good : 174 (2.884%)     good : 128 (2.164%)
      bad  : 1 (0.006%)        bad  : 316 (5.469%)     bad  : 327 (5.421%)     bad  : 372 (6.291%)
    kicker:                  kicker:                 kicker:                 kicker:
      total: 114 (0.756%)      total: 93 (1.609%)      total: 98 (1.624%)      total: 112 (1.894%)
      good : 112 (0.743%)      good : 36 (0.623%)      good : 41 (0.679%)      good : 28 (0.473%)
      bad  : 2 (0.013%)        bad  : 57 (0.986%)      bad  : 57 (0.944%)      bad  : 84 (1.420%)
    counters:                counters:               counters:               counters:
      total: 401 (2.661%)      total: 237 (4.102%)     total: 308 (5.106%)     total: 167 (2.824%)
      good : 401 (2.661%)      good : 91 (1.575%)      good : 100 (1.657%)     good : 89 (1.505%)
      bad  : 0 (0.0%)          bad  : 146 (2.527%)     bad  : 208 (3.448%)     bad  : 78 (1.319%)
    choices:                 choices:                choices:                choices:
      total: 175 (1.161%)      total: 114 (1.973%)     total: 99 (1.641%)      total: 103 (1.741%)
      good : 174 (1.154%)      good : 45 (0.778%)      good : 34 (0.563%)      good : 45 (0.761%)
      bad  : 1 (0.006%)        bad  : 69 (1.194%)      bad  : 65 (1.077%)      bad  : 58 (0.980%)
    auras:                   auras:                  auras:                  auras:
      total: 2318 (15.38%)     total: 852 (14.74%)     total: 928 (15.38%)     total: 969 (16.38%)
      good : 2318 (15.38%)     good : 852 (14.74%)     good : 928 (15.38%)     good : 969 (16.38%)
      bad  : 0 (0.0%)          bad  : 0 (0.0%)         bad  : 0 (0.0%)         bad  : 0 (0.0%)
    equipment:               equipment:              equipment:              equipment:
      total: 200 (1.327%)      total: 44 (0.761%)      total: 59 (0.978%)      total: 67 (1.133%)
      good : 200 (1.327%)      good : 43 (0.744%)      good : 59 (0.978%)      good : 67 (1.133%)
      bad  : 0 (0.0%)          bad  : 1 (0.017%)       bad  : 0 (0.0%)         bad  : 0 (0.0%)
    planeswalkers:           planeswalkers:          planeswalkers:          planeswalkers:
      total: 61 (0.404%)       total: 15 (0.259%)      total: 10 (0.165%)      total: 19 (0.321%)
      good : 61 (0.404%)       good : 2 (0.034%)       good : 0 (0.0%)         good : 4 (0.067%)
      bad  : 0 (0.0%)          bad  : 13 (0.225%)      bad  : 10 (0.165%)      bad  : 15 (0.253%)
    levelup:                 levelup:                levelup:                levelup:
      total: 27 (0.179%)       total: 17 (0.294%)      total: 8 (0.132%)       total: 14 (0.236%)
      good : 27 (0.179%)       good : 6 (0.103%)       good : 4 (0.066%)       good : 11 (0.186%)
      bad  : 0 (0.0%)          bad  : 11 (0.190%)      bad  : 4 (0.066%)       bad  : 3 (0.050%)
    activated:               activated:              activated:              activated:
      total: 4307 (28.58%)     total: 1618 (28.00%)    total: 1741 (28.86%)    total: 1719 (29.07%)
      good : 4307 (28.58%)     good : 1591 (27.54%)    good : 1709 (28.33%)    good : 1677 (28.36%)
      bad  : 0 (0.0%)          bad  : 27 (0.467%)      bad  : 32 (0.530%)      bad  : 42 (0.710%)
    triggered:               triggered:              triggered:              triggered:
      total: 4340 (28.80%)     total: 1848 (31.98%)    total: 1944 (32.22%)    total: 1985 (33.57%)
      good : 4340 (28.80%)     good : 1818 (31.46%)    good : 1914 (31.73%)    good : 1934 (32.70%)
      bad  : 0 (0.0%)          bad  : 30 (0.519%)      bad  : 30 (0.497%)      bad  : 51 (0.862%)
    
                             names:                 names:                   names:
                               dist : 0.784            dist : 0.749            dist : 0.715
                               dupes: 961              dupes: 177              dupes: 52
                             cards (word2vec):      cards (word2vec):        cards (word2vec):
                               dist : 0.917            dist : 0.914            dist : 0.906
                               dupes: 209              dupes: 48               dupes: 24
    Posted in: Custom Card Creation
  • posted a message on Generating Magic cards using deep, recurrent neural networks
    I just finished a massive parameter sweep across 60 different parameter settings for training models with default char-rnn and my modified mtg-rnn. The full data of the sweep is available on my google drive.

    Here are some fun metrics I generated with my data analysis scripts:
    Validation and distance metrics for baseline_XXX_1 component of mtg-rnn sweep.
    
    Each column corresponds to a single checkpoint. 's' is the size, 'd' is the
    dropout, and 'v' is the validation error. All of these checkpoints are from
    epoch 50 (the end of training). So, 's128, d0, v0.3941' is a checkpoint
    from a size 128 network with dropout 0, that had a validation loss of 0.3941.
    If you look at the full sweep, it corresponds to mtg-rnn-sweep1/baseline_128_1.
    
    All of the validation metrics like 'types' and 'pt' are simple string processing
    tests to determine if a card has a property, and roughly check if it is used
    in the correct way. For the specific definitions, you'll have to see the source
    code in scripts/validate.py.
    
    The 'names' and 'cards' distances are the average of the name text edit distance
    and the word2vec semantic distance from each card in the dump to the nearest
    real card. Calling them distances is a little misleading; they're really
    similarity measures, with 1.0 being identical.
    
    This data isn't very scientific, but it is interesting to look at. There seems
    to be a fair amount of variance between different 1MB dumps.
    
    
    real cards               s128, d0, v0.3941       s256, d0, v0.2736       s384, d0, v0.2117       s512, d0, v0.1952       s640, d0, v0.1798
    
    -- overall --            -- overall --           -- overall --           -- overall --           -- overall --           -- overall --
      total: 15065             total: 5666             total: 5820             total: 5960             total: 5777             total: 5800
      good : 15061 (99.97%)    good : 4457 (78.66%)    good : 4979 (85.54%)    good : 5277 (88.54%)    good : 5153 (89.19%)    good : 5130 (88.44%)
      bad  : 4 (0.026%)        bad  : 1209 (21.33%)    bad  : 841 (14.45%)     bad  : 683 (11.45%)     bad  : 624 (10.80%)     bad  : 670 (11.55%)
    ----                     ----                    ----                    ----                    ----                    ----
    types:                   types:                  types:                  types:                  types:                  types:
      total: 15065 (100.0%)    total: 5666 (100.0%)    total: 5820 (100.0%)    total: 5960 (100.0%)    total: 5777 (100.0%)    total: 5800 (100.0%)
      good : 15065 (100.0%)    good : 5648 (99.68%)    good : 5818 (99.96%)    good : 5956 (99.93%)    good : 5774 (99.94%)    good : 5796 (99.93%)
      bad  : 0 (0.0%)          bad  : 18 (0.317%)      bad  : 2 (0.034%)       bad  : 4 (0.067%)       bad  : 3 (0.051%)       bad  : 4 (0.068%)
    pt:                      pt:                     pt:                     pt:                     pt:                     pt:
      total: 8007 (53.14%)     total: 2688 (47.44%)    total: 3094 (53.16%)    total: 2956 (49.59%)    total: 3527 (61.05%)    total: 2641 (45.53%)
      good : 8007 (53.14%)     good : 2648 (46.73%)    good : 3078 (52.88%)    good : 2943 (49.37%)    good : 3519 (60.91%)    good : 2618 (45.13%)
      bad  : 0 (0.0%)          bad  : 40 (0.705%)      bad  : 16 (0.274%)      bad  : 13 (0.218%)      bad  : 8 (0.138%)       bad  : 23 (0.396%)
    lands:                   lands:                  lands:                  lands:                  lands:                  lands:
      total: 533 (3.538%)      total: 231 (4.076%)     total: 225 (3.865%)     total: 228 (3.825%)     total: 110 (1.904%)     total: 177 (3.051%)
      good : 533 (3.538%)      good : 184 (3.247%)     good : 86 (1.477%)      good : 147 (2.466%)     good : 68 (1.177%)      good : 126 (2.172%)
      bad  : 0 (0.0%)          bad  : 47 (0.829%)      bad  : 139 (2.388%)     bad  : 81 (1.359%)      bad  : 42 (0.727%)      bad  : 51 (0.879%)
    X:                       X:                      X:                      X:                      X:                      X:
      total: 757 (5.024%)      total: 568 (10.02%)     total: 407 (6.993%)     total: 461 (7.734%)     total: 484 (8.378%)     total: 564 (9.724%)
      good : 756 (5.018%)      good : 74 (1.306%)      good : 90 (1.546%)      good : 144 (2.416%)     good : 168 (2.908%)     good : 201 (3.465%)
      bad  : 1 (0.006%)        bad  : 494 (8.718%)     bad  : 317 (5.446%)     bad  : 317 (5.318%)     bad  : 316 (5.469%)     bad  : 363 (6.258%)
    kicker:                  kicker:                 kicker:                 kicker:                 kicker:                 kicker:
      total: 114 (0.756%)      total: 92 (1.623%)      total: 51 (0.876%)      total: 46 (0.771%)      total: 93 (1.609%)      total: 70 (1.206%)
      good : 112 (0.743%)      good : 5 (0.088%)       good : 14 (0.240%)      good : 15 (0.251%)      good : 36 (0.623%)      good : 19 (0.327%)
      bad  : 2 (0.013%)        bad  : 87 (1.535%)      bad  : 37 (0.635%)      bad  : 31 (0.520%)      bad  : 57 (0.986%)      bad  : 51 (0.879%)
    counters:                counters:               counters:               counters:               counters:               counters:
      total: 401 (2.661%)      total: 338 (5.965%)     total: 475 (8.161%)     total: 236 (3.959%)     total: 237 (4.102%)     total: 192 (3.310%)
      good : 401 (2.661%)      good : 38 (0.670%)      good : 156 (2.680%)     good : 82 (1.375%)      good : 91 (1.575%)      good : 68 (1.172%)
      bad  : 0 (0.0%)          bad  : 300 (5.294%)     bad  : 319 (5.481%)     bad  : 154 (2.583%)     bad  : 146 (2.527%)     bad  : 124 (2.137%)
    choices:                 choices:                choices:                choices:                choices:                choices:
      total: 175 (1.161%)      total: 161 (2.841%)     total: 144 (2.474%)     total: 174 (2.919%)     total: 114 (1.973%)     total: 92 (1.586%)
      good : 174 (1.154%)      good : 1 (0.017%)       good : 38 (0.652%)      good : 78 (1.308%)      good : 45 (0.778%)      good : 30 (0.517%)
      bad  : 1 (0.006%)        bad  : 160 (2.823%)     bad  : 106 (1.821%)     bad  : 96 (1.610%)      bad  : 69 (1.194%)      bad  : 62 (1.068%)
    auras:                   auras:                  auras:                  auras:                  auras:                  auras:
      total: 2318 (15.38%)     total: 1036 (18.28%)    total: 1092 (18.76%)    total: 1061 (17.80%)    total: 852 (14.74%)     total: 1074 (18.51%)
      good : 2318 (15.38%)     good : 1036 (18.28%)    good : 1092 (18.76%)    good : 1061 (17.80%)    good : 852 (14.74%)     good : 1074 (18.51%)
      bad  : 0 (0.0%)          bad  : 0 (0.0%)         bad  : 0 (0.0%)         bad  : 0 (0.0%)         bad  : 0 (0.0%)         bad  : 0 (0.0%)
    equipment:               equipment:              equipment:              equipment:              equipment:              equipment:
      total: 200 (1.327%)      total: 43 (0.758%)      total: 82 (1.408%)      total: 112 (1.879%)     total: 44 (0.761%)      total: 114 (1.965%)
      good : 200 (1.327%)      good : 43 (0.758%)      good : 81 (1.391%)      good : 112 (1.879%)     good : 43 (0.744%)      good : 114 (1.965%)
      bad  : 0 (0.0%)          bad  : 0 (0.0%)         bad  : 1 (0.017%)       bad  : 0 (0.0%)         bad  : 1 (0.017%)       bad  : 0 (0.0%)
    planeswalkers:           planeswalkers:          planeswalkers:          planeswalkers:          planeswalkers:          planeswalkers:
      total: 61 (0.404%)       total: 30 (0.529%)      total: 20 (0.343%)      total: 25 (0.419%)      total: 15 (0.259%)      total: 37 (0.637%)
      good : 61 (0.404%)       good : 0 (0.0%)         good : 2 (0.034%)       good : 4 (0.067%)       good : 2 (0.034%)       good : 6 (0.103%)
      bad  : 0 (0.0%)          bad  : 30 (0.529%)      bad  : 18 (0.309%)      bad  : 21 (0.352%)      bad  : 13 (0.225%)      bad  : 31 (0.534%)
    levelup:                 levelup:                levelup:                levelup:                levelup:                levelup:
      total: 27 (0.179%)       total: 11 (0.194%)      total: 25 (0.429%)      total: 6 (0.100%)       total: 17 (0.294%)      total: 5 (0.086%)
      good : 27 (0.179%)       good : 4 (0.070%)       good : 13 (0.223%)      good : 2 (0.033%)       good : 6 (0.103%)       good : 4 (0.068%)
      bad  : 0 (0.0%)          bad  : 7 (0.123%)       bad  : 12 (0.206%)      bad  : 4 (0.067%)       bad  : 11 (0.190%)      bad  : 1 (0.017%)
    activated:               activated:              activated:              activated:              activated:              activated:
      total: 4307 (28.58%)     total: 1692 (29.86%)    total: 1688 (29.00%)    total: 1587 (26.62%)    total: 1618 (28.00%)    total: 1639 (28.25%)
      good : 4307 (28.58%)     good : 1555 (27.44%)    good : 1634 (28.07%)    good : 1556 (26.10%)    good : 1591 (27.54%)    good : 1621 (27.94%)
      bad  : 0 (0.0%)          bad  : 137 (2.417%)     bad  : 54 (0.927%)      bad  : 31 (0.520%)      bad  : 27 (0.467%)      bad  : 18 (0.310%)
    triggered:               triggered:              triggered:              triggered:              triggered:              triggered:
      total: 4340 (28.80%)     total: 1589 (28.04%)    total: 1526 (26.21%)    total: 1661 (27.86%)    total: 1848 (31.98%)    total: 1622 (27.96%)
      good : 4340 (28.80%)     good : 1496 (26.40%)    good : 1509 (25.92%)    good : 1635 (27.43%)    good : 1818 (31.46%)    good : 1601 (27.60%)
      bad  : 0 (0.0%)          bad  : 93 (1.641%)      bad  : 17 (0.292%)      bad  : 26 (0.436%)      bad  : 30 (0.519%)      bad  : 21 (0.362%)
    
                             names:                  names:                  names:                  names:                  names:
                               dist : 0.691            dist : 0.732            dist : 0.776            dist : 0.784            dist : 0.802
                               dupes: 17               dupes: 253              dupes: 905              dupes: 961              dupes: 1266
                             cards (word2vec):       cards (word2vec):       cards (word2vec):       cards (word2vec):       cards (word2vec):
                               dist : 0.887            dist : 0.905            dist : 0.918            dist : 0.917            dist : 0.921
                               dupes: 14               dupes: 51               dupes: 142              dupes: 209              dupes: 218
    Oh, and as a little note, there are a few invalid cards in the real cards because it's really hard to write tests that account for all of the corner cases.
    I'm working on rendering all of this data (and the data for the rest of the sweep; the table above covers just 5 of the 60 checkpoints I generated) in a more easily consumable form. I'll try to post things as I come up with them. Expect lots of pretty graphs.

    Preliminary analysis suggests that large networks trained with mtg-rnn's training set card order randomization and *no dropout at all* are really good at producing things that look like real cards. Of course, that comes with the tradeoff of producing more cards that are too similar to existing cards to be interesting. One of my main goals is to examine that tradeoff in more detail.
    Posted in: Custom Card Creation
  • posted a message on Generating Magic cards using deep, recurrent neural networks
    Quote from maplesmall »

    @failbird105, try using a piece of software called Magic Set Editor 2 for making the cards? It's better than all the web-based card generators and a lot more powerful (we even have an RNN set symbol quite a few pages back that someone made, that MSE2 lets you include).

    I should really get that symbol and add it to the set files mtgencode can automatically generate from dumps. Any idea where it is?

    Oh, @Failbird, in case you didn't know, that's a thing. My code here can take raw dump files as they come out of the neural network and turn them into a MSE2 set file for you automatically. There's no art (yet) and it doesn't do much to clean up the text, but it might be easier than typing everything in yourself. I'm not sure how hard it is to cut and paste cards between sets - I think it should just work, though I don't know how it handles the art.
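    If I'm remembering my own command line options right, the invocation is something like this (file names are placeholders; check the mtgencode readme to be sure):

    ./decode.py -mse card_dump.txt output_set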
    Posted in: Custom Card Creation