  • posted a message on Generating Magic cards using deep, recurrent neural networks
I set my temp to 1 because I want something wacky, but I still get a bunch of boring cards. They aren't the same each time, but they aren't very interesting.

hardcast, which .t7 do you recommend for my purposes? I want the patterns to be mostly correct, but the cards to be interesting. Mostly I just want the body text to be good, because I can modify the mana cost, name, type, etc. pretty easily afterward.
    Posted in: Custom Card Creation
  • posted a message on Generating Magic cards using deep, recurrent neural networks
Does your own computer's performance affect the sampling script? I am using hardcast's, and most of the time the cards are very bland and it doesn't seem to understand which colors go with which abilities. In your article it seemed to have it pretty much figured out by the end, but when I sample from hardcast's files I can't get anything like what you got at the end of the article.
    Posted in: Custom Card Creation
  • posted a message on Generating Magic cards using deep, recurrent neural networks
Can someone link me to a .t7 file of the "most learned" neural net that someone has created so far? (CPU only please, unless we can convert GPU to CPU now?)
    Posted in: Custom Card Creation
  • posted a message on Generating Magic cards using deep, recurrent neural networks
@hardcast that was the issue, I had been capitalizing everything.
    Posted in: Custom Card Creation
  • posted a message on Generating Magic cards using deep, recurrent neural networks
I'm having trouble specifying more parameters. I can do it fine with parameters such as -gpu, -seed, and -temperature, but when I try to constrain it with types or mana cost it breaks. See here:

    th sample.lua cv/lm_lstm_epoch0.23_0.7888.t7 -gpuid -1 -length 2000 -temperature 1 -seed 423 -types "Enchantment"
    creating an LSTM...
    missing seed text, using uniform probability over first character
    --------------------------
    Q^^/uther {GG} less, you may draw a card, put a +&/+&^ coonters onto target player, untap the top of tho top that cards warlior to your upkeep until end of turn.\{UUUU^}, T: put a +&^/+&^ counters on @.|


    /Users/Frankerson/torch/install/bin/luajit: bad argument #1 to '?' (empty tensor at /Users/Frankerson/torch/pkg/torch/generic/Tensor.c:851)
    stack traceback:
    [C]: at 0x0ebae6e0
    [C]: in function '__index'
    sample.lua:179: in main chunk
    [C]: in function 'dofile'
    ...rson/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:131: in main chunk
    [C]: at 0x010eabc2e0
    Posted in: Custom Card Creation
  • posted a message on Generating Magic cards using deep, recurrent neural networks
    Quote from maplesmall »
    Temperature 0.2 is very low. Try 0.8 or even the default (1) to see if anything different happens, since temperature controls how 'adventurous' the sampler is feeling.
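To illustrate the temperature point, here is a minimal Python sketch of softmax sampling with a temperature divisor (an illustration only, not the actual sample.lua logic):

```python
import math
import random

def sample_with_temperature(logits, temperature=1.0, rng=random):
    """Pick an index from raw scores after dividing them by temperature.

    Low temperature sharpens the distribution (the top-scoring character
    wins almost every time, so samples look safe and repetitive); high
    temperature flattens it (wackier, riskier picks).
    """
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max before exp for numerical stability
    weights = [math.exp(x - m) for x in scaled]
    r = rng.random() * sum(weights)
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if r < acc:
            return i
    return len(weights) - 1
```

At temperature 0.2, a score gap of 1.0 becomes a gap of 5.0 before the softmax, which is why low-temperature dumps repeat the same safe phrases over and over.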

    edit: was generating some enchantments from hardcast's "colour correct" .t7 file, and this was a bit novel:

    Sangstrale Consumption
    3GW
    Enchantment
    Creatures without flying can't block creatures without flying.
    ## Certainly fits the colours. Given how creatures without flying usually can't block creatures with flying anyway, the only things this wouldn't affect are creatures with reach.

    Thunderbomb
    5WB
    Enchantment
    Whenever a creature enters the battlefield, creatures you control gain indestructible until end of turn.
    ## Definitely white, not sure about the black. 7CMC seems good too, for such a powerful effect.



Thanks for the info! I started retraining on your input.txt that's on your Google Drive, and am using the script that Talcos posted on the previous page of the thread (sample_hs.lua). I also increased the rnn_size to 256 before starting my new training; it will take longer, but I think it's worth it.

What is this "colour correct" .t7 file? How did you get it to be more correct with colors? And can I download the .t7? I tried to download some of your .t7 files, but it gave me an error when I used the sample script on them, something about my GPU.
    Posted in: Custom Card Creation
  • posted a message on Generating Magic cards using deep, recurrent neural networks
So I left my computer running all day while I was at work. When I came home it had created 22 epochs. I tried the sample script on the latest one with temperature 0.2, and this is what it gave back:

    Franks-MacBook-Pro:char-rnn-master Frankerson$ th sample.lua cv/lm_lstm_epoch22.56_0.5884.t7 -gpuid -1 -length 2000 -temperature .2 -seed 765
    creating an LSTM...
    missing seed text, using uniform probability over first character
    --------------------------
    Lest ||E S|| Creature Human Soldier | {1}{B} | 2/2 | {T}: Add {1} to your mana pool. | Sacting Sand ||E S|| Creature Human Soldier | {2}{G} | 2/2 | {T}: Target creature gets +1/+1 until end of turn. | Serend Sall ||E S|| Creature Human Wizard | {1}{U} | 1/1 | {T}: Target creature gets +2/+2 until end of turn. | Spell of the Sard ||E S|| Creature Human Wizard | {1}{U} | 1/1 | {T}: $THIS deals 1 damage to target creature or player. | Sartice ||E S|| Creature Human Soldier | {2}{W} | 2/2 | {T}: Target creature gets +1/+1 until end of turn. | Spire of the Sald ||E S|| Creature Human Soldier | {2}{B} | 2/2 | {1}{G}, {T}: Target creature gets +1/+1 until end of turn. | Sarding Starter ||E S|| Creature Human Soldier | {1}{R} | 2/2 | {T}: Add {1} to your mana pool. | Death of the Sear ||E S|| Creature Human Soldier | {1}{R} | 1/1 | {T}: Target creature gets +1/+1 until end of turn. | Share of the Goblin ||E S|| Creature Human Soldier | {2}{R} | 2/2 | {T}: Add {G} to your mana pool. | Sorder Spirit ||E S|| Creature Human Wizard | {2}{R} | 2/2 | {T}: Add {1} to your mana pool. | Crane of the Spire ||E S|| Creature Human Wizard | {1}{U} | 2/2 | {T}: Add {1} to your mana pool. | Spire of the Searth ||E S|| Creature Human Soldier | {1}{U} | 1/1 | {2}{B}, {T}: Target creature gets +1/+1 until end of turn. | Spire of the Sear ||E S|| Creature Human Soldier | {1}{U} | 1/1 | {T}: Target creature gets +1/+1 until end of turn. | Spire of the Flane ||E S|| Creature Human Wizard | {2}{U} | 2/2 | {T}: Add {1} to your mana pool. | Spire Scare ||E S|| Creature Human Soldier | {2}{U} | 2/2 | {T}: Target creature gets +2/+2 until end of turn. | Cole of the Sat ||E S|| Creature Human Soldier | {2}{R} | 2/2 | {T}: Add {G} to your mana pool. | Regenter ||E S|| Creature Human Soldier | {2}{G} | 2/2 | {T}: Add {G} to your mana pool. | Spire Spirit ||E S|| Creature Human Soldier | {2}{U}{R} | 2/1 | {T}: Add {1} to your mana pool. | Spire Spirit


I think there may be something wrong with my neural network or my sample script, because if you read through these, all the creature types are the same, all the abilities are the same, and so on. I've tried changing the seed and trying different epochs, but no matter what I do it seems to result in the same cards (minus the name and mana cost being slightly different each time).

What's causing this? Also, can I get a link to HardCastSixDrops's input file and the best sample script for output? I just want to be sure I'm up to date.
    Posted in: Custom Card Creation
  • posted a message on Generating Magic cards using deep, recurrent neural networks
    Quote from Talcos »
    Quote from um0123 »

    What do you mean by you converted all the numbers to a unary format? Why does that cause the (&^^^^^^)'s to appear?



Numbers are replaced with a number of "^" symbols, the sum of which equals the value of the number that we replaced. The ampersand symbol indicates the start of a number. So you'll see things like

    * "a &^^^^/&^^^^ green beast creature token" == "a 4/4 green beast creature token"
    * "+&^/+&^ until end of turn" == "+1/+1 until end of turn"

    The reason for this is that the network starts with no knowledge of numbers, and that has to be taught to it. We've found that it's easier to teach the network such things when big values are physically larger and small values are physically smaller, and the value of a number is specified using just one symbol. Otherwise you have to teach it extra stuff like the fact that the symbols 1 and 2 equal 3 when put together, which adds unnecessary cognitive burden.

    EDIT: Oh, and in case it proves confusing, every mana symbol is represented by two symbols (e.g. RR -> R). Ikko, the Glass Assassin costs "{^RR^^}", which translates to 3R, not 3RR. This is done to make it easier to represent hybrid mana costs, phyrexian mana costs, etc.
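The substitution Talcos describes is easy to sketch in Python (my own reconstruction, not the project's actual preprocessing script; it ignores the doubled mana symbols from the edit above):

```python
import re

def to_unary(text):
    """Encode every decimal number as '&' plus that many '^' carets,
    so a number's physical length tracks its value."""
    return re.sub(r"\d+", lambda m: "&" + "^" * int(m.group()), text)

def from_unary(text):
    """Decode '&' plus N carets back into the number N."""
    return re.sub(r"&(\^*)", lambda m: str(len(m.group(1))), text)
```

So to_unary("a 4/4 green beast creature token") gives "a &^^^^/&^^^^ green beast creature token", matching the examples above.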



This makes a ton of sense, but now I am confused by the output of my second checkpoint. In my second checkpoint almost all the cards output actual numbers, not carets. So the mana costs I saw in my second checkpoint samples were all something like "3B" or "1U", and similarly the power and toughness were actual numbers, not carets. (Weird that it would do this on the first checkpoint samples but not the second.) Does this mean it's still learning what goes where on a card? For instance, it sees numbers in the p/t, so it puts numbers there as characters instead of carets, because it doesn't realize that p/t is supposed to be larger or smaller on certain cards? Or is the sample script I'm using whack?

Edit: I may not be using the latest sample script, but I don't know where to find the latest one; the one I'm using is whatever was on your github yesterday. Also, how did you get it to make cards with user-made mechanics like authorize? What parameters or training did you set to make that happen?

Also, how do I know if I'm using the right input.txt? I just used the one in the guide I linked, but I'm not sure if that's the best one.
    Posted in: Custom Card Creation
  • posted a message on Generating Magic cards using deep, recurrent neural networks
Thanks for the reply, Talcos! I will definitely look into those links you sent me! I feel it's hard to do machine learning as a hobby these days when it's so fast-paced and there are few textbooks aimed at intermediate-level projects.

What do you mean by you converted all the numbers to a unary format? Why does that cause the (&^^^^^^)'s to appear? Or is it simply because I'm sampling from a very early output and they will go away with time?

Thank you, HardCastSixDrops, for all the help getting my setup running. I can't wait to cube with these crazy cards once they get a bit more coherent. Right now my sample script seems obsessed with "the battlefield".

    Posted in: Custom Card Creation
  • posted a message on Generating Magic cards using deep, recurrent neural networks
So my validation error is just over 1.2, but I only have one checkpoint so far. Going to see what happens when I get home.

My question now is about the sample script.

When I ran it on the only checkpoint, it seemed to just spit out walls of text with no separation between cards or between the different card parts (name, mana cost, type, etc.) except for pipes (this character: |). Besides the pipes, I can't tell what is a name or what is body text.
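A rough way to eyeball such a dump is to split each sampled line on the pipes. This is a hypothetical helper (not part of char-rnn), assuming only that "|" separates fields, as in the dumps in this thread:

```python
def split_fields(line, sep="|"):
    """Split one sampled line on the pipe separator, dropping empty
    chunks (the format also uses doubled pipes) and stray whitespace.
    Which field is the name, cost, type, etc. depends on the input.txt
    encoding, which varies between the files people are training on."""
    return [field.strip() for field in line.split(sep) if field.strip()]
```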

Also, what do you mean when you say Talcos has an improved sampling script that allows you to insert certain values into the text as it's being generated? And why does it matter what format/checkpoints are used? I am using the input.txt that was in the guide I linked; is this not the best input?

Can you explain how to seed the sample script, please? Is it as simple as adding the -seed flag with a number after it? If so, what is the range limit for the number?

    Thanks so much for your help

EDIT: I ran the sample.lua I have (from the original char-rnn github page) on one of your epochs on Google Drive (the epoch22 one), and although the output was much nicer (it separated different cards onto different lines, and each card category (name, type, etc.) was pretty easy to distinguish), one problem I saw was that it used a bunch of carets ('^') to fill in certain categories. Like instead of power/toughness it would have (&^^^^)/(&^^^^), and instead of card type it had (&^^^^^^^^^^^^^^^^^^^) for a lot of them. Is this due to bad training, or is there something wrong with the sampler?

One thing I see is that whenever I run it, it says:

    creating an LSTM...
    missing seed text, using uniform probability over first character


and I don't know if that matters or not.
    Posted in: Custom Card Creation
  • posted a message on Generating Magic cards using deep, recurrent neural networks
Thank you both for the help! Does the sampling file always create the same cards if you give it the same parameters, or is there some randomness to it no matter what you do? For example, if I took some of your checkpoints and ran the sample file on them, would I always get the same cards you got (assuming I used the same parameters)?

Second, what parameters should I use for my sample script?

Third, about how many training sessions (or number of checkpoints) will it take before I get readable cards? Not necessarily ones that make sense, but ones that form sentences, even if the sentences have bad grammar and sometimes don't actually work within the rules of Magic.

Last, if you can recommend a book for learning this stuff, I would love it. I have wanted to learn more about neural networks for computer vision, as well as genetic algorithms, for some time, but I have been busy; now that it's summer I have some time. I know a good deal of C/C++, Python, and Matlab. I took a course in basic machine learning at my uni where we did basic neural networks (but never something as complicated as sentence creation), and I did all the projects in Matlab.
    Posted in: Custom Card Creation
  • posted a message on Generating Magic cards using deep, recurrent neural networks
    Quote from maplesmall »
    Just a shot in the dark, but the file you're using for the sample script doesn't look like the file names that the training outputs. When you ran the training script, did it start outputting lines every few seconds talking about training loss and such, with an increasing number on the left? If yes, be patient and after 2400 batches (the number on the left) you'll have a file you can use for sampling. If no, you have another problem.
    According to that error log, you have wayyyy too slow a PC to use those parameters. You want batches running every few seconds, not minutes. rnn_size 800 is the problem; that's insanely big. Use 128. That should help :)
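For a sense of scale, the parameter count char-rnn prints grows roughly quadratically with rnn_size. A back-of-the-envelope count for its stacked LSTM (my own sketch, reconstructed from the "number of parameters in the model" lines in the training logs in this thread, using the vocab size of 85 and 3 layers seen there):

```python
def char_rnn_params(rnn_size, num_layers=3, vocab_size=85):
    """Rough parameter count for char-rnn's stacked LSTM: per layer,
    an input-to-hidden and a hidden-to-hidden linear map for the four
    gates (each with a bias), plus a final decoder back to the vocab."""
    total = 0
    input_size = vocab_size  # one-hot characters feed the first layer
    for _ in range(num_layers):
        total += 4 * rnn_size * (input_size + 1)  # i2h weights + biases
        total += 4 * rnn_size * (rnn_size + 1)    # h2h weights + biases
        input_size = rnn_size
    total += vocab_size * (rnn_size + 1)          # decoder weights + biases
    return total
```

This gives 385,237 at rnn_size 128 and 13,159,285 at rnn_size 800, the two figures the training logs in this thread report, so the 800 model is roughly 34x the size.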


    Thanks so much for the reply!

What do you mean by "the file you're using for the sample script doesn't look like the file names that the training outputs"? When I ran the training script, it output exactly what I posted in the last quote (the one that had a PANIC). I am going to adjust the rnn size to 128 and try again. Will it end with some sample output when it's done? Meaning, can I only use the sample script to generate cards AFTER the training has completed? When I read the walkthrough I linked to, it made it seem like the sample script runs in parallel with the training script.

edit: I just started to run it with 128 as the rnn size, and it seems to be chugging along! I'm getting an increase in the number on the left (x/55400) every 2 seconds or so. When this is done, is THAT when I want to run the sample script?
Here is some sample output from the script that's running now:

    Franks-MacBook-Pro:char-rnn-master Frankerson$ th train.lua -data_dir data/format/ -gpuid -1 -rnn_size 128 -num_layers 3 -dropout .5
    loading data files...
    cutting off end of data so that the batches/sequences divide evenly
    reshaping tensor...
    data load done. Number of data batches in train: 1108, val: 59, test: 0
    vocab size: 85
    creating an LSTM with 3 layers
    number of parameters in the model: 385237
    cloning rnn
    cloning criterion
    1/55400 (epoch 0.001), train_loss = 4.43341933, grad/param norm = 3.8906e-01, time/batch = 2.45s
    2/55400 (epoch 0.002), train_loss = 4.25976590, grad/param norm = 8.0419e-01, time/batch = 1.92s
    3/55400 (epoch 0.003), train_loss = 3.61447496, grad/param norm = 1.0911e+00, time/batch = 1.90s
    4/55400 (epoch 0.004), train_loss = 3.45101057, grad/param norm = 5.6739e-01, time/batch = 2.11s
    5/55400 (epoch 0.005), train_loss = 3.45110753, grad/param norm = 8.9842e-01, time/batch = 2.31s
    6/55400 (epoch 0.005), train_loss = 3.37637626, grad/param norm = 9.5223e-01, time/batch = 2.27s
    7/55400 (epoch 0.006), train_loss = 3.39045740, grad/param norm = 7.4750e-01, time/batch = 2.24s
    8/55400 (epoch 0.007), train_loss = 3.38928930, grad/param norm = 5.3588e-01, time/batch = 2.02s
    9/55400 (epoch 0.008), train_loss = 3.36567347, grad/param norm = 4.8569e-01, time/batch = 2.46s
    10/55400 (epoch 0.009), train_loss = 3.31335886, grad/param norm = 4.2100e-01, time/batch = 2.16s
    11/55400 (epoch 0.010), train_loss = 3.39155927, grad/param norm = 4.6773e-01, time/batch = 2.12s
    12/55400 (epoch 0.011), train_loss = 3.35951010, grad/param norm = 3.5619e-01, time/batch = 2.05s
    13/55400 (epoch 0.012), train_loss = 3.38585141, grad/param norm = 3.7475e-01, time/batch = 1.93s
    14/55400 (epoch 0.013), train_loss = 3.34184137, grad/param norm = 3.9901e-01, time/batch = 2.03s


Keep in mind that I am AIMING for somewhat ridiculous cards. I don't necessarily want them to make 100% sense, because I think my playgroup would have fun interpreting what "tap the cards on the bottom of your deck" means exactly. Also, I love playing with crazy, ridiculously powerful cards that make no sense.

So what do I do now? Just wait for the training to be complete before I can make some crazy cards?
    Posted in: Custom Card Creation
  • posted a message on Generating Magic cards using deep, recurrent neural networks
Hello! I am super excited by this project. I took a machine learning course in college last year (I am a CS minor) but never learned how to make something generate novel things like Magic cards (all the projects I did were predictions on numbers like miles per gallon, weight, etc.).

I downloaded and installed Torch and followed all the steps as listed here:
    http://www.mtgsalvation.com/forums/creativity/custom-card-creation/612057-generating-magic-cards-using-deep-recurrent-neural?page=21#c512

and everything seems to be set up right. But now I am trying to run it and am not sure what to do.

I ran this script in one tab (btw I am on OS X, but it shouldn't really matter):
    th train.lua -data_dir data/format -gpuid -1 -rnn_size 800 -num_layers 3 -dropout 0.5

where data/format contains a file called input.txt that I was told to download.

Then I ran this command in a new terminal tab:
    th sample.lua cv/run1temp7.txt -gpuid -1 -length 2000 -temperature 0.7

but what happened is the second command came back with this:

    /Users/Frankerson/torch/install/bin/luajit: ...rs/Frankerson/torch/install/share/lua/5.1/torch/File.lua:199: read error: read 0 blocks instead of 1 at /Users/Frankerson/torch/pkg/torch/lib/TH/THDiskFile.c:302
    stack traceback:
    [C]: in function 'readInt'
    ...rs/Frankerson/torch/install/share/lua/5.1/torch/File.lua:199: in function 'readObject'
    ...rs/Frankerson/torch/install/share/lua/5.1/torch/File.lua:305: in function 'load'
    sample.lua:84: in main chunk
    [C]: in function 'dofile'
    ...rson/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:131: in main chunk
    [C]: at 0x010ac482e0


and soon after, the first tab (where I started the training) came back with this:

    vocab.t7 and data.t7 do not exist. Running preprocessing...
    one-time setup: preprocessing input text file data/format/input.txt...
    loading text file...
    creating vocabulary mapping...
    putting data into tensor...
    saving data/format/vocab.t7
    saving data/format/data.t7
    loading data files...
    cutting off end of data so that the batches/sequences divide evenly
    reshaping tensor...
    data load done. Number of data batches in train: 1108, val: 59, test: 0
    vocab size: 85
    creating an LSTM with 3 layers
    number of parameters in the model: 13159285
    cloning rnn
    cloning criterion
    1/55400 (epoch 0.001), train_loss = 4.49767269, grad/param norm = 2.1800e-01, time/batch = 145.08s
    PANIC: unprotected error in call to Lua API (not enough memory)


Clearly the second issue is a problem with RAM on my computer (I only have 4 GB). Is there any way I can reduce the RAM usage and just let it take longer? Like, I could set it up before I go to work and come home 9 hours later to some sweet, ridiculous cards.

For the first issue, why did it not output anything? What was I doing wrong?

I hope you can help! I really am excited about this. I am hoping to generate cards that aren't fully comprehensible, so I can make a crazy draft set and we can have a designated judge determine what it means to "tap a player", etc.
    Posted in: Custom Card Creation
  • posted a message on The state of control in modern.
    Quote from MCd
    Tron is NOT a control deck, it's a RAMP deck.

    Also, would you say that there is no such thing as a Methodist because it's a very narrow definition of the Christian religion?


Mono-U Tron is completely a control deck.

GR Tron is not the only Tron out there, you know...
    Posted in: Modern
  • posted a message on [Primer] RUG Scapeshift
Hey, can anyone give me some informed opinions about Wurmcoil Engine's use in this deck?
    Posted in: Combo