Based on some advice, I messed around a bit with Magic Set Editor 2, but in the long run decided it wasn't for me and I didn't like how it reduced the quality. I'll admit Photoshop is more work, but I prefer the level of control it gives me. That said, here are a few new cards, some generated by other people, with slight changes for balance/flavor.
Edit: I'm just leaving the symbol at all-rares because I'm not bothering with trying to classify rare/uncommon/common. I was going to remove it entirely but it looked weird. But yeah, I'll be going through everybody's cards and if any jump out at me I'll post em.
Hey guys. I've been stalking this thread for a while, although I've never posted on it yet. I'm trying to install the thing on an Ubuntu machine, but I can't get any of the Torch things to install right. No errors popped up during installation, but when I try to use the "luarocks install x" commands, I keep getting an error. Any ideas?
Welcome! Could you elaborate on what error you're seeing? We might have encountered it before and found a way around it.
I'm getting a "No results matching query were found" error on both of the "luarocks install" commands.
I'm currently running on 64-bit, but I had the same problem on a 32-bit system. It's possible that I installed something wrong, since I only used the 32-bit system for my first attempt, so I'll go try it again.
Hello again! I just wanted to share some of the work I've been doing on a mapping of cards to content vectors, which, as we've discussed, could prove very useful if we want to do art generation or more precise analytics on card dumps. For this, I'm using Google's word2vec tool, specifically its implementation of the continuous-bag-of-words (CBOW) model. The idea is that we train a feedforward neural network to map words in a text to vectors such that words that show up in similar contexts have similar vectors. The vectors represent collections of anonymous features that describe syntactic and semantic relationships between words. I mentioned this briefly, in a very vague way, in an earlier post.
With this model, word-level semantics are at least weakly preserved through the linear combination of vectors. That is, when you add vectors together, you get a new vector that combines the meaning of the original two. For example, if you combine the vectors for "french" and "river", the closest matches you get are words like "loire", "garonne", and "scheldt", all of which are names of rivers in France. Impressive! However, while this power is amazing, it gets weaker as you add more words. So if you were to take a book about George Washington, sample the text, vectorize the samples, and then try to decode the meaning based on those vectors, you'd find it gets harder as the sample size grows:
v(a single sentence): George Washington was an American president.
v(a few paragraphs): A famous/infamous person/dog/inanimate object was a leader/general/CEO in a war/conflict/corporate merger.
v(the whole book): This is a book about important things.
Fortunately for us, our sampling is done over single cards, so we shouldn't run into that kind of trouble.
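To make the composition idea concrete, here's a toy sketch. The four-dimensional "embeddings" below are hand-picked assumptions purely for illustration; real word2vec vectors are learned from a corpus and typically run 100-300 dimensions.

```python
import numpy as np

def unit(v):
    return v / np.linalg.norm(v)

# Toy 4-dim "embeddings", hand-picked for illustration (real word2vec
# vectors are learned and typically 100-300 dimensions wide).
vecs = {
    'french': np.array([1.0, 0.1, 0.0, 0.0]),
    'river':  np.array([0.0, 1.0, 0.1, 0.0]),
    'loire':  np.array([0.7, 0.7, 0.1, 0.0]),
    'berlin': np.array([0.0, 0.0, 0.1, 1.0]),
}

# Adding "french" + "river" yields a vector whose nearest neighbour
# (by cosine similarity) is "loire", not "berlin".
query = unit(vecs['french'] + vecs['river'])
sims = {w: float(np.dot(query, unit(v)))
        for w, v in vecs.items() if w not in ('french', 'river')}
```

With real embeddings the same dot-product ranking is what the word2vec `distance` tool computes.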
So, first, I take an encoded version of the original corpus of Magic cards and I flatten and clean the input so as to make it easier to train a vector model on the words.
Example of original card:
|liar's pendulum||artifact||||{^}|{^^}, T: name a card. target opponent guesses whether a card with that name is in your hand. you may reveal your hand. if you do and your opponent guessed wrong, draw a card.|
After preprocessing:
artifact { ^ } { ^ ^ } , T : name a card . target opponent guesses whether a card with that name is in your hand . you may reveal your hand . if you do and your opponent guessed wrong , draw a card .
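The flattening step can be sketched like this. The pipe-field positions are an assumption inferred from the single example card above (0 = name, 2 = type, 6 = cost, 7 = body text); the real encoder may lay fields out differently.

```python
import re

def flatten_card(line):
    # Field positions are an assumption inferred from the example card
    # shown above (0 = name, 2 = type, 6 = cost, 7 = body text).
    fields = line.strip('|').split('|')
    text = ' '.join([fields[2], fields[6], fields[7]])
    # Give every non-word symbol its own token; words keep apostrophes.
    tokens = re.findall(r"[a-z']+|[^\sa-z']", text, flags=re.I)
    return ' '.join(tokens)
```

Running this on the Liar's Pendulum encoding above reproduces the preprocessed line shown.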
We'll call the reduced version convertedcorpus.txt. Then, having compiled word2vec, I run...
And then I have the vector model, a mapping from words in Magic to vectors corresponding to their "meaning". The word2vec package provides scripts for probing this vector model. For example, I can do the following:
$ ./distance.exe vectors_cbow.bin
Enter word or sentence (EXIT to break): land
Word: land Position in vocabulary: 67
Word Cosine distance
-------------------------------
tapped 0.649102
gate 0.601231
adds 0.556503
mountain 0.553767
taps 0.549006
produced 0.533038
forest 0.514023
basic 0.505879
produces 0.493072
island 0.479219
swamp 0.475245
pool 0.472690
land's 0.460673
produce 0.455295
add 0.454526
type 0.447234
lands 0.436638
So all of these words have vectors similar to land because they occur in similar contexts. Note that these are very fuzzy associations; the word2vec approach is usually applied to massive corpuses, like the entirety of Wikipedia, and you tend to get much better results in those cases. However, it'll work well enough for our purposes.
Next I go back through the encoded corpus and produce a list with elements of the form (name, v), where name is the name of a card and v is the normalized sum of the vectors for each of the words in that card. I then wrote a small script that lets me put in an encoded card and see which cards in the original set of Magic cards most closely match it according to cosine similarity: 1 means identical, -1 means diametrically opposed (0 means unrelated).
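That pipeline boils down to a few lines. This is a minimal sketch, assuming we already have a dict mapping words to numpy vectors; the names here are illustrative, not Talcos's actual script.

```python
import numpy as np

def card_vector(tokens, word_vecs):
    # Normalized sum of the vectors for every known word on the card.
    v = np.sum([word_vecs[t] for t in tokens if t in word_vecs], axis=0)
    return v / (np.linalg.norm(v) + 1e-9)

def closest_cards(query_vec, card_vecs):
    # With unit-length vectors, cosine similarity is just a dot product.
    scored = [(float(np.dot(query_vec, v)), name)
              for name, v in card_vecs.items()]
    return sorted(scored, reverse=True)
```

Querying a card against itself should rank it first with similarity ~1.0, which is a handy sanity check.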
For example, I take the text of the card Vivid Meadow and the top ten matches I get are:
So the top results contain cards that are semantically similar to Vivid Meadow according to our vector model. Notice that all the Vivid lands are there, but that Tendo Ice Bridge comes before Vivid Crag and Vivid Marsh. This is probably because the Meadow produces white mana while the Crag and the Marsh produce enemy-colored mana, which drives them slightly further apart (but not by much). This matters because a word-based string comparison would tell us that all the Vivid lands are equally related, while the vector representation tells us a different story.
We can use this approach to evaluate cards produced by the network. This allows us to pinpoint what cards the network is drawing inspiration from when it creates a new card.
For example, we have the card:
|sea sun||artifact||||{^^^^}|{^^^}, T: target creature gets +X/+X until end of turn, where X is the number of forests you control.|
and our model tells us that the closest matches are
(0.9313368638747992, "belbe's armor")
(0.9260633350305565, 'wine of blood and iron')
(0.9153528198457781, 'viridian lorebearers')
(0.9147972776855005, 'elder of laurels')
(0.9138350885373167, 'strength of cedars')
(0.9090302839871085, 'puffer extract')
(0.9015582629952401, 'dragon throne of tarkir')
(0.8942363749768294, 'chimeric staff')
(0.8664127593517529, 'feral animist')
(0.8651477790760224, 'nantuko mentor')
As we can see, there is a fairly strong relationship between Sea Sun and Belbe's Armor, but it's not so similar that we could consider it a clone or near-clone (compare to Vivid Meadow and Vivid Creek which have a similarity score of 0.9925000591622002).
This technique is especially helpful for making sense of complex, highly novel cards, like the following:
|brain the sanctum|legendary|enchantment artifact||||{^^WW}|whenever a creature enters the battlefield under your control, you may have it deal &^ damage to target creature or player.\{^^^^^^}, T: destroy target nonblack creature. it can't be regenerated.|
Notice that there are no close matches, only partial ones. If we want to pull highly novel cards out of a dump from the network, we can make a pass over the dump and retain cards whose closest matches fall below a certain threshold.
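That filtering pass could look like the sketch below. The 0.85 threshold is a guess to be tuned by eye, and all vectors are assumed to be unit-length (as produced by the normalized-sum step above).

```python
import numpy as np

def novel_cards(dump_vecs, corpus_vecs, threshold=0.85):
    # Keep generated cards whose best cosine match against the original
    # corpus falls below the threshold; vectors are assumed unit-length,
    # so cosine similarity is a plain dot product.
    novel = []
    for name, v in dump_vecs.items():
        best = max(float(np.dot(v, cv)) for cv in corpus_vecs.values())
        if best < threshold:
            novel.append((name, best))
    return novel
```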
Anyway, that's what I came up with this morning. I'll see about getting all of this packaged up and made available for your entertainment later. Let me know what you think!
P.S. I think this sort of approach could be useful for a card recommendation system for deck building. Just a thought.
I finally have a very rough proof of concept prototype website here.
The only feature at the moment is generating graphical cards from nn output based on hardcast's format (I am using his code after all), with a bit of hacking to make it a python3 package.
I am also using Nafnlaus's image search code but I am getting slow search speeds on my vps so you may run into 504 errors. If that happens try again.
I'm also being a very bad net citizen and leeching images, so I have restricted the system to the first six cards in the list.
Github link for source code and installation instructions. The instructions assume familiarity with git, python, and pip.
Plans for the next versions:
1. Fix my nginx config so I don't need to write two functions for every page.
2. Contact mtgcardsmith.com to see if they have an API I can use. If not, I will have to write my own card image generation code.
3. Write a basic image cache system.
4. Improve text output on the cards.
2 and 3 will be required before I expand the six-card limit. Leeching is bad and I feel gross for even doing it x.x
Any chance at getting this code set up on a server with a simple UI so people can generate sets without running it on their own machines?
100% chance I need to get some of those fundamentals ready first. I messaged the mtgcardsmith admin to see how open they are to me using their site or backend (turns out we live in the same city; small world!). Being a good net citizen is important to me, so I want to make sure I'm not leeching graphics or server performance from others without their permission.
Talcos (and others), I'm curious about the possible ways these trained brains could be categorized so a user can select a brain to build their own custom set. One obvious way is how many epochs the brain has been trained. I'd also like to start laying the groundwork for hosting the trained snapshots in an organized fashion. Google Drive is convenient, but there are other options too.
Proud to be saving the world since 1984 -- I also have an open source website to make AI generated magic cards. Source code
For the record, I am investigating a way for us to have art generation without having to rely on outside sources to pull in images each time. The process would more or less work like this:
* The card generation network produces a card.
* A content vector is generated from the text of that card (just as I demonstrated in my last post).
* The content vector is used to prime another network that generates the artwork. Sort of like a fuzzy, keyed retrieval of artwork (e.g. oh, the vector indicates that this is a red goblin card, need to make the art all red and goblin-y). There is literature out there that demonstrates that this is not too difficult to do.
But until we get to that point, we can rely on outside sources, provided we can give proper attribution to the artists where feasible.
Anyway, as for the categorization, my hope is that we'll arrive at a final network that can serve our needs and we can then just parameterize the sampling of that network to generate desirable results. But if you want to offer a range of networks, you could categorize them by their width (cells per layer), depth (number of layers), and training parameters (epochs, learning rate, and so on). At some point, we will need to categorize the snapshots better, though thus far we've been changing up the way that input is organized and the training parameters each time to figure out what works best. It's still an ongoing, evolutionary process.
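For example, a snapshot's categorization could be captured in a simple metadata record filed alongside each checkpoint. The field names and values below are just a suggestion (the loss figure is the epoch-16 value reported later in the thread), not an agreed-upon scheme.

```python
# Hypothetical metadata to file alongside each trained snapshot, so
# users can pick a "brain" by its architecture and training history.
snapshot = {
    'layers': 3,             # depth (number of layers)
    'rnn_size': 256,         # width (cells per layer)
    'epochs': 50,
    'learning_rate': 2e-3,
    'corpus': 'convertedcorpus.txt',
    'validation_loss': 0.4983,
}
```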
So, I think I found the problem with my installation. When I run the last command to install Torch, I get a bunch of "Error: No such file or directory" when it tries to install Luarocks.
Oh, man. You just hit the card name jackpot. Those are pretty much all solid gold. Screamfire Crusher? Sounds like something Gruul would want yesterday. Chile of Renwood...? Meh. But MAGISWING BLIZZARGE?! I'd hate to run afoul of a MAGISWING BLIZZARGE. That's gotta be the best name I've read in this thread. Frural Champion is just fun to say. Then there's the mysterious Fangelkor Deade, who's clearly one of the most badass warriors in all of Nipher. Speaking of Nipher, I hear they have Craw Unicorns.
Very odd. Can you share with us the specific error messages that you're getting and what you're doing when you're getting them? As in, copy and paste the text from the terminal?
---
By the way, mwsgames.com offers low-quality scans of all card art for use with Magic Workstation. Honestly, low-quality is preferable to me because it's easier to train on. I'm looking through the files, and this looks like exactly what we need. The images are crappy enough that I don't feel bad about infringing on anyone's intellectual property, and the products of the network will be highly derivative works anyway.
The plan is to produce a convolutional neural network that maps a content vector of size 200, with values ranging between -1 and 1, to raw images of size 221 x 163. The process would be similar to that of Dosovitskiy et al. (2015), who used convolutional neural networks to generate convincing chairs (as in chairs you sit on). Instead of producing chairs, we're producing Magic art.
I've attached a diagram from that paper since it's easier to show you what would happen rather than attempting to explain it in just one post. In that diagram, on the left, the user feeds a description of the chair they want (the class) followed by the angle at which they want the chair to be rendered, and other modelling parameters like lighting. Each of the convolutional layers (in blue) progressively unfolds and expands upon the image-to-be until we finally have the chair we want.
More or less, that's what we'd do for Magic cards. The Torch framework already supports convnets, and this doesn't seem like it'd be that difficult to set up. At the same time, I have a busy schedule, so it might be a bit before I can get to that. However, I just wanted you to know that that was on the horizon.
Now, given the relatively small number of images we have to work with, I can't promise that the images produced will be completely coherent. However, they will be interesting, and they will be related semantically to the cards for which they are generated. A dragon card should at least have one or more dragon blobs floating about in the image, breathing fire, that sort of thing.
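As a shape-level sketch only: the code below uses untrained random weights and nearest-neighbour upsampling standing in for learned up-convolutions, just to show how a 200-dim content vector "unfolds" into a 221 x 163 image. A real generator (in Torch or otherwise) would learn all of these weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# Untrained stand-in weights; a real generator would learn these.
W = rng.standard_normal((200, 8 * 13 * 10)) * 0.01

def generate(content_vec):
    # Project the 200-dim content vector to a coarse 8-channel 13x10
    # feature map...
    h = np.tanh(content_vec @ W).reshape(8, 13, 10)
    img = h.mean(axis=0)          # collapse channels for the sketch
    # ...then repeatedly upsample (nearest-neighbour here; a trained
    # "up-convolution" would do this with learned filters) until we
    # pass the 221x163 target, cropping the overshoot.
    for _ in range(5):            # 13*2^5 = 416, 10*2^5 = 320
        img = img.repeat(2, axis=0).repeat(2, axis=1)
    return img[:221, :163]
```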
Just so I'm summarizing correctly: a separate nn is required for images because the connections are fundamentally different? In other words, one brain can't handle both text and images (on consumer computers, anyway).
cd ~/torch; ./install.sh
Prefix set to /home/joshua/torch/install
-- Found readline library
-- LuaJIT Target: x86
-- LuaJIT target x86
-- Configuring done
-- Generating done
-- Build files have been written to: /home/joshua/torch/build
[ 3%] Built target minilua
[ 26%] Built target buildvm
[ 61%] Built target libluajit
Linking C executable luajit
CMakeFiles/luajit.dir/lj_vm.s.o: In function `lj_BC_POW':
buildvm_x86.dasc:(.text+0x84b): undefined reference to `lj_wrap_pow'
CMakeFiles/luajit.dir/lj_vm.s.o: In function `lj_ff_math_log':
buildvm_x86.dasc:(.text+0x269c): undefined reference to `lj_wrap_log'
CMakeFiles/luajit.dir/lj_vm.s.o: In function `lj_ff_math_log10':
buildvm_x86.dasc:(.text+0x26cb): undefined reference to `lj_wrap_log10'
CMakeFiles/luajit.dir/lj_vm.s.o: In function `lj_ff_math_exp':
buildvm_x86.dasc:(.text+0x26fa): undefined reference to `lj_wrap_exp'
CMakeFiles/luajit.dir/lj_vm.s.o: In function `lj_ff_math_sin':
buildvm_x86.dasc:(.text+0x2729): undefined reference to `lj_wrap_sin'
CMakeFiles/luajit.dir/lj_vm.s.o: In function `lj_ff_math_cos':
buildvm_x86.dasc:(.text+0x2758): undefined reference to `lj_wrap_cos'
CMakeFiles/luajit.dir/lj_vm.s.o: In function `lj_ff_math_tan':
buildvm_x86.dasc:(.text+0x2787): undefined reference to `lj_wrap_tan'
CMakeFiles/luajit.dir/lj_vm.s.o: In function `lj_ff_math_asin':
buildvm_x86.dasc:(.text+0x27b6): undefined reference to `lj_wrap_asin'
CMakeFiles/luajit.dir/lj_vm.s.o: In function `lj_ff_math_acos':
buildvm_x86.dasc:(.text+0x27e5): undefined reference to `lj_wrap_acos'
CMakeFiles/luajit.dir/lj_vm.s.o: In function `lj_ff_math_atan':
buildvm_x86.dasc:(.text+0x2814): undefined reference to `lj_wrap_atan'
CMakeFiles/luajit.dir/lj_vm.s.o: In function `lj_ff_math_sinh':
buildvm_x86.dasc:(.text+0x2843): undefined reference to `lj_wrap_sinh'
CMakeFiles/luajit.dir/lj_vm.s.o: In function `lj_ff_math_cosh':
buildvm_x86.dasc:(.text+0x2872): undefined reference to `lj_wrap_cosh'
CMakeFiles/luajit.dir/lj_vm.s.o: In function `lj_ff_math_tanh':
buildvm_x86.dasc:(.text+0x28a1): undefined reference to `lj_wrap_tanh'
CMakeFiles/luajit.dir/lj_vm.s.o: In function `lj_ff_math_pow':
buildvm_x86.dasc:(.text+0x28e5): undefined reference to `lj_wrap_pow'
CMakeFiles/luajit.dir/lj_vm.s.o: In function `lj_ff_math_atan2':
buildvm_x86.dasc:(.text+0x2929): undefined reference to `lj_wrap_atan2'
CMakeFiles/luajit.dir/lj_vm.s.o: In function `lj_ff_math_fmod':
buildvm_x86.dasc:(.text+0x296d): undefined reference to `lj_wrap_fmod'
collect2: error: ld returned 1 exit status
make[2]: *** [exe/luajit-rocks/luajit-2.1/luajit] Error 1
make[1]: *** [exe/luajit-rocks/luajit-2.1/CMakeFiles/luajit.dir/all] Error 2
make: *** [all] Error 2
./install.sh: line 64: /home/joshua/torch/install/bin/luarocks: No such file or directory
./install.sh: line 67: /home/joshua/torch/install/bin/luarocks: No such file or directory
./install.sh: line 68: /home/joshua/torch/install/bin/luarocks: No such file or directory
./install.sh: line 69: /home/joshua/torch/install/bin/luarocks: No such file or directory
./install.sh: line 71: /home/joshua/torch/install/bin/luarocks: No such file or directory
./install.sh: line 72: /home/joshua/torch/install/bin/luarocks: No such file or directory
./install.sh: line 73: /home/joshua/torch/install/bin/luarocks: No such file or directory
./install.sh: line 74: /home/joshua/torch/install/bin/luarocks: No such file or directory
./install.sh: line 75: /home/joshua/torch/install/bin/luarocks: No such file or directory
./install.sh: line 76: /home/joshua/torch/install/bin/luarocks: No such file or directory
./install.sh: line 77: /home/joshua/torch/install/bin/luarocks: No such file or directory
./install.sh: line 78: /home/joshua/torch/install/bin/luarocks: No such file or directory
./install.sh: line 79: /home/joshua/torch/install/bin/luarocks: No such file or directory
./install.sh: line 80: /home/joshua/torch/install/bin/luarocks: No such file or directory
./install.sh: line 81: /home/joshua/torch/install/bin/luarocks: No such file or directory
./install.sh: line 82: /home/joshua/torch/install/bin/luarocks: No such file or directory
./install.sh: line 83: /home/joshua/torch/install/bin/luarocks: No such file or directory
./install.sh: line 93: /home/joshua/torch/install/bin/luarocks: No such file or directory
./install.sh: line 94: /home/joshua/torch/install/bin/luarocks: No such file or directory
./install.sh: line 95: /home/joshua/torch/install/bin/luarocks: No such file or directory
./install.sh: line 96: /home/joshua/torch/install/bin/luarocks: No such file or directory
./install.sh: line 97: /home/joshua/torch/install/bin/luarocks: No such file or directory
./install.sh: line 98: /home/joshua/torch/install/bin/luarocks: No such file or directory
./install.sh: line 99: /home/joshua/torch/install/bin/luarocks: No such file or directory
./install.sh: line 100: /home/joshua/torch/install/bin/luarocks: No such file or directory
./install.sh: line 101: /home/joshua/torch/install/bin/luarocks: No such file or directory
./install.sh: line 102: /home/joshua/torch/install/bin/luarocks: No such file or directory
./install.sh: line 105: /home/joshua/torch/install/bin/luarocks: No such file or directory
Do you want to automatically prepend the Torch install location
to PATH and LD_LIBRARY_PATH in your /home/joshua/.bashrc? (yes/no)
[yes] >>>
Up until this point, I haven't had any errors occur.
Also, the image generation thing sounds awesome. I can't wait to see how it works.
Yes, a separate network is needed. But then, that's why your visual cortex is wired differently from the part of your brain that enables you to understand how layers work in Magic. The network that generates the text feeds the mechanism that generates the content vectors, which in turn feeds the network that generates the art, so they function as a coherent whole. You could strengthen that bond with feedback, where images inspire cards, but I think the one-way pipeline is the most straightforward way of accomplishing what we want.
EDIT: Programmer_112, I'll look into your problem. In the meantime, if anyone else has any ideas, feel free to contribute.
Well, I finally caved and installed an Ubuntu dual-boot. I'm still attempting to make Torch run on Windows but it's a long, arduous process which I don't see ending any time soon. Or maybe even succeeding.
Anyway, I got CUDA going on Ubuntu and the relevant Torch libraries installed, and it appears I can do batches at 3 layers and rnn size 256 in 0.19 seconds. That means I'll finish a full 50 epochs in 2.3 hours! This is wayyyy faster than my virtual Ubuntu setup, so it's certainly worth it. I tried rnn sizes of 300 and above, but got out-of-memory errors, which makes sense given my measly 1 GB of graphics memory. At least now, if I ever upgrade my GPU, it'll help both my gaming and my neural network runs.
I'll run some samples once I get some checkpoints out and post any good cards here, as usual. I may not be able to run 512-rnn size training runs but I'm glad my gpu actually works for this sort of thing.
Can the corpus vectors be used to inform the neural net of potentially good choices during card generation? Perhaps by weighting the potential word selections with their cosine distance, or some function thereof?
Example: Penalize each choice in proportion to its distance from some experimentally determined vector sweet spot. Telling the network to throw out bits that are too near or too far from verbatim bits of corpus might also be useful.
So I had a weird result: after 12 epochs the validation loss started going up. At 4 epochs it was 0.51, at 8 it was 0.48, at 12 it was 0.49, and by epoch 50 it had climbed to 0.6677. What causes that to happen? (I was training with 3 layers, rnn size 256, and all other parameters at their defaults.)
Nevertheless, I got some interesting cards from epoch 16, since its loss was close to the minimum at 0.4983. So here we go...
living tear UU
enchantment ~ aura
enchant land
enchanted land has "T: Counter target spell." ## This... seems powerful to me. Possibly too powerful.
fungust vici 3
artifact ~ equipment
equip 4
equipped creature gets +1/+0 and has protection from black.
when you cast @, target creature gets +2/+2 until end of turn. ## Anti-black equipment? Why not. I like the clause when cast, that's new and interesting.
molderwull center 1R
enchantment ~ aura
flash
enchant creature T: prevent the next 2 damage that would be dealt to target creature or player this turn. 2: return @ to its owner's hand. ## I really like this. It's probably entirely too undercosted for what it can do, but it sure is versatile.
exckictiber imp B
creature ~ imp
flying
whenever @ deals combat damage to a player, that player sacrifices a creature.
when you cast @, exile all zombies.
modular 1
1/1 ## Wowza. Add maybe 3 to this one for balance. It's... a black zombie-hate card? With modular for good measure, so it's actually a 2/2 for all intents and purposes.
grave begonerator 4GWUB
creature ~ human rebel berserker
whenever @ attacks, put a +1/+1 counter on it.
7/7 ## This guy... has 4 colours. Not often does the network pull 4 colours into one card. 8CMC for a constantly-growing 7/7? Could be worth it.
damaro, arthi town legend WWUBRGW
legendary creature ~ elf archer
reach
at the beginning of your upkeep, sacrifice @ unless you sacrifice two lands.
9/7 ## Never mind, this thing wins. Meet Chromanticore's daddy, everyone! I like his old-style 'sac some lands to keep him in play'. The network also seems to have figured out reach and archers tend to go together. Frankly, this card is awesome.
reckless predator 1W
creature ~ human knight
flying 2G: @ gets +2/+2 until end of turn for each creature blocking @.
whenever @ deals combat damage to a player, that player discards two cards.
1/1 ## I feel like this guy's cost should be more... 3B, mostly for his discard ability. The blocking clause is terrific; it's even on colour (from Gang of Elk). Seems to be a bit counter-productive to his flying though.
drake infantry 3
artifact creature ~ octopus assassin 2, sacrifice @: target opponent discards a card.
2/2 ## That creature subtype. Oh, that creature subtype.
Very good points, I love it! Like a filter. Perhaps we could have a network generating cards, an encoder converting them into vectors, and a decoder that produces a clean if conservative version of whatever card the original network made. Yes, it is definitely possible. But there are some complications with that, certain challenges that we would have to face.
EDIT: I may have skipped a step in my explanation here. We can't really do the vector predictions iteratively because the vectors don't tell the network what comes next, just what could occur before or after the current word within a certain context window. It's easier to consider the network's choices in hindsight than to try to control what it produces via the vectors.
For example, let's assume we have this card (that I just made up for the sake of example):
Attended Faultless 1G
Creature - Elf
When Attended Faultless enters the battlefield, put a 1/1 black Goblin creature token onto the battlefield.
The Faultless make a point to travel with their ugliest servants, to make themselves seem more beautiful by comparison.
2/2
We compute the vectors for each word/symbol in the encoded form and we get the following vector:
Those 200 numbers encode the semantic content of the card. If we compare those numbers with those of existing Magic cards, we see that the card occupies the same design space as the following cards:
A lot of cards come close, but the fact that this is a green card that makes black goblin tokens causes it to stand out from its peers.
Now, the question is, can we get back to the card from the vector? The answer is no, not with how I have things set up. To get the content vector, we computed the normalized sum of the vectors for each word. Addition is commutative, so those words could have occurred in any order (just as (3 + 5) = 8 and (5 + 3) = 8). Now, assume that we have a decoder network, a network that can map content vectors to cards. It won't produce garbage output (it's smart), but it does have to contend with several equally likely choices:
* Is it a goblin that makes elf tokens or an elf that makes goblin tokens?
* Is it a 2/2 creature that makes a 1/1 token or a 1/1 that makes a 2/2 token?
Assuming that it has no bias about any of those choices, you'll only get back the original text of the card 25% of the time. Now, if the decoder is really smart, it'll see that a green goblin is possible but less likely than a green elf and a black goblin is more likely than a black elf, but there are enough black elves and green goblins that it can't rule out any possibilities. As such, the real decoder will probably give you back the original card slightly more than 25% of the time, maybe in the 30%-40% range.
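The order-blindness is easy to see numerically; any permutation of the same tokens sums to the same content vector, so two differently worded cards can collapse to one vector:

```python
import numpy as np

rng = np.random.default_rng(1)
tokens = rng.standard_normal((12, 200))   # vectors for 12 card tokens

v_forward = tokens.sum(axis=0)
v_reversed = tokens[::-1].sum(axis=0)     # same tokens, reversed order

# The two "cards" are indistinguishable to the decoder.
same = np.allclose(v_forward, v_reversed)
```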
However, there IS a way around this. I read a paper the other day (Mou et al. 2015) showing that you can preserve structural information by weighting vectors according to where they occur in the structure (e.g. v(elf) * TYPE + v(goblin) * BODYTEXT). I skipped over attempting that for the time being because I was mainly interested in getting artwork from the content vectors. The only difference it would make there is dictating which creature needs to be in the foreground and which in the background; I'll be happy if the art is coherent, and I'm less concerned about getting the right scene composition (that can come later).
But having a field weighting scheme would make it possible to have a network that cleans or filters garbage cards in the way that you are suggesting. That's a very clever idea, and it's one that we can explore in the future. On a related note, it would also make it possible for us to have an encoder that maps artwork to vectors and then we could map vectors to cards (so we could show the network novel artwork and have it dream up cards appropriate for that artwork). That's also something else that we could consider.
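A sketch of that weighting scheme follows. The field names and weight values are made up; the point (after Mou et al.) is only that each field gets its own weight, which breaks cross-field commutativity so the decoder can tell an elf-that-makes-goblins from a goblin-that-makes-elves.

```python
import numpy as np

# Hypothetical per-field weights; the specific values are assumptions.
FIELD_WEIGHTS = {'type': 2.0, 'bodytext': 1.0}

def field_weighted_vector(vecs_by_field):
    # Sum each field's word vectors, scale by that field's weight,
    # then normalize the combined result.
    total = sum(FIELD_WEIGHTS[f] * np.sum(vs, axis=0)
                for f, vs in vecs_by_field.items())
    return total / (np.linalg.norm(total) + 1e-9)
```

Swapping which word sits in which field now produces a different content vector, unlike the plain normalized sum.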
It could have something to do with the learning rate. If the learning rate is too high, it's possible for the network to reach a near-optimal state and then to diverge from it as it attempts to make further improvements (it starts over-engineering some aspects of its predictions at the expense of others). That'd be my hypothesis. We may be able to fix that sort of thing by tweaking the parameters a little.
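Incidentally, that pattern (validation loss bottoming out and then creeping back up) is also the classic cue for early stopping: keep whichever checkpoint scored best on validation data and ignore the later ones. Here's a minimal Python sketch of that selection logic; the helper name and the loss numbers are invented for illustration, not taken from the char-rnn code:

```python
# Minimal early-stopping sketch: keep the epoch with the best validation
# loss and stop once it hasn't improved for `patience` evaluations.
# The loss numbers below are made up to mimic the run described above.

def pick_best_checkpoint(val_losses, patience=3):
    """Return (best_epoch, best_loss); stop after `patience` epochs
    with no improvement over the best loss seen so far."""
    best_epoch, best_loss = 0, float("inf")
    stale = 0
    for epoch, loss in enumerate(val_losses, start=1):
        if loss < best_loss:
            best_epoch, best_loss = epoch, loss
            stale = 0
        else:
            stale += 1
            if stale >= patience:
                break
    return best_epoch, best_loss

# Shaped like the run above: loss bottoms out, then climbs.
losses = [0.51, 0.50, 0.48, 0.49, 0.52, 0.55, 0.60]
print(pick_best_checkpoint(losses))  # -> (3, 0.48)
```

In practice you'd sample from the best checkpoint on disk rather than the last one the training run produced.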
Damaro amuses me immensely. That is awesome (though honestly he could use a few extra abilities to make up for the fact that you have to sacrifice lands to keep using him). This is one of those situations where the network understands and yet doesn't understand the game: if it's an archer, it makes sense to give it reach. But reach is not so helpful when the sacrifice clause makes you want to attack with him every turn to get the most out of the card.
Fungust Vici is very interesting. I like the idea of an equipment that has a "when cast" clause appended to it. It might also be good to put that sort of thing on an aura, so you at least get something out of the spell if the opponent kills the aura's target in response.
So, I think I found the problem with my installation. When I run the last command to install Torch, I get a bunch of "Error: No such file or directory" when it tries to install Luarocks.
Very odd. Can you share with us the specific error messages that you're getting and what you're doing when you're getting them? As in, copy and paste the text from the terminal?
cd ~/torch; ./install.sh
Prefix set to /home/joshua/torch/install
-- Found readline library
-- LuaJIT Target: x86
-- LuaJIT target x86
-- Configuring done
-- Generating done
-- Build files have been written to: /home/joshua/torch/build
[ 3%] Built target minilua
[ 26%] Built target buildvm
[ 61%] Built target libluajit
Linking C executable luajit
CMakeFiles/luajit.dir/lj_vm.s.o: In function `lj_BC_POW':
buildvm_x86.dasc:(.text+0x84b): undefined reference to `lj_wrap_pow'
CMakeFiles/luajit.dir/lj_vm.s.o: In function `lj_ff_math_log':
buildvm_x86.dasc:(.text+0x269c): undefined reference to `lj_wrap_log'
CMakeFiles/luajit.dir/lj_vm.s.o: In function `lj_ff_math_log10':
buildvm_x86.dasc:(.text+0x26cb): undefined reference to `lj_wrap_log10'
CMakeFiles/luajit.dir/lj_vm.s.o: In function `lj_ff_math_exp':
buildvm_x86.dasc:(.text+0x26fa): undefined reference to `lj_wrap_exp'
CMakeFiles/luajit.dir/lj_vm.s.o: In function `lj_ff_math_sin':
buildvm_x86.dasc:(.text+0x2729): undefined reference to `lj_wrap_sin'
CMakeFiles/luajit.dir/lj_vm.s.o: In function `lj_ff_math_cos':
buildvm_x86.dasc:(.text+0x2758): undefined reference to `lj_wrap_cos'
CMakeFiles/luajit.dir/lj_vm.s.o: In function `lj_ff_math_tan':
buildvm_x86.dasc:(.text+0x2787): undefined reference to `lj_wrap_tan'
CMakeFiles/luajit.dir/lj_vm.s.o: In function `lj_ff_math_asin':
buildvm_x86.dasc:(.text+0x27b6): undefined reference to `lj_wrap_asin'
CMakeFiles/luajit.dir/lj_vm.s.o: In function `lj_ff_math_acos':
buildvm_x86.dasc:(.text+0x27e5): undefined reference to `lj_wrap_acos'
CMakeFiles/luajit.dir/lj_vm.s.o: In function `lj_ff_math_atan':
buildvm_x86.dasc:(.text+0x2814): undefined reference to `lj_wrap_atan'
CMakeFiles/luajit.dir/lj_vm.s.o: In function `lj_ff_math_sinh':
buildvm_x86.dasc:(.text+0x2843): undefined reference to `lj_wrap_sinh'
CMakeFiles/luajit.dir/lj_vm.s.o: In function `lj_ff_math_cosh':
buildvm_x86.dasc:(.text+0x2872): undefined reference to `lj_wrap_cosh'
CMakeFiles/luajit.dir/lj_vm.s.o: In function `lj_ff_math_tanh':
buildvm_x86.dasc:(.text+0x28a1): undefined reference to `lj_wrap_tanh'
CMakeFiles/luajit.dir/lj_vm.s.o: In function `lj_ff_math_pow':
buildvm_x86.dasc:(.text+0x28e5): undefined reference to `lj_wrap_pow'
CMakeFiles/luajit.dir/lj_vm.s.o: In function `lj_ff_math_atan2':
buildvm_x86.dasc:(.text+0x2929): undefined reference to `lj_wrap_atan2'
CMakeFiles/luajit.dir/lj_vm.s.o: In function `lj_ff_math_fmod':
buildvm_x86.dasc:(.text+0x296d): undefined reference to `lj_wrap_fmod'
collect2: error: ld returned 1 exit status
make[2]: *** [exe/luajit-rocks/luajit-2.1/luajit] Error 1
make[1]: *** [exe/luajit-rocks/luajit-2.1/CMakeFiles/luajit.dir/all] Error 2
make: *** [all] Error 2
./install.sh: line 64: /home/joshua/torch/install/bin/luarocks: No such file or directory
./install.sh: line 67: /home/joshua/torch/install/bin/luarocks: No such file or directory
./install.sh: line 68: /home/joshua/torch/install/bin/luarocks: No such file or directory
./install.sh: line 69: /home/joshua/torch/install/bin/luarocks: No such file or directory
./install.sh: line 71: /home/joshua/torch/install/bin/luarocks: No such file or directory
./install.sh: line 72: /home/joshua/torch/install/bin/luarocks: No such file or directory
./install.sh: line 73: /home/joshua/torch/install/bin/luarocks: No such file or directory
./install.sh: line 74: /home/joshua/torch/install/bin/luarocks: No such file or directory
./install.sh: line 75: /home/joshua/torch/install/bin/luarocks: No such file or directory
./install.sh: line 76: /home/joshua/torch/install/bin/luarocks: No such file or directory
./install.sh: line 77: /home/joshua/torch/install/bin/luarocks: No such file or directory
./install.sh: line 78: /home/joshua/torch/install/bin/luarocks: No such file or directory
./install.sh: line 79: /home/joshua/torch/install/bin/luarocks: No such file or directory
./install.sh: line 80: /home/joshua/torch/install/bin/luarocks: No such file or directory
./install.sh: line 81: /home/joshua/torch/install/bin/luarocks: No such file or directory
./install.sh: line 82: /home/joshua/torch/install/bin/luarocks: No such file or directory
./install.sh: line 83: /home/joshua/torch/install/bin/luarocks: No such file or directory
./install.sh: line 93: /home/joshua/torch/install/bin/luarocks: No such file or directory
./install.sh: line 94: /home/joshua/torch/install/bin/luarocks: No such file or directory
./install.sh: line 95: /home/joshua/torch/install/bin/luarocks: No such file or directory
./install.sh: line 96: /home/joshua/torch/install/bin/luarocks: No such file or directory
./install.sh: line 97: /home/joshua/torch/install/bin/luarocks: No such file or directory
./install.sh: line 98: /home/joshua/torch/install/bin/luarocks: No such file or directory
./install.sh: line 99: /home/joshua/torch/install/bin/luarocks: No such file or directory
./install.sh: line 100: /home/joshua/torch/install/bin/luarocks: No such file or directory
./install.sh: line 101: /home/joshua/torch/install/bin/luarocks: No such file or directory
./install.sh: line 102: /home/joshua/torch/install/bin/luarocks: No such file or directory
./install.sh: line 105: /home/joshua/torch/install/bin/luarocks: No such file or directory
Do you want to automatically prepend the Torch install location
to PATH and LD_LIBRARY_PATH in your /home/joshua/.bashrc? (yes/no)
[yes] >>>
Up until this point, I haven't had any errors occur.
Also, the image generation thing sounds awesome. I can't wait to see how it works.
Thanks!
I haven't yet had time to investigate your issue further, but I did see this post involving another would-be Torch user suffering from a very similar problem, and I think they managed to fix it. You might look there; I'll dig into the issue further when I have the time.
---
Good news: I hear that IT is hard at work with my CUDA-related issues, the resolution of which will make everything go much faster for me.
Somewhat neutral news: For the image generation task, I see that Torch provides support for convolutional networks such that I could map images to content vectors. But we need to go in the opposite direction, so I need to invert the constituent operations. Unfortunately, max-pooling is non-invertible. Note: if it were invertible, then you could just shout "Enhance! Enhance!" at a blurry, low-resolution image to magically make it high quality, like they do on TV and in movies, but that's not how the universe works.
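To make the non-invertibility concrete, here's a tiny Python illustration (a toy, not part of any real pipeline): two different patches collapse to the same pooled value, so the pooled output alone can't tell you which patch you started from.

```python
# Max-pooling destroys information: two different 2x2 patches pool to
# the same single value, so no decoder can recover the original patch
# from the pooled output alone.

def max_pool_2x2(patch):
    """Return the maximum over a 2x2 patch (a list of two rows)."""
    return max(max(row) for row in patch)

a = [[0.9, 0.1],
     [0.2, 0.3]]
b = [[0.9, 0.8],
     [0.7, 0.6]]

assert a != b
assert max_pool_2x2(a) == max_pool_2x2(b) == 0.9  # same output, detail lost
```

An "unpooling" layer therefore has to guess (or learn) plausible detail rather than recover the true original.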
So I need equal-but-opposite implementations for unpooling and deconvolution, and I'm not sure that Torch provides those yet. I'd prefer to find someone else's implementation so that I don't, you know, mess things up or do things in an inefficient way. I'm sure it's been done plenty of times before.
At the very least, I have the input vectors and I have the output images, so it's only a matter of time before I put two and two together.
To summarize for the viewers at home: artificially-generated artwork for artificially-generated cards! Hopefully it won't be too long before I have something to share with you.
My LinkedIn profile... thing (I have one of those now!).
My research team's webpage.
The mtg-rnn repo and the mtg-encode repo.
I'm getting a "No results matching query were found" error on both of the "luarocks install" commands.
Do you have a 64-bit Ubuntu? I got similar errors when trying to execute on a 32-bit machine.
Currently Playing:
Legacy: Something U/W Controlish
EDH Cube
Hypercube! A New EDH Deck Every Week(ish)!
source ~/.bashrc
With this model, word-level semantics are at least weakly preserved through the linear combination of vectors. That is, when you add vectors together, you get a new vector that combines the meaning of the original two. For example, if you combine the vectors for "french" and "river", the closest matches you get are words like "loire", "garonne", and "scheldt", all of which are names of rivers in or near France. Impressive! However, while this power is amazing, it gets weaker as you add more words. So if you were to take a book about George Washington, sample the text, vectorize the samples, and then try to decode the meaning based on those vectors, you'd find it gets harder as the sample size grows:
v(a single sentence): George Washington was an American president.
v(a few paragraphs): A famous/infamous person/dog/inanimate object was a leader/general/CEO in a war/conflict/corporate merger.
v(the whole book): This is a book about important things.
Fortunately for us, our sampling is done over single cards, so we shouldn't run into that kind of trouble.
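You can see that dilution effect in a quick simulation: the cosine similarity between one word's vector and the sum shrinks as more vectors pile into the sum. This is a toy Python sketch using random stand-in vectors, not real word2vec output:

```python
# Toy demonstration of meaning "dilution": a single word's vector
# becomes a smaller and smaller share of the summed vector as the
# sample grows, so the sum says less about any one word.
import math
import random

random.seed(0)
DIM = 1000  # stand-in dimensionality; real word2vec here uses 200

def rand_vec():
    return [random.gauss(0.0, 1.0) for _ in range(DIM)]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

word = rand_vec()
sims = []
for sample_size in (1, 10, 100):          # one sentence, a few paragraphs, a book
    total = list(word)
    for _ in range(sample_size - 1):      # pile on more "words"
        extra = rand_vec()
        total = [t + x for t, x in zip(total, extra)]
    sims.append(cosine(word, total))

# Similarity to the original word decays as the sample grows.
assert sims[0] > sims[1] > sims[2]
```

Since a single card is only a few dozen tokens, we stay near the top of that decay curve.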
So, first, I take an encoded version of the original corpus of Magic cards and I flatten and clean the input so as to make it easier to train a vector model on the words.
Example of original card:
After preprocessing:
We'll call the reduced version convertedcorpus.txt. Then, having compiled word2vec, I run...
And then I have the vector model, a mapping from words in Magic to vectors corresponding to their "meaning". The word2vec package provides scripts for probing out this vector model. For example, I can do the following:
So all of these words have vectors similar to land because they occur in similar contexts. Note that these are very fuzzy associations; the word2vec approach is usually applied to massive corpuses, like the entirety of Wikipedia, and you tend to get much better results in those cases. However, it'll work well enough for our purposes.
Next I go back through the encoded corpus and produce a list of (name, v) pairs, where name is the name of a card and v is the normalized sum of the vectors for all of the words in that card. I then wrote a small script that lets me put in an encoded card and see which cards in the original set of Magic cards most closely match it according to cosine similarity: 1 means the vectors point in the same direction (identical), 0 means unrelated, and -1 means they point in opposite directions.
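For the curious, the core of that script is only a few lines. This is a self-contained Python sketch with made-up 3-d toy vectors standing in for the real 200-d word2vec model; names like `card_vector` are mine, not from the actual code:

```python
import math

# Hypothetical mini-vocabulary: each word maps to a toy 3-d vector
# (the real model maps Magic words to 200-d word2vec vectors).
WORD_VECS = {
    "land":   [0.9, 0.1, 0.0],
    "mana":   [0.8, 0.2, 0.1],
    "tapped": [0.7, 0.0, 0.3],
    "flying": [0.0, 0.9, 0.2],
    "dragon": [0.1, 0.8, 0.4],
}

def card_vector(words):
    """Normalized sum of the word vectors for a card's text."""
    total = [sum(WORD_VECS[w][i] for w in words) for i in range(3)]
    norm = math.sqrt(sum(x * x for x in total))
    return [x / norm for x in total]

def cosine(u, v):
    return sum(a * b for a, b in zip(u, v))  # unit vectors: dot product = cosine

cards = {
    "Vivid-ish Land": card_vector(["land", "mana", "tapped"]),
    "Another Land":   card_vector(["land", "mana"]),
    "Some Dragon":    card_vector(["flying", "dragon"]),
}

query = card_vector(["land", "tapped"])
ranked = sorted(cards, key=lambda name: cosine(query, cards[name]), reverse=True)
print(ranked)  # the two land cards outrank the dragon
```

The real script does exactly this, just over the full card corpus and the trained vector model.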
For example, I take the text of the card Vivid Meadow and the top ten matches I get are:
So the top results contain cards that are semantically similar to Vivid Meadow according to our vector model. Notice that all the Vivid lands are there, but that Tendo Ice Bridge comes before Vivid Crag and Vivid Marsh. This is probably because the Meadow produces white mana while the Crag and the Marsh produce enemy-colored mana, which drives them slightly further apart (but not by much). This is important because a naive word-based string comparison would tell us that all the Vivid lands were equally related, whereas the vector representation tells us a richer story.
We can use this approach to evaluate cards produced by the network. This allows us to pinpoint what cards the network is drawing inspiration from when it creates a new card.
For example, we have the card:
and our model tells us that the closest matches are
As we can see, there is a fairly strong relationship between Sea Sun and Belbe's Armor, but it's not so similar that we could consider it a clone or near-clone (compare to Vivid Meadow and Vivid Creek which have a similarity score of 0.9925000591622002).
This technique is especially helpful for making sense of complex, highly novel cards, like the following:
Which gives us the following matches:
Notice that there are no close matches, only partial ones. If we want to pull highly novel cards out of a dump from the network, we can make a pass over the dump and retain cards whose closest matches fall below a certain threshold.
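That filtering pass might look something like the sketch below. The function name and thresholds are mine, and the similarity function is stubbed out so the example is self-contained; the real version would use the cosine similarity over content vectors described above.

```python
# Novelty filter sketch: keep only generated cards whose *best* match
# against the real corpus falls below a similarity threshold.

def is_novel(card_vec, corpus_vecs, similarity, threshold=0.85):
    """True if no real card is more similar than `threshold`."""
    best = max(similarity(card_vec, v) for v in corpus_vecs)
    return best < threshold

# Stub similarity on 1-d "vectors" just to exercise the logic.
sim = lambda a, b: 1.0 - abs(a - b)
corpus = [0.1, 0.5, 0.9]

assert is_novel(0.45, corpus, sim) is False  # near-clone of 0.5
assert is_novel(3.0, corpus, sim) is True    # far from everything
```

Run over a dump, this keeps the strange stuff and discards near-reprints.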
Anyway, that's what I came up with this morning. I'll see about getting all of this packaged up and made available for your entertainment later. Let me know what you think!
P.S. I think this sort of approach could be useful for a card recommendation system for deck building. Just a thought.
The only feature at the moment is generating graphical cards from nn output based on hardcast's format (I am using his code after all), with a bit of hacking to make it a python3 package.
I am also using Nafnlaus's image search code, but I am getting slow search speeds on my VPS, so you may run into 504 errors. If that happens, try again.
I'm also being a very bad net citizen and leeching images, so I have restricted the system to the first six cards in the list.
Github link for source code and installation instructions. The instructions assume familiarity with git, python, and pip.
Plans for the next versions:
100% chance I need to get some of those fundamentals ready first. I messaged the mtgcardsmith admin to ask how open they'd be to my using their site or backend (turns out we live in the same city, small world!). Being a good net citizen is important to me, so I want to make sure that I'm not leeching graphics or server capacity from others without their permission.
Talcos (and others), I'm curious about the possible ways these trained brains could be categorized so a user can pick a brain to build their own custom set with. One obvious way is how many epochs the brain has been trained for. I would also like to start laying the groundwork for hosting the trained snapshots in an organized fashion. Google Drive is convenient, but there are other options too.
For the record, I am investigating a way for us to have art generation without having to rely on outside sources to pull in images each time. The process would work more or less like this:
* The card generation network produces a card.
* A content vector is generated from the text of that card (just as I demonstrated in my last post).
* The content vector is used to prime another network that generates the artwork. Sort of like a fuzzy, keyed retrieval of artwork (e.g. oh, the vector indicates that this is a red goblin card, need to make the art all red and goblin-y). There is literature out there that demonstrates that this is not too difficult to do.
But until we get to that point, we can rely on outside sources, provided we can give proper attribution to the artists where feasible.
Anyway, as for the categorization, my hope is that we'll arrive at a final network that can serve our needs and we can then just parameterize the sampling of that network to generate desirable results. But if you want to offer a range of networks, you could categorize them by their width (cells per layer), depth (number of layers), and training parameters (epochs, learning rate, and so on). At some point, we will need to categorize the snapshots better, though thus far we've been changing up the way that input is organized and the training parameters each time to figure out what works best. It's still an ongoing, evolutionary process.
I don't think it beats Slidshocking Krow.
---
By the way, mwsgames.com offers low-quality scans of all card art for use with Magic Workstation. Honestly, low quality is preferable to me because it's easier to train on. I'm looking through the files, and this looks like exactly what we need. The images are crappy enough that I don't feel bad about infringing on anyone's intellectual property rights, and the products of the network will be highly derivative works anyway.
The plan is to produce a convolutional neural network that maps a content vector of size 200, with values ranging between -1 and 1, to raw images of size 221 x 163. The process would be similar to Dosovitskiy et al. 2015, whose paper used convolutional neural networks to generate convincing chairs (as in chairs you sit on). Instead of producing chairs, we'd be producing Magic art.
I've attached a diagram from that paper since it's easier to show you what would happen rather than attempting to explain it in just one post. In that diagram, on the left, the user feeds a description of the chair they want (the class) followed by the angle at which they want the chair to be rendered, and other modelling parameters like lighting. Each of the convolutional layers (in blue) progressively unfolds and expands upon the image-to-be until we finally have the chair we want.
More or less, that's what we'd do for Magic cards. The Torch framework already supports convnets, and this doesn't seem like it'd be that difficult to set up. At the same time, I have a busy schedule, so it might be a bit before I can get to that. However, I just wanted you to know that that was on the horizon.
Now, given the relatively small number of images we have to work with, I can't promise that the images produced will be completely coherent. However, they will be interesting, and they will be related semantically to the cards for which they are generated. A dragon card should at least have one or more dragon blobs floating about in the image, breathing fire, that sort of thing.
Yes. But then, that's why your visual cortex is wired differently from the part of your brain that lets you understand how layers work in Magic. The network that generates the text feeds the mechanism that generates the content vectors, which in turn feeds the network that generates the art, so they function as a coherent whole. I mean, you could strengthen that bond with feedback, where images inspire cards, but I think the one-way pipeline is the most straightforward way of accomplishing what we want.
EDIT: Programmer_112, I'll look into your problem. In the meantime, if anyone else has any ideas, feel free to contribute.
My LinkedIn profile... thing (I have one of those now!).
My research team's webpage.
The mtg-rnn repo and the mtg-encode repo.
Anyway, I got CUDA going on Ubuntu with the relevant Torch libraries installed, and it appears I can do 3-layer, 256-rnn_size batches in 0.19 seconds. That means I'll finish a full 50 epochs in 2.3 hours! This is wayyyy faster than my virtual Ubuntu setup, so it's certainly worth it. I tried rnn sizes of 300 and above, but those gave me out-of-memory errors, which makes sense given my measly 1 GB of graphics memory. At least now if I ever upgrade my GPU it'll help both my gaming and my neural network runs.
I'll run some samples once I get some checkpoints out and post any good cards here, as usual. I may not be able to do 512-rnn_size training runs, but I'm glad my GPU actually works for this sort of thing.
Example: Penalize each choice proportional to its distance from some experimentally determined vector sweet spot. Telling the network to throw out bits that are too near or too far from verbatim bits of the corpus might also be useful.
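As a sketch of what that penalty might look like (illustrative Python; the function names, thresholds, and numbers are all made up, not from any real implementation):

```python
import math

# Score a candidate's content vector by its distance from a chosen
# "sweet spot" vector, and also penalize being *too close* to any
# verbatim corpus vector (a near-reprint) or too far from all of
# them (likely gibberish). Lower penalty = better candidate.

def distance(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def penalty(vec, sweet_spot, corpus, too_close=0.1, too_far=2.0):
    p = distance(vec, sweet_spot)              # drift from the sweet spot
    nearest = min(distance(vec, c) for c in corpus)
    if nearest < too_close:                    # basically a reprint
        p += 1.0
    if nearest > too_far:                      # unmoored from the corpus
        p += 1.0
    return p

low = penalty([0.5, 0.5], sweet_spot=[0, 0], corpus=[[1, 0], [0, 1]])
high = penalty([1.0, 0.0], sweet_spot=[0, 0], corpus=[[1, 0], [0, 1]])
assert low < high  # the exact corpus match is penalized harder
```

You could then rank a dump of generated cards by this penalty and keep the top slice.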
Nevertheless, I got some interesting cards out of epoch 16, since its loss of 0.4983 was close to the minimum. So here we go...
living tear
UU
enchantment ~ aura
enchant land
enchanted land has "T: Counter target spell.
## This... seems powerful to me. Possibly too powerful.
fungust vici
3
artifact ~ equipment
equip 4
equipped creature gets +1/+0 and has protection from black.
when you cast @, target creature gets +2/+2 until end of turn.
## Anti-black equipment? Why not. I like the clause when cast, that's new and interesting.
molderwull center
1R
enchantment ~ aura
flash
enchant creature
T: prevent the next 2 damage that would be dealt to target creature or player this turn.
2: return @ to its owner's hand.
## I really like this. It's probably entirely too undercosted for what it can do, but it sure is versatile.
exckictiber imp
B
creature ~ imp
flying
whenever @ deals combat damage to a player, that player sacrifices a creature.
when you cast @, exile all zombies.
modular 1
1/1
## Wowza. Add maybe 3 to this one for balance. It's... a black zombie-hate card? With modular for good measure, so it's actually a 2/2 for all intents and purposes.
grave begonerator
4GWUB
creature ~ human rebel berserker
whenever @ attacks, put a +1/+1 counter on it.
7/7
## This guy... has 4 colours. Not often does the network pull 4 colours into one card. 8CMC for a constantly-growing 7/7? Could be worth it.
damaro, arthi town legend
WWUBRGW
legendary creature ~ elf archer
reach
at the beginning of your upkeep, sacrifice @ unless you sacrifice two lands.
9/7
## Never mind, this thing wins. Meet Chromanticore's daddy, everyone! I like his old-style 'sac some lands to keep him in play'. The network also seems to have figured out reach and archers tend to go together. Frankly, this card is awesome.
reckless predator
1W
creature ~ human knight
flying
2G: @ gets +2/+2 until end of turn for each creature blocking @.
whenever @ deals combat damage to a player, that player discards two cards.
1/1
## I feel like this guy's cost should be more like... 3B, mostly for his discard ability. The blocking clause is terrific; it's even on colour (from Gang of Elk). It seems a bit counterproductive with his flying, though.
drake infantry
3
artifact creature ~ octopus assassin
2, sacrifice @: target opponent discards a card.
2/2
## That creature subtype. Oh, that creature subtype.
Very good points, I love it! Like a filter. Perhaps we could have a network generating cards, an encoder converting them into vectors, and a decoder that produces a clean, if conservative, version of whatever card the original network made. Yes, it is definitely possible, but there are some complications, certain challenges that we would have to face.
EDIT: I may have skipped a step in my explanation here. We can't really do the vector predictions iteratively, because the vectors don't tell the network what comes next, just what could appear before or after the current word within a certain context window. It's easier to consider the network's choices in hindsight than it is to try to control what it produces via the vectors.
For example, let's assume we have this card (that I just made up for the sake of example):
Attended Faultless
1G
Creature - Elf
When Attended Faultless enters the battlefield, put a 1/1 black Goblin creature token onto the battlefield.
The Faultless make a point to travel with their ugliest servants, to make themselves seem more beautiful by comparison.
2/2
We compute the vectors for each word/symbol in the encoded form and we get the following vector:
[0.08918239142969347, -0.07020373337562555, 0.05209813735358393, -0.12545127855543353, 0.07968180514180137, -0.13399928911514852, 0.0743145680211909, 0.06988412695682845, -0.01900370808127865, 0.009161254323265358, 0.0688177226537796, 0.047873326676195704, 0.07727499949119722, -0.07488622828279229, -0.04665469339383273, 0.09194066082278932, -0.15160681649115143, 0.17059334324900133, 0.03544912734607157, -0.08035659973268258, -0.06802190585642306, 0.018355064341338215, 0.06271569766365057, 0.03753550322092408, -0.038445525294391505, -0.07598551819390482, -0.012099245983395618, -0.03514371426931306, 0.09836076051341651, 0.04838674935564608, 0.02285152112121127, 0.01387453429054968, 0.010656423450275003, 0.18162102373624503, -0.02030551213390656, 0.15512389228661158, 0.04708289889211954, 0.04644971009459727, 0.015541717813611066, 0.011967028189840544, -0.08157625382934684, -0.002433775800919163, 0.0013432492637412796, -0.0884559875457492, 0.04695014344104137, -0.06738691831677904, -0.026822773942261253, -0.06249662153861369, -0.06746191528819837, 0.03806348068255066, -0.022880041461905316, -0.11055888152084, -0.019492450933522126, 0.07047312652306616, 0.04533663242844099, 0.0283955193730011, 0.07806051253801359, -0.05949435052163171, 0.06987435701429305, -0.013116218147911049, -0.008087475906941496, -0.005180260146981008, -0.019591726555714336, 0.09022828422202182, -0.10729722658895471, 0.0015028563293532306, -0.019469714298766955, 0.03708989377737497, 0.03664096247565504, 0.09513764141797681, 0.12429262162950952, -0.1065889922823554, -0.05505662718793066, 0.007847881603707203, -0.010021265023376294, -0.060790414493150875, -0.04123764262751267, -0.07427242531751463, 0.001305868891276937, 0.02195126246452455, 0.08944245222489529, 0.038097062529609316, -0.07901921344653566, -0.012285874456562118, 0.14943817838512718, -0.03811407303207852, -0.0598469431624407, -0.007595320930172239, 0.03016182902722139, -0.01990466827558861, 0.06843421727758701, -0.060462281474931316, 
0.1978429447065705, -0.03758232055166021, -0.046484216266383716, -0.02728025260240823, 0.026329762547590157, 0.1583343972994491, 0.0464254367912073, 0.09209601780842445, 0.07770760013494067, 0.11427524516414772, -0.03840618803128972, -0.023424694911965, -0.09026559073368955, -0.07805528971671895, -0.017043126897864157, 0.0394675959075578, -0.06867845235016981, 0.026228903893822295, -0.02798126330595878, 0.13695872928017955, -0.04830686898220776, -0.04936811249742411, 0.14146815554464195, 0.012254879191925952, 0.05368724379224868, -0.11406032392776941, -0.0917561497271507, -0.03749411188611167, 0.06094865901866781, -0.049186343118815104, -0.04374156933554425, 0.0289267001894592, -0.08244449542146791, -0.032325550571331546, 0.06640164207037953, -0.020472045159427077, -0.04627427811189173, 0.20595208319181071, -0.044032475323931924, -0.04032132755525972, 0.03604848925317056, 0.08314518847627583, 0.046088258707168275, -0.008296375170854945, -0.024774480633751237, -0.008969149169229682, -0.0032183177783883356, 0.021000722317517352, -0.002961477756373891, -0.18112853424687536, -0.09042282300233419, -0.03933597857603454, -0.06417224952844008, 0.03596920615708961, -0.002265186511604433, 0.06462723992100099, 0.10706038841036153, 0.013452129126246317, 0.007586274164873258, 0.0030417979302906294, -0.015534770707288563, -0.0781049321052581, 0.07448762642717688, -0.11572459701960297, -0.06870417870058369, -0.04738511269925164, 0.0704885074227744, 0.0677410429275961, 0.0487272209970766, 0.061988711402731794, 0.030184921063886646, 0.04560193127862925, 0.12673328014922036, -0.09913852432426427, 0.045700904216746205, -0.07256947299203204, 0.05681479534578181, -0.009498942044009692, -0.07456202337075837, 0.07844603845905661, -0.059092179752819, -0.048921267049240165, 0.015568851696252956, -0.12839752560579598, 0.04477204277530275, 0.027689896531836698, 0.042531020580514535, 0.09377897187286201, 0.12056789240603819, 0.05808154989560593, 0.002020082279822123, 0.11671457513184921, 
-0.02346689489951747, -0.10650158128577525, 0.038978415688243805, -0.036164854754599296, 0.08706876107907612, 0.007646652318697951, -0.03536609025892942, -0.08884965172187183, 0.03531400697456557, -0.02617486688847681, 0.038132486524207865, -0.0647813541416202, -0.08720208467428077, 0.022146158656047604, -0.06782583352467092, 0.0062585791837239755]
Those 200 numbers encode the semantic content of the card. If we compare those numbers with those of existing Magic cards, we see that the card occupies the same design space as the following cards:
A lot of cards come close, but the fact that this is a green card that makes black goblin tokens causes it to stand out from its peers.
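The "which existing cards is this closest to?" comparison can be sketched with cosine similarity over the content vectors. Everything below is illustrative: the card names and tiny 3-dimensional vectors are made-up placeholders, and the real lookup would run over the 200-dimensional vectors of every existing Magic card.

```python
import numpy as np

# Placeholder cards with toy 3-dimensional vectors; the real comparison would
# use the 200-dimensional content vectors of every existing card.
card_vectors = {
    "Card A (mono-green elf)":    np.array([1.0, 0.0, 0.0]),
    "Card B (mono-black goblin)": np.array([0.0, 1.0, 0.0]),
    "Card C (green elf, splash)": np.array([0.9, 0.1, 0.0]),
}

def nearest_cards(query, library, k=2):
    """Rank cards by cosine similarity to the query vector."""
    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    ranked = sorted(library, key=lambda name: cosine(query, library[name]),
                    reverse=True)
    return ranked[:k]

query = np.array([0.8, 0.2, 0.0])  # stand-in for the generated card's vector
print(nearest_cards(query, card_vectors))
```

Because cosine similarity ignores vector length, the comparison depends only on direction, which is why the normalized sum works as a query.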
Now, the question is, can we get back to the card from the vector? The answer is no, at least not the way I have things set up. To get the content vector, we computed the normalized sum of the vectors for each word. That's a commutative operation, so those words could have occurred in any order (just as 3 + 5 = 8 and 5 + 3 = 8). Now, assume that we have a decoder network, a network that can map content vectors back to cards. It won't produce garbage output (it's smart), but it does have to contend with several equally likely choices:
* Is it a goblin that makes elf tokens or an elf that makes goblin tokens?
* Is it a 2/2 creature that makes a 1/1 token or a 1/1 that makes a 2/2 token?
Assuming that it has no bias about any of those choices, you'll only get back the original text of the card 25% of the time. Now, if the decoder is really smart, it'll see that a green goblin is possible but less likely than a green elf and a black goblin is more likely than a black elf, but there are enough black elves and green goblins that it can't rule out any possibilities. As such, the real decoder will probably give you back the original card slightly more than 25% of the time, maybe in the 30%-40% range.
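The order-loss argument above can be demonstrated in a few lines. The toy 3-dimensional embeddings here are placeholders for the real 200-dimensional word vectors; the point is only that the normalized sum is identical no matter how the words are permuted.

```python
import numpy as np

def content_vector(words, embeddings):
    """Normalized sum of the word vectors: a bag-of-words encoding."""
    total = np.sum([embeddings[w] for w in words], axis=0)
    return total / np.linalg.norm(total)

# Toy embeddings standing in for the real 200-dimensional ones.
emb = {
    "elf":    np.array([1.0, 0.0, 0.0]),
    "goblin": np.array([0.0, 1.0, 0.0]),
    "token":  np.array([0.0, 0.0, 1.0]),
}

a = content_vector(["elf", "goblin", "token"], emb)
b = content_vector(["goblin", "token", "elf"], emb)
print(np.allclose(a, b))  # True: word order is lost in the sum
```

Since both orderings map to the same point, no decoder, however smart, can distinguish them from the vector alone; it can only play the odds.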
However, there IS a way around this. I read a paper the other day (Mou et al. 2015) showing that you can preserve structured information by weighting vectors according to where they occur in the structure (e.g. v(elf) * TYPE + v(goblin) * BODYTEXT). I skipped over attempting that for the time being because I was mainly interested in getting artwork from the content vectors. The only difference it would make there is that it would dictate which creature needs to be in the foreground and which in the background; I'll be happy if the art is coherent, and I'm less concerned about getting the right scene composition (that can come later).
But having a field weighting scheme would make it possible to have a network that cleans or filters garbage cards in the way that you are suggesting. That's a very clever idea, and one we can explore in the future. On a related note, it would also make it possible to have an encoder that maps artwork to vectors, chained with a decoder that maps vectors to cards (so we could show the network novel artwork and have it dream up cards appropriate for that artwork). That's also worth considering.
It could have something to do with the learning rate. If the learning rate is too high, it's possible for the network to reach a near-optimal state and then to diverge from it as it attempts to make further improvements (it starts over-engineering some aspects of its predictions at the expense of others). That'd be my hypothesis. We may be able to fix that sort of thing by tweaking the parameters a little.
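The divergence failure mode is easy to see on a toy problem. Gradient descent on f(x) = x² (gradient 2x) is stable only when the learning rate is below 1.0; above that, each step overshoots the minimum by more than it corrects, so the iterates get worse even though they start near the optimum. This is a generic illustration, not the actual training setup from the thread.

```python
# Gradient descent on f(x) = x^2, whose gradient is 2x. Each step multiplies
# x by (1 - 2 * lr), so |1 - 2 * lr| < 1 converges and > 1 diverges.
def descend(lr, x=1.0, steps=20):
    for _ in range(steps):
        x -= lr * 2 * x
    return abs(x)

print(descend(0.1))  # shrinks toward 0
print(descend(1.1))  # blows up: every step overshoots the minimum
```

A network near a good solution with too large a step size behaves the same way in each curved direction of the loss surface, which matches the "reaches a near-optimal state and then diverges" symptom.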
Domaro amuses me immensely. That is awesome (though honestly it could use a few extra abilities to make up for the fact that you have to sacrifice lands to keep using him). This is one of those situations where the network understands and yet doesn't understand the game: if it's an archer, it makes sense to give it reach. But reach is not so helpful if the sacrifice clause makes you want to attack with him every turn to get the most out of the card.
Fungust Vici is very interesting. I like the idea of an equipment that has a "when cast" clause appended to it. It might also be good to put that sort of thing on an aura, so you at least get something out of the spell if the opponent kills the aura's target in response.
My LinkedIn profile... thing (I have one of those now!).
My research team's webpage.
The mtg-rnn repo and the mtg-encode repo.
Sadly, it is not a creature that exiles a graveyard, as it rightly should be.
I haven't yet had time to investigate your issue further, but I did see this post involving another would-be Torch user suffering from a very similar problem, and I think they were able to fix it. You might look there (I'll dig into the issue further when I have the time).
---
Good news: I hear that IT is hard at work on my CUDA-related issues, the resolution of which will make everything go much faster for me.
Somewhat neutral news: For the image generation task, I see that Torch provides support for convolutional networks such that I could map images to content vectors. But we need to go in the opposite direction, so I need to invert the constituent operations. Unfortunately, max-pooling is non-invertible. Note: if it were invertible, then you could just shout "Enhance! Enhance!" at a blurry, low-resolution image to magically make it high quality, like they do on TV and in movies, but that's not how the universe works.
So I need equal-but-opposite implementations for unpooling and deconvolution, and I'm not sure that Torch provides those yet. I'd prefer to find someone else's implementation so that I don't, you know, mess things up or do things in an inefficient way. I'm sure it's been done plenty of times before.
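To make the non-invertibility concrete, here's a minimal max-pooling/unpooling pair in plain NumPy (a sketch, not Torch code). Pooling keeps only one value per 2x2 block; the common workaround, which I believe is what the deconvolution papers do, is to also record *where* each max came from (the "switches") and put values back at those locations on the way up, filling everything else with zeros. The three discarded values per block are gone for good, which is exactly why "Enhance!" doesn't work.

```python
import numpy as np

def max_pool_2x2(img):
    """2x2 max-pooling; returns the maxes plus 'switches' marking where they were."""
    h, w = img.shape
    out = np.zeros((h // 2, w // 2))
    switches = np.zeros_like(img, dtype=bool)
    for i in range(0, h, 2):
        for j in range(0, w, 2):
            block = img[i:i+2, j:j+2]
            out[i // 2, j // 2] = block.max()
            bi, bj = np.unravel_index(block.argmax(), block.shape)
            switches[i + bi, j + bj] = True
    return out, switches

def unpool_2x2(pooled, switches):
    """Approximate inverse: restore each max at its recorded spot, zeros elsewhere."""
    upsampled = np.repeat(np.repeat(pooled, 2, axis=0), 2, axis=1)
    recon = np.zeros(switches.shape)
    recon[switches] = upsampled[switches]
    return recon

img = np.array([[1., 2., 5., 6.],
                [3., 4., 7., 8.],
                [9., 1., 2., 3.],
                [1., 1., 4., 4.]])
pooled, sw = max_pool_2x2(img)
recon = unpool_2x2(pooled, sw)
print(pooled)  # [[4. 8.] [9. 4.]]
print(recon)   # only 4 of the original 16 values survive
```

Twelve of the sixteen input values cannot be recovered from the pooled output alone, so any "inverse" network has to hallucinate plausible detail rather than restore the true detail.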
At the very least, I have the input vectors and I have the output images, so it's only a matter of time before I put two and two together.
To summarize for the viewers at home: artificially-generated artwork for artificially-generated cards! Hopefully it won't be too long before I have something to share with you.
Couldn't help myself. I came across this art last night while working on cards, and you could not have timed this better.