Well, you can paginate the results so that things load in batches. If you do this, though, you might need to put the data in a separate data file first and have JavaScript do the retrieving and displaying. You can also make each image load only when its element is visible, by checking whether the card div is currently on screen and using JS to insert the image tag once it is. I would like to help you with this, but my personal laptop is currently dead. (I am posting this from my office PC.)
@sirleandenor That would be great. I was thinking of pulling the forum-styled code into a floating div named something like "[f]", with the creativity info next to it as [sim]. I've also been thinking of breaking the text up in logical ways, and about how to go about sorting it. I would steal most of the sortcards.py script, but it duplicates content drastically, and without the iframe structure designed yet I'm not ready to try to implement something like that.
-----------------------------
Also, I won't be as active since Fallout 4 has been released, but I should be back before too long.
And I'm really excited that it has a graph visualization tool, TensorBoard. With Torch I can get visualizations, but they're just static diagrams that I eyeball whenever something goes horribly wrong. With TensorBoard, I can actually see what's going on in a very interactive way; that'll make neural net surgery go much more smoothly.
Seriously, look at that image I've attached! That's awesome! I'm very tempted to move my dissertation research over to TensorFlow, if only for that reason.
That's probably Fernanda Viégas's doing. Her data visualization research is crazy good.
EDIT: Talcos, if you're still doing the style transfer thing (and it isn't much of a chore to do so), could you try this forest as a mix of these artworks (1, 2, 3)?
I'm 6 pages behind, but I'm pleased to announce another LoadingReadyRun shoutout:
Their charity fundraiser event Desert Bus is currently happening (streaming on Twitch), and yesterday as part of a challenge, many RoboRosewater cards were read out. Hilarity ensued. https://m.youtube.com/watch?v=RYZP1p1N9yU
EDIT: Talcos, if you're still doing the style transfer thing (and it isn't much of a chore to do so), could you try this forest as a mix of these artworks (1, 2, 3)?
Sure. Not tonight, don't have access to the servers. But I will tomorrow morning!
Sorry that it took me so long to reply to you. I had back-to-back research journal submission deadlines. I just put out about 100 double-spaced-pages worth of material and now I get to wait to hear back from the review committees to see whether they like my findings. I'm hopeful that they will.
And I'll hopefully have some neural-storyteller-related results to share soon. That should be amusing.
---
Also, YeGoblynQueenne, I love the results. Keep up the good work! I look forward to seeing more from you. Let me know if there's anything I can do to help.
And Mustard_Fountain, thanks for sharing!
EDIT: AlukSky, I uploaded the results that I got. I might be able to improve upon them further with some fine-tuning of the parameters. In particular, I think the Erin Hanson version could look even better with some work.
EDIT(1): I believe that I've identified the issues that are keeping me from getting neural-storyteller working. I sent out an e-mail to someone who can resolve those issues for me.
Hullo hullo. Time for an update
...
In any case, there's a lot to experiment with, and like I say, there are assignments and work assignments. But hey, it seems everyone else here is busy too.
So is that attachment a graph, or did you tell neural style to start drawing dragons?
I'm still working on getting everything working for the neural-storyteller. There are still some configuration issues where I can't get the disk space I want, and it looks like we might have to nuke the virtual container system and reconfigure it in order to fix the problem. I'm fine with that, since I back-up everything religiously, but we have to be careful that we don't interfere with other people's work. Once I have that, I expect all the code to work just fine.
Meanwhile, a colleague in Brazil has contacted me and mentioned his interest in moving forward with research related to this project, so that makes yet another graduate student who is spinning off the work we've done here into something of academic value. I look forward to seeing how that turns out.
EDIT: Oh, and on an unrelated note, I was tapped to be a program committee member for a research conference... in Paris. Interesting choice. Well, I'm sure the security situation will be better in a few months.
Good news: I have obtained an uber-powerful GPU (a 980 Ti, specifically), and I can therefore do a hell of a lot more machine learning stuff than I could with my old card. I downloaded the neural style code and ran it, but despite the 6 GB of RAM this GPU has, I could only generate an image about 650 pixels wide before running out of memory completely. @Talcos, is there any way of reducing the memory footprint of this? Does the size of the source/content images matter at all? I'm also going to try to install the neural storyteller, since I don't have any particular hard drive space issues.
If anyone has art style merging requests, I can do those too
EDIT: Yes, the size of the source and content images both matter, but I'll have to look into the details.
I'm 100% convinced that there is, but it'd probably take some work. I've got some of the best hardware on the market, but I struggle to handle images bigger than 1000 pixels wide.
On one hand, I've seen papers claiming that you can shrink nets like the VGG one being used here by applying optimization techniques that prune unnecessary and redundant parts of the architecture. But I'd have to look into that more.
On the other hand, I'm all but certain that there has to be a way to do all this in a piece-wise fashion. I don't think you can just split the image up into squares and do it one piece at a time, simply because you'd be breaking up the salient features of the image (the result would look ugly). However, considering that the whole purpose of a convnet is to decompose the image into distinct regions and objects, I could envision a similar decomposition of the image generation problem into smaller subproblems. That being said, I'm not exactly sure how that would be accomplished.
At the very least, I can assure you that someone will figure out a more resource-efficient way of doing all of this. These are temporary setbacks.
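To make the piece-wise idea slightly more concrete, here's a purely hypothetical sketch of the blending half of the problem: process overlapping tiles independently, then average them back together under a triangular weight window so the seams fade out. This is not how neural-style works today, and it dodges the hard part mentioned above (each tile would be stylized without any global context), but it shows the bookkeeping such a decomposition would involve.

```python
import numpy as np

def _starts(n, tile, step):
    """Tile start offsets covering [0, n), with the last tile flush to the edge."""
    if n <= tile:
        return [0]
    s = list(range(0, n - tile + 1, step))
    if s[-1] + tile < n:
        s.append(n - tile)
    return s

def process_in_tiles(img, f, tile=64, overlap=16):
    """Apply f to overlapping square tiles of a 2-D array and blend the
    results with a triangular weight window so tile seams average out."""
    h, w = img.shape
    step = tile - overlap
    out = np.zeros((h, w), dtype=float)
    weight = np.zeros((h, w), dtype=float)
    for y in _starts(h, tile, step):
        for x in _starts(w, tile, step):
            th = min(tile, h - y)  # actual tile extent (small images)
            tw = min(tile, w - x)
            # strictly positive triangular window -> no divide-by-zero at edges
            wy = np.minimum(np.arange(1, th + 1), np.arange(th, 0, -1)).astype(float)
            wx = np.minimum(np.arange(1, tw + 1), np.arange(tw, 0, -1)).astype(float)
            win = np.outer(wy, wx)
            out[y:y + th, x:x + tw] += f(img[y:y + th, x:x + tw]) * win
            weight[y:y + th, x:x + tw] += win
    return out / weight
```

With f as the identity, the weighted average reconstructs the image exactly; the open research question is what f per tile would have to look like to keep the style globally coherent.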
I'm also going to try to install the neural storyteller, since I don't have any particular hard drive space issues.
Please do! The bureaucracy has my hands tied right now. You're more likely to get things working than I am at this point, seeing as we're coming up on Thanksgiving break. My main issue has been disk space: you need the object-recognition network for the images, the network for text comprehension and generation, and then all the libraries and such, and that pushes me over the 10 GB limit that's in place right now. Mind you, you'll have to set up Theano and Lasagne. I had written a list of instructions for the virtual container system to prepare that environment for me, but I haven't been able to test whether I installed all the dependencies correctly. That being said, you might consider doing what I've done below, namely grabbing Anaconda (which comes with Theano) and then installing Lasagne on top of it. That might work. Note that my script was written for a CentOS7 environment, so how you end up doing things may look different.
RUN yum install -y git
RUN yum install -y wget
RUN yum install -y make
COPY install-deps /tmp/
RUN /tmp/install-deps
#Install CUDA libraries
COPY cuda-repo-rhel7-7-0-local-7.0-28.x86_64.rpm /tmp/
RUN rpm -i /tmp/cuda-repo-rhel7-7-0-local-7.0-28.x86_64.rpm
RUN yum clean all -y
RUN yum install -y cuda
#RUN export doesn't persist across Docker layers; use ENV instead
ENV PATH=/usr/local/cuda-7.0/bin:$PATH
ENV LD_LIBRARY_PATH=/usr/local/cuda-7.0/lib64:$LD_LIBRARY_PATH
Is there an Ubuntu equivalent of that script? I know 'yum' is purely a CentOS7 thing so that script wouldn't work out for me. I already have the vgg19 ConvNet models for the image style transfer part, which the Neural Storyteller git page told me to download (and that I see the script has as the last line), as well as CUDA, which the script also apparently installs.
Actually, yes, there should be... Here's one, and that just installs Lasagne using one line of code. It depends upon another dockerfile here. Not sure if that's quite what you'll need, but it does list out the necessary dependencies aside from CUDA and the neural-storyteller related files.
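For the record, most of the yum lines above translate almost one-to-one to apt-get. A rough sketch of the equivalents — the package names are from memory, and the CUDA repo .deb filename is a placeholder for whichever installer you actually download from NVIDIA, so treat this as a starting point rather than a tested recipe:

```dockerfile
#Ubuntu rough equivalent of the CentOS7 lines above
RUN apt-get update && apt-get install -y git wget make bzip2
#CUDA: grab the Ubuntu .deb repo installer from NVIDIA first;
#the filename below is a placeholder for whichever version you download
COPY cuda-repo-ubuntu1404_7.0-28_amd64.deb /tmp/
RUN dpkg -i /tmp/cuda-repo-ubuntu1404_7.0-28_amd64.deb && \
    apt-get update && apt-get install -y cuda
ENV PATH=/usr/local/cuda-7.0/bin:$PATH
ENV LD_LIBRARY_PATH=/usr/local/cuda-7.0/lib64:$LD_LIBRARY_PATH
```

The Anaconda, Lasagne, and model-download steps are distro-agnostic and should carry over unchanged.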
So I was just kinda browsing this forum and decided I would make some cards out of the RNN results I liked. I don't really have the patience to run it myself, and I don't know whether or not this one generator site will work on my computer (because the Hearthstone version won't), so I'll just be making cards out of interesting results from other people, starting with the one that got me doing this when I saw one of the generated flavor texts (which this one has, of course).
Awesome stuff. What software are you using for this? It kinda looks like MSE, except the text box is a bit small...
edit: thought I'd share some of my initial neural style experiments! I used the art from Jace, the Living Guildpact with Starry Night as well as a Picasso painting, at two different scales for the styling: normal and half. I'm curious to hear what people think looks better. I really like the simplicity of the full-scale Picasso Jace, honestly.
I literally just used the MTG card maker website. It has limitations (for instance, no hybrid mana), but it's much easier for me than using something that needs cards to be put together piece by piece. So I do have to make a few alterations to perfectly good cards, besides rewording ones that just have unnecessary excess text on them; Saffrollusion, for example, used to use all hybrid mana, but I had to cut it down to just black and red mana.
I also did a couple of others, such as a sliver for Tromple and a pair of other cards I got off of the RoboRosewater Twitter. This also means I can do them a lot faster than some of the other people can, at the cost of maybe having to alter a card to make it work.
*edit: I just found MTG Cardsmith, so now I can do much more advanced creatures (I did most of them while I was tired at night, so I kinda just went for the site I knew). As such, I gave Saffrollusion its proper cost.
I am in love with the Picasso results, those look absolutely amazing.
Visually, I think the less elaborate one looks better when rendered at small sizes (e.g. on a card). At the same time, with regards to the more detailed version, I'm in love with the fine details of Jace's outfit and the visual texture of the reflections on the polished floor. It's also very interesting how it fills in the dark corners and edges with vivid details, it refuses to leave them untouched. Fascinating.
The Picasso one is the best, yeah. I really like how the horns on the helmet of the rightmost statue have blended into the background somehow.
@failbird105, try using a piece of software called Magic Set Editor 2 for making the cards? It's better than all the web-based card generators and a lot more powerful (we even have an RNN set symbol quite a few pages back that someone made, that MSE2 lets you include).
I just downloaded it. It definitely seems a lot more complicated, but it should make better cards than what I've already done, so I'll just post my last group of cards (which were made in MTG Cardsmith) here before continuing to make any new ones.
I should really get that symbol and add it to the set files mtgencode can automatically generate from dumps. Any idea where it is?
Oh, @Failbird, in case you didn't know, that's a thing. My code here can take raw dump files as they come out of the neural network and turn them into a MSE2 set file for you automatically. There's no art (yet) and it doesn't do much to clean up the text, but it might be easier than typing everything in yourself. I'm not sure how hard it is to cut and paste cards between sets - I think it should just work, though I don't know how it handles the art.
I don't actually use the RNN; I'm just fixing up and making cards out of previous results from this thread and the RoboRosewater Twitter. Some of the earlier results are pretty good and just need balancing, or are just good in general. Also, expect a huge WAVE of them soon (and I made a flavor character to go with some of the different cards).
With a ton of Talcos's help, I've managed to get the Neural Storyteller up and running. I tested it using the default settings (as in, all the stories come from a collection of romance novels) on Den Protector and Collected Company, as two first-time random test cards.
Collected Company nearest captions:
A ripe banana with an image of monkey carved into it .
A rack of several bundles of ripe banana 's .
a bunch of bananas are leaning against a wall
A drawing of a group of monkeys on a banana .
Bunches of over ripe bananas hanging next to each other .
Collected Company story:
I had to share the size of bits of fruit , and I felt sick to my core . In a few short sentences , they all looked at each other , as if she were excited by the fact that no matter what she was doing . The last thing I wanted was for him to come out of the box . In fact , I could n't help but watch from the back of my mind . The longer he stood , the more powerful I was . My heart thumped against my ribs and I cast a circle around him . I 'd have no clue what to do with those kinds of ingredients , because they were very rare and collected .
Den Protector nearest captions:
A couple of men standing around a large bird face vase .
A bunch of guys standing around a statue with face .
A man is tying a scarf around the neck of a statue .
The back view of a statue of a man that is holding a hat one hand , with a black bird perched on the head of the statue .
Several people are climbing on a white object .
Den Protector story:
I drew in a few more glances , pretending to be my hero . The man who had captured her was on the other side of the band , and I could n't help but notice that I had no intention of letting him go . In fact , it was as if they were going to tear up every limb from limb . It was a simple matter of hours . In fact , it took a lot of time to figure out what he truly wanted , he murmured under his breath and closed his eyes .
So... the captions show there's a bit of confusion going on here. There's a ton of parameters to tweak, and I want to figure out how to get it trained on fantasy novels rather than romance ones, as well as how to adjust the length, so we can get some Magic-appropriate flavour text going on. But in the meantime, if anyone wants me to generate a romance story of any other image, I can do that.
We'll need a corpus of fantasy literature. We can train an RNN decoder to map thought vectors back to sentences in the fantasy style, sort of like we did for word2vec.
My question is how sensitive it is to stylistic differences in the artwork. How do the nearest captions change when you give it the Picasso or Van Gogh Jace art you produced vs. the regular Jace art?
EDIT: The banana interpretation makes sense to me. I can see the resemblance. The Den Protector interpretations also make sense in that you could say she looks like she's climbing over something, and the network is having to find captions that at least somewhat approximate what's going on. And the banana story is absolutely hilarious.
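For anyone wondering what "nearest captions" means mechanically: the artwork and every caption in the training set get embedded into the same vector space, and retrieval is (roughly) cosine similarity between the image embedding and the caption embeddings. A toy sketch, with made-up low-dimensional vectors standing in for the real skip-thought/image embeddings:

```python
import numpy as np

def nearest_captions(query, caption_vecs, captions, k=3):
    """Rank captions by cosine similarity to the query embedding."""
    q = query / np.linalg.norm(query)
    c = caption_vecs / np.linalg.norm(caption_vecs, axis=1, keepdims=True)
    sims = c @ q
    order = np.argsort(-sims)[:k]
    return [(captions[i], float(sims[i])) for i in order]

# Toy data: three fake caption embeddings and a fake art embedding
captions = ["bananas on a wall", "a statue of a man", "umbrellas on display"]
vecs = np.array([[1.0, 0.1, 0.0], [0.0, 1.0, 0.2], [0.1, 0.0, 1.0]])
art = np.array([0.9, 0.2, 0.1])  # the banana caption ranks first here
```

With the storyteller's real encoder supplying the embeddings, the top-k list is exactly the NEAREST-CAPTIONS block shown in these posts; the "confusion" just means the art's embedding lands near the wrong captions.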
Good idea. Here's the story/captions for Original Jace:
NEAREST-CAPTIONS:
A statue of a man holding an umbrella is pictured .
A statue of a man holding an umbrella on the street .
A colorful umbrella stands apart from a black and white photo of people standing in front of a statue .
a statue of a man holding an umbrella .
A statue is of a man holding an umbrella .
OUTPUT:
Every man possessed a statue of a woman , wanting to tear her eyes open . In fact , it was as if he had no idea what was going on in this town . He did n't even know how to walk through the streets of New York , and they both laughed at the sight of him . He held out his hat and hat . In fact , it was a pleasant sensation that drew a woman 's blood from her body , giving her a chance to learn more than I could count . `` Now , I m the man who kept walking and wearing a moment .
Starry Night Jace:
NEAREST-CAPTIONS:
A beautiful colorful angel portrait in on the back of a vehicle outside with nobody around .
A colorful graffiti painting on the side of a building .
This is a painting of a boy painting .
A group of flowers of various types in a painting .
This mural shows a person drawing a mural .
OUTPUT:
Like the graphic design , I had to tear my eyes open . There was no trace of what I wanted to do with her . In fact , it was one of the most beautiful things I 'd ever seen in a painting , and that may have been a while ago , but it also made me feel better . His chest puffed out as he rose from the bench and began to walk away . I m so sorry for that , Elena thought . It was only a matter of seconds before the Dark Sisters painted it on him .
Picasso Jace:
NEAREST-CAPTIONS:
There are many Japanese patterned umbrella designs arranged
Three umbrellas with unique designs sitting next to each other .
A portrait of Mary mother of Jesus holding him
a decorative umbrella and various other items on display
Foreign language and cartoon characters painted on a bus .
OUTPUT:
Like the most elaborate display of mirrors , I had to put my head back and forth . I wanted to bury my nose in her hair , and make sure she was safe and collected . In fact , it was one of the most beautiful things I 'd ever seen . The only thing that mattered to him in New York City , as well as the art of art . But then he reached for his coat pocket , pulled it out and cast it aside so that I could see what he was saying . She would have been less than a thousand years old , similar to the effect it made .
Wow, that's radically different. Style totally matters. It's hilarious how the Starry Night Jace's story involves some creepy graphics designer, who seems to have gotten involved with 'the dark sisters', whoever they are.
edit: @Failbird, I like those cards. The big-ass cannon seems really overpowered though; a ramp deck could slap that down on turn 5 and ping the opponent for 13 right off the bat. But then again, these are RNN cards
ATTACHMENTS
jaceDefault
jaceStarry
jacePicasso
My LinkedIn profile... thing (I have one of those now!).
My research team's webpage.
The mtg-rnn repo and the mtg-encode repo.
https://youtube.com/watch?v=KM4yI0Q79XM
"Is it Playable?"
If anyone has art style merging requests, I can do those too
Haha, nice!
#Anaconda install
RUN yum install -y bzip2
RUN echo 'export PATH=/opt/conda/bin:$PATH' > /etc/profile.d/conda.sh && \
wget --quiet https://repo.continuum.io/archive/Anaconda2-2.4.0-Linux-x86_64.sh && \
/bin/bash /Anaconda2-2.4.0-Linux-x86_64.sh -b -p /opt/conda && \
rm /Anaconda2-2.4.0-Linux-x86_64.sh && \
/opt/conda/bin/conda install --yes conda==3.18.3
RUN yum install -y atlas-devel
#Lasagne install
RUN /opt/conda/bin/pip install -r https://raw.githubusercontent.com/Lasagne/Lasagne/v0.1/requirements.txt
RUN cd ~; git clone https://github.com/ryankiros/neural-storyteller.git; mkdir models;
RUN cd ~/models; wget http://www.cs.toronto.edu/~rkiros/neural_storyteller.zip; unzip neural_storyteller.zip; rm neural_storyteller.zip
RUN cd ~/models; wget http://www.cs.toronto.edu/~rkiros/models/dictionary.txt
RUN cd ~/models; wget http://www.cs.toronto.edu/~rkiros/models/btable.npy
RUN cd ~/models; wget http://www.cs.toronto.edu/~rkiros/models/uni_skip.npz
RUN cd ~/models; wget http://www.cs.toronto.edu/~rkiros/models/uni_skip.npz.pkl
RUN cd ~/models; wget http://www.cs.toronto.edu/~rkiros/models/bi_skip.npz
RUN cd ~/models; wget http://www.cs.toronto.edu/~rkiros/models/bi_skip.npz.pkl
RUN cd ~/models; wget http://www.cs.toronto.edu/~rkiros/models/utable.npy
RUN cd ~/models; wget https://s3.amazonaws.com/lasagne/recipes/pretrained/imagenet/vgg19.pkl
Oh and for anyone who's interested, I've started a Twitter page with RNN-generated haikus. None of them obey the 5-7-5 rule, but apparently that's more of a suggestion anyway. Some of them are pretty funny.
I should really get that symbol and add it to the set files mtgencode can automatically generate from dumps. Any idea where it is?
Here it is.
My LinkedIn profile... thing (I have one of those now!).
My research team's webpage.
The mtg-rnn repo and the mtg-encode repo.
Collected Company nearest captions:
A ripe banana with an image of monkey carved into it .
A rack of several bundles of ripe banana 's .
a bunch of bananas are leaning against a wall
A drawing of a group of monkeys on a banana .
Bunches of over ripe bananas hanging next to each other .
Collected Company story:
I had to share the size of bits of fruit , and I felt sick to my core . In a few short sentences , they all looked at each other , as if she were excited by the fact that no matter what she was doing . The last thing I wanted was for him to come out of the box . In fact , I could n't help but watch from the back of my mind . The longer he stood , the more powerful I was . My heart thumped against my ribs and I cast a circle around him . I 'd have no clue what to do with those kinds of ingredients , because they were very rare and collected .
Den Protector nearest captions:
A couple of men standing around a large bird face vase .
A bunch of guys standing around a statue with face .
A man is tying a scarf around the neck of a statue .
The back view of a statue of a man that is holding a hat one hand , with a black bird perched on the head of the statue .
Several people are climbing on a white object .
Den Protector story:
I drew in a few more glances , pretending to be my hero . The man who had captured her was on the other side of the band , and I could n't help but notice that I had no intention of letting him go . In fact , it was as if they were going to tear up every limb from limb . It was a simple matter of hours . In fact , it took a lot of time to figure out what he truly wanted , he murmured under his breath and closed his eyes .
So... the captions show there's a bit of confusion going on here. There are a ton of parameters to tweak, and I want to figure out how to train it on fantasy novels rather than romance ones, as well as how to adjust the length, so we can get some Magic-appropriate flavour text going on. But in the meantime, if anyone wants me to generate a romance story for any other image, I can do that.
We'll need a corpus of fantasy literature. We can train an RNN decoder to turn thought vectors back into fantasy-style sentences, with the sentence vectors playing the role the embeddings did in word2vec.
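I don't have the actual pipeline handy, but the nearest-caption step above boils down to ranking caption vectors by cosine similarity against the image vector in the shared embedding space. A toy sketch (the 4-d vectors and captions are made up; the real embeddings have thousands of dimensions):

```python
import math

# Toy version of the nearest-caption lookup: score every caption vector
# against the image vector by cosine similarity and keep the top k.
def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def nearest_captions(image_vec, caption_vecs, captions, k=3):
    ranked = sorted(range(len(captions)),
                    key=lambda i: cosine(image_vec, caption_vecs[i]),
                    reverse=True)
    return [captions[i] for i in ranked[:k]]

# Made-up example data, just to show the mechanics.
captions = ["a bunch of bananas", "a statue of a man", "a colorful mural"]
vecs = [[1.0, 0.1, 0.0, 0.2],
        [0.0, 1.0, 0.3, 0.0],
        [0.2, 0.0, 1.0, 0.5]]
query = [0.9, 0.2, 0.1, 0.1]
print(nearest_captions(query, vecs, captions, k=2))
# -> ['a bunch of bananas', 'a colorful mural']
```

The story generator then conditions on those retrieved captions' thought vectors, which is why a banana-flavoured caption set produces a banana-flavoured story.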
My question is how sensitive it is to stylistic differences in the artwork. How do the nearest captions change when you give it the Picasso or Van Gogh Jace art you produced vs. the regular Jace art?
EDIT: The banana interpretation makes sense to me. I can see the resemblance. The Den Protector interpretations also make sense in that you could say she looks like she's climbing over something, and the network is having to find captions that at least somewhat approximate what's going on. And the banana story is absolutely hilarious.
set one
set two
non-RNN-made Planeswalker
Regular Jace:
NEAREST-CAPTIONS:
A statue of a man holding an umbrella is pictured .
A statue of a man holding an umbrella on the street .
A colorful umbrella stands apart from a black and white photo of people standing in front of a statue .
a statue of a man holding an umbrella .
A statue is of a man holding an umbrella .
OUTPUT:
Every man possessed a statue of a woman , wanting to tear her eyes open . In fact , it was as if he had no idea what was going on in this town . He did n't even know how to walk through the streets of New York , and they both laughed at the sight of him . He held out his hat and hat . In fact , it was a pleasant sensation that drew a woman 's blood from her body , giving her a chance to learn more than I could count . `` Now , I m the man who kept walking and wearing a moment .
Starry Night Jace:
NEAREST-CAPTIONS:
A beautiful colorful angel portrait in on the back of a vehicle outside with nobody around .
A colorful graffiti painting on the side of a building .
This is a painting of a boy painting .
A group of flowers of various types in a painting .
This mural shows a person drawing a mural .
OUTPUT:
Like the graphic design , I had to tear my eyes open . There was no trace of what I wanted to do with her . In fact , it was one of the most beautiful things I 'd ever seen in a painting , and that may have been a while ago , but it also made me feel better . His chest puffed out as he rose from the bench and began to walk away . I m so sorry for that , Elena thought . It was only a matter of seconds before the Dark Sisters painted it on him .
Picasso Jace:
NEAREST-CAPTIONS:
There are many Japanese patterned umbrella designs arranged
Three umbrellas with unique designs sitting next to each other .
A portrait of Mary mother of Jesus holding him
a decorative umbrella and various other items on display
Foreign language and cartoon characters painted on a bus .
OUTPUT:
Like the most elaborate display of mirrors , I had to put my head back and forth . I wanted to bury my nose in her hair , and make sure she was safe and collected . In fact , it was one of the most beautiful things I 'd ever seen . The only thing that mattered to him in New York City , as well as the art of art . But then he reached for his coat pocket , pulled it out and cast it aside so that I could see what he was saying . She would have been less than a thousand years old , similar to the effect it made .
Wow, that's radically different. Style totally matters. It's hilarious how the Starry Night Jace's story involves some creepy graphic designer who seems to have gotten involved with 'the Dark Sisters', whoever they are.
EDIT: @Failbird, I like those cards. The big-ass cannon seems really overpowered, though; a ramp deck could slap it down on turn 5 and ping the opponent for 13 right off the bat. But then again, these are RNN cards.