I like your new format. I have a question: the choice spells are formatted as [#ofchoices =choice =choice]. How will this hold up to the new confluence cycle? Will it need custom processing? Maybe each choice can be selected more than once: [3 =option =option =option]
Yes, the current syntax doesn't really have a great way of representing that. I suspect that, given the different wording, the encoding will just leave it alone and not put it into [ # = choice ] syntax at all. The clever thing to do is probably to make the choice syntax more English-friendly so there's some general way to attach modifiers to it.
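For what it's worth, here's a minimal sketch of how such a bracketed choice clause might be parsed. The `[N =choice =choice]` shape is my reading of the format described above; the trailing `*` modifier for "may pick the same choice more than once" is a hypothetical extension, not part of the current encoding:

```python
import re

# Hypothetical parser for the choice syntax: [N =choice =choice ...]
# An optional "*" after the count marks "choices may be selected more
# than once" -- this modifier is an assumption, not the real format.
CHOICE_RE = re.compile(r"\[(\d+)(\*?)((?:\s*=[^=\]]+)+)\]")

def parse_choice(text):
    m = CHOICE_RE.search(text)
    if not m:
        return None
    count, repeatable, body = m.groups()
    options = [opt.strip() for opt in body.split("=") if opt.strip()]
    return {
        "picks": int(count),
        "repeatable": repeatable == "*",
        "options": options,
    }

print(parse_choice("[3* =option a =option b =option c]"))
# -> {'picks': 3, 'repeatable': True, 'options': ['option a', 'option b', 'option c']}
```

A general modifier slot like this would let the decoder attach wording such as "you may choose the same mode more than once" without inventing a whole new bracket form.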
EDIT: Talcos, have you been posting things on reddit again? The number of stars I have on mtgencode has gone way up in the past couple of days, and in the past that's been tied to the free advertising you provide on reddit.
Very interesting! I haven't mentioned it specifically, but I did make some art posts involving the style transfer and they might have recognized me as "the Magic Neural Network Guy", did a Google search, and fell into this thread and then found their way to the github. That could be a contributing factor.
Oh, wait, there's also the fact that I gave that big talk to all the computer science undergraduate students about the project. They would have been more likely to seek out the github page. I mentioned you by name and showered praise on you. That might be why.
I thought of this as well, though I'd have expected the spike to be closer to when you gave the talk.
In any case, I certainly won't complain about the free publicity.
I've done some work on expressing my architecture in terms of tensors. This shooooooould make it possible to express the training process in purely vectorizable terms, which, when combined with either NumPy or the new SIMD support in PyPy, should give my code a shot in the arm. Or I could try porting the same algorithm to Rust or Haskell.
The major roadblock is translating the new high-level logic back into tree form and figuring out how to take gradients of this stuff. I haven't really tried to do that yet, but it seems a lot less painful than figuring out tensors was. It's incredibly hard to find a good, relevant tensor tutorial. I'd like to try to rectify that, but I'm not sure I'd do any better.
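As an illustration of the kind of win I'm hoping for (a toy example, not my actual architecture), here's the same batch of updates written as an explicit Python loop and as a single vectorized matrix product in NumPy:

```python
import numpy as np

# Toy comparison: a batch of linear-layer applications, first as an
# explicit Python loop, then as one matrix-matrix product. The sizes
# are arbitrary; the point is that both give identical results while
# the vectorized form hands all the work to optimized BLAS at once.
rng = np.random.default_rng(0)
W = rng.standard_normal((64, 32))   # weight matrix
X = rng.standard_normal((128, 32))  # 128 input vectors of size 32

# Looped version: one matrix-vector product per input.
looped = np.stack([W @ x for x in X])

# Vectorized version: one product for the whole batch.
vectorized = X @ W.T

assert np.allclose(looped, vectorized)
print(vectorized.shape)  # (128, 64)
```

The hard part, as noted above, is expressing the *training* pass (not just the forward pass) in this all-at-once style while still being able to take gradients.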
So I just tried to whisper to my network with bodytext_prepend using the new format. It was prepending rebound into the name field. Does sample_hs_v3.lua map to the labeled fields?
I've not worked much with Rust, but I recall having lunch with some people who did development for the language, and they had nothing but good things to say about it.
Well, part of the problem with fields like machine learning is that it's a relatively new area (especially deep learning), and you have lots of people from different backgrounds using different vocabularies: statisticians, AI researchers, mathematicians of different stripes, and then a potpourri of folks from every discipline trying to take the tools and techniques and apply them to their particular problem domains. The textbooks are still being written for a lot of this stuff. I'd say give it another decade; maybe by then the dust will have settled and you'll be able to find good tutorials for everything, lol.
EDIT: I shared the text-captioning-style-transfer with everyone in my lab at our weekly group meeting today. A lot of good discussion was had. I'm going to have to get everything set up so that I can do some tests with it.
No, I believe it just counts '|' characters and assumes the fields are ordered according to 'old' or 'noname'. In theory it will still work if you give it the name of the field in the right position, and remember to include the identifier number first or you'll confuse the network a lot. That's another thing that I should rewrite and maintain with the rest of the mtg-rnn codebase. I probably won't get around to that for a few days though.
Talcos, how hard is it to change that script? Ideally it could have a setting to look for identifier characters ('0', '1', '2', etc.); then you could just tell it a sequence of identifiers and what you want it to insert for each one, and it wouldn't need to know the order ahead of time.
It wouldn't be very difficult. It's very bare-bones. You're correct in your interpretation of how it works.
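The change being asked for could look something like this (a hypothetical sketch in Python rather than Lua, and not the actual sample_hs_v3.lua logic): scan the '|'-separated fields for their one-character identifiers and splice the whispered text into the requested field, wherever it appears.

```python
# Hypothetical field-order-agnostic whispering. Each field in the
# encoded card starts with a one-character identifier digit; the
# identifier assignments here are illustrative -- check mtgencode's
# format documentation for the real ones.
def whisper(encoded_card, inserts):
    """inserts maps identifier char -> text to prepend to that field."""
    fields = encoded_card.split("|")
    out = []
    for field in fields:
        ident = field[:1]
        if ident in inserts:
            field = ident + inserts[ident] + field[1:]
        out.append(field)
    return "|".join(out)

card = "|1name|2type|3rules|"
print(whisper(card, {"3": "rebound "}))  # -> |1name|2type|3rebound rules|
```

With something like this, the script never needs to assume 'old' or 'noname' ordering; it only needs the identifier of the field you want to seed.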
I really mean tensors in general. The basic idea is that they're multidimensional, but it's possible for the dimensions to either cancel or multiply independently, and which actually happens is a property of the tensors involved. That's more than you'll get out of some tutorials, but it's still not enough. I'm not sure how to express the relevant insights.
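The "cancel or multiply" behavior is exactly what index notation makes explicit. A quick NumPy illustration: a shared index that gets summed over contracts away (the dimensions cancel), while free indices multiply out into a higher-rank result:

```python
import numpy as np

a = np.ones((3, 4))
b = np.ones((4, 5))

# Shared index j is summed over: (3,4) x (4,5) -> (3,5).
# This is ordinary matrix multiplication; the j dimension cancels.
contracted = np.einsum("ij,jk->ik", a, b)

# No shared summed index: every dimension survives independently,
# giving a rank-4 tensor of shape (3,4,4,5) -- an outer product.
outer = np.einsum("ij,kl->ijkl", a, b)

print(contracted.shape)  # (3, 5)
print(outer.shape)       # (3, 4, 4, 5)
```

Whether two tensors contract or expand is entirely a question of which indices they share and which are summed, which is the property I was gesturing at above.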
So I figured it out: the text box is the rarity field. I am generating 100k dumps for both a network trained on only spells and one trained on all cards, generating dumps for:
flashback
transmute
uncast
evoke (all cards network only)
flash (also all cards network only)
They will be on my drive within the next hour or two (as long as my daisy-chaining of ";"s doesn't blow up).
I just reread the Black Prism books. One of the setting details is a card game that's a fusion between tarot and MtG.
Would you mind whispering "Andross the Red" and "The Lightbringer" and "The Color Prince’s Rifle" to the net as names, and “If Lightsplitter, grants invisibility except against sub-red and superviolet.” as rules text? I'm curious what the results will be.
I tried that for you. The "the" in the name usually makes the cards legendary, from what I can tell. The Rifle is usually a creature (because the network turns "rifle" into "rifler", an agent noun, since the network has never heard of a rifle before). Nothing especially interesting with regards to the body text, insofar as it's a complete sentence and the network can just move on from it. One humorous result was...
The Colored Prince's Rifling 1BB
Instant (Mythic Rare)
You may shuffle the cards in your hand rather than pay The Colored Prince's Rifling's mana cost.
You draw four cards and lose nine life.
If Lightsplitter, grants invisibility except against sub-red and superviolet.
Yes, you read that right. You can just rearrange the cards in your hand and that pays for the mana cost.
However, there's a lot you can do by just throwing relevant words into the body text and seeing how the network copes. For example:
"superviolet permanents you control have hexproof."
"superviolet ~ when @ enters the battlefield, each opponent loses &^^ life."
"if lightsplitter, it has wither as long as you control a forest or a flame."
"if you control a sub-red creature named Devoured Flugger,"
"whenever a superviolet creature is dealt damage, put a +&^/+&^ counter on @."
"lightsplitter players can't have hexproof." (That'll learn 'em)
--------------------
I was trying to get the neural-storyteller stuff working, but I ran into a disk space issue. The partition set aside for the virtual machines is too small (I need like 16-18 gigabytes for all the libraries and network models, and I can only get around 8 gigabytes), and I've emailed the people in charge of the machine to see about fixing that issue (we have no shortage of disk space, I assure you).
I really, really want to try that out. You can work with any kind of text, so there's great comedic potential. Aside from Magic flavor text, I'd love to try to generate news stories from images.
I have finished the HTML spoiler feature for decode.py, examples are in my drive in the new cards folder for the whispered data. I've submitted it to be pulled into Hardcast's repo and he has assured me he will when he has the chance.
Things I would still like to do with this, though others can help (in order of importance, I feel):
Have sorting on the cards, probably borrowing heavily from sortcards.py
Have a hover panel with dumped data for each card (vdump data, creativity info, maybe even a text box with forum code)
Color the borders of the cards according to their card colors (gradient borders with CSS3)
Maybe generate a folder with files of specific types of cards in them so it can load faster (generate an index file with a nav bar and subfiles displayed in an iframe, maybe)
Google just released their new machine learning system, TensorFlow, as an open-source project under the Apache 2.0 license. It looks like it has a bunch of example model architectures, and soon they'll be releasing their bleeding-edge ImageNet computer vision model as well.
Woot! Thanks. I'll definitely take a look at that later.
I know! I was just reading about that this morning. It's exciting to have a fast, flexible machine learning library that has the backing of a powerful company like Google. And I'm really excited that it has a graph visualization tool, TensorBoard. With Torch I can get visualizations, but they're just static diagrams that I eyeball whenever something goes horribly wrong. With TensorBoard, I can actually see what's going on in a very interactive way; that'll make neural net surgery go much more smoothly.
Seriously, look at that image I've attached! That's awesome! I'm very tempted to move my dissertation research over to TensorFlow, if only for that reason.
Holy carp, that's awesome. As soon as I get my upgraded GPU I'll be playing around with that for sure... or even before that, as it looks like the website has some explanation and tutorials that are really really helpful. I love Google's habit of open-sourcing everything.
I think I solved the disk space allocation problem. I'll probably have some issues here and there getting all the libraries configured for the neural-storyteller, but that shouldn't be too much trouble.
It'll be just a bit before I do that though. I just spent the last 72 hours producing a review of a submission to an international journal (I was called in because evidently I'm one of the few who would be familiar with the topic covered in the paper). But I promise that I'll get right on it, haha.
Alright, html spoilers are pushed to github, huge thanks to reimannsum for figuring most of it out. I'm not an html wizard, so if anyone has any complaints / suggestions I'm happy to hear them. There are a few tweaks I want to add myself, mostly a hover-over spoiler (like the [ card ] tag on here) to make the closest-card and closest-name info more useful.
An example spoiler is here; it's another 10MB dump from my most recent size-768 network, sampled at temperature 0.8.
EDIT: I added some new technology to display visual spoilers on hovering over the names of the nearest cards by function or by name, as they appear in the creativity analysis. Currently I'm working on computing those statistics for my existing dumps (I have 60MB total), which is going to take a while even parallelized on my 8-thread Intel machine. I'll of course post all of those once they're done cooking. Hooray for strong processors - if only Python were as easy to accelerate on a GPU as the training of neural nets.
EDIT2: After several attempts, this version of my parallel computation of the distance metrics actually makes things faster! Here are the numbers from my i7-3770k (4 cores, 8 threads) processing 960 cards:
Serial:
real 14m14.701s
user 14m13.521s
sys 0m0.284s
Parallel:
real 3m25.739s
user 24m52.410s
sys 0m3.636s
As you can see, we actually get about a 4.15x speedup in real time - better than linear! - due to the miracles of Intel's hyperthreading. The total user time increases by almost a factor of two, though, because the threads have to wait for each other to share a smaller number of physical cores.
Anyway, it's great to see numbers like that for real applications. Due to the intricacies of Python's Global Interpreter Lock and the overhead of creating multiple processes to get around it, I had to jump through some hoops to exploit parallelism as coarsely as possible. But, it is definitely cool to have a high level language that lets me express parallelism easily when I want it, even if there are some conditions attached to take full advantage of it.
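The coarse-grained pattern I ended up with is roughly this (a simplified sketch; the real distance-metric code is much more involved): hand the card list to a process pool in big chunks, so the cost of spawning processes and pickling arguments is paid a handful of times rather than once per card.

```python
from multiprocessing import Pool

# Stand-in for the real per-card distance computation; the actual
# metric code is far more expensive, which is what makes the
# process-spawning overhead worth paying.
def card_distances(card):
    return sum(ord(c) for c in card) % 100

def compute_all(cards, workers=8):
    # One Pool, coarse chunks: each worker process receives a large
    # slice of the card list, amortizing IPC overhead over many cards.
    chunk = max(1, len(cards) // workers)
    with Pool(workers) as pool:
        return pool.map(card_distances, cards, chunksize=chunk)

if __name__ == "__main__":
    cards = ["card%d" % i for i in range(960)]
    results = compute_all(cards)
    print(len(results))  # 960
```

Because each chunk runs in a separate process, the GIL never serializes the heavy work; the main process only pays for distributing inputs and collecting results.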
Things I would still like to do with this, though others can help (in order of importance, I feel):
Have sorting on the cards, probably borrowing heavily from sortcards.py
Have a hover panel with dumped data for each card (vdump data, creativity info, maybe even a text box with forum code)
Color the borders of the cards according to their card colors (gradient borders with CSS3)
Maybe generate a folder with files of specific types of cards in them so it can load faster (generate an index file with a nav bar and subfiles displayed in an iframe, maybe)
Quick comments on these ideas.
The second is already implemented, just not as a hover panel. I just stick all of the additional info directly into the div, including creativity distance metrics with visual spoilers on hover, and a forum-encoded spoiler you can copy to share cards on here if you ask for it with -f. I think it's ok as it is, but I have many enormous monitors to display lots of large divs. If anyone has a better way to format things, I'm open to ideas.
The third is something I think I know how to do (not gradients :p) but I haven't gotten around to yet. Basically you make a few little style classes that change the color of the border and specify one of them based on what colors the card thinks it is.
Sorting and folder structure will both also be useful. Now that we've got human-readable output pretty much under control (the HTML spoilers are awesome to read) I'll be switching most of my attention to the very vague problem of evaluating the neural nets, which will certainly involve some sorting. As usual I'll post any major updates here, and probably stick the code in the scripts/ folder on github once it's stable.
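The border-coloring idea could be as simple as this (class names and the multicolor/colorless fallbacks are hypothetical, not what decode.py actually emits): define one CSS class per color and pick the class when writing each card's div.

```python
# Hypothetical mapping from a card's computed colors to a CSS class.
# The class names and the gold/colorless fallbacks are assumptions
# for illustration only.
BORDER_CLASSES = {
    "W": "border-white", "U": "border-blue", "B": "border-black",
    "R": "border-red", "G": "border-green",
}

def border_class(colors):
    """colors is a string like 'W', 'UB', or '' for colorless."""
    if len(colors) == 1:
        return BORDER_CLASSES.get(colors, "border-colorless")
    if len(colors) > 1:
        return "border-gold"   # multicolor cards get a gold border
    return "border-colorless"

def card_div(name, colors):
    return '<div class="card %s">%s</div>' % (border_class(colors), name)

print(card_div("fire spring", "R"))    # mono-red border class
print(card_div("brand of light", ""))  # colorless fallback class
```

The stylesheet would then only need a few short rules (e.g. `.border-red { border-color: #d3202a; }`) rather than per-card inline styles.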
I was browsing Hardcast's example html dump (which is awesome) and I found a very interesting but sadly slightly mangled card:
brand of light 2
artifact (uncommon)
5, T: target player chooses a creature type. if a creature card is revealed this way, @ deals damage to each player equal to the number of times it was kicked.
multikicker 2
If it instead said "5, T: target player chooses a creature type. @ deals damage to each creature of that type equal to the number of times it was kicked." it would be fantastic. RNNs may not always design perfect cards but they sure give human designers really interesting ideas.
First full spoiler with copyable forum code and hoverable names is here.
As a warning, if you open that file in a browser it will spin forever trying to preload 500,000 images from magiccards.info. I'll look into a fix for that (namely, loading the images on demand) and improvements to general readability. Any feedback is appreciated.
Awesome! Love it.
-----
Still working out some technical snags with the whole neural-storyteller thing. I have a few deadlines this week, so I'm fitting in time to work on it in between everything else, but I'll get it figured out.
Oh, and yet another breakthrough that could benefit us. Mansimov et al. of the University of Toronto put out a paper entitled "Generating Images From Captions with Attention" (link to arxiv page). The results are proof-of-concept at this point; I'd like to see how well it scales to larger image sizes, without the upscaling techniques suggested in the paper. Anyway, what's fascinating is that it can generate images from captions while taking into consideration...
* The background ("A rider on a blue motorcycle in the *desert*.")
* The foreground ("A *rider* on a blue motorcycle in the desert.")
* Different objects in the scene ("A rider on a blue *motorcycle* in the desert.")
* The qualities of the objects in the scene ("A rider on a *blue* motorcycle in the desert.")
That more or less solves the problem of generating art, assuming we can come up with a caption or description of what we want (and that seems do-able). If you can at least get a well-composed image as a starting point, then style-transfer-like techniques can fill in missing details and add embellishments.
Oy, these papers are coming out faster than I can analyze them, haha. Exciting stuff though.
Oh, and on that note, if you haven't seen Facebook's AI tech demo video on Youtube, I'd recommend taking a look. For those of you who have been following along with this thread, you might recognize the sorts of techniques we've been discussing and how they're being put to use by the company.
Wow, I opened that in my browser and it was not happy. The huge file size combined with the 500k images took a bit of a toll. I wish I knew the code to make it load images on demand only... Apart from that, it's a terrific improvement being able to see exactly what cards the network takes 'inspiration' from.
My LinkedIn profile... thing (I have one of those now!).
My research team's webpage.
The mtg-rnn repo and the mtgencode repo.
Oh really? Did not know that. That might also be a contributing factor.
Of course! Go right ahead. I don't mind at all.
havengul nightpine 1WB
enchantment (common)
pay 3 life: you gain 2 life.
So good with Karlov of the Ghost Council.
EDIT: and this flavor win:
kabara, goblin rider 4RR
creature ~ goblin rogue (rare)
trample
hexproof
sacrifice a creature: @ gets +2/+0 until end of turn.
(5/2)
fire spring 1R
sorcery (common)
return target creature card from your graveyard to your hand. if @ was kicked, destroy all lands instead.
kicker R