Oh, and Talcos: Would it be possible to get it to generate a "texture" that looks like a background (forest, desert, ocean, etc.) and then using a separate process (the previous ones you've been using) generate a creature on top?
I think so, but I'd need to do some testing. That's one of the many questions that I have.
It's clear that we can hallucinate content, just as we saw with Google's DeepDream, but we tend to get results that are very chaotic and unplanned. That's the nature of dreams. I'm curious as to what kinds of tricks we could use to give the network more lucidity and coherence. The results don't have to be perfect because we can apply a style transfer afterwards; we can smooth over minor irregularities when we bring the image in line with the style we want.
Just speculating here, but I'm pretty sure we could do the image creation and style transfer at the same time. One could guide the image creation to match a specified style (if we were hallucinating the content) or content (if we were hallucinating the style).
I agree completely. I was just imagining these steps as separate for the sake of simplicity; I'm sure an efficient pipeline could be constructed by folding these processes together.
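The folding-together could be sketched roughly like this: optimize a single image against a weighted content term plus a Gram-matrix style term in one loop. This is a toy stand-in, not the real pipeline: raw pixels play the role of the CNN feature maps a Gatys-style system would use, and all weights and sizes are illustrative.

```python
import numpy as np

def gram(feats):
    # Style is captured by feature correlations (a Gram matrix).
    return feats @ feats.T / feats.shape[1]

def transfer(content, style, alpha=1.0, beta=1e-2, steps=200, lr=0.1):
    """Jointly descend on a content term and a style term for one image."""
    x = content.copy()
    for _ in range(steps):
        # Content term pulls pixels toward the content image...
        g_content = 2 * (x - content)
        # ...while the style term pulls the image's Gram matrix
        # toward the style image's Gram matrix.
        g_diff = gram(x) - gram(style)
        g_style = 4 * (g_diff @ x) / x.shape[1]
        x -= lr * (alpha * g_content + beta * g_style)
    return x
```

Guiding hallucinated content toward a style (or vice versa) is then just a matter of which term you weight more heavily.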
So with a batch size of 35 I can train my 512x3 network in 2.5 hours!! I'm ecstatic. I need to figure out tests to run now that I can train this quickly. Maybe I'll test which order the fields go best in. That way, if I want to generate creatures, I can have the type field come first so the whole card is defined by the characteristic I choose; if I decide I want a certain mana cost, I'd need a network where mana is the first field, so the card doesn't have to bend what it was planning on making to fit my whispering. I also set up my network nodes to receive SSH tunnels so I can control them all remotely. I've thought about building a cron job that would push info from the RPi to the network nodes, so I can push new commands and generate new data that gets pushed back.
You should be! That's awesome!
---
By the way, I made some progress this evening on getting everything set up with that latest art generation code. I still have some minor configuration issues to work out; hopefully I'll have those resolved before too long (and then I'll have pictures to share with you).
Of course, I have several other pressing objectives at the moment which may delay me somewhat. In particular, I'm having to revise a grant proposal that's due by Friday. Long story short, I have to convince NASA that if they want to tame Mars, they will need to give my people the money needed to do vital investigations. As the great^17-grand-student of Isaac Newton and the great^19-grand-student of Galileo Galilei, I like to think that I'm doing my part to advance my ancestors' research.
But soon! I promise.
I don't know if it's relevant to anyone here, or if Google actually wrote a technical paper on it, but just something to share: link. As the article stated, it's coming out soon.
I heard about that!
The mapping of the e-mail to a "thought vector" is logically/mathematically similar to what we did with word2vec and Magic cards. You take your input (a Magic card, e-mail, etc.), and encode it as a series of numbers that captures the "essence" of the input, such that semantically similar inputs give you similar looking numbers. Meanwhile, the neural architecture is LSTM-based, much like what we've been using. Evidently they have incorporated some tricks to get them to remember long-term dependencies better - I'd be interested to learn more about that.
Suggesting a response to an e-mail is a good example of a problem that is easy to describe and easy for a human to do, but for which it is incredibly difficult for a human to come up with a series of rules. But when you have tons of data (e.g. e-mails and responses) like Google does, advancements in machine learning have made it possible to teach a system to solve the problem, discovering the countless thousands of rules on its own. That's why companies like Google are buying out all of the AI/ML start-up companies that they can and pulling in all the available talent in academia. They can now rapidly roll out solutions to very complex problems with a minimal investment of time and labor. That's a game-changer.
All of the things that we've discussed here over the past few months will be commonplace in the very near future. It's going to be a very disruptive force in a lot of different industries.
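For readers new to the "thought vector" idea described above: the essence is that similar inputs get nearby vectors, which you can check with cosine similarity. The vectors below are made up purely for illustration; a real system would get them from word2vec or an LSTM encoder.

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity: 1.0 for identical directions, 0.0 for orthogonal.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical 3-d "thought vectors" for a few cards (illustrative only).
cards = {
    "lightning bolt": np.array([0.9, 0.1, 0.0]),  # cheap burn
    "shock":          np.array([0.8, 0.2, 0.1]),  # cheap burn
    "counterspell":   np.array([0.0, 0.9, 0.3]),  # permission
}
```

With such an encoding, the two burn spells score close to 1.0 against each other, while burn vs. permission scores much lower, which is exactly the property that makes response suggestion (or card comparison) workable.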
So is anyone interested in what I have done with my Raspberry Pi? Would anyone like a tutorial on how to set every thing up?
I'm interested in learning more, though I don't think I'll be putting that knowledge to use in the near future. In any case, I'm all for making information available so that others can follow in our footsteps. If you have the time to spare to put up at least a brief tutorial, I'd say go for it. I can put a link to the tutorial on the first post.
The mapping of the e-mail to a "thought vector" is logically/mathematically similar to what we did with word2vec and Magic cards. You take your input (a Magic card, e-mail, etc.), and encode it as a series of numbers that captures the "essence" of the input, such that semantically similar inputs give you similar looking numbers. Meanwhile, the neural architecture is LSTM-based, much like what we've been using. Evidently they have incorporated some tricks to get them to remember long-term dependencies better - I'd be interested to learn more about that.
They also seem to have utilized the semantic vectors of the responses themselves as a pre-filter, so they wouldn't converge too much - you want options that, while meaningful to the context, have some degree of variation.
Oh! soo close! This sounds like it would have been interesting.
skarrg, god of preging (mythic rare) RG
legendary creature ~ human artificer 1BR, T: target creature gets +2/+2 until end of turn. note the number of creatures on the battlefield with a +1/+1 counter on it and put it onto the battlefield. if you do, shuffle your library.
(2/2)
Well, apart from having the rarity backwards, I think the networks are getting the hang of making cards that, mostly, make sense.
They also seem to have utilized the semantic vectors of the responses themselves as a pre-filter, so they wouldn't converge too much - you want options that, while meaningful to the context, have some degree of variation.
Exactly. We want exactly the same things here - new cards that are similar enough to existing cards to be plausible, but different enough to be interesting.
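One way to sketch that kind of pre-filter: greedily keep candidates whose vectors aren't too close to anything already kept, so the surviving options are relevant but varied. The threshold and the vectors themselves are placeholders, not anything from Google's system.

```python
import numpy as np

def diverse_subset(vectors, max_sim=0.95):
    """Greedily drop candidates too similar (by cosine) to ones already kept."""
    kept = []
    for v in vectors:
        unit = v / np.linalg.norm(v)
        # Keep this candidate only if it's sufficiently different
        # from everything kept so far.
        if all(float(unit @ k) < max_sim for k in kept):
            kept.append(unit)
    return kept
```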
I'm hoping I can address this a bit with my class project. The big question is, how do you define 'different enough,' and then once you do, how do you provide it? One option is just to build a filter, as Google is doing. I can do a similar thing with my own word2vec data, basically just taking all of the cards from a dump and separating out the ones that are too similar to existing cards or are obviously invalid. The hardest part there is trying to identify word salad.
Really what you want, though, is to somehow feed this information back into the training process, and I have no idea how to do that, other than by looking for parameters that produce good output. The thing with the word2vec distance is that although it does seem to give us a pretty good idea of how creative a card is, it's very expensive to compute.
I don't have much else to contribute now, but now that I've run into a snag of sorts in my other project, I might turn my attention back to this area.
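The filter idea could look something like this: embed each generated card, find its nearest real card by cosine similarity, and keep only cards in a middle band, neither near-copies nor (hopefully) word salad. The thresholds and vectors here are invented for illustration.

```python
import numpy as np

def novelty(card_vec, real_vecs):
    """Cosine distance from a generated card to its nearest real card."""
    sims = real_vecs @ card_vec / (
        np.linalg.norm(real_vecs, axis=1) * np.linalg.norm(card_vec))
    return 1.0 - float(sims.max())

def keep(card_vec, real_vecs, lo=0.05, hi=0.6):
    # Too close to an existing card -> boring; too far -> likely invalid.
    return lo <= novelty(card_vec, real_vecs) <= hi
```

Word salad is the weak spot: a sufficiently garbled card can still land in the middle band, which is why the hard part is a second check on validity rather than just distance.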
So I was looking through and I found a transform card that correctly uses X!!
ruthless exwopter (rare) X
artifact creature ~ construct
@ enters the battlefield with X +1/+1 counters on it.
at the beginning of each upkeep, if no spells were cast last turn, transform @.
(0/0)
~~~~~~~~
ravenous replica (uncommon) 8
artifact creature ~ thopter
flying U: @ gets +2/+2 until end of turn.
(7/7)
probably worth being a mythic with how powerful it is, but it isn't broken
Uh, yeah, it's totally broken. Drop it for 1 first turn, attack for 8 turn two if they don't have a 1-drop, turn three if they do. That's pretty broken in my mind.
I swear that when I looked at it, it was costed XX; that way you couldn't drop it turn 1.
------------------------------------------------------------------------
anyone ever see this error before?
~/torch/install/bin/luajit: ./util/CharSplitLMMinibatchLoader.lua:175: attempt to index global 'f' (a nil value)
stack traceback:
./util/CharSplitLMMinibatchLoader.lua:175: in function 'text_to_tensor'
./util/CharSplitLMMinibatchLoader.lua:57: in function 'create'
train.lua:113: in main chunk
[C]: in function 'dofile'
...~/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:133: in main chunk
[C]: at 0x00405ea0
--------------------------------------
nevermind, I had inputs.txt instead of input.txt
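For posterity: the crash above comes from the Lua loader indexing a nil file handle when the expected input.txt isn't found. A cheap pre-flight guard avoids the cryptic traceback; this sketch is in Python (matching mtgencode), with the data layout assumed to be char-rnn's usual data_dir/input.txt.

```python
import os
import sys

def check_data_dir(data_dir):
    """Fail fast with a readable message if the training data is misnamed."""
    path = os.path.join(data_dir, "input.txt")
    if not os.path.isfile(path):
        # e.g. inputs.txt instead of input.txt, as happened above
        sys.exit("expected training data at %s (check the filename)" % path)
    return path
```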
The big question is, how do you define 'different enough,' and then once you do, how do you provide it? One option is just to build a filter, as Google is doing. I can do a similar thing with my own word2vec data, basically just taking all of the cards from a dump and separating out the ones that are too similar to existing cards or are obviously invalid. The hardest part there is trying to identify word salad.
Could you measure the accumulated likelihood of the activated cells during a given card's generation? It wouldn't give you how creative a card is, but it would give you (after some normalization) how far a given card is from the most-common baseline card.
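Concretely, one reading of this idea: during sampling, sum the log-probability the net assigned to each character it actually emitted, then normalize by length. Very negative scores mean the card is far from the model's most-common baseline. The probability arrays here are placeholders for the net's per-step softmax output.

```python
import numpy as np

def sequence_score(step_probs, chosen):
    """Mean log-likelihood per generated character.

    step_probs: one probability distribution per generated character
    chosen: index of the character actually sampled at each step
    """
    logps = [np.log(p[i]) for p, i in zip(step_probs, chosen)]
    return sum(logps) / len(logps)
```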
Lately I've been super busy with big (but exciting!) changes, including a new job and getting engaged, but pretty much every free chance I get, I've been thinking about the problem/idea of training a net to read the card forwards and backwards while still being generative. Recently, though, I've been taking a new approach, and I wanted to see what you guys think. Let's say you're reading a book which has the following sentence: "The hen house is red". As you read it, each word changes your understanding. You read "The hen" and get an image of a hen, because at this point you think it's a noun. However, once you get to "The hen house", you realize hen is an adjective, and so the meaning is different. If by the time you finish the sentence you're confused, you wouldn't read it backwards ("red is house hen The", ah, now it makes perfect sense), but instead you read it forwards again while keeping in mind what you thought the first time you read it.
So basically the idea is this: Take the normal net (net1) structure and do a forward pass. Then do the same thing with a different net (net2), except the hidden states in net2 take the final hidden state from net1 as additional input. Thoughts?
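The wiring of that idea can be sketched with two tiny vanilla RNN cells: net2 receives net1's final hidden state concatenated onto its input at every step. Weights are random and sizes arbitrary; this only shows the plumbing, not a trained model.

```python
import numpy as np

rng = np.random.default_rng(0)
H, V = 16, 32  # hidden size, vocab size (illustrative)

def init(extra=0):
    # A bare tanh RNN cell; `extra` widens the input for conditioning.
    return {"Wx": rng.normal(0, 0.1, (H, V + extra)),
            "Wh": rng.normal(0, 0.1, (H, H))}

def run(net, seq, context=None):
    h = np.zeros(H)
    for x in seq:
        inp = x if context is None else np.concatenate([x, context])
        h = np.tanh(net["Wx"] @ inp + net["Wh"] @ h)
    return h

net1 = init()
net2 = init(extra=H)               # net2's input is widened by net1's state
seq = [np.eye(V)[i] for i in (3, 7, 1)]
summary = run(net1, seq)           # first read: the rough impression
refined = run(net2, seq, summary)  # second read, conditioned on the first
```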
Could you measure the accumulated likelihood of the activated cells during a given card's generation? It wouldn't give you how creative a card is, but it would give you (after some normalization) how far a given card is from the most-common baseline card.
I'm trying to make sure that I follow you. Are you suggesting taking the sequence of activations that occur when the network generates a novel card and comparing them to a table of real cards and the activation sequences associated with them?
If that's what you mean, then yes, in principle you could. It'd be similar to the work of Nishimoto et al. in 2011 where they did something similar with human test subjects. The participants watched several hours of movie trailers while researchers recorded their brain activity. The researchers then trained a model that mapped brain activity data to images. They then were able to use that model to reconstruct what a participant was seeing or imagining at any moment, even when they were seeing/imagining novel things*. Here's a link to a video of some examples. That work has been cited over 200 times since then, so the research in that area is very much ongoing and active. With it, one could literally turn dreams into realities.
Anyway, I'd imagine that something similar could be done for artificial neural networks. We could take a trained model, capture the activations that occur as we present real cards to it, and then we could use that data to analyze the directions that the network is going in when it generates cards (e.g. where is it drawing inspiration from?). It'd be more computationally expensive than embedding the novel card into a vector and then comparing vectors, but we'd get more in-depth results.
* EDIT: I should note that that's not quite what they did in that particular study, but I'm summarizing the stuff that they and others are doing with that technology.
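A minimal sketch of the activation-analysis idea: record the hidden states a trained net produces for each real card, average them into one "fingerprint" vector per card, then see which real cards a generated card's activations most resemble. The states here are stand-ins for a real model's.

```python
import numpy as np

def fingerprint(hidden_states):
    # One summary vector per card: the mean of its per-step hidden states.
    return np.mean(hidden_states, axis=0)

def nearest_inspirations(gen_states, real_prints, k=3):
    """Rank real cards by how close their fingerprints are to a generated card's."""
    f = fingerprint(gen_states)
    dists = {name: float(np.linalg.norm(f - p))
             for name, p in real_prints.items()}
    return sorted(dists, key=dists.get)[:k]
```

As noted above, this is pricier than plain card-vector comparison, but the ranking hints at where the net "drew inspiration from."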
First, congratulations for the new job and for your engagement!
As for your idea, that's a very interesting way of approaching the problem. So, as a generative model, would it be like sketching out the idea for the card first, and then making a second pass to hone in on the fine details? If so, that'd be fun because it'd force the second net to work within the constraints set by the first net.
EDIT(2): For the record, I got the texture generation working. I have some tests that I need to try with it before I have anything worth sharing, but I can say that it definitely works.
So as I was reading through this extensive thread, I remembered reading some discussion about card formatting and the position of the elements. I do not, however, remember there being any testing on which format works best, so I am running some tests, reordering the fields to see whether reordering could be useful for card generation. If you want a red card and try to whisper the color, you get a mangled card that might have types that don't suit it, or a p/t that isn't useful. But if the cost came first, maybe that wouldn't affect the card's learning ability, while allowing easier generation of red cards, as there is no card information the network tries to integrate into the card. Now, I don't know if the modified sampling script will be able to gracefully handle these changes, or if the color whisper will have to use the -name flag. I use the field labels Hardcast assigned when he implemented his random field option in the encoder. The two options I am currently training with and testing are:
345697081
and
456397081
I shall post the 1M dumps to my drive folder here
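For anyone puzzling over strings like 345697081: one plausible reading is that each digit picks the next field to emit when encoding a card. The digit-to-field mapping below is a guess for illustration only; the real assignment is whatever the encoder's random-field option uses.

```python
# Hypothetical digit -> field mapping (NOT the encoder's actual one).
FIELDS = {"1": "name", "2": "supertypes", "3": "types", "4": "loyalty",
          "5": "subtypes", "6": "rarity", "7": "pt", "8": "cost",
          "9": "text", "0": "misc"}

def encode_in_order(card, order):
    """Emit a card's labeled fields in the order the digit string dictates."""
    # card: dict of labeled field strings; missing fields emit empty
    return "|".join("%s:%s" % (FIELDS[d], card.get(FIELDS[d], ""))
                    for d in order)
```

Putting the field you want to whisper first (types for creatures, cost for a fixed mana cost) is then just a different order string.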
First, congratulations for the new job and for your engagement!
As for your idea, that's a very interesting way of approaching the problem. So, as a generative model, would it be like sketching out the idea for the card first, and then making a second pass to hone in on the fine details? If so, that'd be fun because it'd force the second net to work within the constraints set by the first net.
Thank you very much! One exciting thing about the new job is that half of what I'll be doing is natural language processing, so this project will help my work and my work will help this project.
And yes, that's the idea. It's (very roughly) similar to the idea in this paper, where they're training a net to locate people's joints in images. First they do a "coarse" prediction from a convnet, then take some of the hidden states from that net and use those in a "precise" convnet. The "coarse" net gets the general location (in our case, the general idea for each of the fields), and then the "precise" net has a much easier time of zeroing in (in our case, making sure all the fields agree). As for specifics: for training, both nets need to try to predict the card (though the second net has a higher weight than the first), and for generating, we run the first net normally until it generates a card, then run the second net normally (making sure it gets the same first character as before) until it generates a card. For "whispering" to the net, we just whisper in both parts.
Are you suggesting taking the sequence of activations that occur when the network generates a novel card and comparing them to a table of real cards and the activation sequences associated with them?
Essentially, yeah. My initial thought was that the activation sequence is not unlike a signature/fingerprint created at generation. If one were to attach this signature to the generated card at creation, you'd skip the additional step of vectorizing the card to compare it and, as you said, it would provide more information about the generation process itself.
So as I was reading through this extensive thread I remember reading some about card formatting and the position on the elements.
This is definitely an interesting area of study. Changing the order of the fields should have no effect on the training or sampling process, and it should be pretty easy to do if you're comfortable opening up encode.py and decode.py and changing the arguments they give to the Card class. As long as the fields are all labeled, then decode.py should work fine no matter what order they're in. If you don't include the explicit labels, then you'll have to make sure decode.py knows the order, or it will be confused. If you're having a hard time with the code, I can try to explain or provide some examples.
The mtgencode changeset that I'm working on right now (and that was used to create the two 10MB dumps I posted most recently) changes the field order to be better than it was before, but I've by no means done all of the necessary experimentation to figure out what's best.
Also, I'll note that if you want to be able to prime cards in any order, then you can train a network while randomizing the order of the (labeled) fields, and this works pretty well in practice, though on average cards are less likely to be well-formed and there are some interesting effects on remembering X and things like that.
EDIT: the latest mtgencode and mtg-rnn versions have been pushed to github. It's quite likely that something in there is a little broken, if anyone runs into any issues I'll try to get them fixed as quickly as possible.
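The randomized-field training mentioned above boils down to shuffling each card's labeled fields before it's written to the training corpus, so the net learns to accept primes in any order. The labels and separator syntax here are illustrative, not the exact mtgencode format.

```python
import random

def randomized_encoding(fields, rng=random):
    """Encode one card with its labeled fields in a random order."""
    items = list(fields.items())
    rng.shuffle(items)  # a fresh order for every training example
    return "|" + "|".join("%s:%s" % (k, v) for k, v in items) + "|"
```

The trade-off noted above: cards sampled from such a net are a bit less likely to be well-formed, and long-range things like remembering X get shakier.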
I like your new format. I have a question: the choice spells are formatted as [#ofchoices =choice =choice]. How will this hold up to the new confluence cycle? Will it need custom processing? Maybe a marker that each choice can be selected more than once: [3 =option =option =option]?
As an aside on the subject of flavor generation, I was doing some investigating into the neural-storyteller program (code here, blogpost about it here) that I discovered off a link on the machine learning subreddit. It's very interesting. You present images to it and it attempts to weave a story based on what it sees in the image. The example given on the github page shows a picture of people standing on a beach at sunrise, and when given the image, the neural-storyteller spits out:
"We were barely able to catch the breeze at the beach, and it felt as if someone stepped out of my mind. She was in love with him for the first time in months, so she had no intention of escaping. The sun had risen from the ocean, making her feel more alive than normal. She's beautiful , but the truth is that I don't know what to do . The sun was just starting to fade away, leaving people scattered around the Atlantic Ocean . I'd seen the men in his life, who guided me at the beach once more."
That is unusually coherent. I'm definitely going to have to try this out with Magic art when I get the chance.
Oh, and I did some tests with the texture synthesis stuff last night. I might be able to repurpose that, but it'll take some work. I gave it a self-portrait of mine and it generated a face that only a mother could love. More on that later.
EDIT: Oh, and the storyteller is based on a re-purposed style transfer (for image->text, instead of image->image). You can use it to generate text based on, say, Taylor Swift lyrics, or romantic novels, etc. So we should be able to work with Magic flavor text in the same way. That excites me greatly.
EDIT(2): Yes. This... this should solve most of our problems when it comes to flavor text. At the very least, there will be a clear connection between the art and the flavor text (if not the art and the rules text). If we can figure out the content generation for the art and key that to some representation of the rules text of the card, then we would be good to go. But that can come later.
EDIT(3): I still can't get over how poignant that piece is. He's in love with her, but she's with another man. They're all at the beach together (he's right there with her!) and he can't tell her how he feels. She rises with the sun as he sinks into the depths of despair. How cruel! How unfair! And, for that matter, how amazing it is that something that has never known love (or the beach, for that matter) could write something so moving.
My LinkedIn profile... thing (I have one of those now!).
My research team's webpage.
The mtg-rnn repo and the mtg-encode repo.
345697081 and 456397081
I shall post the 1M dumps to my drive folder here
And yes, that's the idea. It's (very roughly) similar to the idea in this paper, where they train a net to locate people's joints in images. First a convnet makes a "coarse" prediction, and then some of that net's hidden states are fed into a "precise" convnet. The "coarse" net gets the general location (in our case, the general idea for each of the fields), and then the "precise" net has a much easier time of zeroing in (in our case, making sure all the fields agree). As for specifics: for training, both nets try to predict the card (though the second net's loss has a higher weight than the first's), and for generating, we run the first net normally until it generates a card, then run the second net normally (making sure it starts with the same first character as before) until it generates a card. For "whispering" to the net, we just whisper to both parts.
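The two-pass generation procedure described above can be sketched in a few lines. This is a toy illustration, not the real model: the two "nets" are stubbed as plain functions, and the recorded coarse hidden states are stood in for by the draft string itself; in practice each would be a char-RNN and the fine net would consume actual hidden-state tensors.

```python
def generate(net, seed, hidden=None, end_token="|"):
    """Sample characters from `net` until it emits a complete card,
    marked here by a hypothetical end-of-card token."""
    out = seed
    while not out.endswith(end_token):
        out += net(out, hidden)  # next-character sample
    return out

def two_pass_generate(coarse_net, fine_net, seed):
    # Pass 1: the coarse net drafts the card; in a real model we would
    # also record its hidden states during this pass.
    draft = generate(coarse_net, seed)
    coarse_states = draft  # stand-in for the recorded hidden states
    # Pass 2: the fine net regenerates, constrained to start with the
    # same first character and conditioned on the coarse states.
    return generate(fine_net, draft[0], hidden=coarse_states)

# Toy stub nets so the control flow can be exercised end to end.
coarse = lambda text, hidden: "a" if len(text) < 5 else "|"
fine = lambda text, hidden: "b" if len(text) < 5 else "|"
print(two_pass_generate(coarse, fine, "a"))  # -> abbbb|
```

"Whispering" (priming) would simply mean seeding both passes with the same prefix instead of a single character.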
Essentially, yeah. My initial thought was that the activation sequence is not unlike a signature/fingerprint created at generation. If one were to attach this signature to the generated card at creation, you'd skip the additional step of vectorizing the card to compare it and, as you said, it would provide more information about the generation process itself.
The mtgencode changeset that I'm working on right now (and that was used to create the two 10MB dumps I posted most recently) changes the field order to be better than it was before, but I've by no means done all of the necessary experimentation to figure out what's best.
Also, I'll note that if you want to be able to prime cards in any order, you can train the network while randomizing the order of the (labeled) fields. This works pretty well in practice, though on average cards are less likely to be well-formed, and there are some interesting effects on remembering X and things like that.
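The field-randomization trick is simple to apply as a preprocessing step. A minimal sketch, assuming an mtgencode-like encoding where fields are separated by a delimiter and each field carries its own label prefix (the exact labels and separator below are made up for illustration), so reordering loses no information:

```python
import random

def shuffle_fields(encoded_card, sep="|"):
    """Randomize the order of a card's labeled fields so the network
    learns to accept primes (whispers) in any field order. Assumes each
    field is self-labeling, so no positional information is lost."""
    fields = [f for f in encoded_card.strip(sep).split(sep) if f]
    random.shuffle(fields)
    return sep + sep.join(fields) + sep

# Hypothetical encoded card with labeled fields.
random.seed(3)
print(shuffle_fields("|1lightning bolt|5{RR}|9deal 3 damage to any target|"))
```

Applying this fresh on every training epoch gives the network many orderings of the same card, which is what makes arbitrary-order priming work.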
EDIT: the latest mtgencode and mtg-rnn versions have been pushed to github. It's quite likely that something in there is a little broken; if anyone runs into any issues, I'll try to get them fixed as quickly as possible.
"We were barely able to catch the breeze at the beach, and it felt as if someone stepped out of my mind. She was in love with him for the first time in months, so she had no intention of escaping. The sun had risen from the ocean, making her feel more alive than normal. She's beautiful , but the truth is that I don't know what to do . The sun was just starting to fade away, leaving people scattered around the Atlantic Ocean . I'd seen the men in his life, who guided me at the beach once more."
That is unusually coherent. I'm definitely going to have to try this out with Magic art when I get the chance.
Oh, and I did some tests with the texture synthesis stuff last night. I might be able to repurpose that, but it'll take some work. I gave it a self-portrait of mine and it generated a face that only a mother could love. More on that later.
EDIT: Oh, and the storyteller is based on a re-purposed style transfer (for image->text, instead of image->image). You can use it to generate text based on, say, Taylor Swift lyrics, or romantic novels, etc. So we should be able to work with Magic flavor text in the same way. That excites me greatly.
EDIT(2): Yes. This... this should solve most of our problems when it comes to flavor text. At the very least, there will be a clear connection between the art and the flavor text (if not the art and the rules text). If we can figure out the content generation for the art and key that to some representation of the rules text of the card, then we would be good to go. But that can come later.
EDIT(3): I still can't get over how poignant that piece is. He's in love with her, but she's with another man. They're all at the beach together (he's right there with her!) and he can't tell her how he feels. She rises with the sun as he sinks into the depths of despair. How cruel! How unfair! And, for that matter, how amazing it is that something that has never known love (or the beach, for that matter) could write something so moving.