  • posted a message on Generating Magic cards using deep, recurrent neural networks
    Okay, so, I've been messing around with some ideas involving convolutional autoencoders, and I'm trying to figure out if there's any sensible way forward.

    It seems to me that a convolutional autoencoder is cool because, once it's trained, you don't need to run an optimization loop to get output; a single forward pass does it. Furthermore, because that makes it quick, you can pipeline it into other things.

    Now, I've read up on convolutional autoencoders, and it seems to me that it would be possible to implement them using, instead of pooling layers, a convolutional layer with a stride greater than 1. Because, for a specific image size, there's an equivalent fully-connected layer, it's possible to transpose a downsampling convolution, which I believe produces several smaller convolutions, but with the same number of parameters as the original layer.
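
    To make that concrete, here's a rough sketch of what I mean (sizes, names, and weights here are all made up): a stride-2 convolution standing in for a pooling layer, and its transposed direction reusing the same filter tensor to map back up to the input shape.

        import tensorflow as tf

        # Hypothetical sizes: one 8x8 RGB image, 3x3 filters, 3 -> 16 channels.
        x = tf.random.normal([1, 8, 8, 3])
        w = tf.random.normal([3, 3, 3, 16])

        # Stride-2 convolution in place of a pooling layer: output is [1, 4, 4, 16].
        y = tf.nn.conv2d(x, w, strides=[1, 2, 2, 1], padding='SAME')

        # Transposed ("deconvolution") direction, reusing the same filter tensor,
        # mapping the 4x4 feature map back to the 8x8 input shape.
        x_up = tf.nn.conv2d_transpose(y, w, output_shape=[1, 8, 8, 3],
                                      strides=[1, 2, 2, 1], padding='SAME')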

    That's a neat idea, I think, but I was originally thinking about style transfer; style transfer plus high-level feature editing sounds cool. The papers I've found (one is linked above) express style in terms of the Gram matrix of the layers before the pool layer. Now, if you're getting images through training, then it's not a problem to add more score metrics and weight them appropriately, but I think it'd be really cool to be able to get this stuff "in one step", as it were, and I'm really struggling to figure out how to constrain the deconvolution to match an arbitrary positive semidefinite matrix.
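
    For reference (mostly my own), the Gram matrix in the style term is just the channel-by-channel correlation of one layer's activations; roughly this sketch (function name and shapes are mine, not from any paper's code):

        import tensorflow as tf

        def gram_matrix(features):
            # features: a [height, width, channels] activation map from one layer.
            channels = features.shape[-1]
            flat = tf.reshape(features, [-1, channels])      # [h*w, c]
            # Channel-by-channel correlations; symmetric and positive semidefinite.
            return tf.matmul(flat, flat, transpose_a=True)   # [c, c]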

    I mean, it's clear to me that the fully-connected layer we can imagine for a downsampling convolution on a specific image size will have a large kernel (null space), since a stride of 2 discards around 3/4 of the degrees of freedom. Adding a vector from that kernel won't change the downsampled value, but it can alter the Gram matrix. So, "all" I need to do is find a vector in the kernel that, when added to the deconvolution result, alters the value of its Gram matrix to match an existing one.
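
    Here's the kind of toy check I have in mind, with the downsampling written out as an explicit matrix (everything here is made up for illustration; scipy's null_space does the real work):

        import numpy as np
        from scipy.linalg import null_space

        # Toy stand-in for the downsampling conv: a matrix D taking a flattened
        # 8x8 "image" (64 values) down to a flattened 4x4 output (16 values).
        D = np.random.randn(16, 64)

        N = null_space(D)                      # basis of the kernel, roughly 64 x 48
        x = np.random.randn(64)
        v = N @ np.random.randn(N.shape[1])    # any vector in the kernel

        # Adding a kernel vector leaves the downsampled value unchanged...
        assert np.allclose(D @ x, D @ (x + v))
        # ...but it does change x itself, which is the freedom I'd want to use to
        # steer the Gram matrix of the reconstructed features.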

    Does anyone know about doing stuff like this, or should I just focus on messing with the autoencoder setup I thought of, actually putting it into code and testing it out?
    Posted in: Custom Card Creation
  • posted a message on Generating Magic cards using deep, recurrent neural networks
    I just recently got cltorch working on my laptop, and messed around with a style transfer network that was last updated a few months ago. Iterations take between 4 and 6 seconds on my machine, at default settings.

    This has me wanting a dedicated machine for this, but first there are a bunch of questions I have, now that I've actually played with it.

    • There are some undocumented features hanging around in the source code. How well do they work?
    • The code makes some use of colorspaces besides RGB, for certain features. Does anything interesting happen if the whole thing is done in, say, CIELAB?
    • Is there any potential for modularity and reuse of the trained networks? (That is, could I train a set of style and content networks separately, and combine them cheaply? Would it be possible to manipulate style and content weights after the fact?) EDIT: Yes, sort of, and I'm still not sure.
    • What would it take to apply this to animated content, looping or non-looping? EDIT: Step 1 is to find a proper network architecture. Step 2 (the hard one, probably) is to integrate that with a static style network.
    • It appears that the random initialization option uses white noise. What are the results from using a different form of noise, or from perturbing the input image with noise, rather than using it unaltered?
    • Some of my tests show persistent artifacts. I can't tell if this is a result of downscaling the input image, properties of the style that aren't apparent to me, or something else. One thing it's not is the choice of initialization; the artifacts appear even if I initialize with the input image. EDIT: I suspect this is either a consequence of the reconstruction itself, or because my content images differ somewhat from the training data. EDIT: The former option does not actually make sense, so I'm going to suppose that the content images are just too different.

    I'm sure some of this has papers on it already, and I need to really thoroughly read over the source, but suddenly I actually have a working network that does interesting things, and I'd just like to look into tweaking it some.

    EDIT: Okay, I'm finally getting some important insights into the basic details of how this works. The system detailed in A Neural Algorithm of Artistic Style is a classifier hooked into several levels of a convolutional neural network, and it has to be pre-trained separately, I think. The image is generated by separating the error (I call it error, I think the paper calls it loss) function into terms relating to different levels of the network, then feeding each part a vector from a different image. The output of the system is the input to the classifier, and the image must be trained using the error function.
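
    In code, the split I'm describing looks roughly like this. (This is just my reading of the paper, not its actual implementation; the layer names and the alpha/beta weights are made up.)

        import tensorflow as tf

        def gram(f):
            flat = tf.reshape(f, [-1, f.shape[-1]])
            return tf.matmul(flat, flat, transpose_a=True)

        def transfer_loss(gen_feats, content_feats, style_feats, alpha=1.0, beta=100.0):
            # Content term: one mid-level layer, compared against the content image.
            content = tf.reduce_sum((gen_feats['conv4'] - content_feats['conv4']) ** 2)
            # Style terms: Gram matrices at several layers, compared against the style image.
            style = sum(tf.reduce_sum((gram(gen_feats[k]) - gram(style_feats[k])) ** 2)
                        for k in ('conv1', 'conv2', 'conv3', 'conv4', 'conv5'))
            return alpha * content + beta * style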

    Also, it looks like I'll only get so much insight from looking at the source of the net I found. If I understand right, it's extracting layers from a pre-trained net, and doesn't train the classifier at all.

    EDIT: Eesh, self-teaching this stuff is a great way to end up with a patchwork of knowledge. Was looking into autoencoders, and just now noticed that tied weights are a thing. Just now. Eesh.
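
    (For my own notes: tied weights just means the decoder reuses the encoder's weight matrix, transposed, instead of learning a separate one. A minimal sketch, with made-up sizes and names:)

        import tensorflow as tf

        W = tf.Variable(tf.random.normal([784, 64]))   # shared weight matrix
        b_enc = tf.Variable(tf.zeros([64]))
        b_dec = tf.Variable(tf.zeros([784]))

        def encode(x):
            return tf.nn.sigmoid(tf.matmul(x, W) + b_enc)

        def decode(h):
            # Tied weights: the decoder uses W transposed rather than its own matrix.
            return tf.nn.sigmoid(tf.matmul(h, tf.transpose(W)) + b_dec)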
    Posted in: Custom Card Creation
  • posted a message on Generating Magic cards using deep, recurrent neural networks
    I found myself wondering if it might work to have a network try to generate an entire deck, from the cards up, for each training entry. On the one hand, the individual entries become huge. On the other hand, there are many more possible decks than extant cards. I'm not sure what kind of power it would take to run a character-level network (or some kind of multi-set of parse trees structure) that goes all the way up to decks, but I feel like it would offer another angle on the design space. Like, the cards in a deck have to have positive interactions with each other. So, a deck-fabricating AI would need to somehow encode knowledge of card interactions. Prime it with a card or set of cards, and it can try to figure out what kinds of cards would work with that card, while also having a sensible power level.
    Posted in: Custom Card Creation
  • posted a message on Generating Magic cards using deep, recurrent neural networks
    I had a look through gatherer to see what kind of things would appear, based on existing CMC 0 cards.

    There were a few 0/1 Kobolds, and some artifacts with activated effects. A bunch of these are pure X costs, or roughly equivalent. There's also Suspend, and at least one card with Splice onto Arcane, and the pacts. Also, the back half of every transform card; I heard that was changing? There's also every single Mox, so I guess it's a little hard to predict the results solely from existing CMC 0 cards.
    Posted in: Custom Card Creation
  • posted a message on Generating Magic cards using deep, recurrent neural networks
    Warning: Couldn't decode (bash: /decode.py: No such file or directory), so it's pretty tough to read.
    Is the slash part of the command you entered? A leading slash makes bash look for decode.py at the root of the filesystem rather than the current directory, which would explain the error on its own. Sorry, just taking the paranoid tech-support approach to this.
    Posted in: Custom Card Creation
  • posted a message on Generating Magic cards using deep, recurrent neural networks
    reinstalling_torch dot txt:
    ==> installer: The upgrade was successful.
    ==> Sorry, try again.

    Not much else on my mind relevant to this. These results look pretty cool.
    Posted in: Custom Card Creation
  • posted a message on Generating Magic cards using deep, recurrent neural networks
    Dangit. I'm sure I remember someone in this thread mentioning something about a system for answering questions that structurally analyzed the sentence, then used that structure to put together a tree of neural nets, where each one is meant to address part of the question in some way. (Or maybe I saw it elsewhere and assumed it was here?)

    Can anyone tell what I'm referring to? I feel like this came up in the last few months.

    Besides wondering where stuff is, I am currently sneezing my brains out. And guilt-tripping myself for not working on extracting some of my code into an independent utility.
    Posted in: Custom Card Creation
  • posted a message on Generating Magic cards using deep, recurrent neural networks
    Just now took a closer look at the Stack-RNN paper. It looks to me like it should be possible to use some really fun data structures. Like, you mostly just need a zipper form of whatever structure you're interested in, I think. I'm refreshing my memory on zipper structures now-ish.
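
    For reference, the list version of a zipper is tiny; something like this sketch (the class and its names are mine). The point is that the focus can be moved and edited with purely local operations, which in a linked-list or persistent version would be O(1) per step.

        from dataclasses import dataclass
        from typing import Any, List

        @dataclass
        class ListZipper:
            left: List[Any]    # items before the focus, nearest first
            focus: Any
            right: List[Any]   # items after the focus

            def move_left(self):
                return ListZipper(self.left[1:], self.left[0], [self.focus] + self.right)

            def move_right(self):
                return ListZipper([self.focus] + self.left, self.right[0], self.right[1:])

            def replace(self, value):
                return ListZipper(self.left, value, self.right)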

    (I really hope nobody is explicitly waiting on the stuff I talk about. I'm working on whatever I feel interested in, which might be this stuff, writing a parser, converting RPG rulebooks into wikis, game programming... whatever makes me happy at the moment, I guess.)
    Posted in: Custom Card Creation
  • posted a message on Generating Magic cards using deep, recurrent neural networks
    Quote from pbtenchi »
    Could it be possible to teach the AI to play magic and teach it through experience? Perhaps you could have a judge rate the game. Just an idea.
    I think this has been floated before. IIRC, while it is true that there is some computer-consumable form of Magic (otherwise the videogames wouldn't exist), it's not something we have easy access to, and training would be quite slow.

    For my part, I'm poking at what TensorFlow has to offer. I don't see any particular barrier to setting up a DRAW-style VAE; I just don't know TensorFlow, or understand some of the nuances of the paper, well enough. (Like, do I just assume everything that doesn't have an explicit formula is a trainable variable, including some of the matrices in the read function? I mean, it's not written read-sub-t, so is it just a single matrix for every time step, as opposed to one per step?)
    Posted in: Custom Card Creation
  • posted a message on Generating Magic cards using deep, recurrent neural networks
    Let me see exactly what I've got in here. It looks like a pair of feedforward neural networks, with two hidden layers each. They're rectifying with softplus. The "recog" network is responsible for encoding the image into vectors representing a mean and a log sigma squared. (Something looks weird here. The original author is doing sqrt(exp(foo)). Why not exp(foo/2)?) The sampling is done by using a constant random tensor with dimensions equal to the dimensionality of the latent space, and the batch size. Oh, it's getting the whole thing at once; that's how this can be so relatively zippy with something like 2000 neurons on CPU. Anyway, it uses that to get a point in latent space with "z = mu + sigma*epsilon". Then, the "gener" network just transforms a point in latent space back into something input-sized.
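
    The sampling step, as I read it, boils down to the sketch below (names are mine, and I draw epsilon on the fly rather than keeping a constant tensor around). Also, sqrt(exp(x)) is the same number as exp(x/2), so the original author's version isn't wrong, just roundabout.

        import tensorflow as tf

        def sample_latent(mu, log_sigma_sq):
            # sqrt(exp(x)) == exp(x/2), so either spelling gives the same sigma.
            sigma = tf.exp(0.5 * log_sigma_sq)
            # One draw per element of the (batch size, latent size) tensor.
            epsilon = tf.random.normal(tf.shape(mu))
            return mu + sigma * epsilon        # "z = mu + sigma*epsilon"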

    The original author wrote some helpful mini-essays in the comments about how the network calculates the cost. I need to read these a few more times.

    All in all, I think this is pretty typical for a VAE (except for that one weird bit with the sqrt instead of dividing by 2), but I needed to work through it myself.
    Posted in: Custom Card Creation
  • posted a message on Generating Magic cards using deep, recurrent neural networks
    Rewrote a bunch of the VAE code I swiped. The result looks exactly the same (which is why I didn't attach an image). Which is good! I once wrote some code for someone else to use in a project, and they came back the next day and said "I refactored your code. By the way, it doesn't work, what's up with that?"

    The basic idea right now is to iterate on the code until I'm really satisfied with its structure. Then, I can start to make statements like "I need to replace these parts of the code with a network with these properties." For now, though, I've got to wrangle everything into my house style, I guess. It should have negligible performance differences, but be easier to reason about.

    EDIT: Incremental iterative stuff. I think the main classes are clean enough to try to rewrite, now.
    Posted in: Custom Card Creation
  • posted a message on Generating Magic cards using deep, recurrent neural networks
    Trying to put together that whole DRAW-GAN idea I articulated (and oversold to myself in terms of immediate capabilities: the initial canvas state is a trained variable in DRAW, which means the canvas has a fixed size, to the surprise of nobody but me. I just... I really liked my recurrent(ish?) autoencoder...).

    Step one is to take some VAE code out of a blog post, and see if I copied it out correctly. If it works, then I assume I've got everything I need.
    Step two is... probably rewrite the code until I really get it.

    In the end, I think I want to try to replicate DRAW before I try to replicate VAE-GAN. Not sure I can justify this intuition, but there it is.

    EDIT: Well, the VAE works. That's enough for today.
    Posted in: Custom Card Creation
  • posted a message on Generating Magic cards using deep, recurrent neural networks
    So far as more general image generation goes, I wonder if using DRAW specifically for the VAE in VAE-GAN would allow it to cope with arbitrarily-sized images. (Presumably the discriminator would need a similar system for focusing its attention on details.)

    I've got tensorflow working, and my own code might work on this kind of thing (might). I won't know for sure until I actually grok both papers.
    Posted in: Custom Card Creation
  • posted a message on Generating Magic cards using deep, recurrent neural networks
    In the end, I decided against trying to work with the unitary stuff for now, until I can come up with an activation function that's easier for me to have my code analyze (or just, you know, use an existing framework...). The basic issue is that I designed this stuff to work with addition, subtraction, multiplication, and non-linear functions. Their rectifier involves division, and piecewise functions on top of that. If I want something that my code can work with, I need something else.
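
    For the record, the rectifier I mean is (as I understand it) the modReLU from the unitary-RNN paper. A rough numpy version, which shows exactly the division and the piecewise branch my analysis code can't handle:

        import numpy as np

        def modrelu(z, b):
            # Keep the phase of z, rectify its magnitude (the piecewise part),
            # then divide by |z| to restore the direction (the division part).
            mag = np.abs(z)
            scaled = np.maximum(mag + b, 0.0)
            return scaled * z / np.maximum(mag, 1e-8)   # guard against |z| == 0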

    Besides that, I've been messing with OpenCL stuff. I've got simple demos from Apple working, so that seems okay. Running the tests in pygpu just segfaults at the moment, though.

    Technology is fun! Yaaaaay!
    Posted in: Custom Card Creation
  • posted a message on Generating Magic cards using deep, recurrent neural networks
    Nevermind, I fixed it.

    ...

    Seriously, though, it's because there's a few missing steps in the guides I was looking at. So far, I've found out that I need to make sure nvcc is on my $PATH, and then I found out that the installer has been downloading the wrong version, so I'm trying to install the not-wrong versions of stuff now.

    EDIT: The installer chokes on itself. Tempted to crack open the app and see if it's something obvious.

    EDIT: I was using the wrong network installer. These things have no version branding on the inside... ("Well geez, you dope, why were you using an old installer?" I downloaded this junk last week.)

    EDIT: 'Unable to get the number of gpus available: CUDA driver version is insufficient for CUDA runtime version'. Well, I'm done for tonight.

    EDIT: ... Sigh ... I can't do GPU training on this computer. Perhaps on some other computer, but not this one. This isn't a "Oh, it's really hard" thing, this is "I skipped checking that it actually satisfied the system requirements". CUDA can't run on this. Maybe OpenCL, but I don't feel like climbing out of one rabbit hole only to immediately plunge down another. (Okay, fine, it's definitely compatible with OpenCL 1.2. But I go no further for now.)
    Posted in: Custom Card Creation