Hey guys. I'm very busy with the school year starting up, and I can't figure out why it keeps timing out after a few hours. I'm totally happy if someone forks it and continues developing it, or if they figure out a fix I would love a pull request. I just don't have the time or mental energy to do much programming until winter break. I am more than happy to keep hosting it!
Traceback (most recent call last):
  File "/home/croxis/src/mtgai/venv/lib/python3.4/site-packages/eventlet/wsgi.py", line 454, in handle_one_response
    result = self.application(self.environ, start_response)
  File "/home/croxis/src/mtgai/venv/lib/python3.4/site-packages/flask/app.py", line 1836, in __call__
    return self.wsgi_app(environ, start_response)
  File "/home/croxis/src/mtgai/venv/lib/python3.4/site-packages/engineio/middleware.py", line 34, in __call__
    return self.wsgi_app(environ, start_response)
  File "/home/croxis/src/mtgai/venv/lib/python3.4/site-packages/flask/app.py", line 1820, in wsgi_app
    response = self.make_response(self.handle_exception(e))
  File "/home/croxis/src/mtgai/venv/lib/python3.4/site-packages/flask/app.py", line 1403, in handle_exception
    reraise(exc_type, exc_value, tb)
  File "/home/croxis/src/mtgai/venv/lib/python3.4/site-packages/flask/_compat.py", line 33, in reraise
    raise value
  File "/home/croxis/src/mtgai/venv/lib/python3.4/site-packages/flask/app.py", line 1817, in wsgi_app
    response = self.full_dispatch_request()
  File "/home/croxis/src/mtgai/venv/lib/python3.4/site-packages/flask/app.py", line 1477, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/home/croxis/src/mtgai/venv/lib/python3.4/site-packages/flask/app.py", line 1381, in handle_user_exception
    reraise(exc_type, exc_value, tb)
  File "/home/croxis/src/mtgai/venv/lib/python3.4/site-packages/flask/_compat.py", line 33, in reraise
    raise value
  File "/home/croxis/src/mtgai/venv/lib/python3.4/site-packages/flask/app.py", line 1475, in full_dispatch_request
    rv = self.dispatch_request()
  File "/home/croxis/src/mtgai/venv/lib/python3.4/site-packages/flask/app.py", line 1461, in dispatch_request
    return self.view_functions[rule.endpoint](**req.view_args)
  File "/home/croxis/src/mtgai/app/main/views.py", line 263, in card_select
    extra_template_data['urls'] = convert_to_urls(session['cardtext'], cardsep=session['cardsep'])
  File "/home/croxis/src/mtgai/venv/lib/python3.4/site-packages/werkzeug/local.py", line 368, in <lambda>
    __getitem__ = lambda x, i: x._get_current_object()[i]
KeyError: 'cardtext'
This is what I get when I just press Generate on Croxis' website. Something is wrong, and someone is German (Werkzeug?)
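The `KeyError` at the bottom of that traceback means the view tried to read `session['cardtext']` before any generation had ever written it. A defensive fix is to read the session with `.get()` and fall back gracefully; this sketch uses the key names from the traceback, but the fallback behaviour and the splitting logic (standing in for the real `convert_to_urls`) are assumptions:

```python
def card_select(session):
    """Sketch of a defensive fix: return no cards, instead of raising
    a KeyError, when nothing has been generated in this session yet."""
    cardtext = session.get('cardtext')        # .get() never raises KeyError
    if cardtext is None:
        return []                             # assumed fallback: no cards yet
    cardsep = session.get('cardsep', '\n\n')  # assumed default separator
    return [card.strip() for card in cardtext.split(cardsep) if card.strip()]
```

With this in place, pressing Generate on a fresh session would render an empty card list rather than a 500 error.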
Small site update. I'm now more certain that the generator is causing the websocket to time out. I don't have a fix for this yet, or for the bug where people receive everyone else's cards.
Well I was going to post about an update, but now the neural net is throwing a hissy fit:
/home/croxis/torch/install/bin/luajit: bad argument #1 to '?' (empty tensor at /home/croxis/torch/pkg/torch/generic/Tensor.c:851)
stack traceback:
[C]: at 0x7f6782dc2cc0
[C]: in function '__index'
sample_hs_v3.1.lua:202: in main chunk
[C]: in function 'dofile'
...oxis/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:131: in main chunk
[C]: at 0x00405be0
I saw a bunch of cards coming out with "if you have &^^^^ or more cards in hand" as well, with no settings changed except to pick a different random seed. I think I used 2929 that time.
EDIT: Woot, pretty cards working again! I do think there's some sort of bleed happening across sessions; halfway through my latest generation, it started spitting out a bunch of cards named "Akaito" then "Yukkuri," which is not anything I requested (I clicked "generate" with everything at default, random seed was 124 I think).
Also, the progress bar goes well above 100%!
There IS bleeding happening. I don't know why. Either the different function calls, which are in their own processes, are somehow sharing the same pipe (which doesn't look like it should be the case based on python's documentation), or the websocket is firing to all browsers that happen to be in the generation step.
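One way to rule out the "firing to all browsers" theory would be to key every generation's output by the requesting client's session id, so a card can only ever be delivered to the browser that asked for it. A minimal sketch of the idea (the `sid`/queue plumbing here is illustrative, not the site's actual code):

```python
import queue

# One private output queue per connected client, keyed by its session id.
client_queues = {}

def start_generation(sid):
    """Register a private queue for this client's generation run."""
    client_queues[sid] = queue.Queue()

def push_card(sid, card):
    """Deliver a generated card only to the client that requested it."""
    client_queues[sid].put(card)

def next_card(sid):
    """The websocket handler for `sid` polls its own queue, nobody else's."""
    return client_queues[sid].get(timeout=1)
```

If cards still bleed with per-client queues in place, the culprit would have to be the shared pipe rather than the websocket broadcast.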
I need to get some more lesson planning done (going standards-based grading is harder than I thought), so I won't be able to attack this again for a couple of days.
Also, the 100%+ is because I'm sneaking in about 100 extra characters to help compensate for the nn debug output and incomplete cards, so sometimes the bar ends up below 100% and sometimes above. It also goes above 100% because you are being sent cards generated by other people. I didn't realize the site would be popular enough to have multiple simultaneous users!
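If the padding characters are what push the bar over, progress could simply be reported against the user's original request and clamped at 100% (a sketch; only the ~100-character padding figure comes from the post above, everything else is assumed):

```python
PADDING = 100  # extra characters sneaked in for nn debug output / partial cards

def progress_percent(chars_received, chars_requested):
    """Progress against the user's request, clamped so the bar never
    exceeds 100% even when the padded stream runs long."""
    return min(100.0, 100.0 * chars_received / max(chars_requested, 1))
```

This wouldn't fix the cross-user bleeding, but it would stop the bar from reading 110% when the padding arrives.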
I asked the web UI to append '\whenever a creature you control ingests one or more cards' to the end of cards. Instead I got cards with 'if you have 4 or more cards in your hand,' appended.
Uh, I think I got someone else's cards.
Edit: tried again, it made two but only printed pretty text for the first, then it switched to something else:
This is a definite functionality-breaking problem.
I tried it and I can't reproduce what you saw, it is working correctly for me. Give me more input specifics if you changed anything else, it might help.
Despite my tendency to fire off bug reports, Croxis, I am super happy to have your Web UI! My VMs are hosed at the moment, so it's great to have that alternative. Plus, given we're not procedurally generating artwork yet, I will totally use the make-printable-cards feature for whatever goes into my draft set.
Bug reports make me happy (well, not happy, because it means I missed something, but you know what I mean). The printable card feature and the MSE set feature are also broken for the moment*, but I am working on that fix now.
*The variables and raw card text are stored in what is called a session. The default behaviour for a session is to store it in an encrypted cookie in your browser. When I changed generation to JavaScript/websockets, I could still read the form settings from the browser's session cookie, but I can't write to it like I did with the old method. I'm switching to a server-side session system that will fix this issue.
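For reference, the core idea of a server-side session is that the cookie only carries an opaque id while the data lives on the server, where websocket handlers can write to it too. In Flask the usual drop-in is the Flask-Session extension (with `SESSION_TYPE` set to `'filesystem'` or `'redis'`), but the concept fits in a toy dict-backed store (everything below is illustrative, not the site's code):

```python
import uuid

class ServerSideSessions:
    """Toy server-side session store: the browser cookie holds only an
    opaque id; the actual data (form settings, raw card text) stays on
    the server and is freely writable from websocket handlers."""

    def __init__(self):
        self._store = {}

    def new(self):
        sid = uuid.uuid4().hex   # opaque id, the only thing sent to the browser
        self._store[sid] = {}
        return sid

    def get(self, sid):
        """Return the (mutable) session dict for this id."""
        return self._store.setdefault(sid, {})
```

Because `get()` returns the live dict, a websocket handler can do `sessions.get(sid)['cardtext'] = raw_text` and a later page request sees the update, which is exactly what the cookie-based session couldn't do.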
Fancy new ui feature! The card generation and image loading is almost parallel, meaning the page will load and start displaying cards as they generate*. This is available for all formats.
*So this works on my home development machine but is slow and unstable on the actual server. I'm going to have to do more testing. My hunch is that the neural net takes longer to warm up on my server and is blocking the websockets, causing them to time out. I'll probably have to put the neural net behind Python's multiprocessing module to prevent GIL issues.
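A rough sketch of what "behind the multiprocessing module" could look like: the sampler runs in a child process and streams cards back through a queue, so the websocket loop is never blocked while the net warms up (`generate` here is a stand-in for the actual torch invocation):

```python
import multiprocessing as mp

def generate(seed, out_queue):
    """Stand-in for the neural-net sampler: emit cards as they appear."""
    for i in range(3):
        out_queue.put("card-%d-%d" % (seed, i))
    out_queue.put(None)  # sentinel: generation finished

def stream_cards(seed):
    """Run the sampler in its own process so the slow warm-up (and the
    GIL) can't stall the websocket loop; yield cards as they arrive."""
    q = mp.Queue()
    proc = mp.Process(target=generate, args=(seed, q))
    proc.start()
    while True:
        card = q.get()
        if card is None:
            break
        yield card
    proc.join()
```

The parent can then forward each yielded card over the websocket as soon as it lands, instead of waiting for the whole batch.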
I have a feature request for the v3 sampler: add io.flush() on line 166. This unblocks stdout/pipes so I can read the output as it happens and stream it to the client. Hopefully this doesn't break anything.
I keep getting gateway timeouts. Keep trying and hope for the best?
My VPS is a little overloaded at the moment, as I'm doing some brain training. Reducing the character count to around 1500 seems to work for me. I'll need to fiddle with something more AJAX-y as a workaround.
Added: A brain checkpoint by talcos
Added: Rarity
Fixed: Multicolored card backgrounds
Fixed: Artifact creatures not having power or toughness
Fixed: Temperatures
I'm back! (Well, I was back Friday, but I had to catch up on Project Runway. Priorities. [Man, has that show gone downhill. Not as downhill as America's Next Top Model, though.])
I skimmed the past 10 pages and I have a couple of questions that I might have missed:
Question on temperature -- I could have sworn it was an integer with values from 0-100. Sounds like it is a 0-1 float?
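For what it's worth, in char-rnn-style samplers temperature is indeed a 0-1-ish float: the logits are divided by the temperature before the softmax, so low values sharpen the distribution toward the most likely character and 1.0 leaves it untouched. A self-contained illustration:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Scale logits by 1/temperature, then softmax.
    temperature -> 0 approaches argmax; temperature = 1 gives the
    model's raw distribution; higher values flatten it."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)                       # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]
```

So a 0-100 integer UI slider would just be mapping onto this float, e.g. slider/100.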
I saw sample_hs_v3 mentioned. Is there a link to it?
What are the nifty benefits of hardcast's fork of char-rnn?
I want to clean up this and other issues posted to the mtgai github page, then I will work on incorporating the new renderer.
Fixed! Also added hardcast's snapshot
PS: Anyone wish to donate a checkpoint?
EDIT: Print now works again.
fixed