00:06:09 you want a spreadsheet
00:06:11 oops
00:18:25 overkill for just removal stats, but there are dozens of other metrics I'd like to be able to track
01:09:52 one thing that would be kind of cool is to automate objstat runs
01:10:41 but it's extremely cpu intensive for 1k iterations; I have to run a 4 core EC2 instance for 8-10 hours to make the 4 spreadsheet sets
01:10:59 then I have a little google app I made to convert the tdt uploaded as zips to google sheets and do some formatting on them
01:11:50 maybe someday we can move stuff to a kind of "libcrawl" so I can move objstat code out of c++ and just have it be a python script, or something
01:12:09 not really related to the automation, but it would just be nice
01:12:45 like all the c++ side would do is create a save with a fully generated dungeon (according to the levels asked for)
01:13:18 oh, well, I guess that's not quite true; said library would have to provide some kind of interface to query a save
01:14:13 presumably we could also have unit tests to look for weird changes in monster/item distributions, but I'm not sure about that
01:21:48 Experimental (bcrawl) branch on underhound.eu updated to: 0.23-a0-3054-g1a545eca1e
01:51:28 -!- behalebabo_ is now known as behalebabo
02:39:48 that sounds good, yeah
02:40:42 basically what I want is a server that can run a (commit -> [stats]) function on demand
02:43:16 re libcrawl, I think it's doable, but it requires detangling the ui from the rest of crawl, and moving all the global variables into a crawl_game object
02:43:51 moving towards more unit test coverage will naturally force us to break global dependencies; it's just gonna take a while
03:25:38 -!- behalebabo_ is now known as behalebabo
03:30:52 Fork (bcrawl) on crawl.kelbi.org updated to: 0.23-a0-3054-g1a545eca1e
03:51:53 unless every commit is going to generate tens of thousands of data points, I still think those tools you mentioned are overkill
03:52:14 serverless file querying is extremely powerful,
cheap, and low maintenance
03:52:44 https://aws.amazon.com/athena/ and https://cloud.google.com/bigquery/external-data-cloud-storage
03:54:15 Fork (bcadrencrawl) on crawl.kelbi.org updated to: 0.22.1-2676-g84a05d23f6
04:37:26 Benoit Giannangeli https://github.com/crawl/crawl/pull/1119 * 0.25-a0-943-g6a1d48e: New morgue section `screenshots` (4 minutes ago, 5 files, 34+ 4-) https://github.com/crawl/crawl/commit/6a1d48e2e373
05:04:40 Benoit Giannangeli https://github.com/crawl/crawl/pull/1119 * 0.25-a0-944-ge0cc68f: Compatibility tag for screenshots (7 minutes ago, 2 files, 3+ 1-) https://github.com/crawl/crawl/commit/e0cc68f7bff7
05:04:40 Benoit Giannangeli https://github.com/crawl/crawl/pull/1119 * 0.25-a0-945-g348ac22: One screenshot a turn limit (5 minutes ago, 2 files, 7+ 2-) https://github.com/crawl/crawl/commit/348ac222aa9d
05:33:53 Maedhros (L27 OpFi) Crash caused by signal #11: Segmentation fault (Abyss:4)
05:41:59 something's up with linux builds? they were all cancelled on my PR
05:42:47 Benoit Giannangeli https://github.com/crawl/crawl/pull/1119 * 0.25-a0-945-g0cdda87: One screenshot a turn limit (44 minutes ago, 2 files, 7+ 2-) https://github.com/crawl/crawl/commit/0cdda87a05b4
06:28:03 gammafunk * 0.25-a0-943-g2f7e296: Remove a spurious semicolon (33 minutes ago, 1 file, 1+ 1-) https://github.com/crawl/crawl/commit/2f7e29601aee
06:28:03 gammafunk * 0.25-a0-944-g8d05931: Rename some static tag helper functions (26 minutes ago, 1 file, 140+ 146-) https://github.com/crawl/crawl/commit/8d059310dc47
06:28:03 gammafunk * 0.25-a0-945-gec7a518: Declare some tags functions in tags.h (13 minutes ago, 2 files, 20+ 10-) https://github.com/crawl/crawl/commit/ec7a5183416c
06:28:03 gammafunk * 0.25-a0-946-g6709f3f: Properly declare some map functions as static (6 minutes ago, 1 file, 7+ 5-) https://github.com/crawl/crawl/commit/6709f3f1eeba
06:53:32 alexjurkiewicz: there are hundreds of
cc and h files, and if you have a hundred different metrics, per-file tracking gets you to tens of thousands
06:54:56 but yeah, a simpler solution is a better place to start
06:55:39 Unstable branch on crawl.kelbi.org updated to: 0.25-a0-946-g6709f3f1ee (34)
06:55:39 it's just a shame that there don't seem to be any pre-existing open source tools for this
07:08:03 gammafunk * 0.25-a0-947-gc7cb02f: Make a map function non-static (2 minutes ago, 3 files, 4+ 4-) https://github.com/crawl/crawl/commit/c7cb02f7745d
07:18:48 Unstable branch on crawl.kelbi.org updated to: 0.25-a0-947-gc7cb02f774 (34)
07:27:16 gammafunk * 0.25-a0-948-g643b649: Correct a unit test function call (79 seconds ago, 1 file, 1+ 1-) https://github.com/crawl/crawl/commit/643b6494703c
07:32:25 Unstable branch on crawl.kelbi.org updated to: 0.25-a0-948-g643b649470 (34)
07:59:37 The build was broken. (master - c7cb02f #13214 : gammafunk): https://travis-ci.org/crawl/crawl/builds/684214981
08:12:50 -!- cjm_ is now known as cjm
09:18:32 yes. webtiles should use proto. along with save files
09:33:55 I know I've seen people directly grep/pull up old commits in the chat here... how do you do that? (or more pertinently to what I want to see: could someone pull up when the lair branches were shortened? I want to see how it did save compat)
09:34:17 "git log -S", usually
09:44:35 !vault snake_hunt
09:44:42 oh right
09:45:11 huh. well in looking for commits about shortening snake I found out that subtractor snakes were a thing. I guess they never made it into stable? (Zot unique snake that had weaken, and several other debuffs)
09:45:14 1/1. https://github.com/crawl/crawl/blob/master/crawl-ref/source/dat/des/branches/snake.des#L1044
09:49:14 most of the time when someone does a query here, they've done the search elsewhere
09:49:14 I usually just do it in git log
09:51:47 I'm trying git log --oneline | grep [searchterm] ...but I'm having no luck looking for what I need.
I can make my own solution sure, but I was wondering what mainline did. (looking for when branches were shortened; because I noticed if you just do that with no catch code... any existing down stairs that go too far go to... Zig:27 for some weird reason; not to mention you could be cutting off some branches by shortening the core dungeon, etc.)
09:53:36 Bcadren: `git help grep`, git has built-in grep functionality
09:54:05 Yea I'm using it, I found that much... just not finding what I was looking for...
10:00:10 In this case I know they were shortened in about 0.19. I also know that branch-data.h governs branch sizes. I do "git log branch-data.h" and in the first screenful is b6148b5347da93205c40feb08e93a7904c6881eb "Shorten Slime to 5 floors", which presumably does something for save compat (if anything; perhaps if you started a game with a longer Slime, you keep it)
10:04:04 248ef00ae6 New Snake spawn lists for shorter Snake
10:29:33 advil * 0.25-a0-949-gdd2af9e: Fix a bunch of basic errors with TOUCH_UI builds (4 minutes ago, 2 files, 2+ 10-) https://github.com/crawl/crawl/commit/dd2af9e06e16
10:33:04 Unstable branch on crawl.kelbi.org updated to: 0.25-a0-949-gdd2af9e06e (34)
11:42:40 -!- MikeHollisJr is now known as MCSizzell
11:42:50 -!- MCSizzell is now known as MikeHollisJr
11:57:04 -!- Raichvent_ is now known as Raichvent
12:01:46 ebering: re miscasts, I'm not sure I like the gray/white distinction as a UI thing, and at least for some categories the white color doesn't necessarily line up intuitively with what could happen
12:02:02 I wonder if it would make sense to make "mild" yellow and adjust the other colors to follow?
12:03:17 was looking at transmut in particular so maybe I'm off base for some of the other types
12:07:24 though here's a different idea about transmut: I wonder if the duration could be scaled in a more extreme fashion so that something like a 1-2 turn polymorph would make more intuitive sense for a "mild" miscast?
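[editor's note] The pickaxe search discussed above (`git log -S`) is often the missing piece when `git log --oneline | grep` turns up nothing, because it matches commits whose *diff* changed the number of occurrences of a string, not commits whose subject line mentions it. A minimal sketch, using a throwaway repo; the `BRANCH_DEPTH` string is purely illustrative, and in a real crawl checkout only the final command matters (run against e.g. branch-data.h):

```shell
# Build a tiny stand-in repo with two commits.
repo=$(mktemp -d)
cd "$repo"
git init -q
echo "BRANCH_DEPTH = 7" > branch-data.h
git add branch-data.h
git -c user.name=demo -c user.email=demo@example.com commit -qm "Add branch data"
echo "BRANCH_DEPTH = 5" > branch-data.h
git -c user.name=demo -c user.email=demo@example.com commit -qam "Shorten branch to 5 floors"

# Pickaxe: list only commits where this exact string was added or removed,
# regardless of what the commit message says.
git log -S "BRANCH_DEPTH = 5" --oneline
```

Only the "Shorten branch to 5 floors" commit is printed, since that is the one whose diff introduced the string.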
12:37:25 !crashlog Maedhros
12:38:57 90s limit exceeded: killed !crashlog Maedhros
12:44:26 where can I find any information related to websockets so that I can properly map apache via proxy,
12:44:33 specific to DCSS I mean
12:46:09 I'm basically just guessing at the path to send the websockets to
12:54:39 <|amethyst> The websocket URL path is /socket, if that's what you're asking
12:55:21 <|amethyst> There are probably other here who have experience putting a reverse proxy in front of webtiles; I do not
12:56:01 <|amethyst> s/other/&s/
12:57:16 yeah, the tmut miscast duration is maybe not quite right
13:00:31 maybe mild should rule out stationary forms, too?
13:00:43 though I'm not sure if those are "bad" in all circumstances
13:00:45 I knew that, but for some reason it isn't working; does anyone have a proxypass setup on apache whose websocket lines I could see?
13:14:51 <|amethyst> !tell alexjurkiewicz have you done reverse proxy in front of webtiles or know someone who has? MikeHollisJr was asking about it
13:15:32 |amethyst: OK, I'll let alexjurkiewicz know.
13:17:41 I'm not sure many public servers do that
13:17:49 maybe underhound
13:18:48 I've got other things on 80 and 443, and I'd prefer not to expose the ports; while it is purely cosmetic, it doesn't feel complete to me until I'm able to get that situated
13:19:27 oh, speaking of 443, is there something I need to change in the code if I change the domain it's running on? my ssl certs aren't working anymore (yes, I'm copying the new ones into the chroot)
13:22:23 actually it's not the certs, the ssl port seems to be timing out
13:23:32 er, well at least it's not the certs themselves; the new certs should be in exactly the same place the old ones were though (and the game didn't switch physical machines)
13:35:09 -!- Raichvent_ is now known as Raichvent
13:40:46 -!- Raichvent_ is now known as Raichvent
14:01:51 are the websockets ws or wss? or are both supported?
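[editor's note] Since nobody in the channel had Apache lines handy, here is a hedged sketch of the ProxyPass setup being asked for. It assumes webtiles listening on 127.0.0.1:8180 and the /socket path mentioned above; the vhost name is hypothetical and the config is untested. With mod_proxy_wstunnel loaded, ProxyPass accepts a ws:// backend URL, so no manual Upgrade-header juggling is needed:

```apache
# Sketch, not a known-good config. Requires mod_proxy, mod_proxy_http
# and mod_proxy_wstunnel (a2enmod proxy proxy_http proxy_wstunnel).
<VirtualHost *:80>
    # Hypothetical server name; substitute the real one.
    ServerName crawl.example.com

    # Websocket endpoint first, with the ws:// scheme so the
    # connection upgrade is carried through to webtiles.
    ProxyPass        /socket ws://127.0.0.1:8180/socket
    ProxyPassReverse /socket ws://127.0.0.1:8180/socket

    # Everything else is ordinary HTTP to the same backend.
    ProxyPass        / http://127.0.0.1:8180/
    ProxyPassReverse / http://127.0.0.1:8180/
    ProxyPreserveHost On
</VirtualHost>
```

This mirrors the nginx approach alexjurkiewicz describes later in the log: one backend, websocket traffic distinguished only by the scheme on its ProxyPass line.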
14:23:44 -!- Tiobot is now known as Guest36821
15:06:00 -!- amalloy is now known as amalloy_
17:39:48 gammafunk is there a way to get cerebot (my own instance) to monitor a channel instead of through msgs?
17:41:08 MikeHollisJr: what do you mean by monitor a channel? a discord channel?
17:41:13 irc
17:41:52 gammafunk, I've got all my bots in one spot except for cerebot, who is antisocial
17:42:12 if you mean have it send its relays through the channel, that's sort of weird since it's a lot of channel spam
17:42:29 you'd have it send its queries in the channel and receive the answer in the channel?
17:42:52 I already have a discord irc relay for sizzell
17:43:17 so that's not a huge deal, and the channel will be locked down and used only to process commands when all is said and done
17:43:43 yeah, it's just not what the bot was really designed for, since it doesn't make sense for typical usage
17:43:51 in any case, no, you'd have to rewrite it to do that; probably not a ton of work to do so, to be fair
17:44:00 okay, no problem, I just wanted to see if there was an option already, sounds good
17:45:53 any idea if sequell will ignore multiple commands in quick succession? I'm trying to get sequell to ignore my discord irc bridge, so that cerebot handles it, because I think it's getting two commands and then sequelching one (the wrong one)
17:46:48 actually I wonder if I could just drop cerebot at this point
17:47:07 yeah, you could just bridge a sequell instance
17:47:20 the main issue you'd run into is that output would not be markdowned in any way
17:47:59 yeah, I don't think I want to go that route anyway, because cerebot handles command ownership whereas the bridge doesn't
17:49:29 right, I guess that's a bigger issue actually
17:53:02 MikeHollisJr: I don't know about sequell ignoring, but it probably has some facility for ignoring commands based on source?
maybe that never came up, so greensnark never added that functionality
17:54:21 it's possible, sizzell runs in ##crawl though, doesn't it? maybe it's not relayed, I haven't checked discord in a bit
17:55:20 oh, actually maybe not, there haven't been any notices recently
18:02:40 MikeHollisJr: sizzell isn't an issue, since sizzel responds to different commands
18:02:59 s/sizzel /sizzell /
18:04:36 the bots in ##crawl(-dev) don't have a problem responding to each other, but they're set up so that they largely don't overlap in commands
18:04:39 !cmd gitgrep
18:04:42 Command: !gitgrep => .echo $(ignore $1)$(let (n $1 s $*) (concat "%git HEAD^{/" $s "}" (repeat (concat "^^{/" $s "}") (- (int $n) 1))))
18:04:49 is an example of bot-bot interaction on purpose
18:11:56 |amethyst: MikeHollisJr: i've set up a reverse proxy, what's your problem?
18:12:00 alexjurkiewicz: You have 1 message. Use !messages to read it.
18:13:06 I use nginx, not apache
18:13:06 alexjurkiewicz, I can't get the websockets to forward
18:13:13 non-ws requests work?
18:13:18 if you use apache, any chance I could see your relevant lines
18:13:18 yes
18:14:51 you should make sure the Connection and Upgrade headers are being sent through to the backend
18:15:04 if connections are failing to upgrade to websockets, that's probably the cause
18:15:09 in nginx, i write:
18:15:12 proxy_set_header Upgrade $http_upgrade;
18:15:14 proxy_set_header Connection "upgrade";
18:16:04 i'll be around more later if you still have problems
18:17:50 I've got a bunch of other sites that run, and have been using apache for years, so I'm still with apache, and these are my relevant lines, which I THINK are correct: https://pastebin.com/CPceC9Vh
18:24:00 Unstable branch on underhound.eu updated to: 0.25-a0-949-gdd2af9e06e (34)
18:55:39 i think you might be a little confused about how webtiles server works.
You don't need to explicitly support websocket protocol, you just need to pass through the Upgrade header and ensure Connection is set (this helps get around dumb forward proxies in the wild, it's not required AIUI)
18:56:09 you forward all traffic for your vhost through to the single webtiles server, which is probably listening over http on localhost
18:56:52 here's my full proxy configuration
18:56:53 location / {
18:56:56     proxy_pass http://127.0.0.1:8080;
18:56:58     proxy_http_version 1.1;
18:57:00     proxy_set_header Upgrade $http_upgrade;
18:57:02     proxy_set_header Connection "upgrade";
18:57:04     proxy_set_header Host $http_host;
18:57:06     proxy_set_header X-Forwarded-Proto "https";
18:57:08     proxy_read_timeout 7200;
18:57:10 }
18:58:06 my webtiles server config.py contains these lines:
18:58:09 bind_address = "127.0.0.1"
18:58:11 bind_port = 8080
18:58:42 no, I know that I just need to pass it along, but if I take websockets traffic on /socket and forward it to ws://domain/socket, it just says it can't establish a websocket connection, or something similar, but if I add the port and ignore the proxy it works fine
18:59:16 and thank you for that, I'll see if I can translate it
18:59:17 you don't need to do anything about forwarding /socket specially
19:00:01 doesn't websocket work via normal http(s) port and issue some kind of command to turn the connection into a websocket?
19:00:05 webtiles server can be treated like any other backend service that listens on http.
The only difference is you need to ensure Connection and Upgrade headers are set on the requests sent through to it
19:00:08 so yeah, you'd not need any special support for that
19:00:15 that's right, gammafunk
19:00:45 then I'm completely at a loss
19:10:21 this is my complete virtualhost; you can tell I've been experimenting with things, half of it is commented out: https://pastebin.com/cQCKin11 but connecting with that config I get "WebSocket connection to 'ws://localhost:8180/socket' failed: Error in connection establishment: net::ERR_CONNECTION_REFUSED" Now if I undo the proxy and use a redirect, it works.
19:13:21 comment out 48 & 49, and uncomment 46, and it works
19:18:42 but doing that I also have the port number visible, which I'm not crazy about
19:19:30 I don't want people to have a reminder to go looking for more services
19:24:56 well, the site seems to work for me
19:24:58 I can start a game
19:26:19 I had a redirect on 80 to 8180, instead of proxying
19:26:28 give me one sec to put it the other way
19:27:27 ok, reverted; make sure the initial request is on 80 rather than 8180 and you'll see what it's doing
19:31:13 alexjurkiewicz, connect on 80 and it should break, yet bypass the proxy and connect to 8180 and it'll work the way I've got it configured now
19:34:51 ??options
19:34:51 rcfile[1/4]: https://github.com/crawl/crawl/blob/master/crawl-ref/docs/options_guide.txt
19:43:10 sorry, not around for a few hours
19:43:57 no worries, thank you for your input though, I'm going to bring the site back online for now
20:16:31 those bastard scientists in the US made Formicid a registered trademark for ant killer
20:17:26 on more phd level research, it's the EU, not the US.
so it's 0-1 for them, but they could own you guys bigtime if they enforce their trademarks like the popsicle corp
23:22:03 advil: wasn't TOUCH_UI added specifically for android support, which I presume is still defunct
23:22:58 yeah, looks like it was added with the initial android port by frogbotherer
23:42:54 heya, how do I get involved with the development of crawl?
23:43:13 chat in this channel ;)
23:43:36 we've got detailed info on our github page, one sec, the bot will pull up the link
23:43:39 ??patch
23:43:40 patch[1/2]: For details about how to discuss and submit a patch or pull request, see: https://github.com/crawl/crawl/blob/master/crawl-ref/docs/develop/contribution-process.md
23:43:47 that's the long form guide
23:43:51 ??repo
23:43:52 github[1/1]: The site that hosts the crawl git repository at https://github.com/crawl/crawl
23:44:01 the readme.md has the section "how to help"
23:44:38 I will at some point soon in the development cycle post a devblog post identifying good low-hanging-fruit bugs, if you're not sure where to start
23:46:05 ebering: oh thanks
23:46:17 I've been playing off and on for years and I figured I'd really like to help out
23:47:51 ebering: so, following the devblog is a good place to learn about low-hanging-fruit bugs?
23:48:09 Seems like the kind of thing to start with, for sure.
23:48:56 where do I find the devblog?
23:49:00 AJTJ: well, it isn't usually, but I'm planning to make a post soon specifically about that
23:49:19 http://crawl.develz.org/wordpress/
23:52:32 this might be a good low hanging fruit bug? https://github.com/crawl/crawl/issues/1385
23:56:20 that one's actually a flavor bug
23:56:20 tarantella's bite is AF_CONFUSE and not resisted by any form of poison resistance or immunity
23:56:24 @??tarantella
23:56:24 tarantella (s) | Spd: 15 | HD: 8 | HP: 24-33 | AC/EV: 3/14 | Dam: 19(confuse) | web sense | Res: magic(20) | Vul: poison | XP: 214 | Sz: small | Int: animal.
23:57:23 is there a lead developer for crawl?
23:57:32 no
23:57:46 everyone with +v in the channel has commit access to the repo
23:57:52 but there is no formal "head"
23:58:05 huh, is there a core team?
23:58:08 yes
23:58:11 ??credits
23:58:12 I don't have a page labeled credits in my learndb.
23:58:14 er
23:58:22 the CREDITS.txt file lists the dev team
23:58:29 !source CREDITS.txt
23:58:29 anyone in this channel with voice is on the dev team
23:58:40 I swear we had an entry for it already
23:58:45 ??dev team
23:58:49 dev team ~ devteam[1/27]: Official list: https://github.com/crawl/crawl/blob/master/crawl-ref/CREDITS.txt | Dev Wiki page: https://crawl.develz.org/wiki/doku.php?id=dcss:admin:devteam
23:58:53 there we go
23:59:04 https://github.com/crawl/crawl/blob/master/crawl-ref/source/contrib/sdl2/CREDITS.txt
23:59:10 ah cool, so it's just open source and community built then? Anyone been here since the beginning?
23:59:11 !learn add credits https://github.com/crawl/crawl/blob/master/crawl-ref/CREDITS.txt
23:59:21 credits[1/1]: https://github.com/crawl/crawl/blob/master/crawl-ref/CREDITS.txt
23:59:33 how did crawl start?
23:59:40 well, the founding devs aren't active, although greensnark wrote Sequell and maintains it
23:59:44 see this post
23:59:57 http://crawl.develz.org/wordpress/the-dawn-of-stone-soup