SnoopJ
@SnoopJ@hachyderm.io

hmm, or maybe it's only some of the pages for the Kurosawa series?

what the heck, I'll see if the issue keeps happening once the series is open, and then I'll reach out if it still is

SnoopJ
@SnoopJ@hachyderm.io

well, fixing the caching took a while, but now that's sorted… I hope…

In the process I also confirmed my theory from yesterday that Regent Theatre's API nonce changes on a daily basis. Thankfully, it's in the HTML served by the schedule root page, so it's just another web request and a pattern-match.

I guess there's potential for a TOCTOU problem with that nonce if the program's execution crosses the day boundary, but that's easy: I will "just" put this program on a timer that avoids it :D
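
A minimal sketch of that nonce refresh, assuming the nonce shows up in the schedule page HTML as a quoted hex string; the URL and the exact pattern here are placeholders, not the real ones:

```python
import re
import requests

# Placeholder URL for the schedule root page that embeds the daily nonce.
SCHEDULE_URL = "https://example.com/schedule"

def fetch_api_nonce() -> str:
    """Fetch the schedule page and pattern-match the daily API nonce out of its HTML."""
    resp = requests.get(SCHEDULE_URL, timeout=30)
    resp.raise_for_status()
    # Assumed shape: the nonce appears as something like "nonce":"deadbeef0123".
    match = re.search(r'"nonce"\s*:\s*"([0-9a-fA-F]+)"', resp.text)
    if match is None:
        raise RuntimeError("could not find the API nonce in the schedule page")
    return match.group(1)
```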

SnoopJ
@SnoopJ@hachyderm.io

Coolidge Corner's film pages are taking a snooze a lot. I guess this is the point where I am willing to send them an email so some computer-toucher can have a look at what's going on there

SnoopJ
@SnoopJ@hachyderm.io

okay, I have done the needful and now have a re-usable cache mechanism, so I can stop writing the same half-dozen lines again and again

I settled on being okay with caching the "showings by date" internal structure that is the output of each provider before they go to the global gather. It's discarding a lot of data from the cache, but that's… fine.

The cache is really just there so we don't hammer the upstreams, and if I really want HTTP-layer caching, I can go get someone else's solution for it and plug that in.
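
For reference, a minimal sketch of the kind of re-usable cache described here: each provider's "showings by date" mapping gets written to a JSON file and read back while it's still fresh. The directory layout and freshness window are assumptions, not the actual implementation:

```python
import json
import time
from pathlib import Path

CACHE_DIR = Path("cache")       # assumed cache location
MAX_AGE_SECONDS = 6 * 60 * 60   # assumed freshness window

def cached_showings(provider_name: str, fetch):
    """Return a provider's showings-by-date mapping, reusing a JSON file cache while it's fresh."""
    CACHE_DIR.mkdir(exist_ok=True)
    path = CACHE_DIR / f"{provider_name}.json"
    if path.exists() and (time.time() - path.stat().st_mtime) < MAX_AGE_SECONDS:
        return json.loads(path.read_text())
    showings = fetch()  # hypothetical provider call returning e.g. {"2024-06-01": [...], ...}
    path.write_text(json.dumps(showings))
    return showings
```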

SnoopJ
@SnoopJ@hachyderm.io

several seconds later: ah, crap, I just realized that I'm missing the serialization of one of those types
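
If the missing piece is a non-JSON-native type in that structure (a datetime, say; the specific type isn't named above), the usual fix is a default hook for json.dumps. A hedged sketch:

```python
import json
from datetime import date, datetime

def to_jsonable(obj):
    """Fallback serializer for types json.dumps can't handle natively (assumed here to be dates)."""
    if isinstance(obj, (datetime, date)):
        return obj.isoformat()
    raise TypeError(f"cannot serialize {type(obj).__name__}")

# usage: json.dumps(showings, default=to_jsonable)
```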

SnoopJ
@SnoopJ@hachyderm.io

In addition to their API being… idiosyncratic… the South Asian showings at Apple Cinema are also an interesting edge case. These are separate showings and would not fit into the false dichotomy of dub/sub that one might be tempted to adopt for this domain:

Coolie (Tamil)
Coolie (Telugu)
Coolie (Hindi)

I could see the case for collapsing those to a single listing (especially if the languages could be combined), but I don't think I will bother for now; it's not as big a problem for the reader as mass-market movies are
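
If that collapsing were ever done, a hedged sketch of it, assuming the titles arrive as plain strings like "Coolie (Tamil)"; the pattern and data shape are assumptions:

```python
import re
from collections import defaultdict

def collapse_languages(titles):
    """Group 'Title (Language)' listings under their base title, collecting the languages."""
    grouped = defaultdict(list)
    for title in titles:
        match = re.fullmatch(r"(.+?)\s*\(([^)]+)\)", title)
        if match:
            grouped[match.group(1)].append(match.group(2))
        else:
            grouped.setdefault(title, [])  # no language tag on this listing
    return dict(grouped)

# collapse_languages(["Coolie (Tamil)", "Coolie (Telugu)", "Coolie (Hindi)"])
# -> {"Coolie": ["Tamil", "Telugu", "Hindi"]}
```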

SnoopJ
@SnoopJ@hachyderm.io

enhance

SnoopJ
@SnoopJ@hachyderm.io

In which I forget to de-dupe

SnoopJ
@SnoopJ@hachyderm.io

but spewing 155 files to disk every day, on the other hand, that's another level of obnoxious. It's only 3 MB of data (on the other hand, it's 3 MB of data!), but yeah, maybe I should collate these requests and do the serialization in the function with the loop that creates them.
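
A sketch of that collation idea: accumulate everything inside the loop and write one file per run instead of one per request. The names and layout here are assumptions:

```python
import json
from datetime import date
from pathlib import Path

def dump_all_showings(providers, out_dir=Path("cache")):
    """Collect every provider's showings in one pass and write a single JSON file for the day."""
    out_dir.mkdir(exist_ok=True)
    # Hypothetical provider interface: a .name attribute and a showings_by_date() method.
    combined = {provider.name: provider.showings_by_date() for provider in providers}
    out_path = out_dir / f"showings-{date.today().isoformat()}.json"
    out_path.write_text(json.dumps(combined))
    return out_path
```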

SnoopJ
@SnoopJ@hachyderm.io

that N*M is 155, by the way. not really enough to justify being annoyed about it, but enough to be annoying