SnoopJ
@SnoopJ@hachyderm.io

I probably need a caching implementation that I can re-use between providers. I keep re-writing the simple parts of that right where it's needed.

SnoopJ
@SnoopJ@hachyderm.io

caching N*M web requests is mildly annoying but the alternative is to cache after I've started to munge things and I kinda hate doing that in an application like this.

it's easy enough to cache by filename with
{actualMovieID}_{query_start_date}_{query_performed_date}.json, just annoying
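
For what it's worth, a minimal sketch of that kind of filename-keyed cache, assuming a Python scraper and a local cache directory (the function name, cache location, and `fetch` callable are placeholders, not the project's actual code):

```python
import datetime as dt
import json
from pathlib import Path

CACHE_DIR = Path("cache")  # hypothetical location for cached responses


def cached_fetch(actual_movie_id: str, query_start: dt.date, fetch):
    """Return the cached JSON for this (movie, start date) pair, re-fetching
    at most once per calendar day.  `fetch` is whatever callable actually
    performs the web request and returns a JSON-serializable object."""
    performed = dt.date.today().isoformat()
    path = CACHE_DIR / f"{actual_movie_id}_{query_start.isoformat()}_{performed}.json"
    if path.exists():
        return json.loads(path.read_text())
    data = fetch(actual_movie_id, query_start)
    CACHE_DIR.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(data))
    return data
```

Since the query-performed date is part of the filename, yesterday's cache files simply stop matching and the data gets refreshed on the next run.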

SnoopJ
@SnoopJ@hachyderm.io

The API backing Apple Cinemas has several eyebrow-raisers:

* TLS fingerprinting by CloudFlare is enabled
* Typos in the API (Location vs. Loction)
* Redundant/ignored query parameters (movieID vs actualMovidId, end datetime for a range query does not matter, does not even need to be after start datetime)

I think this one is going to require me to send 1 + N_movies * N_days requests
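
A rough sketch of what that fan-out could look like in Python with requests; the host, routes, and parameter names below are placeholders, and a plain requests session may well trip the Cloudflare TLS fingerprinting mentioned above:

```python
import datetime as dt
import requests

BASE = "https://example.invalid/api"  # placeholder; not the real host or routes


def fetch_showtimes(days: int = 7) -> dict:
    """One listing request, then one showtimes request per movie per day,
    i.e. 1 + N_movies * N_days requests in total."""
    session = requests.Session()
    movies = session.get(f"{BASE}/movies", timeout=30).json()  # the "+1" request
    start = dt.date.today()
    results = {}
    for movie in movies:
        for offset in range(days):
            day = start + dt.timedelta(days=offset)
            resp = session.get(
                f"{BASE}/showtimes",
                params={"actualMovieId": movie["id"], "start": day.isoformat()},
                timeout=30,
            )
            results[(movie["id"], day)] = resp.json()
    return results
```

Combined with a per-request cache like the one sketched earlier, re-runs on the same day stay cheap even with that request count.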

SnoopJ
@SnoopJ@hachyderm.io

Writing up my provider code for Apple Cinemas, I find myself writing the following comment explaining that the query route ignores the end date:

# We'll set this parameter "right" anyway, as a prayer for that messy API's soul.
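
In context, that comment might sit in a little parameter-builder along these lines (a sketch only; the parameter spellings are guesses, not the API's real names):

```python
import datetime as dt


def showtime_params(actual_movie_id: str, start: dt.date, end: dt.date) -> dict:
    # The route only seems to honor the start date, but...
    # We'll set this parameter "right" anyway, as a prayer for that messy API's soul.
    return {
        "actualMovieId": actual_movie_id,  # placeholder spelling
        "startDate": start.isoformat(),
        "endDate": end.isoformat(),        # ignored server-side, set anyway
    }
```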

SnoopJ
@SnoopJ@hachyderm.io

does present me with an interesting conundrum of what I will do when I aggregate the info from all of these into a single stream that other people can consume… not sure if people will want info about non-movies from a project mostly focusing on movies, but OTOH I'm already doing the necessary scraping work for them…

SnoopJ
@SnoopJ@hachyderm.io

wow, Regent Theatre's use of a commercial offering called EventON to store their event data really made writing a provider for their events quite a nuisance

features include:

* shocking number of form fields (may be PHP/WP data?)
* date range fields that are ignored when satisfying the request
* multiple nonces re-used for every request
* serving HTML inside the JSON responses
* putting more information in that HTML than the sibling JSON metadata
* inability to link to a particular month in the on-page calendar schedule
* no event pages to link to :(

most of those are the software's fault, although for the last one the theatre deserves some of the credit. oh well, I'll link to the main schedule page and the user can figure it out; the calendar does tell them what day the event is on
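
For flavor, a sketch of what scraping that kind of endpoint tends to look like, assuming a WordPress admin-ajax style route that returns HTML wrapped in a JSON envelope; the action name, payload key, and CSS selectors here are invented placeholders, not EventON's actual interface:

```python
import requests
from bs4 import BeautifulSoup

AJAX_URL = "https://example.invalid/wp-admin/admin-ajax.php"  # placeholder URL


def fetch_month_events(nonce: str) -> list[dict]:
    """POST the calendar form, then dig the event details out of the HTML
    that comes back embedded in the JSON envelope."""
    form = {
        "action": "eventon_calendar",  # hypothetical action name
        "nonce": nonce,                # re-used nonce, per the observations above
        # ...plus the pile of other form fields the widget sends; the
        # date-range fields appear to be ignored anyway.
    }
    payload = requests.post(AJAX_URL, data=form, timeout=30).json()
    soup = BeautifulSoup(payload["content"], "html.parser")  # HTML lives inside the JSON
    events = []
    for node in soup.select(".event-item"):  # hypothetical CSS class
        events.append({
            "title": node.get_text(strip=True),
            "date": node.get("data-date"),   # hypothetical attribute
        })
    return events
```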

SnoopJ
@SnoopJ@hachyderm.io

but in the end, I won: their movies now appear on the calendar

SnoopJ
@SnoopJ@hachyderm.io

wild to me how much javascript is on these sites to do so very much nothing

a single script, one of a dozen on the page, comes to 4000 lines when prettified, like… buddy, my thing is currently 624 lines total and I cannot imagine it would take more than 1000 lines of JS to make it REAL schmick

I know that comparing on LoC is vague at best, and I know that webtech exists to serve ads and everything else is a side effect

but goddamn

SnoopJ
@SnoopJ@hachyderm.io

If you're reading this, you're at the tail end of a long stream of posts on the feed about #Monsterdon, a fediverse community movie-watching event.

This week we watched GODZILLA, MOTHRA, AND KING GHIDORAH: GIANT MONSTERS ALL-OUT ATTACK (2001)

It has a Godzilla possessed by the vengeful spirits of those who died in the violence of the Pacific War, the eponymous kaiju as protectors of Japan against Godzilla, and even a bonus kaiju: the inimitably cute Baragon. Garnished with a small amount of Japanese nationalism and a healthy serving of early 2000s vibes.

There was also a post-watch presentation of KAIJU ON ICE: GIANT MONSTERS ALL-OUT CHILL (2025), a remix of the movie into several musical segments made by members of our own community. I hear it will be uploaded to the Internet Archive soon.

See the post at the top of this thread for more info:

https://hachyderm.io/@SnoopJ/111004128276431553

SnoopJ
@SnoopJ@hachyderm.io

KAIJU ON ICE: GIANT MONSTERS ALL-OUT CHILL (2025)

THE END

πŸ‘πŸ‘πŸ‘πŸ‘πŸ‘πŸ‘

Thanks for the chill segments y'all, look forward to seeing that uploaded

#Monsterdon #WrongFrogs

SnoopJ
@SnoopJ@hachyderm.io

I'm going with yes, this lo-fi chill remix of the movie counts as #WrongFrogs. The perfect chaser for a thriller.

AND this is an original work by #Monsterdon's own @ryan@social.miyaku.media and FIRST-TIME PARTICIPANT @LuluHelle@ohai.social

If you're not doing anything else, why not have a watch/listen:

https://miru.miyaku.media/

So proud of the cool things this community gets up to

SnoopJ
@SnoopJ@hachyderm.io

lo-fi beats to get vaporized to

#Monsterdon #WrongFrogs