In a nutshell, how academics think about software performance, whether CPU-time (which loosely correlates to energy efficiency, especially with compiled languages) or memory-space, is to consider roughly how the number of instructions or bytes scales with the amount of input: whether "constant", "logarithmic", "linear", "linear-logarithmic", or "exponential".
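To make that concrete, a minimal sketch in Lua (the language of my LÖVE posts below): linear search visits every element in the worst case, while binary search halves its range each step.

```lua
-- Linear search: the worst case visits every element, so the
-- instruction count scales linearly with #items.
local function linearSearch(items, target)
  for i, v in ipairs(items) do
    if v == target then return i end
  end
  return nil
end

-- Binary search over a sorted array: each comparison halves the
-- remaining range, so the instruction count scales logarithmically.
local function binarySearch(items, target)
  local lo, hi = 1, #items
  while lo <= hi do
    local mid = math.floor((lo + hi) / 2)
    if items[mid] == target then return mid
    elseif items[mid] < target then lo = mid + 1
    else hi = mid - 1 end
  end
  return nil
end
```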
Designing algorithms boils down to breaking down problems & aggregating solutions; designing data-structures is where the breakthroughs are.
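Mergesort is the classic example of that break-down & aggregate pattern; a sketch:

```lua
-- Mergesort: break the problem in half, solve each half recursively,
-- then aggregate the two sorted halves by repeatedly taking
-- whichever front element is smaller.
local function mergeSort(items)
  if #items <= 1 then return items end
  local left, right = {}, {}
  for i, v in ipairs(items) do
    table.insert(i <= #items / 2 and left or right, v)
  end
  left, right = mergeSort(left), mergeSort(right)
  local merged, l, r = {}, 1, 1
  while l <= #left or r <= #right do
    if r > #right or (l <= #left and left[l] <= right[r]) then
      table.insert(merged, left[l]); l = l + 1
    else
      table.insert(merged, right[r]); r = r + 1
    end
  end
  return merged
end
```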
The most difficult task a computer undertakes is simply to communicate its computations to other software, hardware, & computers, & through them to us humans!
Communicating with another program involves "serializing" the data into a common, usually bytestream, format for the other side to "lex" then "parse"; the same goes for sending data into the future, i.e. storage. If the communication/storage channel is unreliable we need error-correction/detection; if it doesn't have enough bandwidth we need compression/decompression. (Toy sketch below.)
1/2
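As a toy illustration (a hypothetical line-based format, not any standard; real formats also need escaping, which this sketch skips): serializing a string table into a bytestream, then lexing & parsing it back.

```lua
-- Toy serializer: flatten a string->string table into a
-- line-based bytestream a reader can lex & parse back.
local function serialize(tbl)
  local lines = {}
  for k, v in pairs(tbl) do
    table.insert(lines, k .. "=" .. v)
  end
  return table.concat(lines, "\n")
end

-- Lex the bytestream into lines, then parse each line
-- into a key & a value at the first "=".
local function parse(bytes)
  local tbl = {}
  for line in bytes:gmatch("[^\n]+") do       -- lexing
    local k, v = line:match("^([^=]+)=(.*)$") -- parsing
    if k then tbl[k] = v end
  end
  return tbl
end
```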
In many cases that becomes a back & forth communication, especially when communicating with humans or other computers. This becomes challenging in kernelspace, with all the hardware wanting to set the schedule.
Drivers beneath, within, & above Linux abstract this communication into a common interface (sketched below). Fileformats & helper libs aid in generating the sheer quantity of output data.
Or the hardware needs to be emulated.
& libraries like GTK need to establish a shared language to communicate with humans.
2/2
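On Linux that common interface is largely "everything is a file", so e.g. the very same Lua calls read a regular file or a hardware device alike (assuming a Linux system with /dev/urandom):

```lua
-- Thanks to drivers exposing a common file interface, the same
-- open/read calls work on a regular file or a hardware device.
local function readBytes(path, count)
  local f = assert(io.open(path, "rb"))
  local bytes = f:read(count)
  f:close()
  return bytes
end

print(#readBytes("/dev/urandom", 16)) -- the kernel's random device
```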
Based on @wolf480pl@mstdn.io's suggestion, over the next few days I'll attempt to give very high-level introductions to the topics I tend to toot about, to help you follow along. I don't know how successful I'll be...
I will pin these!
Namely how browsers & operating systems work, energy-efficient computing, etc. Feel free to ask questions to help refine these introductions!
I'm a hobbyist browser-engine dev. I like showing the potential for HTML+CSS downloaded over HTTP to work beautifully across any medium! I started with an auditory browser named "Rhapsode", and am preparing to create one for TV-remote input called "Haphaestus".
Aiming to achieve deeply-intertwined accessibility, IoT coolness, simplicity, & privacy.
These apply CSS to the downloaded & parsed XML, for that style tree to be transliterated into output once laid out. (Simplified sketch below.)
1/2
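A drastically oversimplified sketch of that styling pass (the node shape & tag-only "selectors" here are hypothetical; real CSS matching is far richer):

```lua
-- Simplified styling pass: walk the parsed tree, attach whichever
-- rules' (tag-only) selectors match each node, yielding the style
-- tree a layout pass would then consume.
local function style(node, rules)
  local styled = { tag = node.tag, text = node.text,
                   style = {}, children = {} }
  for _, rule in ipairs(rules) do
    if rule.selector == node.tag then
      for prop, value in pairs(rule.props) do
        styled.style[prop] = value
      end
    end
  end
  for _, child in ipairs(node.children or {}) do
    table.insert(styled.children, style(child, rules))
  end
  return styled
end

-- e.g. style(parsedDoc, { { selector = "h1",
--                           props = { fontSize = "2em" } } })
```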
I finished reading World Wide Waste by Gerry McGovern. I'd consider it essential reading for anyone working with computers!
https://gerrymcgovern.com/books/world-wide-waste/
It's well cited (though I still need to check those citations) & uses maths effectively to make its point.
Namely, that computers plus (surveillance) capitalism are actually worse for the environment than the predigital era; that we can and must move slow and fix things, and fund that vital work directly.
Don't get me wrong, computers can absolutely help us regain our environmental efficiency. They just *aren't*.
Not as long as we're:
* constantly syncing everything to the cloud,
* expecting same-hour delivery,
* funding our clickbait via surveillance advertising,
* buying a new phone every year,
* using AIs because they're cool rather than useful,
* running bloated software & webpages,
* buying into "big data",
* etc
Computing is environmentally cheap, but it rapidly adds up!
Open Source Has Too Many Parasocial Relationships - Justin Warren @ Pivot Nine:
https://pivotnine.com/blog/open-source-has-too-many-parasocial-relationships/
I'll now commence a programming language tournament!
Pitting your nominations against each other! Throwing in a few of my picks...
Feel free to nominate more until the end of Round 1!
Round 1 Match 1:
Nominations by @demonshreder@mastodon.xyz, @eichkat3r@hessen.social, @rcgj_OxPhys@floss.social, @McCrankyface@beige.party, & me
The other aspect of LÖVE's Body class is that, critically, bodies can collide & interact! Incurring impulses to apply as described yesterday. How'd we reimplement this upon our Lua machine?
Checking whether any 2 given bodies are colliding is trivial for the computer: 4 comparisons if we're talking bounding boxes, as sketched below.
Things get slow & complex when we ask "which of these bodies are colliding?" & "Was it between frames?"!
1/3?
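A sketch of both halves, assuming axis-aligned bounding boxes with x/y/w/h fields: the pairwise test really is 4 comparisons, while the naive answer to "which bodies collide?" is quadratic in the body count.

```lua
-- Axis-aligned bounding-box overlap: exactly 4 comparisons.
local function collides(a, b)
  return a.x < b.x + b.w and b.x < a.x + a.w
     and a.y < b.y + b.h and b.y < a.y + a.h
end

-- Naive broad phase: testing every pair is O(n^2), which is why
-- real engines bucket bodies into grids or trees first.
local function collidingPairs(bodies)
  local found = {}
  for i = 1, #bodies do
    for j = i + 1, #bodies do
      if collides(bodies[i], bodies[j]) then
        table.insert(found, { bodies[i], bodies[j] })
      end
    end
  end
  return found
end
```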
A simple bot gatekeeper for nginx - Evil Genius Robot:
https://evilgeniusrobot.uk/posts/a-simple-bot-gatekeeper-for-nginx.html
I've been seeing a fair bit of chatter about our visions for hosting internet services, so my thoughts:
In my home-hosting experience I find that the trick is to keep things dead simple, & to refuse any suggestions otherwise. This helps not only to minimize your maintenance burden, but also to keep your attack surface tiny.
However the future I envision is to minimize the need for servers!
To me the future is peer-to-peer, the future is local-first!
And I don't mean blockchain.
I believe everyone should have the right to use, study, modify, & share the software on their own computer. I strive to always deliver these Four Freedoms in the software I develop!
What's the best programming language? Debate!
Yes, I want to stir up trouble...