@nen@mementomori.social
I'm trying to find a way to compare the size (information content, measured in bits) of original measurement data with the size of computational models fitted to that data. A good model would reduce the total size, i.e. compress the data.
I think it would require representing all numbers in a comparable manner: the data, the model parameters, and the error of the model relative to the data.
This is what I have come up with so far: a floating-point representation with a variable-length significand and a who-cares-length exponent. The size of a number in bits would be determined only (or mostly) by the length of its significand; the exponent is either ignored or counted as a small constant number of bits. Bits beyond the significand are treated as random rather than zero, because they represent unknown detail.
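To make the bookkeeping concrete, here is a rough Python sketch of the kind of comparison I have in mind. It is purely illustrative: the function names, the fixed exponent cost, and especially the number of significand bits charged to the residuals are all assumptions I made up for the example, and the residual precision is exactly the part I'm unsure about.

```python
import numpy as np

def number_bits(x, significand_bits):
    """Bits charged for one number: a small constant for the exponent
    plus a chosen number of significand bits."""
    EXPONENT_COST = 8  # arbitrary small constant, as described above
    if x == 0:
        return EXPONENT_COST
    return EXPONENT_COST + significand_bits

def description_length(values, significand_bits):
    """Total bits to describe a collection of numbers at a given precision."""
    return sum(number_bits(v, significand_bits) for v in values)

# Toy data: noisy samples of the line y = 2x + 1
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2.0 * x + 1.0 + rng.normal(scale=0.1, size=x.size)

# Fit a simple model (least-squares line) and compute residuals
slope, intercept = np.polyfit(x, y, 1)
residuals = y - (slope * x + intercept)

# Charge full precision to the raw data, but fewer significand bits to the
# residuals because they are small and mostly noise. How many bits the
# residuals actually "deserve" is the open question.
raw_bits      = description_length(y, significand_bits=23)   # float32-like
model_bits    = description_length([slope, intercept], significand_bits=23)
residual_bits = description_length(residuals, significand_bits=8)  # assumed coarser

print(f"raw data:          {raw_bits} bits")
print(f"model + residuals: {model_bits + residual_bits} bits")
# The model "compresses" the data if the second total is smaller.
```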
Edit: My use of terminology is a mess because I'm not super familiar with this stuff. I hope it is better now.
Feedback/thoughts welcome, especially if there is a known better approach to this.
#Math #ComputerScience