This stuff, while old, is not routine for decision makers. They don't seem to grok how to formulate the questions and the choices.
I have hopes of a resurgence of operations research and linear optimisation as goods in themselves: we could be plotting more nuanced courses through the dark waters of competing pressures. Decision support systems across many fields would remove subjective, politicised pressures.
Linear programming, and even integer linear programming, are pretty well solved, practically speaking.
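To make that concrete, here's a toy sketch of what an LP looks like, solved by brute-force vertex enumeration (the problem and numbers are made up for illustration; production solvers like HiGHS or GLPK handle millions of variables and are what "practically solved" refers to):

```python
from itertools import combinations

# Tiny 2-variable LP solved by enumerating constraint intersections.
# Maximize 3x + 2y  s.t.  x + y <= 4,  x + 3y <= 6,  x >= 0,  y >= 0.
# Each constraint is stored as (a, b, c) meaning a*x + b*y <= c.
cons = [(1, 1, 4), (1, 3, 6), (-1, 0, 0), (0, -1, 0)]

def intersect(c1, c2):
    # Solve the 2x2 system where both constraints hold with equality.
    a1, b1, r1 = c1; a2, b2, r2 = c2
    det = a1 * b2 - a2 * b1
    if det == 0:
        return None  # parallel constraint boundaries
    return ((r1 * b2 - r2 * b1) / det, (a1 * r2 - a2 * r1) / det)

# An optimum of an LP lies at a vertex of the feasible polytope.
feasible = [p for p in (intersect(c1, c2) for c1, c2 in combinations(cons, 2))
            if p and all(a * p[0] + b * p[1] <= c + 1e-9 for a, b, c in cons)]
best = max(feasible, key=lambda p: 3 * p[0] + 2 * p[1])
print(best, 3 * best[0] + 2 * best[1])  # (4.0, 0.0) with objective 12.0
```

Vertex enumeration blows up combinatorially; the simplex and interior-point methods in real solvers are what make LP cheap at scale.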
> Unless you’re talking about advanced PR and market manipulation techniques to capture and retain ad revenue
Those very much _are_ the goals at those enterprises.
These organizations employed too many people of mediocre ability relative to their output, leading to waste and eventual disbandment. Today's private-sector FAANG+ companies are making bigger breakthroughs in AI, apps, self-driving cars, etc. with fewer people relative to the population and higher profits, thanks to more selective hiring and performance metrics. Yeah, those people from the '60s were smart, but today's STEM whiz kids are probably lapping them.
Those weren’t really the topics people were interested in at the time (depending on your definition of AI).
The shoulders of giants, as they say.
2. A FAQ for the newsgroup is not automatically part of the rules for the challenge.
3. If the entire FAQ is treated as rules text, then the rules directly say you cannot win, and that's not an acceptable way to do rules.
The line about filename shenanigans being disallowed.
>A FAQ for the newsgroup is not automatically part of the rules for the challenge.
I think it’s completely clear that this FAQ was about the challenge.
With a big enough file, those fractions of a bit add up to a non-trivial number of bits. You can be cunning about how you encode your deltas, too (the next delta makes use of the remaining unused bits from the previous delta).
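A minimal sketch of that idea, assuming fixed 5-bit deltas (the width and values are made up; the point is that leftover bits of one byte are consumed by the next delta instead of being wasted as padding):

```python
# Pack values at sub-byte granularity: bits spill across byte boundaries.
def pack(deltas, width=5):
    buf, nbits, out = 0, 0, bytearray()
    for d in deltas:
        buf = (buf << width) | (d & ((1 << width) - 1))
        nbits += width
        while nbits >= 8:          # emit full bytes as they accumulate
            nbits -= 8
            out.append((buf >> nbits) & 0xFF)
    if nbits:                      # flush the final partial byte, left-aligned
        out.append((buf << (8 - nbits)) & 0xFF)
    return bytes(out)

def unpack(data, count, width=5):
    buf, nbits, vals = 0, 0, []
    it = iter(data)
    while len(vals) < count:
        if nbits < width:          # refill from the next byte
            buf = (buf << 8) | next(it)
            nbits += 8
            continue
        nbits -= width
        vals.append((buf >> nbits) & ((1 << width) - 1))
    return vals

deltas = [3, 17, 0, 31, 9, 2, 25, 14]
packed = pack(deltas)
print(len(packed), unpack(packed, len(deltas)))  # 5 bytes, not 8
```

Eight 5-bit deltas fit in 40 bits, i.e. 5 bytes instead of the 8 that byte-aligned storage would cost.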
I haven't worked through all the details, so it may be that in the end everything rebalances to a no, but I'd like to withhold judgement for the moment.
I don’t follow. Wouldn’t that be (because not random)
Renegadry aside, for those who are more interested in the Information Theory perspective on this:
Kolmogorov complexity is a good teaching tool, but hardly used in engineering practice because it contains serious foot-guns.
One example of defining K complexity(S, M) is the length of the shortest initial tape contents P for a given abstract machine M such that, when M is started on this tape, the machine halts with final tape contents P+S. Obviously, one must be very careful to define things like “initial state”, “input”, “halt”, and “length”, since not all universal machines look like Turing machines at first glance, and the alphabet size must either be consistent for all strings or else appear as an explicit log factor.
Mike’s intuitive understanding was incorrect in two subtle ways:
1. Without specifying the abstract machine M, the K complexity of a string S is not meaningful. For instance, given any S, one may define an abstract machine with a single instruction that prints S, plus other instructions to make M Turing complete. That is, for any string S, there is an M_S such that complexity(S, M_S) = 1 bit. Alternatively, it would be possible to define an abstract machine M_FS that supports filesystem operations. Then the complexity using Patrick’s solution could be made well-defined by measuring the length of the concatenation of the decompressor P with a string describing the initial filesystem state.
2. Even without adversarial examples, and with a particular M specified, uniform random strings’ K complexity is only _tightly concentrated around_ the strings’ length plus a machine-dependent constant. As Patrick points out, for any given string length, some individual string exemplars may have much smaller K complexity; for instance, due to repetition.
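A real compressor makes point 2 tangible: its output length is an upper bound on a description length for that particular "machine" (here zlib's, an arbitrary choice), and the contrast between a random string and a structured exemplar of the same length is stark:

```python
import os
import zlib

# Compressed size upper-bounds a (zlib-specific) description length.
random_data = os.urandom(100_000)   # uniform random bytes
repetitive = b"abc" * 33_334        # ~100 KB, but highly structured

# Random data stays at roughly its own length (plus small overhead);
# the repetitive exemplar of the same length collapses dramatically.
print(len(zlib.compress(random_data, 9)))  # ~100 KB: no real compression
print(len(zlib.compress(repetitive, 9)))   # a few hundred bytes
```

This only bounds K complexity from above, of course; no program can compute K complexity exactly.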
What is the distribution of the complexity of a string? Is there some Chernoff-like bound?
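There is a classic counting bound with exactly that exponential-tail flavour: there are only 2^m - 1 programs shorter than m bits, so fewer than 2^(n-k) of the 2^n strings of length n can have complexity below n - k, i.e. the fraction compressible by k or more bits is below 2^-k. A sketch of the arithmetic:

```python
# Counting bound: programs shorter than (n - k) bits number 2^(n-k) - 1,
# so at most that many of the 2^n strings of length n can be compressed
# by k or more bits. The tail falls off like 2^-k, Chernoff-style.
def max_compressible_fraction(n, k):
    programs = (1 << (n - k)) - 1  # distinct programs of length < n - k
    return programs / (1 << n)

for k in range(1, 6):
    print(k, max_compressible_fraction(32, k))  # each below 2**-k
```

This bound is machine-independent in shape (only the additive constant moves), which is why "most strings are incompressible" holds for any fixed machine.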