Submitting this to let others that use jq beware of this :/
System jq:
$ jq --version
jq-1.6
$ echo '{"number":288230376151711744}' | jq '.number'
288230376151711740
Fresh compile from source according to the build instructions at https://github.com/stedolan/jq:
$ ./configure --with-oniguruma=builtin && make -j8
$ ./jq --version
jq-1.6-137-gd18b2d0-dirty
$ echo '{"number":288230376151711744}' | ./jq '.number'
288230376151711744
Alternatively:
$ ./configure --with-oniguruma=builtin --enable-decnum=no && make -j8
$ echo '{"number":288230376151711744}' | ./jq '.number'
288230376151711740
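The transcripts above follow from IEEE 754 doubles, which is where jq keeps numbers when decNum support is disabled. A minimal Python sketch (my illustration, not from the thread) of the two effects involved: doubles carry a 53-bit significand, so integer precision runs out past 2**53, and the reported value happens to be 2**58, which a double stores exactly but which gets mangled when printed with 17 significant digits (a common double-formatting choice; that jq 1.6 formats this way is my assumption, though it matches the ...740 output):

```python
# Doubles have a 53-bit significand, so not every 64-bit integer survives
# a round trip through a double-based JSON number.
n = 2 ** 53
assert float(n) == n              # 2**53 itself is exact
assert float(n + 1) == float(n)   # n + 1 rounds back down to n

# The value from the report is 2**58 -- a power of two, so it fits in a
# double exactly. What loses the last digit is decimal output with only
# 17 significant digits:
big = 288230376151711744
assert big == 2 ** 58
print(f"{float(big):.17g}")       # 2.8823037615171174e+17, i.e. ...740
```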
So the basic bug is fixed; jq has included a bignum library for more than two years. I don't know whether Mint (and thus presumably Ubuntu, and thus possibly Debian) ships an older version of jq or deliberately sets nonstandard, user-unfriendly build flags, but I'm somewhat underwhelmed either way.

Obviously this situation in Belarus is a simple legal restriction and not a wall, but I've long thought that a sensible constitutional guarantee would be the right to leave the country at any time.
In totalitarian regimes, constitutions aren't worth the pixels they are printed on. Rights guaranteed by laws must be enforced by the judiciary and the executive. But that only happens if (1) cases actually reach a court, (2) that court is independent, and (3) the executive is willing to enforce the court's decisions.
Indeed, according to https://en.m.wikipedia.org/wiki/Constitution_of_Belarus, "Citizens [...] have the right to protest against the government." Problem solved?
> Since software that implements IEEE 754-2008 binary64 (double precision) numbers [IEEE754] is generally available and widely used, good interoperability can be achieved by implementations that expect no more precision or range than these provide...
[0] https://datatracker.ietf.org/doc/html/rfc7159#section-6
Which means that if you are putting 64-bit integers into JSON and require every bit to be preserved, you are not actually producing JSON that is compatible with all consumers. For example, such JSON is not compatible with browsers. Here is what my Firefox's JS console says:
>> x = '{"id":675127116845989888,"id_str":"675127116845989888"}'
<- "{\"id\":675127116845989888,\"id_str\":\"675127116845989888\"}"
>> JSON.parse(x)
<- Object { id: 675127116845989900, id_str: "675127116845989888" }
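For contrast, a rough Python sketch (my illustration, not from the thread) of why APIs like Twitter's ship an id_str twin alongside id: Python's json module keeps JSON integers as arbitrary-precision ints, while any consumer that routes the value through a double ends up with the rounded decimal form the browser console shows.

```python
import json

x = '{"id":675127116845989888,"id_str":"675127116845989888"}'
obj = json.loads(x)

# Python's json keeps JSON integers as arbitrary-precision ints:
assert obj["id"] == 675127116845989888

# A consumer that stores the value in a double prints the shortest decimal
# that round-trips, which is the ...900 form the Firefox console shows:
assert repr(float(obj["id"])) == "6.751271168459899e+17"

# The string twin survives every parser unchanged:
assert obj["id_str"] == "675127116845989888"
```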
I'd say that jq acting the same way as browsers is pretty reasonable, no?

Without the superlatives and strawman attacks, and with an actual description of how things should be done instead, this would be an interesting read.
Compare with most of "legacy" mathematics, which studies countable structures (so the description can use arbitrary series).
Of course, there are larger sets, but they mostly serve as a theater (just like countable infinity is just a theater of computer science).
Finite sets in the rest of mathematics are sometimes considered uninteresting, but not so in computer science, where only very small sets (roughly up to 32 elements, since all their subsets can be easily enumerated on a typical computer) are considered uninteresting.
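To make the "very small sets" remark concrete: a set of n elements has 2**n subsets, so exhaustive enumeration is feasible only up to n around 32 (about four billion subsets). A quick bitmask sketch (my illustration):

```python
def subsets(items):
    """Yield every subset of items, one per n-bit mask (2**n in total)."""
    n = len(items)
    for mask in range(1 << n):
        yield [items[i] for i in range(n) if mask & (1 << i)]

# 3 elements -> 2**3 = 8 subsets, from [] up to the full set.
assert len(list(subsets(["a", "b", "c"]))) == 8
```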
A good example is number theory, which philosophically treats all natural numbers, regardless of size, as the same kind of object. In computer science, the magnitude of numbers (though the boundary is fuzzy) plays a much bigger role.
This is a strange claim since the entire field was founded upon the investigation of potentially (and often actually) infinite computations.
> Compare with most of "legacy" mathematics, which studies countable structures (so the description can use arbitrary series).
Define "most". Do it in a way that makes real and complex analysis and topology (and probably many other branches) the smaller part of mathematics.
Most importantly though, my problem with this kind of discussion is that the question itself is meaningless. Not everything can be classified into neat " X is a Y" relationships. Not everything needs to be classified into such relationships. Even if the discussion reached a consensus, that consensus would be meaningless. Computer science is a part of math? OK, but so what? Computer science is not a part of math? OK, but so what? Neither conclusion would tell us anything useful.
Nope. Turing machines are arbitrary and rather unmathematical. The lambda calculus is a much better computational formalism, more mathematically grounded and oriented, with far more direct practical applications.
Thus proving the point that a field that studies them cannot be considered a branch of mathematics.
Here we see the following claim:
> By partaking in a form of fraud that has left the Overton window of acceptability, the researchers in the collusion ring have finally succeeded in forcing the community to acknowledge its blind spot. For the first time, researchers reading conference proceedings will be forced to wonder: does this work truly merit my attention? Or is its publication simply the result of fraud?
But I don't see how this follows. If I follow the link to the description of the actual fraud ( https://cacm.acm.org/magazines/2021/6/252840-collusion-rings... ), it says essentially the opposite: the "fraudulent" papers are no different from papers published by ordinary means.
> the review process is notoriously random. In a well-publicized case in 2014, organizers of the Neural Information Processing Systems Conference formed two independent program committees and had 10% of submissions reviewed by both. The result was that almost 60% of papers accepted by one program committee were rejected by the other, suggesting that the fate of many papers is determined by the specifics of the reviewers selected
> In response, some authors have adopted paper-quality-independent interventions to increase their odds of getting papers accepted. That is, they are cheating.
> Here is an account of one type of cheating that I am aware of: a collusion ring.
> A group of colluding authors writes and submits papers to the conference.
> The colluders share, amongst themselves, the titles of each other's papers, violating the tenet of blind reviewing
> The colluders hide conflicts of interest, then bid to review these papers, sometimes from duplicate accounts, in an attempt to be assigned to these papers as reviewers.
> The colluders write very positive reviews of these papers
So the system is: conferences already can't tell the difference between a good paper and a bad paper. Researchers respond by adopting strategies for passing review that are unrelated to paper quality (since paper quality doesn't count anyway). But those strategies don't select for worse papers either. If I'm reading conference papers, why would I worry about whether one of them is the product of review collusion?
Because the one you are reading may have crowded out a better one. Even if the current review system is essentially random, replacing it with something that is essentially a contest of well-connectedness is worse. Young researchers with good ideas but fewer connections, or people from less well-known institutions, would have their ideas suppressed.
So you should be worrying about stagnation, and about not reading what might actually be new and exciting.