Readit News
chowells commented on The issue of anti-cheat on Linux (2024)   tulach.cc/the-issue-of-an... · Posted by u/todsacerdoti
sounds · 2 days ago
About halfway in the article, there's a brief nod to CS:GO. It uses a tick system and the server controls what is possible, such as physics or awarding kills. Fighting genre games use the same server-based game logic.

Cheating is a big draw to Windows for semi-pro gamers and mid streamers. What else is there to do except grind? Windows gives the illusion of "kernel level anti-cheat," which filters out the simplest ones, and fools most people some of the time.

chowells · 2 days ago
Fighting games do not use server-mediated simulation, in general. Cheating is actually a huge problem in popular games. And in fact, even running a server-mediated simulation wouldn't help with any of the common cheating in fighting games.

For instance, a common cheat in Street Fighter 6 is to trigger a drive impact in response to the startup of a move that is unsafe to a drive impact. That is, recognizing the opponent's animation and triggering an input. No part of that cares where the game simulation is being done. In fact, this kind of cheating can only be detected statistically. And the cheats have tools to combat that, adding random trigger chances and delays. It's pretty easy to tune a cheat to be approximately as effective as a high-level player.

Kernel-level anticheat isn't a perfect solution, but there are people asking for it. It would make cheating a lot harder, at least.

chowells commented on The joy of recursion, immutable data, & pure functions: Making mazes with JS   jrsinclair.com/articles/2... · Posted by u/jrsinclair
b_e_n_t_o_n · 5 days ago
I don't think so?
chowells · 5 days ago
Yes. It's using a global variable to canonicalize instances so that reference equality is value equality. An LRU cache will evict things that are still in use, so the canonicalization process can return a different instance for the same value. (If it never evicts anything still in use, it's strictly inferior to just tying into the garbage collector anyway.) That breaks the reference-equality-is-value-equality assumption the rest of the code depends on.
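To make the failure mode concrete, here's a toy sketch (the class, keys, and capacity are invented for illustration): an interning cache backed by a tiny LRU. Once an entry is evicted, re-interning the same value hands back a fresh instance, so `===` stops tracking value equality.

```javascript
// Toy intern cache with LRU eviction, capacity 1, to show the failure
// mode: after an eviction, the same value interns to a new instance.
class LruIntern {
  #map = new Map(); // insertion order doubles as recency order
  constructor(capacity) { this.capacity = capacity; }
  intern(key, create) {
    if (this.#map.has(key)) {
      const v = this.#map.get(key);
      this.#map.delete(key);
      this.#map.set(key, v); // refresh recency
      return v;
    }
    const v = create();
    this.#map.set(key, v);
    if (this.#map.size > this.capacity) {
      this.#map.delete(this.#map.keys().next().value); // evict oldest
    }
    return v;
  }
}

const cache = new LruIntern(1);
const a = cache.intern("3,4", () => ({ x: 3, y: 4 }));
cache.intern("5,6", () => ({ x: 5, y: 6 })); // evicts "3,4"
const b = cache.intern("3,4", () => ({ x: 3, y: 4 }));
console.log(a === b); // → false: same value, different instances
```

Any code that keeps `a` around and later compares it by reference against freshly interned values silently gets the wrong answer.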
chowells commented on The joy of recursion, immutable data, & pure functions: Making mazes with JS   jrsinclair.com/articles/2... · Posted by u/jrsinclair
sillysaurusx · 5 days ago
I’m still sad that JS doesn’t have tail call optimization. I’ve always wondered why. Is it hard to implement?
chowells · 5 days ago
People are too addicted to automatic stack traces for mainstream languages to optimize away stack frames.
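In the meantime, the usual workaround is to make the control flow explicit so the stack never grows: a trampoline. A minimal sketch (the helper names are mine, not any standard API):

```javascript
// A trampoline repeatedly invokes returned thunks in a loop, so deep
// "recursion" uses constant stack space instead of one frame per call.
function trampoline(fn) {
  return (...args) => {
    let result = fn(...args);
    while (typeof result === "function") {
      result = result(); // each bounce returns either a thunk or a value
    }
    return result;
  };
}

// A tail-recursive sum rewritten to return thunks instead of recursing.
const sumTo = trampoline(function go(n, acc = 0) {
  return n === 0 ? acc : () => go(n - 1, acc + n);
});

console.log(sumTo(1000000)); // → 500000500000
```

Plain recursion at that depth would overflow the stack in current JS engines; the trade-off is exactly the one above, since the trampoline erases the call history a stack trace would have shown.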
chowells commented on The joy of recursion, immutable data, & pure functions: Making mazes with JS   jrsinclair.com/articles/2... · Posted by u/jrsinclair
chowells · 5 days ago
I'm slightly horrified by the memory leak that's casually introduced without even a remark as to the potential to cause a problem. I can't tell if I'm more horrified by the cavalier attitude or the fact that JavaScript makes having a global registry the only easy way to use an object of an arbitrary type as a key to Map.

But at the very least, if you're going to memoize immutable values, please do it in a way that allows garbage collection. JavaScript has WeakRef and FinalizationRegistry. (Why it doesn't provide the obvious WeakCache built on those is a mystery, though.)

The issues won't be visible on a toy example like making mazes a few hundred elements across, but if you use these techniques on real problems, you absolutely need to cooperate with the garbage collector.
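For what it's worth, the WeakCache shape I mean is only a few lines on top of the real WeakRef and FinalizationRegistry APIs. A sketch (the class name and methods are made up; only the two platform APIs are real):

```javascript
// Interning cache keyed by string, holding values weakly so the garbage
// collector can reclaim entries nothing else references.
class WeakCache {
  #map = new Map(); // key -> WeakRef(value)
  #registry = new FinalizationRegistry((key) => {
    // The value was collected; drop the stale WeakRef unless the key
    // has since been re-populated with a live value.
    const ref = this.#map.get(key);
    if (ref && ref.deref() === undefined) this.#map.delete(key);
  });

  getOrCreate(key, create) {
    const existing = this.#map.get(key)?.deref();
    if (existing !== undefined) return existing;
    const value = create(key);
    this.#map.set(key, new WeakRef(value));
    this.#registry.register(value, key);
    return value;
  }
}

// Usage: canonicalize points so reference equality tracks value equality,
// without pinning every point ever created in memory forever.
const points = new WeakCache();
const p1 = points.getOrCreate("3,4", () => ({ x: 3, y: 4 }));
const p2 = points.getOrCreate("3,4", () => ({ x: 3, y: 4 }));
console.log(p1 === p2); // → true
```

While anything holds a reference to a canonical instance, lookups return that same instance; once nothing does, the collector is free to take it, which is exactly the cooperation a global Map denies it.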

chowells commented on Left to Right Programming   graic.net/p/left-to-right... · Posted by u/graic
chowells · 6 days ago
Autocomplete-oriented programming optimizes for writing code. I don't think that's a good route to go down. Autocomplete is good for spewing out a large volume of code, but is that what we want to encourage?

I'd much rather optimize for understanding code. Give me the freedom to order code so that the most important ideas come first, whatever those are. I'd much rather spend 3x the time writing code if it means I spend half the time understanding it every time I return to it in the future.

chowells commented on OpenBSD is so fast, I had to modify the program slightly to measure itself   flak.tedunangst.com/post/... · Posted by u/Bogdanp
cogman10 · 9 days ago
/dev/urandom isn't a great test, IMO, simply because there are reasonable tradeoffs in security v speed.

For all I know BSD could be doing 31*last or something similar.

The algorithm is also free to change.

chowells · 9 days ago
Um... This conversation is about OpenBSD, making that objection incredibly funny. OpenBSD has a mostly-deserved reputation for doing the correct security thing first, in all cases.

But that's also why the rng stuff was so much faster. There was a long period of time where the Linux dev in charge of randomness believed a lot of voodoo instead of actual security practices, and chose nonsense slow systems instead of well-researched fast ones. Linux has finally moved into the modern era, but there was a long period where the randomness features were far inferior to systems built by people with a security background.

chowells commented on How well do coding agents use your library?   stackbench.ai/... · Posted by u/richardblythman
chowells · 11 days ago
Why would I care? I write libraries to help humans write code. If an LLM was actually good at writing code, it would engage with the library in the same way as a human. If it can't, that's not a problem with the library. I don't lose anything if an LLM doesn't use a library I've written.
chowells commented on UI vs. API. vs. UAI   joshbeckman.org/blog/prac... · Posted by u/bckmn
mort96 · 14 days ago
I'm not sure it's possible to have a technology that's user-facing with multiple competing implementations, and not also, in some way, "liberal in what it accepts".

Back when XHTML was somewhat hype and there were sites which actually used it, I recall being met with a big fat "XML parse error" page on occasion. If XHTML really took off (as in a significant majority of web pages were XHTML), those XML parse error pages would become way more common, simply because developers sometimes write bugs and many websites are server-generated with dynamic content. I'm 100% convinced that some browser would decide to implement special rules in their XML parser to try to recover from errors. And then, that browser would have a significant advantage in the market; users would start to notice, "sites which give me an XML Parse Error in Firefox work well in Chrome, so I'll switch to Chrome". And there you have the exact same problem as HTML, even though the standard itself is strict.

The magical thing of HTML is that they managed to make a standard, HTML 5, which incorporates most of the special case rules as implemented by browsers. As such, all browsers would be lenient, but they'd all be lenient in the same way. A strict standard which mandates e.g "the document MUST be valid XML" results in implementations which are lenient, but they're lenient in different ways.

HTML should arguably have been specified to be lenient from the start. Making a lenient standard from scratch is probably easier than trying to standardize commonalities between many differently-lenient implementations of a strict standard like what HTML had to do.

chowells · 14 days ago
Are you aware of HTML 5? Fun fact about it: there's zero leniency in it. Instead, it specifies a precise semantics (in terms of parse tree) for every byte sequence. Your parser either produces correct output or is wrong. This is the logical end point of being lenient in what you accept - eventually you just standardize everything, so there's no room left for implementations to differ.

The only difference between that and not being lenient in the first place is a whole lot more complex logic in the specification.

chowells commented on Which colors are primary?   jamesgurney.substack.com/... · Posted by u/Michelangelo11
gizmo686 · 16 days ago
I think that understanding how eyes and light work is very informative on this subject.

Why are there 3 primary colors (regardless of which 3 you pick)? That has nothing to do with the nature of light, and everything to do with the fact that humans see light using 3 distinct frequency response curves [0]. This means that humans perceive color as a 3 dimensional space; and the role of the primary colors is to pick a point in this space by selectively stimulating or masking the 3 response curves. In a world of pure linear algebra, almost any 3 colors would do, but physical reality limits how ideally we can mix them; and how much light they can emit/mask.

Further, the 3 response curves are overlapping, so there is no set of ideal colors that would let you actually control the 3 curves independently.

[0] At least for color perception in a typical human.

chowells · 15 days ago
The linear algebra argument for almost any three colors only works if you can have negative intensities. I don't know how to do that with stimulation of photoreceptors, so I don't think that applies here.
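The point can be made concrete with a small worked example (the response matrix below is invented for illustration, not measured cone data; the overlap between curves is the only feature that matters):

```javascript
// Hypothetical, overlapping response curves: entry [i][j] is how strongly
// primary j stimulates receptor type i. Numbers invented for illustration.
const responses = [
  [1.0, 0.5, 0.0], // long-wavelength-ish receptor
  [0.5, 1.0, 0.5], // middle receptor, overlapping both neighbors
  [0.0, 0.5, 1.0], // short-wavelength-ish receptor
];

// Solve responses * w = target by Gauss-Jordan elimination (no pivoting
// needed for this well-behaved matrix).
function solve3(m, target) {
  const a = m.map((row, i) => [...row, target[i]]);
  for (let col = 0; col < 3; col++) {
    for (let row = 0; row < 3; row++) {
      if (row === col) continue;
      const f = a[row][col] / a[col][col];
      for (let k = 0; k <= 3; k++) a[row][k] -= f * a[col][k];
    }
  }
  return a.map((row, i) => row[3] / row[i]); // divide by the diagonal
}

// Ask for a stimulus that excites only the middle receptor type.
const weights = solve3(responses, [0, 1, 0]);
console.log(weights); // → approximately [-1, 2, -1]
```

Two of the three weights come out negative: the linear algebra happily demands "subtract some light," but a physical primary can only add photons, so that target sits outside the achievable gamut.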
chowells commented on Which colors are primary?   jamesgurney.substack.com/... · Posted by u/Michelangelo11
chowells · 16 days ago
No mention that both sets of primaries come from the biology of the average human eye, and other animals might be better served by other colors? Ok, yeah, that's not really relevant to the point the article was actually getting to, but I think it's important to remember. There's nothing magical about those colors. They effectively stimulate color receptors in our eyes such that our brains interpret the input in ways that can be combined to cover a pretty large gamut of the full range our eyes can perceive.

But as for what the article actually does focus on, I absolutely agree. You can create some really striking art by restricting your gamut to the range you can cover with a particular set of pigments.

u/chowells
Karma: 3559 · Joined April 14, 2013