How many cars has he reviewed? I thought he mostly reviewed smartphones.
So 50+.
What's so wildly intuitive about underscores???
Also, why would you ever read it as a regex in a human /text/ markup language?
Surrounding text with underscores to indicate italicization is intuitive to anyone who is familiar with that convention.
Personally, I find surrounding text with forward slashes exactly wrong for italicization, because I mentally apply a skew-transform to the text to make the slashes into vertical lines, which leaves the text itself slanted in the wrong direction. Backslashes would make more sense, and also avoid looking like regular expressions. But literally no one uses that convention and we do not need a new one.
the language being small enough that all basics can be grasped in a day.
the language being complete enough that it will remain the same in 10 years.
while at the same time being memory safe.
btw I don't think golang applies given the bad ffi story in go.
--- edit btw: yeah, this implies the use of a GC, though it must not have massive pauses or stop-the-world GC.
memory safety doesn't mean just one thing, but probably it requires either a lot of rust-like features, a tracing garbage collector, or automatic reference counting.
the language being small enough that all basics can be grasped in a day
that disqualifies taking the rust-like path.
able to use all c libraries through ffi. without loss of performance or extra fu.
that disqualifies most (all?) advanced tracing gc strategies
it must not have massive pauses or stop-the-world GC.
that disqualifies simpler tracing gc strategies
depending on what precisely you're looking for, it's possible it might be a pipe dream. but it's also possible you'll find what you want in one of D, Nim or Swift. Swift is probably the closest to what you want on technical merit, but obviously extremely tied to Apple. D and Nim strap you with their particular flavor of tracing gc, which may or may not be suited to your needs.
This is true whether the machines are robot driven or not, and it's the reason most commercial farms have huuuuuge fields rather than lots of small ones.
I don't care to sit through sponsor reads, nothing more to it than that. When I'm viewing on a client that doesn't support sponsorblock, I'll manually seek to the end of the segment. Supporting the creator is great; I pay for YouTube Premium, though thanks to uBlock Origin I wouldn't see the ads even if I stopped paying. To a couple creators, I send a regular donation. If I could spend another $10/mo to make up for any revenue my sponsorblock usage loses other creators, I'd do that, but I'm less enthusiastic about regularly listening to sales pitches for the same products over and over again.
Also: I'm not sure how common it is for YouTube sponsorship contracts to have payment contingent on the view count for the section of the video with the sponsored segment, and I'm not sure if the way sponsorblock skips such segments is visible to YouTube's analytics. With at least some of the most prolific sponsors of creators I watch (Audible, Brilliant, etc) the payout is based on how many viewers sign up for a trial through the affiliate link. And YouTube has no incentive to make it easy for creators to share their detailed analytics with third-party sponsors, since independent sponsorships cut YouTube out of the deal. YouTube would prefer creators replace their independent sponsor reads with mid-roll ads.
Garbage collectors are a leaky abstraction: Some support interior pointers, others don't. Some support parallel tasks sharing memory, others don't. Some require compaction, making C FFI more difficult, others don't. Some require a deep integration with whatever mechanism is used for green processes/threads and growable stacks, others don't. Etc.
When looking at languages like Erlang, JavaScript, Python, or Go, the choices made at the language level are partly reflected in their garbage collectors.
That idea of a universal/generic VM supporting many languages has been tried many times, with limited success, for example with the JVM, CLR, or Parrot. What makes this different?
What wasm is doing is something different than previous efforts. The gc facilities aren't provided for the sake of interop with other languages, or for the sake of sharing development resources across language runtime implementations. Wasm is providing gc facilities so that managed-memory runtime languages can target wasm environments without suffering on account of limitations imposed by the restrictive memory model, and secondarily to reduce bundle sizes.
Wasm can potentially expose gc parameters that are more tunable to the guest language's idiosyncrasies than other general-purpose language runtimes can. And unlike the runtimes we're comparing it against, language implementers don't have the option of making something bespoke.
Related to this is the double-fork pattern to avoid zombie processes (and a couple other issues) when initiating a daemon process.
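A minimal sketch of the double-fork pattern in Python, since it comes up below (the helper name and details are illustrative, not code from the original program):

```python
import os

def spawn_daemon(target):
    """Launch `target` as a daemon via the classic double fork.

    The grandchild is reparented to init (or the nearest subreaper),
    which reaps it when it exits, so the original parent never
    accumulates zombies it forgot to wait() on.
    """
    pid = os.fork()
    if pid > 0:
        # Original parent: reap the short-lived intermediate child
        # right away, then carry on with its own work.
        os.waitpid(pid, 0)
        return
    # Intermediate child: detach into a new session (no controlling
    # terminal), then fork again and exit immediately.
    os.setsid()
    if os.fork() > 0:
        os._exit(0)
    # Grandchild: the actual daemon, now a child of init.
    try:
        target()
    finally:
        os._exit(0)
```

Unix-only, obviously; a real daemonizer would also chdir, reset umask, and redirect stdio, but this is the part that kills the zombies.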
The parent process was an event-loop-based Python program whose main function was to manage the creation and deletion of these child processes, and the simplest way to spawn child processes without blocking the event loop is to call fork(2) on a thread pool. My thread pool was scaling the number of worker threads up and down based on demand, so occasionally it would decide a worker was no longer needed, and all the child processes that happened to have been created on that thread would get SIGKILL'd. That's rarely something you want when using a thread pool!
I didn't want the child processes to die unless the parent process's business logic decided they were no longer needed, or if the parent was itself killed (this latter reason being the motivation for setting PDEATHSIG).
Once I understood why my processes were dying, the solution was simple: make sure the worker threads never exit.
My debugging was not aided by the fact that disabling the code where I set PDEATHSIG had no effect, since someone else's code was invisibly setting it regardless.
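For context, PDEATHSIG is a Linux-only prctl(2) option; a rough ctypes sketch (the helper name is mine), with the gotcha that bit here spelled out:

```python
import ctypes
import signal

PR_SET_PDEATHSIG = 1  # constant from <sys/prctl.h>

def set_pdeathsig(sig=signal.SIGKILL):
    """Ask the kernel to send `sig` to the calling process when its
    creator dies. Call this in the child, right after fork() and
    before exec().

    Gotcha: despite the name, the trigger is the death of the creating
    *thread*, not the parent process as a whole -- which is exactly how
    a shrinking thread pool can get your children SIGKILL'd.
    """
    libc = ctypes.CDLL("libc.so.6", use_errno=True)
    if libc.prctl(PR_SET_PDEATHSIG, int(sig), 0, 0, 0) != 0:
        raise OSError(ctypes.get_errno(), "prctl(PR_SET_PDEATHSIG) failed")
```

So the "worker threads never exit" fix works because it keeps the creating thread alive for as long as the parent process lives.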