SIGECAPS is an acronym taught in US medicine for diagnosing major depressive disorder: Sleep disturbance, Interest loss, Guilt, Energy loss, Concentration loss, Appetite changes, Psychomotor agitation, Suicidality. A diagnosis also requires depressed mood or anhedonia (an inability to enjoy things previously enjoyable).
The history of the SIG E CAPS acronym is also interesting: I've heard it was short for SIG (old shorthand for "to be prescribed") Energy CAPsules.
Not everyone who wears eyeglasses can use one, unless you're prepared to add another several hundred dollars for the lens holder and a set of prescription lenses.
I do have the 3M 6800 full face respirator but almost never use it. The silicone 7xxx series is much more comfortable than the rubber 6xxx series, and the 750x silicone half mask is reasonably priced. Augment with comfortable goggles as necessary (vented ones work for me, since I'm painting, not rioting).
> Executes one line of script per frame (~60 lines/sec).
Makes the "runs at 60FPS" aspect of the engine feel a lot less relevant. At this speed, anything more complex than Pong would be a struggle. Even a CHIP-8 interpreter is usually expected to handle a dozen or so comparably expressive instructions per frame.
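At the risk of laboring the point with a sketch: the numbers below are made up (a 600-line script, plus the dozen-per-frame CHIP-8 budget mentioned above); they just put the gap in wall-clock terms.

    # Rough arithmetic only, hypothetical numbers; not the engine's code.
    FPS = 60

    def seconds_to_finish(program_len, ops_per_frame):
        # Frames needed (ceiling division), converted to wall-clock time.
        frames = -(-program_len // ops_per_frame)
        return frames / FPS

    print(seconds_to_finish(600, 1))   # 10.0 s at one script line per frame
    print(seconds_to_finish(600, 12))  # ~0.83 s at a dozen per frame

At the risk of sounding obvious: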
- Chrome (and Chromium) is a product made and driven by one of the largest advertising companies (Alphabet, formerly Google) as a strategic tool for its business model
- Chrome is one browser among many; it is not a de facto "standard" just because it is very popular. The fact that a LOT of people (iOS users) are unable to use it even if they wanted to proves the point.
It's quite important not to conflate experimental features put in place by some vendors (yes, even the most popular ones) with "the browser".
I like K better than J aesthetically, but it's harder to recommend to beginners due to the fragmentation of the ecosystem.
5(|+\)\1,1 which is impossible.
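For anyone who doesn't read K, my reading of that expression (an assumption on my part) is: (|+\) is "reverse of plus-scan", 1,1 is the starting list, and 5 f\ x applies f five times while keeping every intermediate result, which makes it the classic Fibonacci idiom. A rough Python rendering:

    # Rough Python equivalent of 5(|+\)\1,1, under the reading above.
    from itertools import accumulate

    def step(xs):
        return list(accumulate(xs))[::-1]  # (|+\): plus-scan, then reverse

    results = [[1, 1]]             # 1,1 : the starting list
    for _ in range(5):             # 5 f\ x : iterate, keeping intermediates
        results.append(step(results[-1]))

    print(results)
    # [[1, 1], [2, 1], [3, 2], [5, 3], [8, 5], [13, 8]]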
- No code can feasibly be guaranteed to be secure
- All code can be weaponized, though not all of it feasibly; even password vaults, privacy infrastructure, etc. tend to show holes.
- It’s unrealistic to assume you can control any information; case in point, the Garden of Eden test: “all data is here; I’m all-powerful and you should not take it”.
I’m not against regulation and protective measures. But you have to prioritize carefully. Do you want to spend most of the world’s resources mining cryptocurrency and breaking quantum cryptography, or do you want to develop games and great software that solves hunger and homelessness?
Some code architectures make privacy and security structurally impossible from the beginning.
As technologists, we should hold ourselves responsible for ensuring the game isn't automatically lost before the software decisions even leave our hands.
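As a toy illustration of "structurally impossible" (hypothetical code, nobody's real system; it assumes the third-party cryptography package): if the server is designed to receive plaintext, user privacy rests on policy and good behavior forever, while client-side encryption removes the server's ability to read at the design level.

    # Hypothetical sketch: two architectures for storing a user's note.
    from cryptography.fernet import Fernet

    def store_plaintext(server_db, user, note):
        # Architecture A: the server sees plaintext. Privacy is a promise.
        server_db[user] = note

    def store_encrypted(server_db, user, note, client_key):
        # Architecture B: the client encrypts first, so the server holds
        # ciphertext it structurally cannot read.
        server_db[user] = Fernet(client_key).encrypt(note.encode())

    db = {}
    key = Fernet.generate_key()  # generated and kept client-side
    store_encrypted(db, "alice", "my private note", key)
    print(Fernet(key).decrypt(db["alice"]).decode())  # key holder only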
1. There are good, ethical people working at these companies. If you were going to train on customer data that you had promised not to train on, there would be plenty of potential whistleblowers.
2. The risk involved in training on customer data that you are contractually obliged not to train on is higher than the value you can get from that training data.
3. Every AI lab knows that the second it comes out that they trained on paying customers' data after saying they wouldn't, those paying customers will leave for their competitors (and sue them into the bargain).
4. Customer data isn't actually that valuable for training! Great models come from carefully curated training data, not from just pasting in anything you can get your hands on.
Fundamentally I don't think AI labs are stupid, and training on paid customer data that they've agreed not to train on is a stupid thing to do.
2. The risk of using "illegal" training data is irrelevant, because no GenAI vendors have been meaningfully punished for violating copyright yet, and in the current political climate they don't expect to be anytime soon. Even so,
3. Presuming they get caught red-handed using personal data without permission (which, given the nature of LLMs, would be extremely challenging for any individual customer to prove definitively), they may lose customers, and customers may try to sue, but you can expect those lawsuits to take years to work their way through the courts; long after these companies IPO, employees get their bag, and it all becomes someone else's problem.
4. The idea of using carefully curated datasets is popular rhetoric, but absolutely does not reflect how the biggest GenAI vendors do business. See (1).
AI labs are extremely shortsighted, sloppy, and demonstrably do not care a single iota about the long term when there's money to be made in the short term. Employees have gigantic financial incentives to ignore internal malfeasance or simple ineptitude. The end result is, if anything, far worse than stupidity.