kghe3X commented on Boeing missing key elements of safety culture: FAA report   ainonline.com/aviation-ne... · Posted by u/elorant
burnerburnson · 2 years ago
The average engineer at Boeing makes $120k/year. That's about $50k less than what a new grad with no experience will get from big tech.

Boeing doesn't have a culture problem, they have an idiot problem. The idea that you can hire competent engineers by offering salaries like that is absurd.

They need to adopt a pay for performance mentality and bring in managers who are not afraid to fire underperformers.

kghe3X · 2 years ago
Just where is an inexperienced new grad making $170k out of the gate? I find this difficult to believe. Are you normalizing for cost of living? I suspect most Boeing employees aren't based in the Valley.
kghe3X commented on On being listed as an artist whose work was used to train Midjourney   catandgirl.com/4000-of-my... · Posted by u/earthboundkid
ClumsyPilot · 2 years ago
My 2 cents - people often compare this AI to automation that came before, say the printing press or automatic fabric production. But it's actually far worse.

If all the calligraphers drop dead, the printing press still works. If everyone who hand makes fabrics drops dead, the automatic fabric machines continue to develop.

But midjourney doesn’t work without artists, it depends on them like a parasite depends on its host. Once the host dies, the parasite is doomed.

So it’s value-destroying, more vandalism than capitalism. Or maybe like Viking pillaging. It’s like the people who used to burn millions of penguins alive to convert their fat into oil.

https://www.newscientist.com/article/dn21501-boiled-to-death...

kghe3X · 2 years ago
I think the analogy holds. Printing presses can't produce hand-drawn calligraphy, just like Midjourney can't produce art outside some boundary of its training set. Both are limited in capability compared to humans, both have inhuman capabilities. Both required humans to initially produce value. Neither requires humans to continue to produce value.
kghe3X commented on Overchoice   en.wikipedia.org/wiki/Ove... · Posted by u/hypertexthero
kghe3X · 2 years ago
Analysis paralysis
kghe3X commented on Tax prep companies: $90M lobbying against free tax-filing   opensecrets.org/news/2023... · Posted by u/everybodyknows
hamandcheese · 2 years ago
> FreeTaxUSA

> $15 total

Sounds like a scam to me.

kghe3X · 2 years ago
Well, it's not.
kghe3X commented on Autoenshittification. How the computer killed capitalism. – by Cory Doctorow   doctorow.medium.com/autoe... · Posted by u/rbanffy
nektro · 2 years ago
not even gonna read this because the title couldn't be more false. the computer accelerated capitalism far beyond what anyone could have ever dreamed.
kghe3X · 2 years ago
Maybe if you read it you would realize why the title makes sense.
kghe3X commented on Six programming languages I’d like to see   buttondown.email/hillelwa... · Posted by u/johndcook
codesnik · 3 years ago
All six features listed are very redundant or useless, in my opinion.

Contracts? How are they different from or less verbose than plain asserts? What do they do better?
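
As a rough illustration of the difference (not from the comment; the require/ensure decorator names below are invented, Eiffel-style stand-ins), a contract is declared once on the function's interface, where callers and tooling can see it, whereas plain asserts are scattered through the body. A minimal Python sketch:

    # Hypothetical sketch: pre/postconditions as decorators on the interface,
    # versus asserts buried inside the function body.
    from functools import wraps

    def require(check, message):
        """Precondition: validated against the arguments before the call."""
        def decorator(fn):
            @wraps(fn)
            def wrapper(*args, **kwargs):
                if not check(*args, **kwargs):
                    raise ValueError(f"precondition failed: {message}")
                return fn(*args, **kwargs)
            return wrapper
        return decorator

    def ensure(check, message):
        """Postcondition: validated against the result after the call."""
        def decorator(fn):
            @wraps(fn)
            def wrapper(*args, **kwargs):
                result = fn(*args, **kwargs)
                if not check(result):
                    raise ValueError(f"postcondition failed: {message}")
                return result
            return wrapper
        return decorator

    # The contract is part of the signature a reader (or tool) sees first.
    @require(lambda x: x >= 0, "x must be non-negative")
    @ensure(lambda r: r >= 0, "result is non-negative")
    def square_root(x: float) -> float:
        return x ** 0.5

    print(square_root(9.0))   # 3.0
    # square_root(-1.0)       # would raise: precondition failed

Verbosity aside, the declared form can be inherited, toggled globally, and reported at the caller's boundary, which is roughly what dedicated contract support is meant to buy over ad-hoc asserts.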

"reactive programming"? if remove that strange code editing "replace", just a chain of definitions instead of variables in, say, ruby, gives you basically the same effect.

etc.

What I'd love to see is a language with first-class grammars to replace many uses of regexes or badly written DSLs, like what Perl6 tried to do.

And, somewhat related (both use backtracking), adopting some of the ideas of https://en.wikipedia.org/wiki/Icon_(programming_language), not at the whole-language level but in some scoped generator context, would be nice for some tasks I've had to do.

kghe3X · 3 years ago
> What I'd love to see is a language with first-class grammars to replace many uses of regexes or badly written DSLs, like what Perl6 tried to do.

There is recent research in this area.

https://conservancy.umn.edu/handle/11299/188954
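
As a rough sketch of the first-class-grammars idea (not from the thread; the lit/many1/seq combinator names below are invented), each rule is an ordinary Python value that can be named, unit-tested, and composed, which is roughly the ergonomics Perl6 grammars provide natively and the linked research explores further:

    # Hypothetical sketch: grammar rules as ordinary, composable values,
    # built from tiny parser combinators instead of one large regex.
    from typing import Callable, Optional, Tuple

    # A parser takes (text, position) and returns (parsed value, new position),
    # or None if it does not match at that position.
    Parser = Callable[[str, int], Optional[Tuple[object, int]]]

    def lit(s: str) -> Parser:
        """Match an exact string."""
        def p(text: str, i: int):
            return (s, i + len(s)) if text.startswith(s, i) else None
        return p

    def many1(chars: str) -> Parser:
        """Match one or more characters drawn from `chars`."""
        def p(text: str, i: int):
            j = i
            while j < len(text) and text[j] in chars:
                j += 1
            return (text[i:j], j) if j > i else None
        return p

    def seq(*parts: Parser) -> Parser:
        """Match each sub-parser in order, collecting their results."""
        def p(text: str, i: int):
            out = []
            for part in parts:
                r = part(text, i)
                if r is None:
                    return None
                value, i = r
                out.append(value)
            return (out, i)
        return p

    # Rules are plain values: easy to name, unit-test, and combine.
    digits = many1("0123456789")
    iso_date = seq(digits, lit("-"), digits, lit("-"), digits)

    print(iso_date("2024-03-01", 0))  # (['2024', '-', '03', '-', '01'], 10)
    print(iso_date("not a date", 0))  # None

Compared with a single large regex, the rules stay readable and reusable; a real grammar facility would add error reporting, backtracking control, and actions on top of this.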

kghe3X commented on Belief in AI sentience is becoming a problem   reuters.com/technology/it... · Posted by u/samizdis
softcactus · 3 years ago
> As an aside, I'm surprised you went with "killing humanely" as opposed to not killing in the first place. I'm not sure how that fits into your model of ethics.

I brought up "killing humanely" because we breed chickens for meat and we instantiate AI for tasks, and then terminate them when they are no longer needed. Creation means inevitable destruction.

Yes, there is some cost baked into treating chickens well, but I believe that harm reduction is the logical conclusion of valuing intelligence. Eating meat is a cultural vestige that we should try to move away from with synthetic meat or some murder-free alternative. I say this as a meat-eater myself, but that's kind of getting into the weeds.

> It's not even clear that shutting down an AI is unethical, especially if it feels no pain and can be started up again, or cloned, or stuck in a simulation...

There is no answer to the "swamp man" question, and there is also no way to objectively measure pain. But if an AI receives a negative reward, then it will react to that stimulus. Is this any different from pain in the animal kingdom? This is a pseudo-scientific way of describing pain, but I think that most of these questions are a matter of definition and are not actually answerable. Why not give the benefit of the doubt to the subject of our experimentation?

> Third, I disagree that there is no downside to treating "non-sentient" AI well. There's yet no well-defined boundary for sentience, so have fun treating every grain of sand "well" under some yet-to-be-specified definition of "well".

I know that the grain of sand was used as a hyperbole, but I don't see any issue with practicing thoughtfulness towards inanimate objects. Maybe a rock can't feel pain, but our ecosystems are delicate and a sort of "modern animism" could make us stop and think about the downstream effects that our activities have on the environment.

https://en.wikipedia.org/wiki/Hulduf%C3%B3lk#See_also

> Finally, designing an intelligence at all seems fraught with ethical dilemmas (see designer babies), but engineering a reward (presumably, a priori, against the AI's will) into the termination routine seems particularly twisted to me.

If we have determined that creating AI is inevitable, then we are already designing it to our will. Engineering it to have a positive experience in death isn't twisted, it's merciful. If death is certain, would you rather have a painful death, a death void of sensation, or a pleasurable death? The alternative is to either leave the AI on forever, or never create it in the first place, neither of which are ideal.

kghe3X · 3 years ago
What's to stop us from applying the same rationale to humans once the line between artificial intelligence and human intelligence becomes sufficiently blurred?
kghe3X commented on Belief in AI sentience is becoming a problem   reuters.com/technology/it... · Posted by u/samizdis
softcactus · 3 years ago
Isn't the best practice to treat everything like it is sentient? Sort of like a Pascal's wager but for ethical treatment.

For example: When it comes to killing a chicken, it's best to assume that death is an equally unpleasant experience, so I should kill the chicken as humanely as possible and treat it well during life.

There is no downside to treating a non-sentient AI well. This is going to sound silly, but maybe we could program an AI in such a way that shutting it down is "pleasurable" or results in a large reward function. I don't think I need to list the potential downsides for treating a sentient/intelligent AI poorly. I really don't see any issues with this sort of "techno-animism".

kghe3X · 3 years ago
While I generally agree that we should minimize harm when in doubt, I don't think your analogy holds up.

First, Pascal's wager is flawed in that it assumes there are two known outcomes, when the outcomes are unknown in both quality and quantity. For example, there might be a god that punishes belief with infinite negative reward.

Second, killing the chicken humanely isn't without cost. Consider how cheap it is to toss the male chicks straight into the meat grinder at the hatchery facilities. As an aside, I'm surprised you went with "killing humanely" as opposed to not killing in the first place. I'm not sure how that fits into your model of ethics.

Third, I disagree that there is no downside to treating "non-sentient" AI well. There's yet no well-defined boundary for sentience, so have fun treating every grain of sand "well" under some yet-to-be-specified definition of "well". It's not even clear that shutting down an AI is unethical, especially if it feels no pain and can be started up again, or cloned, or stuck in a simulation...

Finally, designing an intelligence at all seems fraught with ethical dilemmas (see designer babies), but engineering a reward (presumably, a priori, against the AI's will) into the termination routine seems particularly twisted to me.

kghe3X commented on Shacl: A Description Logic in Disguise   arxiv.org/abs/2108.06096... · Posted by u/PaulHoule
PaulHoule · 3 years ago
The URI resolution idea is 99% crap.

That is, most of the time you don't want to publish subjects and predicates as resolvable URIs. However, people see so many examples of http:... that they don't realize it's even possible to make non-resolvable URIs.

I used random UUIDs all the time, but that is a super-fraught area since some people really want them to be in temporal sequence so their database index is happy.

I've also done the content addressable blob store thing.

kghe3X · 3 years ago
I think the proposed UUID v7 is sortable, FWIW.

https://www.ietf.org/archive/id/draft-peabody-dispatch-new-u...
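
As a hedged illustration of why that layout sorts (not part of the comment; uuid7_sketch below is a simplified stand-in for the draft, with no per-millisecond monotonic counter): the top 48 bits carry a Unix millisecond timestamp, so IDs generated later compare greater and a B-tree index stays roughly append-only.

    # Simplified, hypothetical sketch of the draft UUIDv7 layout:
    # 48-bit Unix millisecond timestamp | 4-bit version | 12 random bits |
    # 2-bit variant | 62 random bits. Because the timestamp occupies the
    # most significant bits, later UUIDs compare greater.
    import os
    import time
    import uuid

    def uuid7_sketch() -> uuid.UUID:
        ts_ms = time.time_ns() // 1_000_000            # milliseconds since epoch
        rand = int.from_bytes(os.urandom(10), "big")   # 80 bits of randomness
        value = ((ts_ms & (2**48 - 1)) << 80) | rand
        value = (value & ~(0xF << 76)) | (0x7 << 76)   # version = 7
        value = (value & ~(0x3 << 62)) | (0x2 << 62)   # RFC 4122 variant
        return uuid.UUID(int=value)

    ids = []
    for _ in range(3):
        ids.append(uuid7_sketch())
        time.sleep(0.002)  # this sketch only orders IDs from different milliseconds

    assert ids == sorted(ids)  # creation order matches sort order
    print(*ids, sep="\n")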

kghe3X commented on Google’s powerful AI spotlights a human cognitive glitch   arstechnica.com/science/2... · Posted by u/samizdis
kghe3X · 3 years ago
> We asked a large language model, GPT-3, to complete the sentence “Peanut butter and pineapples___”. It said: “Peanut butter and pineapples are a great combination. The sweet and savory flavors of peanut butter and pineapple complement each other perfectly.” If a person said this, one might infer that they had tried peanut butter and pineapple together, formed an opinion and shared it with the reader.

> But how did GPT-3 come up with this paragraph? By generating a word that fit the context we provided. And then another one. And then another one. The model never saw, touched or tasted pineapples—it just processed all the texts on the Internet that mention them.

Up to this point, the article sticks to its thesis that humans have a tendency to ascribe real-world experience to the agents generating fluent text.

> And yet reading this paragraph can lead the human mind – even that of a Google engineer – to imagine GPT-3 as an intelligent being that can reason about peanut butter and pineapple dishes.

And with this line they jump past that thesis into an implication that AI can't reason about concepts it hasn't experienced physically with a human body. This is obviously incorrect; we humans reason all the time about things we haven't directly experienced.

I think the important factors in evaluating different types of intelligence are whether the model learns abstractions, maps between concepts, and reasons correctly given the information available in its training data. Does GPT-3 have an understanding of abstractions like objects, combinations, food, flavor? Why does it need to have tasted pineapple to infer that sentence? It seems to know that peanut butter and savory are associated.

Can an educated person, blind from birth, not infer that if it is daytime and it is not cloudy, then the sky is blue? Surely they can be considered to possess intelligence, despite never experiencing color directly.
