chartpath commented on Chatbot hinted a kid should kill his parents over screen time limits: lawsuit   npr.org/2024/12/10/nx-s1-... · Posted by u/airstrike
ChrisArchitect · 9 months ago
Related:

Can A.I. Be Blamed for a Teen's Suicide?

https://news.ycombinator.com/item?id=41924013

chartpath · 9 months ago
Yeah same company. Clearly they haven't improved their guardrails in the last couple months.

It's never the technology that's the problem, it's the owners and operators who decide how to use it.

chartpath commented on Your docs are your infrastructure   stackoverflow.blog/2024/1... · Posted by u/runos
rqtwteye · 9 months ago
I really wish tech writing would get more respect. I work in medical devices where we have to produce a ton of docs. Instead of hiring writers we burn a lot of engineers’ time writing docs. Their main job is to produce systems and writing is just a side annoyance to them. The result is badly written, inconsistent documentation that’s close to useless besides fulfilling regulatory requirements.
chartpath · 9 months ago
We should pivot the culture back to one that is pro-liberal arts. Liberal arts grads generally know how to read and write better than STEM grads.

CS as the only path to programming was always too narrow, and often people with a broader education are better at creative solutions. With AI-assisted programming I'd argue they have an even clearer advantage now.

chartpath commented on Fine-tuning now available for GPT-4o   openai.com/index/gpt-4o-f... · Posted by u/davidbarker
chartpath · a year ago
So much focus on fine-tuning when it can actively make performance on reasoning and planning benchmarks worse (over a baseline that is already worse than a coin toss).

Why not give us nice things for integrating with knowledge graphs and rules engines pretty please?

chartpath commented on Setting up PostgreSQL for running integration tests   gajus.com/blog/setting-up... · Posted by u/mooreds
chartpath · a year ago
I've never had a problem working https://pgtap.org/ into CI.

I know the article title says "integration tests", but when a lot of functionality lives inside PostgreSQL you can cover a lot of the test pyramid with unit tests directly in the DB as well.

The test database orchestration from the article pairs really well with pgTAP for isolation.
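For a concrete flavour, here is a minimal sketch of driving a pgTAP check from a CI step (assumptions on my part: psycopg is installed, the throwaway test database already has the pgtap extension, and "add_points" is a made-up in-database function standing in for whatever logic actually lives in your schema):

    # Minimal sketch: run a pgTAP unit test against a throwaway test database.
    # Assumes `CREATE EXTENSION pgtap;` has already been run and psycopg (v3) is available.
    # `add_points` is a hypothetical in-database function used purely for illustration.
    import sys

    import psycopg

    # Each statement returns TAP lines ("ok 1 - ...", "not ok 2 - ...", etc.).
    TEST_STATEMENTS = [
        "SELECT plan(2)",
        "SELECT has_function('add_points', ARRAY['integer', 'integer']::name[])",
        "SELECT is(add_points(2, 3), 5, 'add_points sums its arguments')",
        "SELECT * FROM finish()",
    ]

    def run_pgtap(dsn: str) -> bool:
        passed = True
        with psycopg.connect(dsn) as conn:
            with conn.cursor() as cur:
                for stmt in TEST_STATEMENTS:
                    cur.execute(stmt)
                    for (tap_line,) in cur.fetchall():
                        print(tap_line)
                        if tap_line.startswith("not ok"):
                            passed = False
            conn.rollback()  # keep the test database clean between runs
        return passed

    if __name__ == "__main__":
        sys.exit(0 if run_pgtap("postgresql://localhost/test_db") else 1)

In practice I'd keep the real tests in .sql files and run them with pg_prove against whatever ephemeral database the CI job spins up, but the flow is the same.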

chartpath commented on Doubts grow about the biosignature approach to alien-hunting   quantamagazine.org/doubts... · Posted by u/pseudolus
mr_mitm · a year ago
I'm on the more pessimistic side of this argument. Lots of very smart people have thought about this, and really the only element that is able to form complex molecules besides carbon is silicon, and there are good arguments as to why it's a much worse basis for life. And if you have carbon-based life forms, you will have water and CO2.

In any case, it is just way more likely than any other form and makes absolutely the most sense to look for this first.

Sure, you can always say that we don't know what we don't know, but the periodic table of elements is finite and complete, and we are pretty sure that chemistry doesn't change across the universe. I realize I'm fighting an uphill battle in my position, because it's hard to prove the non-existence of things, so you will always be able to say "whatever, maybe you didn't think of everything", and it's true, but I have a hard time seeing how life can be anything but carbon-based. If you have more insight besides what resembles a god of the gaps, I'd be very interested.

chartpath · a year ago
Kind of a tangent but I'm really interested in why statements like:

> if you have carbon-based life forms, you will have water and CO2.

...can lead to statements like:

> it is just way more likely than any other form

I totally agree with the observation, but what's fascinating to me is why a deductive statement can be taken to indicate likelihood in the probabilistic sense. It seems there is a bit of abductive reasoning going on behind the scenes, which neither deductive logic nor inductive probability can really capture on its own.
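Roughly in symbols: the deduction gives you something like P(water and CO2 | carbon-based life) ≈ 1, while the likelihood claim is about P(carbon-based life | life exists at all), and getting from the first to the second quietly imports a prior over the alternatives. That prior is the abductive step that neither the deduction nor the observed frequencies supply on their own.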

chartpath commented on Seven signs of ethical collapse (2012)   scu.edu/ethics/focus-area... · Posted by u/tacon
theultdev · 2 years ago
$10k in debt would mean wiping out their net worth so that they have more debt than assets. Not just a $10k payment.

To be in $10k debt, it means you have more debt than assets.

Either way, wealth distribution is wrong. Stealing is stealing.

chartpath · 2 years ago
How is wealth distribution wrong? It sounds like you are arguing in favour of inequality.

chartpath commented on After OpenAI's blowup, it seems pretty clear that 'AI safety' isn't a real thing   gizmodo.com/ai-safety-ope... · Posted by u/rntn
crazygringo · 2 years ago
It's so confusing because people keep (intentionally?) conflating two separate ideas of "AI safety".

The first is the kind of humdrum ChatGPT safety of, don't swear, don't be sexually explicit, don't provide instructions on how to commit crimes, don't reproduce copyrighted materials, etc. Or preventing self-driving cars from harming pedestrians. This stuff is important but also pretty boring, and by all indications corporations (OpenAI/MS/Google/etc.) are doing perfectly fine in this department, because it's in their profit/legal incentive to do so. They don't want to tarnish their brands. (Because when they mess up, they get shut down -- e.g. Cruise.)

The second kind is preventing AGI from enslaving/killing humanity or whatever. Which I honestly find just kind of... confusing. We're so far away from AGI, we don't know the slightest thing of what the actual practical risks will be or how to manage them. It's like asking people in the 1700's traveling by horse and carriage to design road safety standards for a future interstate highway system. Maybe it's interesting for academics to think about, but it doesn't have any relevance to anything corporations are doing currently.

chartpath · 2 years ago
There's a third kind: unscrupulous business managers or politicians using it to make decisions they could not audit for a rationale, even when they would otherwise be required to know why such a decision was made.

It's more of an ethics and compliance issue, with the cost of BS and plausible deniability going to zero. As usual, it's what humans do with the technology that has good or bad consequences. The tech itself is fairly close to neutral, as long as the training data wasn't specifically chosen to include illegal material or obtained by way of copyright infringement (which isn't even the tech, it's the product).

chartpath commented on Seven signs of ethical collapse (2012)   scu.edu/ethics/focus-area... · Posted by u/tacon
theultdev · 2 years ago
yes, it turns from a no to a hard no. that's destroying a lot to gain a little.

many of those "rich" people could be supporting many other "poor" people via jobs that you just wiped out for a small personal gain.

chartpath · 2 years ago
$10k is nothing for rich people. Just putting them in debt for it doesn't mean nothing of value was provided.

The reverse would be true where poor people could be ruined, unless the value provided is worth significantly more than the debt created, which seems doubtful.

u/chartpath

Karma: 445 · Cake day: December 1, 2011