CS as the only path to programming was always too narrow, and people with a broader education are often better at finding creative solutions. With AI-assisted programming, I'd argue they have an even clearer advantage now.
Why not give us nice things for integrating with knowledge graphs and rules engines pretty please?
I know the article title says "integration tests", but when a lot of the functionality lives inside PostgreSQL, you can cover a lot of the test pyramid with unit tests directly in the DB as well.
The test database orchestration from the article pairs really well with pgTAP for isolation.
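For anyone who hasn't seen it, a pgTAP test file is just SQL run inside a transaction. Here's a minimal sketch; the "users" table and the normalize_email() function are made-up examples, not anything from the article:

    -- Minimal pgTAP sketch. "users" and normalize_email()
    -- are hypothetical examples, not from the article.
    BEGIN;

    SELECT plan(3);

    -- Schema-level check: the table we depend on exists.
    SELECT has_table('users');

    -- Unit-test a function that lives in the database.
    SELECT is(
        normalize_email('  Alice@Example.COM '),
        'alice@example.com',
        'normalize_email trims and lowercases'
    );

    -- Single-column result set compared against an expected array.
    SELECT results_eq(
        $$ SELECT normalize_email(e) FROM (VALUES ('A@B.C'), ('x@Y.z')) v(e) $$,
        ARRAY['a@b.c', 'x@y.z'],
        'normalize_email handles a batch of inputs'
    );

    SELECT * FROM finish();

    -- Rolling back is what gives you the isolation: nothing a
    -- test does leaks into the next file.
    ROLLBACK;

Run it with pg_prove against the orchestrated test database and you get normal TAP output, so it slots into most CI setups.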
In any case, it is just way more likely than any other form, so it makes the most sense to look for it first.
Sure, you can always say that we don't know what we don't know, but the periodic table of elements is finite and complete, and we are pretty sure that chemistry doesn't change across the universe. I realize I'm fighting an uphill battle with this position, because it's hard to prove the non-existence of things, so you will always be able to say "whatever, maybe you didn't think of everything", and that's true, but I have a hard time seeing how life could be anything but carbon-based. If you have insight that goes beyond what amounts to a god-of-the-gaps argument, I'd be very interested.
> if you have carbon-based life forms, you will have water and CO2.
...can lead to statements like:
> it is just way more likely than any other form
I totally agree with the observation, but what fascinates me is how a deductive statement can be taken to indicate likelihood in a probabilistic sense. It seems there is a bit of abductive reasoning going on behind the scenes that neither deductive logic nor inductive probability can really capture on its own.
To be $10k in debt means your debts exceed your assets by $10k.
Either way, wealth redistribution is wrong. Stealing is stealing.
The first is the humdrum ChatGPT kind of safety: don't swear, don't be sexually explicit, don't provide instructions for committing crimes, don't reproduce copyrighted material, etc. Or preventing self-driving cars from harming pedestrians. This stuff is important but also pretty boring, and by all indications the corporations (OpenAI/MS/Google/etc.) are doing perfectly fine in this department, because it's in their profit/legal incentive to do so. They don't want to tarnish their brands. (Because when they mess up, they get shut down, e.g. Cruise.)
The second kind is preventing AGI from enslaving/killing humanity or whatever. Which I honestly find just kind of... confusing. We're so far away from AGI that we don't know the slightest thing about what the actual practical risks will be or how to manage them. It's like asking people traveling by horse and carriage in the 1700s to design road safety standards for a future interstate highway system. Maybe it's interesting for academics to think about, but it has no relevance to anything corporations are doing currently.
It's more of an ethics and compliance issue, with the cost of BS and plausible deniability going to zero. As usual, it's what humans do with technology that has good or bad consequences. The tech itself is fairly close to neutral, as long as the training data wasn't specifically chosen to include illegal material or acquired by way of copyright infringement (which isn't even the tech, it's the product).
Many of those "rich" people could be supporting many other "poor" people via jobs that you just wiped out for a small personal gain.
The reverse would also be true: poor people could be ruined, unless the value provided is worth significantly more than the debt created, which seems doubtful.
Can A.I. Be Blamed for a Teen's Suicide?
https://news.ycombinator.com/item?id=41924013
It's never the technology that's the problem; it's the owners and operators who decide how to use it.