Readit News
scuol commented on A comparison of Ada and Rust, using solutions to the Advent of Code   github.com/johnperry-math... · Posted by u/andsoitis
jandrewrogers · 3 months ago
This is an unfortunate reality. C++ has evolved into a language capable of surprisingly deep compile-time verification, but almost no one uses that capability. It reflects somewhat negatively on the C++ developer community that problems easily solved within the language routinely go unsolved, though the obsession with backward compatibility with old versions of the language plays a role. If you fully leverage it, I would argue that recent versions of C++ are actually the safest systems language. Nonetheless, almost no one has seen code bases that push that verification capability to its maximum. Most people have no clue what it is capable of.

There is the received wisdom that it is impossible to deliver C++ without pervasive safety issues, for which there are many examples; on the other hand, there are people delivering C++ in high-assurance environments with extremely low defect rates and without heroic efforts. Many stories can be written in that gap. C++ can verify many things that are not verifiable in Rust, even though almost no one does it.

It mostly isn’t worth the argument. For me, C++20 reached the threshold where it is practical to design code where large parts can be formally verified in multiple ways. That’s great, and it has proven robust in practice. At the same time, there is an almost complete absence of such practice in the C++ literature and zeitgeist. These things aren’t that complex; the language’s users are in some sense failing the language.

The ability to codegen situationally specific numeric types is just scratching the surface. You can verify far weirder situational properties than numeric bounds if you want to. I’m always surprised by how few people do.

I used to be a C++ hater. Modern C++ brought me back almost purely because it allows rich compile-time verification of correctness. C++11 was limited but C++20 is like a different world.

scuol · 3 months ago
> C++ can verify many things that are not verifiable in Rust, even though almost no one does.

Do you have an example of this? I'm curious where C++ exceeds Rust in this regard.

scuol commented on Self-driving cars begin testing on NYC streets   amny.com/nyc-transit/self... · Posted by u/pkaeding
joe463369 · 4 months ago
If today I buy a brand new car, drive off the lot, and the brakes fail, causing me to plough into a pedestrian and kill them, who is to blame?
scuol · 4 months ago
The manufacturer, obviously, but they can sell the car in the first place because this defect risk is quantifiable for their liability insurance provider, who can evaluate how risky said car company is in terms of their manufacturing, how likely it is they'll need to pay out a claim, etc.

For self-driving, that evaluation is almost impossible. Sure, it can look good statistically, but things like brake lines, brake pad material, brake boosters, etc. are governed by the laws of physics, which are far more understandable than any self-driving algorithm.

scuol commented on To be a better programmer, write little proofs in your head   the-nerve-blog.ghost.io/t... · Posted by u/mprast
hiAndrewQuinn · 6 months ago
Oh, I have a relevant and surprisingly simple example: binary search. Binary search and its variants, leftmost and rightmost binary search, are surprisingly hard to code correctly if you don't think about the problem in terms of loop invariants. I outlined the loop invariant approach in [1] with some example Python code that was about as clear and close to plain English as I could get.

Jon Bentley, the writer of Programming Pearls, gave the task of writing an ordinary binary search to a group of professional programmers at IBM, and found that 90% of their implementations contained bugs. The most common one seems to have been accidentally running into an infinite loop. To be fair, this was back in the day when integer overflows had to be explicitly guarded against - but even then, it's a surprising number.

[1]: https://hiandrewquinn.github.io/til-site/posts/binary-search...

scuol · 5 months ago
The way I always remember leftmost and rightmost binary search (roughly the C++ equivalents of lower_bound and upper_bound) is to always keep a "prior best" and then move the bounds according to the algorithm:

```c++
while (l <= r)
{
    // find the midpoint without overflow
    auto mp = l + (r - l) / 2;
    if (nums[mp] == target) {
        prior = mp; // best index found so far
#ifdef upper_bound
        l = mp + 1; // move the left bound up, maybe there's more up there we can look for!
#else
        // lower bound: we found an instance of target, but let's look in the exclusive left half a bit more
        r = mp - 1;
#endif
    } else if (nums[mp] < target) {
        l = mp + 1;
    } else {
        r = mp - 1;
    }
}
```

excuse the terrible formatting, it's been a long day grinding leetcode after getting laid off...

scuol commented on Being too ambitious is a clever form of self-sabotage   maalvika.substack.com/p/b... · Posted by u/alihm
scuol · 6 months ago
If this sounds like you, I highly recommend reading "The Problem of the Puer Aeternus".

You can definitely skip a lot of the tedious bits where the author essentially copy-pastes other books for analysis, but it describes a very common pattern: people hold themselves back because taking the unambitious, rather pedestrian next step forward requires them to face preconceived notions about themselves, e.g. "I should've done this long ago", etc.

scuol commented on I deleted my second brain   joanwestenberg.com/p/i-de... · Posted by u/MrVandemar
scuol · 6 months ago
I understand the sentiment, but disagree with the solution. PKMs can be overwhelming if someone nerdy enough to use one ends up using it ineffectively.

The way I do it that I find works well is to have the following:

1. have a journal page for each day. Content only goes in the journal pages

2. have a series of topics that you tag. This system is up to you, but I usually find something with a hierarchy that is <=3 levels deep is best, e.g. I have "Job Search/2025/Company"

3. for each of the relevant tag pages, have those have some sort of "query" that will pull in all relevant tasks from all the journal pages, sort them by priority / state / deadline so you can see this all in one place (e.g. "What's the next step I have to do for my Nvidia application?" -> easy to answer with this system). Depending on your PKM, the hierarchy enables you to easily answer that question at a higher level, e.g. "What's the next steps I have to do for ALL of my applications?".

In each journal page, you can also write down a "task backlog" so minor tasks that you remember don't take up headspace while you intend to work on other major tasks (e.g. write down "get back to Joel about the Nvidia referral").

Regarding a point other folks have made: treat the journal and these tags as more of a "stream" of things you're doing in your life, rather than a collection of ever-expanding obligations or a mausoleum of unexplored ambition.

I built this in Logseq, which seems to be the only one with an advanced-enough query language that can also run local-only (no mandatory cloud data) on plain text files. If anyone knows how to build such a system in a different application, I'd be happy to learn! Logseq has been stale for a year or two while the authors work on a much-needed near-total rewrite, which I'm not sure is ever going to arrive at this point.

scuol commented on Claude Code for VSCode   marketplace.visualstudio.... · Posted by u/tosh
stingraycharles · 6 months ago
Yeah. I would like multiple agents because each can be primed with a different system prompt and “clean” context. This has been proven to work, eg with Aider’s “architect” vs “editor” models / agents working together.

As for parallel work for people who want stuff to "happen faster", I am convinced most of these people don't really read (nor probably understand) the code it produces.

scuol · 6 months ago
It's basically like having N of the most prolific LoC-producing colleagues who don't have a great mental model of how the language works, so you have to carefully parse all of their PRs.

Honestly, I've seen too many fairly glaring mistakes in all models I've tried that signal that they can't even get the easy stuff right consistently. In the language I use most (C++), if they can't do that, how can I trust them to get all the very subtle things right? (e.g. very often they produce code that holds some form of dangling references, and when I say "hey don't do that", they go back to something very inefficient like copying things all over the place).

I am very grateful, though, that they can churn out a comprehensive test suite in gtest and write other scripts for testing / doing a release and such. The relief from tedium there is welcome for sure!

scuol commented on AI is ushering in a “tiny team” era   bloomberg.com/news/articl... · Posted by u/kjhughes
scuol · 6 months ago
At least for C++, I try to use Copilot only for generating tests and writing ancillary scripts. tbh it's only through hard-won lessons and epic misunderstandings and screw-ups that I've built a mental model I can use to check and verify what it's attempting to do.

As much as I am definitely more productive on some dumb "JSON plumbing" feature (just adding a field to some protobuf, shuffling around some data, etc.), I still can't quite trust it not to make a very subtle mistake, or to generate code in the style of the current codebase (even when the system prompt tells it to). I've had it make obvious mistakes that it doubles down on (either pushing back or not realizing in the first place) until I practically scream at it in the chat, at which point it says "oopsie haha my bad", e.g.

```c++
class Foo
{
    int x_{};

public:
    bool operator==(Foo const& other) const noexcept
    {
        return x_ == x_; // <- what about other.x_?
    }
};
```

I just don't know at this point how to get it (Gemini, Claude, or any of the GPTs) to stop dropping the same subtle mistakes that are very easy to miss in the prolific amount of code it tends to write.

That said, saying "cover this new feature with a comprehensive test suite" saves me from having to go through the verbose gtest setup, which I'm thoroughly grateful for.

scuol commented on Show HN: Seastar – Build and dependency manager for C/C++ with Cargo's features   github.com/AI314159/Seast... · Posted by u/AI314159
scuol · 7 months ago
Highly recommend you rename because of a name clash with an existing famous C++ framework: https://seastar.io/
scuol commented on Nvidia CEO criticizes Anthropic boss over his statements on AI   tomshardware.com/tech-ind... · Posted by u/01-_-
unshavedyak · 7 months ago
I agree, but i think the thing we often miss in these discussions is how much LLMs have potential to be productivity multipliers.

Yea, they still need to improve a bit - but i suspect there will be a point at which individual devs could be getting 1.5x more work done in aggregate. So if everyone is doing that much more work, it has potential to "take the job" of someone else.

Yea, software is being needed more and more and more, so perhaps it'll just make us that much more dependent on devs and software. But i do think it's important to remember that productivity always has potential to replace devs, and LLMs imo have huge potential in productivity.

scuol · 7 months ago
Oh I agree it can be a multiplier for sure. I think it's not "AI will take your job" but rather "someone who uses AI well will take your job if you don't learn it".

At least for C++, I've found it does a very mediocre job of suggesting project code (it has a tendency to drop subtle bugs all over the place, so you basically have to review it as carefully as if you'd written it yourself), but for asking things in Copilot like "Is there any UB in this file?" (not that it will be perfect, but sometimes it'll point something out), or especially for writing tests, I absolutely love it.

scuol commented on Nvidia CEO criticizes Anthropic boss over his statements on AI   tomshardware.com/tech-ind... · Posted by u/01-_-
scuol · 7 months ago
Just this morning, I had Claude come up with a C++ solution with undefined behavior (it assumed iterator stability in a vector that was being modified) that even a mid-level C++ dev could easily have caught just by reading the code.

These AI solutions are great, but I have yet to see any solution that makes me fear for my career. It just seems pretty clear that no LLM actually has a "mental model" of how things work that can avoid the obvious pitfalls amongst the reams of buggy C++ code.

Maybe this is different for JS and Python code?

u/scuol

Karma: 60 · Cake day: November 16, 2023