Animats · 2 years ago
That's a good article.

Having worked on program verification in the past, Rust looks like the most useful modern language to which formal methods can be applied. Rust's rules eliminate so many of the cases that are hard to formalize.

Big remaining problems involve deadlock analysis, both in the thread sense and in the Rc/borrow sense. Those are somewhat equivalent. I'd like to have static deadlock analysis in Rust. If you had that, I think you could get safe back-pointers, too. If you can prove that all borrow and upgrade calls can't fail, you can eliminate most reference counts. That gives you free interior mutability when that's possible.

The author's comment on provers is interesting. The big problem with theorem provers is that they're developed by people who like to prove theorems. They're in love with the formalism. This leads to a UI disconnect with programmers.

You can knock off most things you have to prove with a SAT solver. But you'll need something heavier for the hard problems. Coq is too manual. The author thinks ACL2 is too functional. Not sure what to do there, but I've been out of this for decades.

Machine learning may help in guiding theorem provers. Most programs aren't that original in terms of control flow and data usage. So inferring a proof plan from other code may work. ML systems can't be trusted to do the proof, but may take over guiding the process.

nextaccountic · 2 years ago
F* is this mythical language that proves stuff automatically (with an SMT solver, which is more powerful than SAT), but still lets you prove things manually if the automated solver fails.

The crypto routines in Firefox and WireGuard are actually written in F* (or more specifically Low*, a low-level DSL embedded in F*), not in Rust, and are fully verified.

https://project-everest.github.io/

https://mitls.org/

xavxav · 2 years ago
> You can knock off most things you have to prove with a SAT solver. But you'll need something heavier for the hard problems. Coq is too manual. The author thinks ACL2 is too functional. Not sure what to do there, but I've been out of this for decades.

Agreed, though I think Lean is making big progress on the UX front of ITP verification through its extensive metaprogramming facilities.

My issue with these tools, as applied to verifying external languages like Rust, is that the proof is not written in the target language, forcing your developers to learn two languages.

I have recently been thinking about what a "verification-aware Rust" would look like, colored by my experience writing Creusot, the work on Verus and Aeneas, and my time in the lab developing Why3. I think building such a language from the ground up could provide a step change in ease of verification even relative to Rust, particularly with regard to those "hard parts" of the proof.

GregarianChild · 2 years ago
> "verification aware Rust" ... building such a language from the ground up could

Could you sketch in a few bullet points what you think is missing and how to fix the gaps?

In my experience a core problem is that adding rich specifications slows down the rapid evolution of programs, since they proliferate transitively along the control-flow graph. We can see this most prominently with Java's checked exceptions, but most effect annotations (to give an example of simple specification annotations) suffer from the same cancerous growth. This is probably at least partly intrinsic, but it could be improved by better automation.

rtpg · 2 years ago
I think that Coq/Agda/Lean and friends are really going to be the winners in the proof space. Interactivity is a pretty good model for the feedback loop, and they're systems that exist and work, without a lot of asterisks.

The biggest thing I think these tools are missing out of the box is, basically, "run quickcheck on my proof for me". It's too easy to have some code, and try to prove some property that isn't true, and rip your hair out thinking "why can't I get the proof working!"

If halfway through your proof you end up in a thing that seems nonsensical, it would be nice for these tools to just have a "try to generate a counter-example from here" command that spits out a thing.
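As a sketch of the idea in plain Rust (a hand-rolled stand-in for quickcheck; names mine): before investing in a proof of a plausible-looking lemma like `forall x: x + 1 > x`, cheap enumeration finds the overflow counterexample immediately.

```rust
// The property we're tempted to prove -- false at the type's boundary.
fn claim(x: u8) -> bool {
    x.wrapping_add(1) > x
}

// Exhaustive search over the small domain; a real property tester
// would sample randomly and shrink instead.
fn find_counterexample() -> Option<u8> {
    (0..=u8::MAX).find(|&x| !claim(x))
}

fn main() {
    match find_counterexample() {
        Some(x) => println!("claim is false at x = {x}"),
        None => println!("no counterexample in u8"),
    }
}
```

Ten minutes of this before opening the proof assistant saves the hair-ripping the parent describes.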

Proofs are so path-dependent, and it's not trivial to do in general. But I think they are all very close to greatness, and every time I've gone through the effort to formally verify a thing, it's been pretty illuminating. Just... if you're trying to prove some code, the process should take into account the idea that your code might be buggy.

pjmlp · 2 years ago
I also quite like Idris; the problem with these languages is that they require you to be a level-10 mage, while most folks are still coming to terms with the fact that magic exists.
themulticaster · 2 years ago
Isabelle/HOL has Quickcheck, which is precisely what you think it is. Although it only works if the free variables in the thesis you'd like to prove are sufficiently simple AFAIK (think integers or lists). The more powerful alternative to Quickcheck is Nitpick, which is a purpose-built counterexample finder.

The workflow you'd like already exists one to one in Isabelle: You can simply type "nitpick" halfway through a proof and it will try to generate a counterexample in that exact context.

Animats · 2 years ago
> "try to generate a counter-example from here"

For the theories behind some SMT solvers, that's quite possible. There's a set of problems that are completely decidable: those composed of addition, subtraction, multiplication by constants, logical operators, and the theory of arrays and structs. That covers most things which cause programs to crash, and many of the things you want to write in assertions.
MaxBarraclough · 2 years ago
> Machine learning may help in guiding theorem provers

Interesting idea. There would be real upside where the machine learning system gets it right and guides the prover to a valid proof, and little downside if it gets it wrong. (The prover won't derive an invalid proof, it will just fail to derive a valid one.) I don't think there are many applications of machine learning that have this property.

eru · 2 years ago
> (The prover won't derive an invalid proof, it will just fail to derive a valid one.) I don't think there are many applications of machine learning that have this property.

Don't basically all applications of any heuristic solver to NP problems have this property? (Including problems that are only trivially in NP because they are already in P.)

klabb3 · 2 years ago
> Big remaining problems involve deadlock analysis, both in the thread sense and in the Rc/borrow sense. Those are somewhat equivalent. I'd like to have static deadlock analysis in Rust. If you had that, I think you could get safe back-pointers, too. If you can prove that all borrow and upgrade calls can't fail, you can eliminate most reference counts. That gives you free interior mutability when that's possible.

I’m not a formal verification guy, but I don’t think locks in the traditional sense are helpful. No compiler will give you anything useful to work with wrt locks. You just accept and live with the danger.

In fact, locks and ref counts are runtime constructs in Rust. Imo they “go against” even Rust’s own strong principle of single ownership - these things are shared mutable state. Arc broke Rust's RAII model so much that they had to remove scoped threads (which need destructors to run before end-of-scope).

Anyway, rust aside. Global refcounts have cyclic and leak issues that again are global. I don’t think we can necessarily get rid of them, but I do believe containing them is better, eg with arena-style scoping.

As for locks, my gut feeling is that we need an asymmetric data structure, for instance a channel. In e.g. Go you can break a channel into sender and receiver, and have the sender goroutine do `defer close(ch)`, which is guaranteed to run if the rest of the goroutine finishes, even if there’s a panic. It doesn’t necessarily have to be channels, but separating the roles (reader/writer, producer/consumer) is much easier to reason about, and that probably helps formal analysis too.
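For what it's worth, Rust's standard channels behave analogously: dropping the `Sender` closes the channel, and drops run even if the sending thread panics. A small sketch (names mine):

```rust
use std::sync::mpsc;
use std::thread;

fn produce_and_collect() -> Vec<i32> {
    let (tx, rx) = mpsc::channel();
    let producer = thread::spawn(move || {
        tx.send(1).unwrap();
        tx.send(2).unwrap();
        // `tx` is dropped when this closure ends -- on return or on
        // panic alike -- which closes the channel, much like Go's
        // `defer close(ch)`.
    });
    // The receiving side's iterator ends exactly when the channel closes.
    let received: Vec<i32> = rx.iter().collect();
    producer.join().unwrap();
    received
}

fn main() {
    assert_eq!(produce_and_collect(), vec![1, 2]);
}
```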

zozbot234 · 2 years ago
Scoped threads have been reintroduced to Rust since ver 1.63, released in August 2022. There's also significant interest in introducing true linear types to Rust, where destructors would be guaranteed to run - however the changes required for that are large and will probably have to be introduced in a new Rust edition.
Animats · 2 years ago
> I’m not a formal verification guy, but I don’t think locks in the traditional sense is helpful. No compiler will give you anything useful to work with wrt locks. You just accept and live with the danger.

Deadlock analysis is well understood. You have to prove that locks get locked in the same order regardless of the path taken. That can be easy or hard. In many cases, if you have the call graph annotated with where locks are set and cleared, you can try to prove that the program is deadlock-free. I'd like to see Rust tools to help with that.
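A sketch of the discipline such a tool would check, in Rust (names mine): fix a total order on the locks and acquire them in that order on every path, so no cycle of waits can form. A static analysis would verify the ordering along every path of the annotated call graph.

```rust
use std::sync::Mutex;

// Lock order: A strictly before B, everywhere.
static A: Mutex<i32> = Mutex::new(0);
static B: Mutex<i32> = Mutex::new(0);

fn transfer(amount: i32) {
    let mut a = A.lock().unwrap(); // A first, on every path...
    let mut b = B.lock().unwrap(); // ...then B: no wait cycle can form
    *a -= amount;
    *b += amount;
}

fn main() {
    transfer(10);
    // Invariant preserved under the ordered critical section.
    assert_eq!(*A.lock().unwrap() + *B.lock().unwrap(), 0);
}
```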

zozbot234 · 2 years ago
The issue with SAT/SMT for program verification is it's too much like dark magic. It's acceptable if you can have it generate a proof certificate for the properties you care about and commit these with the Rust source code - but AIUI, generating usable proof certificates for 'unsat' proofs is a hard problem where you might be better off writing out the proof interactively.
throwaway17_17 · 2 years ago
I really like that we are now two posts deep from a quote from a ~1973 paper by Hoare. I posted a really long comment in the HN thread for boats' original article pointing out that skewing the quote towards a Rust-centric view (while understandable given boats' history with the language) was an artificial narrowing of the scope of Hoare's critique. Now Graydon has taken that narrow slice and used it as a jumping-off point to discuss a few interesting areas/design points of Rust. While I still think my comment was correct, the ensuing discussion by Graydon in this blog post was more than sufficient to overcome any technical gripes with the inciting article.
pron · 2 years ago
> I really like that we are now two posts deep from a quote from a ~1973 paper by Hoare

Thing is that by the mid-nineties Hoare had already noticed that there might be something a little misguided with the very foundation of '70s-era focus on soundness: http://users.csc.calpoly.edu/~gfisher/classes/509/handouts/h...

Since then, unsound methods more advanced than simple tests have only pulled further ahead of sound methods (I've posted other comments on this page to that effect).

lolinder · 2 years ago
A link to your comment would be helpful to those of us who missed that post previously:

https://news.ycombinator.com/item?id=40355876

throwaway17_17 · 2 years ago
Thanks, how do you get the actual link for a particular post?
rstuart4133 · 2 years ago
Wow, 1973. 50 years ago. A professional lifetime.

Hoare got so close. He found the problem, and in hindsight, solving it by using a type system to enforce shared-xor-mutable seems like a small step. Had he figured out back then that that small step was the solution, he would have saved us 50 years of enormous pain.

throwaway17_17 · 2 years ago
My original comment about boat’s post citing the Hoare paper:

After the opening quote by Hoare and a quick intro paragraph, the author says 'What Tony Hoare was writing about when he said that references are like jumps was the problem of mutable, aliased state.' withoutboats is a well-known Rust developer, so I guess this assumption about Hoare's quote should not be a surprise. However, I don't see the text of the quote as reinforcing that perspective. I can see an argument for the article's extrapolation: taking Hoare's seemingly more general lamentations about the semantic existence of references (a semantic group containing at least 'reference, pointer, or indirect address into the language as an assignable item of data') and then applying the Rust model of non-aliased, mutable state as an attempt to circumvent the problems Hoare is addressing. But this argument takes a narrowly scoped perspective on a problem, corrects that small slice of the greater problem, and then announces that the entirety of the problem no longer exists. Like I said, it's understandable why the article would take this direction; I just don't think it truly addresses the totality of Hoare's critique of references as an undesirable semantic abstraction. The title of the section withoutboats quoted and its first sentence are unfortunately left out of his selection:

"8. Variables. One of the most powerful and most dangerous aspects of machine code programming is that each individual instruction of the code can change the content of any register, any location of store, and alter the condition of any peripheral: it can even change its neighboring instructions or itself. Worse still, the identity of the location changed is not always apparent from the written form of the instruction; it cannot be determined until run time, when the values of base registers, index registers, and indirect addresses are known."

[This paragraph essentially acknowledges that low-level machine code gains its computational power from an unlimited ability to alter the state of the computation.]

Also, the '...' following the quote's second paragraph omits the following:

"For example, in ALGOL 68, the assignment x := y; always changes x, but the assignment x := y+1; if x is a reference variable may change any other variable (of appropriate type) in the whole machine. One variable it can never change is x!"

[This omitted quote makes it explicit that Hoare is addressing the mere existence of references.]

Both of the omitted sections point very strongly toward Hoare's actual critique being that the semantic concept of references in high-level languages is problematic, not merely mutable state. There is some natural extension of Hoare's discussion of references as a means of assignment which does lead to the 'spooky action' occurrences. However, following this section of Hints on Programming Language Design, Hoare talks for a bit about structured programming, scope, and the importance of parameters, discussing how, without references, programmers have disjoint names for semantic objects, and those are only altered/mutated by passing the sole existing object to a procedure as a parameter and having it passed back.

Overall, the TL;DR may be negatively stated as: Rust developer gonna view things through a Rust-y lens. However, I think that is an incorrect reading. withoutboats skipped a crucial step in going from Hoare's critique of references to an interpretation focused on aliased, mutable state. There is some discussion to be had about Hoare's assumption of a single semantic object existing in one-to-one correspondence with a disjoint source-code name, especially in the context of the multi-processor and networked programming prevalent in 2024.
While I think that a more general solution to Hoare's problem exists and acknowledge Rust's attempts to at least tame a portion of the problem, I don't think any language has 'fixed' this issue.

the_mitsuhiko · 2 years ago
> Both of the omitted sections tend very strongly toward Hoare's actual critiques being the semantic concept of references in high level languages being problematic, not merely mutable state.

Both sections, however, talk about mutation. Immutable references are no different from inputs to a function.

titzer · 2 years ago
I don't really get the gushing over this post in the comments; it papers over entire fields of program analysis in a single breath. While I like Graydon and respect his point of view, this paragraph is hopelessly over-simplified.

> Conversely, languages that have a GC have unfortunately often not felt it necessary to bother having strong local reasoning support. Java for example has a GC and also uncontrolled mutable aliasing, so weak local reasoning and consequently more-challenging formal verification. If you write to an object in Java every other variable referencing that object "sees" the change, so any logical formula that held over all those variables might be invalidated by the write unless you use some fancy method of tracking possible write-sets or separating the heap.

First of all, there have been several languages that have regions in their type systems and are still backed by GC, so this is a strawman. Take, for example, Pony. Second, there is an entire field of pointer analysis and escape analysis that can and does infer uniqueness and reason about whether two references can be aliases. Third, the whole point of static types is to slice apart the heap into independently non-aliasing parts using this technology called "classes" and "fields". We're not talking about JavaScript here; I think we should stop pretending like Java and C# and Scala and a bazillion GC'd languages don't have local reasoning about mutable state.

zozbot234 · 2 years ago
> ...there is an entire field of pointer analysis and escape analysis that can and does infer uniqueness and reasons about whether two references can be aliases.

You can't do this automagically in the general case, you really do need something much like separation logic. (And of course the Rust borrow checking semantics can be seen as a simplified variety of separation logic.)

Classes and fields don't give you completely local reasoning about mutable state, because class/object A can end up depending on the mutable state of class/object B. And class inheritance as found in Java adds all sorts of further complexity of its own, especially as programs evolve over time.

thinkharderdev · 2 years ago
> First of all, there have been several languages that have regions in their type systems and are still backed by GC, so this is a strawman

Is it? He's not claiming that GC somehow makes it impossible to have strong local reasoning. He's just saying that, for whatever reason, the designers of most languages have made choices contrary to that goal. Which seems obviously true to me. Counter-examples exist, of course, but looking at the mainstream GC'd general-purpose programming languages (Java, C#, Python, etc.), it is true.

> Third, the whole point of static types is to slice apart the heap into independently non-aliasing parts using this technology called "classes" and "fields"

The whole point according to who? "Object-oriented" design means many things to many people, and from what I understand the original idea was gesturing at something that we would now call the "actor model", which legitimately does try to carve up the heap into non-aliased, owned data structures that can only be mutated through message passing. But OO as it is actually implemented in the real world seldom even approaches those original goals, and languages like Java do nothing to prevent you from storing mutable references to the same object in multiple different objects. This is drawn out more eloquently than I am capable of doing in this post: https://without.boats/blog/references-are-like-jumps/

mrkeen · 2 years ago
> I think we should stop pretending like Java and C# and Scala and a bazillion GC'd languages don't have local reasoning about mutable state.

Pop-quiz: what does this print out?

  void myMethod(final Map<String, String> map) {
    map.remove("key");
    int oldSize = map.size();
    map.put("key", "val");
    int newSize = map.size();
    System.out.println(newSize - oldSize);
  }
EDIT: Apologies, I misread your double negative.

Nope. I reread it again and I'm still thoroughly confused. Java does have local reasoning, as opposed in particular to JavaScript, which does not?

vlovich123 · 2 years ago
Doesn't the answer depend on whether or not the map is being mutated by other threads concurrently?

To be fair, even in Rust that answer would depend on the specific map implementation - for example if the map internally held a lock to expose a thread-safe non-mut interface, there's a race between when you remove/insert and when you read the size. That suggests that the whole concept of local reasoning about mutable state gets pretty messy pretty quick unless you stick to very basic examples.
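A sketch of the distinction in Rust (names mine): with a lock *inside* the map, each call is atomic but the remove/size/put sequence is not; holding one external guard across the whole check-then-act pair restores the local reasoning, and the quiz has exactly one answer.

```rust
use std::collections::HashMap;
use std::sync::Mutex;

fn atomic_resize_delta(map: &Mutex<HashMap<String, String>>) -> usize {
    // One guard spans the entire sequence: no other thread can
    // interleave between the remove, the size reads, and the put.
    let mut m = map.lock().unwrap();
    m.remove("key");
    let old = m.len();
    m.insert("key".to_string(), "val".to_string());
    m.len() - old
}

fn main() {
    let map = Mutex::new(HashMap::new());
    assert_eq!(atomic_resize_delta(&map), 1);
}
```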

pron · 2 years ago
I usually love Graydon's posts, but this one -- like many articles about formal methods -- may be misleading for most readers unfamiliar with the field. It's a little like reporting an order of magnitude improvement in transmuting lead into gold without mentioning that there are 29 further orders of magnitude to go before the method is cost-effective in practice.

Does no aliasing make formal verification significantly easier? Absolutely! Does it mean we can expect a practical and cost-effective way to verify real-world programs? Not even remotely. Yes, some programs, circuits, and program components are formally verified every day, but they are the exception rather than the rule; they are relatively very small, and they are constructed in particular, careful ways. Nice properties that allow local reasoning about some other properties are important, but they do not materially affect the way we can assure the correctness of mainstream software with sound methods. A reasonable real-world program that would take a million years of effort to verify without such properties will only take 100,000 years of effort to verify with them. That is good, and it is even useful in practice, because a program that would take a year to verify could now take a month -- making it actually cost-effective -- but the number of such programs is vanishingly small in the grand scheme of things.

A program in a language with no heap, no pointers, no integers -- only boolean variables -- and no loops larger than two iterations (the language is nowhere near Turing complete) cannot be practically verified (by reduction from TQBF). Not in theory, not in practice, and there aren't even heuristic methods that cover many real world instances of such programs (of course, some such programs could happen to be easily verifiable). It can be made to work for some properties (like memory safety) -- we say that we can make these invariants inductive or composable -- but it turns out that that's not nearly good enough for what software needs.

Back in the seventies and eighties, and even the nineties, the hope was that while the "worst case" complexity of verifying programs was known to be intractable, perhaps certain local guarantees and structures employed by programming languages could move us away from the worst case. That has since been proven not to be the case. There was even the hope that the programs people actually write are far enough from the worst case for good heuristic methods to emerge, and even that now seems not to be the case.

Many years ago I gave a talk covering the relevant findings: https://pron.github.io/posts/correctness-and-complexity

The main result is that the verification of most interesting properties does not compose. I.e., if we can prove a property for components P1...Pn, then proving that property for P1 ○ ... ○ Pn (where ○ is some composition, such as a subroutine call or message passing) does not just grow super-polynomially in n (which would be good!), but super-polynomially in the size of each of the components -- i.e., it would be just as hard to prove as if everything were not decomposed. Correctness does not decompose.

Yes, some important programs can be soundly verified, and various local properties can make that easier, which is useful. But because software is growing much faster than the scale of effective formal verification, and because we've learned more results about the complexity of software verification, end-to-end verification of "ordinary" software appears further away today than it did 50 years ago. That is why there's a shift in software-assurance research toward unsound methods. A focus on soundness does not, I believe, help the cause of more correct software, but distracts from techniques that have proven more fruitful.

This gap between "we can verify quite a bit" and "what we can verify is a drop in the ocean" -- both of which are true at the same time -- is something that is often missing from discussions of formal methods.

littlestymaar · 2 years ago
Hi Mr Presler,

I don't know if you realize that you've preached the exact same argument here on HN a hundred times over the past 6 years at least, without ever adding nuance after the many comments you've received on the topic here. You're not entirely wrong, but you're also far from entirely correct, as half the world interested in the topic has had many chances to show you over the hundreds of times you've made that argument here.

And what's fascinating (besides your relentless preaching) is that the complete absence of any evolution in your position after all this time shows that you haven't learned anything from these discussions with others. Nothing.

That really sounds like pathological close-mindedness at this point. It means that everybody is losing their time when engaging with you, and that's really something you should try and think about.

Regards

pron · 2 years ago
Hi. I think that's the normal dynamics when an expert in a particular subject discusses something with laymen. I'm not a researcher, but I have applied a range of formal methods for software verification for about a decade on systems ranging from safety-critical real-time embedded software to distributed databases. I constantly learn from people with a similar or higher level of experience. Even in my comments today you can find me mentioning a technique I only learnt about relatively recently from the people at Antithesis (I'd known about it since FoundationDB, but I didn't know as much about the details of their state exploration heuristics).

It is one of the greatest disappointments of my career that that group is still small; even most researchers don't have much experience in the mainstream application of formal methods, plus they each tend to focus on a narrow range of methods. The vast majority of practitioners care about correctness, but very few explore approaches other than simple testing or the use of simple type systems; even something as basic and as old as non-trivial assertions -- which are a form of formal specification -- is underutilised.

BTW, a complicating factor is that verification techniques often don't extrapolate well from one application domain to another; a method that is effective for cryptography may not be effective for distributed systems and vice-versa.

pron · 2 years ago
P.S.

A focus on soundness does not, I believe, help the cause of more correct software but distracts from techniques that have proven more fruitful in practice (and supported by theory) -- sometimes to the surprise of those who used to think that soundness is the only way, e.g. http://users.csc.calpoly.edu/~gfisher/classes/509/handouts/h.... This happens over and over: there is some important improvement in sound methods while at the same time unsound methods are making much bigger strides toward more correct software in practice. Here's just one very recent example (which isn't even necessarily state-of-the-art in unsound methods): https://antithesis.com

Of course, this is a very complex topic, because there are some specific situations where sound techniques are more powerful and practical.

jerf · 2 years ago
Which kind of touches on the other problem, which is the difficulty of getting people to even use the easy, low-cost techniques we already know. Here we are talking about SAT solving and formula propagation and how unsafe Rust interacts with those things, and meanwhile in the real world I experience a moment of shock if I so much as see

    type Username string
    type Password string

    func Login(Username, Password)
instead of

    func Login(username string, password string)
in code. (Obviously these type signatures are just complex enough to make my point and would generally have more to them in the real world.)

Or at times even just encountering a unit test is enough to surprise me. Programmers need to get really, really tired of fixing the same bug(s) over and over in QA before they finally have the brainwave of adding their first unit test somewhere around the 25,000 line point.

I wish formal methods all the best and I wish I had more time to spend with them, but I'm awfully busy fighting upstream just to keep the unit tests running every time a new developer joins the project. I'm just asking for you to please install the git precommit hook or actually pay attention to the CI or whatever. Please. I'm not asking you to learn a SAT solver. Just keep the tests passing, and if a test broke because it's no longer applicable, please remove it instead of just letting it break for the next couple of years. etc.

pas · 2 years ago
By any chance, can you elaborate a bit on what these unsound methods are and what the current state of the art is? And what does Antithesis do? (Is it a fuzzer?)
ngruhn · 2 years ago
What does "unsound method" mean in this context? Doesn’t that mean that if it finds a bug, it may not actually be a bug? I.e., it may give false positives?
roca · 2 years ago
I think it's reasonable to look forward to verifying that small Rust libraries use 'unsafe' correctly. That alone would be really helpful.
pron · 2 years ago
Even if that were the case, you'd be working very hard, wasting your assurance budget, to prove something underwhelming, like memory safety or other similarly low-level properties. After all that hard work you'd only end up at the same point where most of the software world -- which is already using memory-safe languages -- is today. On the other hand, if you were to spend your budget the way they do (or rather, should do) with modern unsound methods, you would likely end up not noticeably worse off than if you did both.

Remember, the goal isn't to prove some technical property, but to find the greatest number of bugs that matter to you -- those you believe are most dangerous for your particular program or library -- within budget. Researchers are often interested in discovering what properties they can verify, but practitioners care about the functional properties they need to verify.

Deductive methods (including type systems) are particularly myopic in that regard precisely because they're so limited in their effectiveness to begin with. People interested in those particular techniques are rightfully very excited when they discover they can do a little bit more than they previously could, even if what they can do is still so far behind what practitioners need (and are getting elsewhere with more practical methods).

In general, the gap between research and what practitioners think it might mean for their industry work is large in this field and greatly misunderstood by practitioners.

zozbot234 · 2 years ago
That would require a formal model of how 'unsafe' works, and we still don't have that. Pointer provenance is an especially difficult issue, also affecting C/C++ specification efforts.
eggy · 2 years ago
I just posted a relevant comment on the F* article a minute ago. I would subjectively prefer F* over Rust because of its syntax, but for the show-control software we are developing, we are going with Ada/SPARK2014. To draw the crowd currently using Ada/SPARK2014, Rust needs a formal, published standard like the other legacy PLs, plus formal verification tools and some real-world safety-critical, high-integrity applications in its back pocket.

"I prefer F#/F* syntax, but I had to go with Ada/SPARK2014 for the safety-related control systems I am trying to verify formally and use for high-integrity applications. Rust is making some inroads with AdaCore and Ferrous Systems partnering on providing formal verification tools for Rust like they do for Ada/SPARK2014, but Rust still doesn't have a published standard like C, Common Lisp, Prolog, Fortran, COBOL, etc. Plus, the legacy is immense for Ada and SPARK2014."

mst · 2 years ago
> One way to look at reference counting is as a sort of eager, optimized form of garbage collection that works in the case that you have strictly acyclic data (or can tolerate leaking cycles).

It's interesting to note that Python operates with the refcounting + tracing hybrid described (and I know of production deployments where they force the tracing collector to run only once every N requests, because Python's never been that fast either).

Perl, meanwhile, is pure refcounting, but has weak references so you can make data acyclic from a refcounting POV at the cost of having to pay attention to which parts of the cyclic structure you keep references to (and for cyclic object graphs there are some tricks you can pull to move the weak link around so you don't have to pay such attention, but that would be a whole comment on its own).

Koka does refcounting in theory but tries to lift as much of it to compile time as possible in practice; plus, if you write e.g. 'y = x + 1' and x has only a single reference and is guaranteed not to be used afterwards, it re-uses x's storage for y and so can do in-place mutation.

Nim, meanwhile, offers something called ORC: automatic reference counting plus an implementation of Bacon and Rajan's Recycler algorithm, which is designed to collect -only- cycles in an otherwise refcounting-based system and as such is really rather fast at doing so.

Coming back to Rust: There's a stop the world Recycler implementation (rather than concurrent as intended both by the original paper and the author's future plans if the itch returns) here - https://github.com/fitzgen/bacon-rajan-cc - released as the bacon-rajan-ccc crate from this fork - https://github.com/mbartlett21/bacon-rajan-cc - and the README for those contains an Alternative section linking to other experiments in the same area.

(I've yet to play with any of the rust versions but love Recycler sufficiently as a concept that I suspect if/when I encounter a situation where it might help I'll be going back and doing so)

If you're having trouble finding the Recycler papers, I stashed a copy of each under https://trout.me.uk/gc/ (and if you're a similar sort of strange to me, you might also enjoy the ones stashed under https://trout.me.uk/lisp/ ;)

mccr8 · 2 years ago
Firefox also uses reference counting plus a trial deletion based cycle collector to manage C++ DOM objects (and cycles through JS). In fact, Graydon was responsible for the initial implementation.
zozbot234 · 2 years ago
https://github.com/chc4/samsara is also worth looking into, implementing the same concurrent GC algorithm for Rust. A C++ implementation is available at https://github.com/pebal/sgcl .
mst · 2 years ago
Oh, fantastic, a rust implementation that already hives it off into a separate thread.

Seems the README mentions neither 'Recycler' nor the authors, so none of my searches found it; possibly I should file an issue suggesting the author add those, because this is (at the very least) way closer to what I was looking for.

Thank you!

worik · 2 years ago
I am impatient with formal verification of programmes.

Proving that a programme correctly implements a specification is interesting theoretically, but of little practical use.

Correct specification is as hard as correct programming, so the difficult problems are not solved, they are moved.

There are practical use cases for formal methods, but they are rare to encounter.

nyssos · 2 years ago
> Correct specification is as hard as correct programming

A complete specification, sure, but that's not really the goal. The things we want to prove are generally just a few key properties: this function always terminates, that one always returns a sorted array, etc. And once you can do that you can impose these things as preconditions: e.g. you must demonstrate that an array has been pre-sorted before you pass it to this function.

worik · 2 years ago
Yes.

I am not against all formal methods. Pre- and post-conditions for loops, etcetera. When they are useful they should be used (in many cases they are built in: `for (i = k; i < l; i++)` has an initialiser establishing the precondition, a loop condition, and an action all in one statement.)

What I am against is the idea, very hot thirty or so years ago, that by using formal specifications and rules of inference we could eliminate bugs.

kzrdude · 2 years ago
Written on May 15th, 2024, which was the 9th anniversary of Rust 1.0.