Um… duh? Get out a calculator. Consult a reference, etc. Otherwise compute the result, and make sure you've done that correctly, ideally as independently of the code under test as possible. Even a lot of mathematical stuff has published "test vectors"; the SHA algorithms, for example.
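As a sketch of what "independently" means here: a throwaway reference implementation, written separately from the code under test (assuming the usual fib(1) = fib(2) = 1 convention, which gives fib(15) = 610):

(* Naive throwaway reference, kept independent of the code under test. *)
let rec fib n = if n <= 2 then 1 else fib (n - 1) + fib (n - 2)

let () = Printf.printf "%d\n" (fib 15)  (* prints 610 *)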
> Here’s how you’d do it with an expect test:
printf "%d" (fibonacci 15);
[%expect {||}]
> The %expect block starts out blank precisely because you don’t know what to expect. You let the computer figure it out for you. In our setup, you don’t just get a build failure telling you that you want 610 instead of a blank string. You get a diff showing you the exact change you’d need to make to your file to make this test pass; and with a keybinding you can “accept” that diff. The Emacs buffer you’re in will literally be overwritten in place with the new contents: …

You're kidding me. This is "record whatever the function currently outputs, correct or not, as the expected output."
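For concreteness, accepting that diff would presumably leave the test reading something like this (the test name and the {| 610 |} formatting here follow standard ppx_expect conventions; exact whitespace may differ):

let%expect_test "fib" =
  printf "%d" (fibonacci 15);
  [%expect {| 610 |}]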
Yeah… no kidding that's easier.
And we gloss over errors ("some things just looked incorrect"), but how do you know that any differently than you would for fib(10)?
That said, I don’t see how it’s much different from TDD (write the test to fail, then write the code to pass the test), aside from automating the step of adding the expected output.
So I guess it’s TDD that centres the code, not the test…
I know it doesn't make financial sense to self-host given how cheap OSS inference APIs are now, but it's comforting not being beholden to anyone or requiring a persistent internet connection for on-premise intelligence.
Didn't expect to go back to macOS, but Macs are basically the only feasible consumer option for running large models locally.
You can calculate the exact cost of home inference, given that you know your hardware, can measure its electrical consumption, and can compare that against your bill.
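As a back-of-the-envelope sketch (the wattage, hours, and rate below are hypothetical placeholders, not measurements; substitute your own):

(* Daily electricity cost of home inference: watts -> kWh -> dollars. *)
let cost_per_day ~watts ~hours ~usd_per_kwh =
  watts /. 1000. *. hours *. usd_per_kwh

let () =
  (* e.g. ~300 W under load, 8 h/day, at $0.15/kWh => $0.36/day *)
  Printf.printf "$%.2f/day\n" (cost_per_day ~watts:300. ~hours:8. ~usd_per_kwh:0.15)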
I have no idea what cloud inference in aggregate actually costs, or whether it’s profitable or a VC-infused loss leader that will spike in price later.
That’s why I’m using cloud inference now to build out my local stack.