Readit News
tdfirth commented on Go's escape analysis and why my function return worked   bonniesimon.in/blog/go-es... · Posted by u/bonniesimon
onionisafruit · 5 days ago
Go has been my primary language for a few years now, and I’ve had to do extra work to make sure I’m avoiding the heap maybe five times. Stack and heap aren’t on my mind most of the time when designing and writing Go, even though I have a pretty good understanding of how it works. The same applies to the garbage collector. It just doesn’t matter most of the time.

That said, when it matters, it matters a lot. In those times I wish it were more visible in Go code, but I wouldn’t want it to get in the way the rest of the time. So I’m ok with the status quo of hunting down my notes on escape analysis every few months and taking a few minutes to get reacquainted.
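
For reference, the compiler will print its escape decisions on request. A minimal sketch with two made-up functions; build it with go build -gcflags='-m' to see which allocations escape:

    package main

    // stayOnStack: the slice never outlives the call, so the compiler
    // reports "make([]int, 8) does not escape".
    func stayOnStack() int {
        buf := make([]int, 8)
        return buf[0]
    }

    // escapes: returning the slice forces a heap allocation; the -m
    // output reports "make([]int, 8) escapes to heap".
    func escapes() []int {
        return make([]int, 8)
    }

    func main() {
        _ = stayOnStack()
        _ = escapes()
    }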

Side note: I love how you used “from above” and “from below”. It makes me feel angelic as somebody who came from above, even if Java and Ruby hardly seemed like heaven.

tdfirth · 5 days ago
Ha! I had not intended to imply that one is better than the other, but I am glad that it made you feel good :).

I also came "from above".


tdfirth commented on Go's escape analysis and why my function return worked   bonniesimon.in/blog/go-es... · Posted by u/bonniesimon
tdfirth · 5 days ago
I don’t think this is confusing to the vast majority of people writing Go.

In my experience, the average programmer isn’t even aware of the stack vs. heap distinction these days. If you learned to write code in something like Python, then coming at Go from “above”, this will just work the way you expect.

If you come at Go from “below”, then yeah, it’s a bit weird.
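
A minimal sketch of that weirdness, using a hypothetical newPoint constructor: from “below” this looks like returning a dangling pointer, but escape analysis quietly moves the value to the heap:

    package main

    import "fmt"

    type point struct{ x, y int }

    // From "below" this looks like returning the address of a stack
    // local (undefined behaviour in C). Go's escape analysis sees the
    // value outlive the call and allocates it on the heap instead.
    func newPoint(x, y int) *point {
        p := point{x: x, y: y}
        return &p
    }

    func main() {
        fmt.Println(newPoint(1, 2)) // &{1 2}, safely heap-allocated
    }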

tdfirth commented on Django: what’s new in 6.0   adamj.eu/tech/2025/12/03/... · Posted by u/rbanffy
jasoncartwright · 7 days ago
[flagged]
tdfirth · 7 days ago
American hegemony, and all that.
tdfirth commented on Launch HN: Mentat (YC F24) – Controlling LLMs with Runtime Intervention · Posted by u/cgorlla
alexchantavy · 7 days ago
> they mimic common misconceptions found on the internet (e.g. "chameleons change color for camouflage")

Wait what, what do chameleons actually change color for then?? TIL.

---

So if I understand correctly, you take existing models, do fancy adjustments to them so that they behave better, and then sell access to that?

> These are both applications where Fortune 500 companies have utilized our technology to improve subpar performance from existing models, and we want to bring this capability to more people.

Can you share more examples on how your product (IIUC, a policy layer for models) is used?

tdfirth · 7 days ago
I believe they change color to express emotion.
tdfirth commented on Show HN: Gemini Pro 3 imagines the HN front page 10 years from now   dosaygo-studio.github.io/... · Posted by u/keepamovin
SXX · 7 days ago
10 years is way too long for Google. It will be gone in 5, replaced by 3 other AI cloud services.
tdfirth · 7 days ago
You're right. How naive of me.
tdfirth commented on Show HN: Gemini Pro 3 imagines the HN front page 10 years from now   dosaygo-studio.github.io/... · Posted by u/keepamovin
tdfirth · 7 days ago
“Google kills Gemini cloud services” is the best one. I can't believe I haven't seen that joke until today.
tdfirth commented on Structured outputs on the Claude Developer Platform   claude.com/blog/structure... · Posted by u/adocomplete
mkagenius · a month ago
Hmm, wouldn't it sacrifice a better answer in some cases (not sure how many though)?

I'll be surprised if they hadn't specifically trained for structured "correct" output for this, in addition to picking next token following the structure.

tdfirth · a month ago
In my experience (I've put hundreds of billions of tokens through structured outputs over the last 18 months), I think the answer is yes, but only in edge cases.

It generally happens when the grammar is highly constrained, for example if a boolean is expected next.

If the model assigns a low probability to both true and false coming next, then the sampling strategy will pick whichever one happens to score highest. Most of the vocabulary gets a probability close to 0 at any given step, and if you’re picking between two tokens like that, the result will often feel random.

It's always the result of a bad prompt though, if you improve the prompt so that the model understands the task better, then there will then be a clear difference in the scores the tokens get, and so it seems less random.

tdfirth commented on SQL pipe syntax available in public preview in BigQuery   cloud.google.com/bigquery... · Posted by u/marcyb5st
tdfirth · 10 months ago
It should have always worked this way. Without this feature you take the algebra out of relational algebra. That's the root of most of the composition issues in SQL.

Sadly it's a few decades too late, though, and this just fragments the "ecosystem" further.
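
For anyone who hasn't seen it, a sketch of the pipe form against a hypothetical orders table; each |> step takes the previous relation as input, so the steps compose in reading order:

    FROM orders
    |> WHERE status = 'shipped'
    |> AGGREGATE SUM(total) AS revenue GROUP BY region
    |> ORDER BY revenue DESC;

Every prefix of the pipeline is itself a valid table expression, which is exactly the composability that the classic SELECT ... FROM ... GROUP BY ordering breaks.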

u/tdfirth

Karma: 517 · Cake day: June 23, 2020
About
Founder at https://cotera.co