Readit News
tuckerman commented on Ask HN: Best codebases to study to learn software design?    · Posted by u/pixelworm
nchagnet · 2 days ago
I generally agree with your point on ease of experimentation, but if we insist on calling it software engineering, then maybe the field needs to adhere to engineering principles, as the GP highlighted.
tuckerman · 2 days ago
I believe part of engineering is also not over-engineering for the task at hand. If the cost of a “failure” is low or zero, then the right thing can be to move quickly and expect some problems.

I think the field could get better at knowing when costs are low (e.g. sometimes scalability: it's cheaper to change a database choice than to rebuild a bridge) and where the costs are sometimes very high (e.g. security).

tuckerman commented on Computer fraud laws used to prosecute leaking air crash footage to CNN   techdirt.com/2025/08/22/i... · Posted by u/BallsInIt
opello · 4 days ago
What's the charge for the arrest? I thought legally intellectual property wasn't "real property." If it actually was a trade secret, it might make more sense.
tuckerman · 4 days ago
Just because a user has privileges to access files doesn’t mean doing so is permitted for any purpose. Accessing them for this unauthorized purpose is likely computer fraud, at least under California law as I understand it.
tuckerman commented on Show HN: OpenAI/reflect – Physical AI Assistant that illuminates your life   github.com/openai/openai-... · Posted by u/Sean-Der
TZubiri · 7 days ago
I get that this is as-is, but I wonder if so many ultra-alpha products don't dilute the OpenAI brand and create redundancy in the product line. It feels like the opposite of Apple's well-thought-out product design and product line.

Let's see if it pays off.

tuckerman · 7 days ago
For a developer platform, having examples is useful as a starting point for new projects.

Also, I’m not sure if it’s similar at OpenAI, but when I was at Google it was much easier to get approval to put an open source project under the Google GitHub org than under my personal account.

tuckerman commented on Ashet Home Computer   ashet.computer/... · Posted by u/todsacerdoti
JKCalhoun · 14 days ago
No doubt you've already looked into Ben Eater's various offerings (?).
tuckerman · 14 days ago
I came across them (and they seem very cool!), but my working theory is that, in addition to electronics-heavy projects like those, I also want something that can fill the role of the Apple II Plus that was the "family computer" when I was a kid, without going straight to giving him access to a modern desktop/computer, which feels so hermetic.

I'm somehow very confident in this while also being sure that people probably thought very similar things about home radios destroying the youth in the 1920s :D

tuckerman commented on Ashet Home Computer   ashet.computer/... · Posted by u/todsacerdoti
tuckerman · 14 days ago
He's still too young for something like this, but I've been searching for something to use when we more properly introduce my son to computers. Something that uses modern components to make a useful machine while still exposing the electronics side, encouraging tinkering and exploration over media consumption, etc. A project like this could fit the bill nicely!
tuckerman commented on Linear sent me down a local-first rabbit hole   bytemash.net/posts/i-went... · Posted by u/jcusch
CharlieDigital · 18 days ago
Yes? If that's the primary selling point for a project manager versus being just a really damn good project manager with good visibility?

I've never used a project manager and thought to myself "I want to switch because this is too slow". Even Jira. But I have thought to myself "It's too difficult to build a good workflow with this tool" or "It's too much work to surface good visibility".

This is not a first-person shooter. I don't care if it's 8ms vs 50ms or even 200ms; I want a product that indexes on being really great at visibility.

It's like indexing your buying decision for a minivan on whether it can do the quarter mile at 110MPH @ 12 seconds. Sure, I need enough power and acceleration, but just about any minivan on the market is going to do an acceptable and safe speed and if I'm shopping for a minivan, its 1/4 mile time is very low on the list. It's a minivan; how often am I drag racing in it? The buyer of the minivan has a purpose for buying the minivan (safety, comfort, space, cost, fuel economy, etc.) and trap speed is probably not one of them.

It's a task manager. Repeat that and see how silly it sounds to sweat a few ms interaction speed for a thing you should be touching only a few times a day max. I'm buying the tool that has the best visibility and requires the least amount of interaction from me to get the information I need.

tuckerman · 18 days ago
I think there is a mismatch between most commenters on HN and who is making purchasing decisions for something like Linear: it would be the PGM/TPM org or leadership pushing it, and they are touching the tool a lot more often. Even if a small speed-up ultimately doesn't make a difference in productivity, the perceived snappiness makes it feel "better/more modern" than what they currently have.

That said, I really enjoy Linear (it reminds me a lot of Buganizer at Google). The speed isn't something I notice much at all; it's more the workflow/features/feel.

tuckerman commented on OpenAI's new open-source model is basically Phi-5   seangoedecke.com/gpt-oss-... · Posted by u/emschwartz
charcircuit · 19 days ago
You are free to look at every single weight and study how it affects the result. You can see how the model is architected. And you don't need training data to be provided to be able to modify the weights. Software can still be open source even if it isn't friendly to beginners.
tuckerman · 19 days ago
I think you could say something remarkably similar about just releasing bytecode, and I think most people would cry foul at that. I don't think it's so cut and dried.

This isn't entirely about being a beginner or not, either. Full fine-tuning without forgetting really does want the training data (or something that is a good replacement). You can do things like LoRA but, depending on your use case, it might not work.
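The low-rank idea behind LoRA can be sketched in a few lines of NumPy (purely illustrative; the dimensions, rank, and variable names here are made up, not from any real model):

```python
import numpy as np

rng = np.random.default_rng(0)

d, r = 8, 2                    # model dim and adapter rank (tiny, for illustration)
W = rng.normal(size=(d, d))    # frozen "pretrained" weight matrix

# LoRA-style adapter: learn only a low-rank delta B @ A instead of updating W.
A = rng.normal(size=(r, d)) * 0.01
B = np.zeros((d, r))           # B starts at zero, so the adapter is initially a no-op

x = rng.normal(size=d)
assert np.allclose(W @ x, (W + B @ A) @ x)  # identical output before any training

B = rng.normal(size=(d, r))    # pretend we trained B
y_adapted = (W + B @ A) @ x    # adapted forward pass; W itself is never modified

# The adapter trains 2*r*d parameters instead of d*d, which is the whole appeal.
```

The catch the comment alludes to: because W stays frozen, a low-rank delta can only nudge the model so far, and whether that suffices depends on how far your target task is from the pretraining distribution.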

tuckerman commented on OpenAI's new open-source model is basically Phi-5   seangoedecke.com/gpt-oss-... · Posted by u/emschwartz
NitpickLawyer · 19 days ago
Yeah, makes sense. Good observations regarding benchmarks vs. vibes in general, and I hadn't made the connection between the lead of the Phi models going to OpenAI and gpt-oss. It could very well be a similar exercise, plus their "new" prompt-level adherence (system > developer > user). In all the refusal traces I've seen, the model "quotes" the policy quite religiously. A similar thing was announced for GPT-5.

I think the mention of the "horny people" is warranted, they are an important part of the open models (and first to explore the idea of "identities / personas" for LLMs, AFAIK). Plenty of fine-tuning bits of know-how trickled from there to the "common knowledge".

There's a thing I would have liked to see explored, perhaps: the idea that companies might actually want what -oss offers. While the local LLM communities might want freedom and a horny assistant, businesses absolutely do not. In fact they put a lot of effort into implementing (sometimes less-than-ideal) guardrails to keep the models on track. For very easy use cases like support chatbots, businesses will always prefer something that errs on the side of less useful but "safe", rather than have the bot go off the rails with sex/slurs/insults/etc.

I do have a problem with this section though:

> Really open weight, not open source, because the weights are freely available but the training data and code is not.

This is factually incorrect. The -oss models are by definition open source. Apache 2.0 is open source (I think even the purists agree with this). Sharing "training data and code" is absolutely not a prerequisite for being open source (and historically it was never required; the craze surrounding LLMs suddenly made this a thing. It's not).

Here's the definition of source in "open source":

> "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files.

Well, for LLMs the weights are the "preferred form for making modifications". The labs themselves modify models the same way you are allowed to by the license! They might use more advanced tools, or better datasets, but in the end the definition still holds. And you get all the other stuff, like the right to modify, re-release, etc. I really wish people would stop proliferating this open-weight nonsense.

Models released under open source licenses are open source. gpt-oss, qwens and mistrals (apache2.0), deepseeks(MIT), etc.

Models released under non open source licenses also exist, and they're not open source because the licenses under which they're released aren't. LLamas, gemmas, etc.

tuckerman · 19 days ago
I mostly agree with your assessment of what we should and shouldn't call open source for models, but there is enough grey area to make the other side a valid position, not one to be dismissed so easily. I think there is a fine line between model weights and, say, bytecode for an interpreter, and I think if you released bytecode dumps under any license it would be called out.

I also believe the four freedoms are violated to some extent (at least in spirit) by just releasing the weights, and for some that might be enough to call something not open source. Your "freedom to study how the program works, and change it to make it do what you wish" is somewhat infringed by not having the training data. Additionally, gpt-oss added an (admittedly very minimal) usage policy that somewhat infringes on the first freedom, i.e. "the freedom to run the program as you wish, for any purpose".

tuckerman commented on Ollama Turbo   ollama.com/turbo... · Posted by u/amram_art
hanifbbz · 21 days ago
I like how the landing page (and even this HN page up to this point) completely misses any reference to Meta and Facebook. The landing page promises privacy, but anyone who knows how FB used VPN software to spy on people knows that, as long as the current leadership is in place, we shouldn't assume they've suddenly become fans of our privacy.
tuckerman · 21 days ago
Ollama isn’t connected to Meta besides offering Llama as one of the potential models you can run.

There is obviously some connection to Llama (the original models giving rise to llama.cpp which Ollama was built on) but the companies have no affiliation.

tuckerman commented on Show HN: AgentMail – Email infra for AI agents   chat.agentmail.to/... · Posted by u/Haakam21
tuckerman · a month ago
I was previously considering building in this space, but the infra around sending/receiving email for lots of addresses seemed like a major pain before getting to anything properly exciting, so I'm excited to see this! I would also encourage you to build good local dev/testing infra; dealing with email gets messy.

I believe truly useful AI assistants will use the same tools that humans prefer to use, rather than forcing us to come to it (in the same way truly intelligent embodied AI would use the same spaces/stairs/tools/doors as humans). Email, despite all its warts, still runs a lot of the world.

u/tuckerman

Karma: 529 · Cake day: June 22, 2011
About
Working on something new. Previously at Wayve, Google/Google[x], and Airbnb. Alumnus of South Park Commons.

Frequent dabbler and dilettante, dad, coffee drinker, software engineer.

cameron at ctuck dot com · twitter.com/tuckerman

[Verifying my cryptographic key: openpgp4fpr:135c23b218651a6275ecc71efefdf30a8e3e3078]
