More discussion: https://news.ycombinator.com/item?id=43977188
This was my reaction as well. Something I don't see mentioned: I think it may have more to do with the training data than with the goal function. The region of the embedding space that aligns with kindness may be less accurate than the region for neutrality, because people often forgo accuracy when being kind. I don't think it's a matter of conflicting goals, but rather of priming toward an answer drawn more heavily from the part of the model trained on less accurate data.
I wonder if the accuracy would still be worse with a layered prompt: ask it to coldly/bluntly derive the answer first, then have it translate its own output into a kinder tone (maybe as two separate prompts).
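A minimal sketch of that two-pass idea, assuming the OpenAI Python client; the model name, question, and prompt wording are all placeholders, not a tested setup:

```python
# Two-pass prompting: derive the answer bluntly, then rewrite only the tone.
# Sketch, assuming the openai>=1.0 client; model name is a placeholder.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder model
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

question = "Is this business plan viable?"

# Pass 1: cold, blunt derivation with no tone management.
blunt = ask(
    "Answer as bluntly and accurately as possible, "
    f"with no hedging for feelings:\n{question}"
)

# Pass 2: rewrite the already-derived answer kindly, keeping the facts fixed.
kind = ask(
    "Rewrite the following answer in a kind, supportive tone "
    f"without changing any factual claims:\n{blunt}"
)

print(kind)
```

The hope would be that the factual content is locked in during the first pass, so the kindness rewrite can only change phrasing, not conclusions.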
Context is often incomplete, unclear, contradictory, or just contains too much distracting information. Those are all things that will cause an LLM to fail, and all of them can be fixed by thinking about how an unrelated human would do the job.
It's easy to forget that the conversation itself is what the LLM is helping to create. Humans will ignore or deprioritize extra information, but they also need it to get a loose sense of what you're looking for. An LLM is much more easily influenced by any extra wording you include, and loose guidance is likely to become strict guidance.
- optimizing the placement of wind turbines to maximize energy capture
- determining the optimal size and type of solar panels for a given area (a toy formulation of this one is sketched below)
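For a concrete sense of what these formulations look like in code, here is a toy linear-program version of the panel-sizing problem, assuming SciPy is available; every number is invented for illustration:

```python
# Toy panel-sizing problem as a linear program. All data is made up.
from scipy.optimize import linprog

# Two hypothetical panel types: energy (kWh/day), area (m^2), cost ($) per panel.
energy = [1.8, 2.4]
area   = [1.6, 2.0]
cost   = [300, 450]

# linprog minimizes, so maximize total energy by minimizing its negative.
c = [-e for e in energy]

# Constraints: total area <= 100 m^2, total cost <= $20,000.
A_ub = [area, cost]
b_ub = [100, 20000]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x)     # panel counts of each type (relaxed to continuous values)
print(-res.fun)  # total daily energy at the optimum
```

A real version would use integer variables and far messier data, but the structure (decision variables, objective, resource constraints) is the same.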
Another field I'm looking at is geospatial tooling. Being able to work with mapping software/data always felt like a good fit to me.
What tools are they teaching now? I studied with AMPL for linear/nonlinear programming, ARENA for simulation, and Matlab for general work, but it's been a while.
When I graduated, the OR term was already fading, and from what I’ve seen, it’s pretty much gone as a standalone field. The tools are still strong, but OR isn’t often listed as a job specialization on its own.
I started as a business analyst, and while OR wasn’t in any job descriptions, it gave me an edge. I used OR methods to go above and beyond, working closely with branch and executive management to analyze cost-effectiveness, optimize decisions, and make strategic recommendations. This helped me stay at the top of my pay band. Of course, I still handled traditional BA tasks like dashboards, reports, automation, and SQL.
My advice? Cross-specialize. OR is incredibly valuable, but it works best when paired with another strong skill set. For me, a CS minor and SQL/database skills helped early in my career.
To put it simply: OR lets you optimize a warehouse layout—but most jobs also require you to move boxes. It aligns more with engineering management roles than entry-level work, and those management positions typically go to people with industry experience.
That said, I genuinely believe OR is one of the best specializations when combined with another field. You just need to polish it with the right complementary skills.
(Full disclosure: I used AI to clean up this message, but it's still very close to my initial draft. Mostly just grammar and phrasing changes, but it does read a bit like AI now, so I wanted to call out that the sentiment is still genuine.)
As far as connecting with other practitioners, I mostly just stay active in forums and have joined a few LinkedIn groups, but I need to improve in this area too, which is my motivation for posting this.
And as a consequence, they know where that area of expertise ends. And they know what half-knowing something feels like compared to really knowing something. And thus, they will preface and qualify their statements.
LLMs don't do any of that. I don't know if they could; I do know it would be inconvenient for the sales pitch around them. But the people I call experts distinguish themselves not by being right with their predictions a lot, but by qualifying their statements with the degree of uncertainty that they have.
And no "expert system" does that.
That's not to say I think it is rationalizing its own level of understanding, but somewhere in the vector space it seems to have a gradient for speculative language. If primed to include such language, it could help cut down on some of the hallucination. No idea whether this would affect the rate of false positives on the statements it does still answer confidently, however.
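A sketch of what that priming might look like as a system prompt; the wording here is purely illustrative, not a tested recipe:

```python
# Illustrative system prompt nudging the model toward calibrated, hedged language.
HEDGING_PROMPT = (
    "Before stating any fact, rate your confidence in it as high, medium, or low, "
    "and phrase the statement accordingly ('I'm fairly sure...', "
    "'I may be misremembering...'). "
    "If confidence is low, say you don't know instead of guessing."
)
```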
Transformers aren't zettabyte-sized archives with a smart search algorithm, running around the web stuffing everything they can into datacenter-sized storage. They are typically a few dozen GB in size, if that. They don't copy data; they move vectors in a high-dimensional space based on data.
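As a back-of-the-envelope check on that size claim (the 13B parameter count is an arbitrary example, not any specific model):

```python
# Rough model weight size, assuming fp16 storage (2 bytes per parameter).
params = 13e9          # hypothetical 13B-parameter model
bytes_per_param = 2    # fp16
print(f"{params * bytes_per_param / 1e9:.0f} GB")  # ~26 GB: "a few dozen GB"
```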
Sometimes (note: sometimes) they can recreate copyrighted work, never perfectly, but closely enough to raise alarm and in a way that a court would rule a violation of copyright. Thankfully, we have a simple fix for this, developed over the 30 years of people sharing content on the internet: automatic copyright filters.
Imagine playing the bongos, and you meet some guy who plays them really well… and it's Richard Feynman.
Very quickly, I will list the 3 main points that have helped me the most:
1) The things you care to try to excel at are a statement about what is worth excelling at, and actual skill is often a minor detail. It's okay to identify with where the effort goes and how much you give rather than with the result. In this way it is like voting, and there is no best person at voting. You identify with the tribe, not your ability.
2) When being competitive does actually matter, the best in the world cannot be everywhere at once, so there is a lot of meaning in being the best locally at something, or even just not the worst locally. Identity is irrelevant on this one, but it does require that you care and are self-aware about how good you actually are at things.
3) How you relate to others is also a big part of identity. Being in the middle of the pack on most things makes you much more relatable than being the best. For someone who is better than you at everything: are you able to deeply connect with them, or do you get distracted by comparison thoughts, insecurity, or ideas about using them for something self-serving? And even if it's not you, how often in their life do you think that happens for them with others?
In particular, the video "Conceptualizing the Christoffel Symbols". Also look at the content on the metric tensor.
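For reference, the standard formula tying the two topics together, with the Christoffel symbols built from derivatives of the metric tensor g:

```latex
\Gamma^{k}_{ij} = \frac{1}{2} g^{kl}
  \left( \partial_i g_{jl} + \partial_j g_{il} - \partial_l g_{ij} \right)
```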
Additionally, there is content from other sources (albeit less polished) describing projective geometry, which is also related.
And the post by Richard Weiss explaining how he got Opus 4.5 to spit it out: https://www.lesswrong.com/posts/vpNG99GhbBoLov9og/claude-4-5...