Readit News
EricMausler commented on Claude 4.5 Opus’ Soul Document   lesswrong.com/posts/vpNG9... · Posted by u/the-needful
simonw · 18 days ago
Here's the soul document itself: https://gist.github.com/Richard-Weiss/efe157692991535403bd7e...

And the post by Richard Weiss explaining how he got Opus 4.5 to spit it out: https://www.lesswrong.com/posts/vpNG99GhbBoLov9og/claude-4-5...

EricMausler · 18 days ago
This entire soul document is part of every prompt created with Claude?
EricMausler commented on Starcloud   blogs.nvidia.com/blog/sta... · Posted by u/jonbaer
Reubend · 2 months ago
Last time these folks were mentioned on HN, there was a lot of skepticism that this is really possible to do. The issue is cooling: in space, you can't rely on convection or conduction to do passive cooling, so you can only radiate away heat. However, the radiator would need to be several kilometers big to provide enough cooling, and obviously launching such a large object into space would therefore eat up any cost savings from the "free" solar power.

More discussion: https://news.ycombinator.com/item?id=43977188

EricMausler · 2 months ago
Alternatively, assuming they are aware of the cooling cost, what does that say about where they think the price of electricity is headed?
EricMausler commented on Training language models to be warm and empathetic makes them less reliable   arxiv.org/abs/2507.21919... · Posted by u/Cynddl
gleenn · 4 months ago
I think the immediately troubling aspect and perhaps philosophical perspective is that warmth and empathy don't immediately strike me as traits that are counter to correctness. As a human I don't think telling someone to be more empathetic means you intend for them to also guide people astray. They seem orthogonal. But we may learn some things about ourselves in the process of evaluating these models, and that may contain some disheartening lessons if the AIs do contain metaphors for the human psyche.
EricMausler · 4 months ago
> warmth and empathy don't immediately strike me as traits that are counter to correctness

This was my reaction as well. Something I don't see mentioned is I think maybe it has more to do with training data than the goal-function. The vector space of data that aligns with kindness may contain less accuracy than the vector space for neutrality due to people often forgoing accuracy when being kind. I do not think it is a matter of conflicting goals, but rather a priming towards an answer based more heavily on the section of the model trained on less accurate data.

I wonder, if the prompt were layered, asking it to coldly/bluntly derive the answer and then translate it into a kinder tone (maybe with two prompts), whether the accuracy would still be worse.
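That two-stage idea could be sketched as two chat turns. This is a hypothetical scaffold (`call_llm` is a placeholder for whatever chat-completion client you actually use; the message format follows the common role/content convention):

```python
# Sketch of a two-stage "derive coldly, then rephrase warmly" prompt.
# `call_llm` is a placeholder, not a real library function.

def build_stage_prompts(question):
    # Stage 1: blunt derivation, accuracy first.
    derive = [
        {"role": "system", "content": "Answer bluntly and precisely. "
                                      "No pleasantries; accuracy first."},
        {"role": "user", "content": question},
    ]

    # Stage 2: tone-only translation of the cold answer.
    def rephrase(cold_answer):
        return [
            {"role": "system", "content": "Rewrite the answer below in a warm, "
                                          "kind tone without changing any facts."},
            {"role": "user", "content": cold_answer},
        ]

    return derive, rephrase

derive, rephrase = build_stage_prompts("Is my business plan viable?")
# cold = call_llm(derive)           # stage 1: blunt derivation
# warm = call_llm(rephrase(cold))   # stage 2: tone-only translation
```

The point of the split is that the factual content is fixed before any warmth-conditioned generation happens.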

EricMausler commented on The new skill in AI is not prompting, it's context engineering   philschmid.de/context-eng... · Posted by u/robotswantdata
furyofantares · 6 months ago
I've seen a lot of cases where, if you look at the context you're giving the model and imagine giving it to a human (just not yourself or your coworker, someone who doesn't already know what you're trying to achieve - think mechanical turk), the human would be unlikely to give the output you want.

Context is often incomplete, unclear, contradictory, or just contains too much distracting information. Those are all things that will cause an LLM to fail that can be fixed by thinking about how an unrelated human would do the job.

EricMausler · 6 months ago
Alternatively, I've gotten exactly what I wanted from an LLM by giving it information that would not be enough for a human to work with, knowing that the LLM is just going to fill in the gaps anyway.

It's easy to forget that the conversation itself is what the LLM is helping to create. Humans will ignore or deprioritize extra information, but they also need it to get a loose sense of what you're looking for. The LLM is much more easily influenced by any extra wording you include, and loose guiding is likely to become strict guiding.

EricMausler commented on Ask HN: Is Operations Research still a thing?    · Posted by u/samuel2
samuel2 · a year ago
I think cross-specializing with physics of energy might be quite cool. Then I could work on problems such as, e.g.,

- optimizing the placement of wind turbines to maximize energy capture

- determining the optimal size and type of solar panels for a given area.

EricMausler · a year ago
Absolutely, I can see how that could be effective. The jobs may lean toward electrical experience; power engineering is a subfield of electrical engineering and may be relevant. I looked into it once for myself, and it seemed like a good fit.

Another set of tools I'm looking at is the geospatial ones. Being able to work with mapping software and data has always felt like a good mix to me.

What tools are they teaching now? I studied AMPL for linear/nonlinear programming, ARENA for simulations, and MATLAB for general work, but it's been a while.
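To make the solar-panel example from upthread concrete, here's a toy version (all numbers invented) of the kind of selection problem a tool like AMPL would formalize as an integer program; at this size, brute force in plain Python is enough:

```python
from itertools import product

# Toy data: panel types mapped to (area m^2, output W) per unit.
# These numbers are invented for illustration only.
PANELS = {"small": (1.6, 300), "large": (2.2, 450)}
ROOF_AREA = 20.0  # m^2 available

def best_mix(panels=PANELS, roof=ROOF_AREA, max_units=12):
    """Maximize total output subject to the area budget (brute force)."""
    best = (0, {})
    counts = range(max_units + 1)
    for combo in product(counts, repeat=len(panels)):
        area = sum(n * panels[k][0] for n, k in zip(combo, panels))
        if area > roof:
            continue  # violates the area constraint
        watts = sum(n * panels[k][1] for n, k in zip(combo, panels))
        if watts > best[0]:
            best = (watts, dict(zip(panels, combo)))
    return best

print(best_mix())  # → (4050, {'small': 0, 'large': 9})
```

A real instance would hand the same objective and constraints to an LP/MIP solver instead of enumerating, but the model is the same.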

EricMausler commented on Ask HN: Is Operations Research still a thing?    · Posted by u/samuel2
EricMausler · a year ago
Hey there! I have a B.S. in Information & Systems Engineering, which includes Operations Research (OR). Happy to chat about this!

When I graduated, the OR term was already fading, and from what I’ve seen, it’s pretty much gone as a standalone field. The tools are still strong, but OR isn’t often listed as a job specialization on its own.

I started as a business analyst, and while OR wasn’t in any job descriptions, it gave me an edge. I used OR methods to go above and beyond, working closely with branch and executive management to analyze cost-effectiveness, optimize decisions, and make strategic recommendations. This helped me stay at the top of my pay band. Of course, I still handled traditional BA tasks like dashboards, reports, automation, and SQL.

My advice? Cross-specialize. OR is incredibly valuable, but it works best when paired with another strong skill set. For me, a CS minor and SQL/database skills helped early in my career.

To put it simply: OR lets you optimize a warehouse layout—but most jobs also require you to move boxes. It aligns more with engineering management roles than entry-level work, and those management positions typically go to people with industry experience.

That said, I genuinely believe OR is one of the best specializations when combined with another field. You just need to polish it with the right complementary skills.

(Full disclosure: I used AI to clean up this message, but it's still very close to my initial draft. Mostly just grammar and phrasing changes, but it does read a bit like AI now, so I wanted to call out that the sentiment is still genuine.)

As far as connecting with other practitioners, I mostly just stay active in forums and have joined a few LinkedIn groups, but I need to improve in this area too, which is my motivation for posting this.

EricMausler commented on GPT-5 is behind schedule   wsj.com/tech/ai/openai-gp... · Posted by u/owenthejumper
hengheng · a year ago
My personal criterion for calling somebody an expert, or "educated", or a "scholar" is that they have any random area of expertise where they really know their shit.

And as a consequence, they know where that area of expertise ends. And they know what half-knowing something feels like compared to really knowing something. And thus, they will preface and qualify their statements.

LLMs don't do any of that. I don't know if they could, I do know it would be inconvenient for the sales pitch around them. But the people that I call experts distinguish themselves not by being right with their predictions a lot, but rather by qualifying their statements with the degree of uncertainty that they have.

And no "expert system" does that.

EricMausler · a year ago
Anecdotal, but I told ChatGPT to include its level of confidence in its answers and to let me know if it didn't know something. This priming resulted in it starting almost every answer to vague or speculative questions with some variation of "I'm not sure, but..", while it answered direct, matter-of-fact questions with confidence.

That's not to say I think it is rationalizing its own level of understanding, but somewhere in the vector space it seems to have a gradient for speculative language. If primed to include that language, it could help cut down on some of the hallucinations. No idea whether this affects the rate of false positives on the statements it still answers confidently, however.

EricMausler commented on GPT-5 is behind schedule   wsj.com/tech/ai/openai-gp... · Posted by u/owenthejumper
Workaccount2 · a year ago
Doing basic copyright analyses on model outputs is all that is needed. Check if the output contains copyright, block it if it does.

Transformers aren't zettabyte sized archives with a smart searching algo, running around the web stuffing everything they can into their datacenter sized storage. They are typically a few dozen GB in size, if that. They don't copy data, they move vectors in a high dimensional space based on data.

Sometimes (note: sometimes) they can recreate copyrighted work, never perfectly, but close enough to raise alarm and in a way that a court would rule as violation of copyright. Thankfully though we have a simple fix for this developed over the 30 years of people sharing content on the internet: automatic copyright filters.

EricMausler · a year ago
No comment on whether output analysis is all that is needed, though it makes sense to me. Just wanted to note that the file-size argument may simply imply transformers are a form of (either very lossy or very efficient) compression.
EricMausler commented on Ask HN: What was your most humbling learning moment?    · Posted by u/spcebar
nicbou · 2 years ago
I identify by the things I love and excel at. If being an artist is a big chunk of who you are, it can hurt your ego to meet a much better artist who is also good at many other things.

Imagine playing the bongos, and you meet some guy who plays it really well… and it’s Richard Feynman.

EricMausler · 2 years ago
I can't help but chime in here, because I used to feel this way and all the typical advice never felt right (i.e., that you shouldn't care how good you are at things).

Very quickly, I'll list the three main points that have helped me the most:

1) The things you choose to excel at are a statement about what's worth excelling at, and actual skill is often a minor detail. It's okay to identify with where the effort goes and how much you give rather than the result. In this way it is like voting, and there is no best person at voting. You identify with the tribe, not your ability.

2) When being competitive actually matters, the best in the world can't be everywhere at once, so there is real meaning in being the best locally at something, or even just not the worst locally. Identity is irrelevant on this one, but it does require that you care and are self-aware about how good you actually are at things.

3) How you relate to others is also a big part of identity. Being in the middle of the pack on most things makes you much more relatable than being the best. For someone who is better than you at everything: are you able to deeply connect with them, or do you get distracted by comparison, insecurity, or ideas about using them for something self-serving? And if not you, how often in their life do you think that happens with others?

EricMausler commented on Strangely Curved Shapes Break 50-Year-Old Geometry Conjecture   quantamagazine.org/strang... · Posted by u/pseudolus
alistairSH · 2 years ago
Is there a good layman’s explanation of higher dimensions as they relate to this type of problem? I’m trying to envision what that means, which is probably a wrong approach…
EricMausler · 2 years ago
Yes, actually. I've been binging a lot of general relativity content lately by coincidence, and can say the Dialect YouTube channel has been the best resource for describing this. I'm not an expert, so I cannot speak to its accuracy, but it seems sound.

In particular, the video "Conceptualizing the Christoffel Symbols". Also look at content on the metric tensor.

Additionally, there is content from other sources (albeit less produced) describing projective geometry, which is also related.

u/EricMausler

Karma: 286 · Cake day: October 1, 2020