Illniyar commented on A 2k-year-old sun hat worn by a Roman soldier in Egypt   smithsonianmag.com/smart-... · Posted by u/sensiquest
ekianjo · 3 days ago
> Romans could have invented steam engines if they wanted to.

You invent such things when you have resolved many other problems first. Like water and sanitation, and geopolitical stability. And no, steam engines took a lot more time anyway because advanced metallurgy was necessary to get there.

Illniyar · 3 days ago
If geopolitical stability was a precursor to technology, we would still be living in the stone age :)
Illniyar commented on Bank forced to rehire workers after lying about chatbot productivity, union says   arstechnica.com/tech-poli... · Posted by u/ndsipa_pomu
taylodl · 6 days ago
How many times has a chatbot successfully taken care of a customer support problem you had? I have had success, but the success rate is less than 5%. Maybe even way less than 5%.

Companies need to stop looking at customer support as an expense and start treating it as an opportunity to build trust and strengthen the business relationship. People warn against assessing someone when everything is going well for them - the true measure of a person is what they do when things are not going well. It's the same for companies. When your customers are experiencing problems, that's the time to shine! It's not a problem, it's an opportunity.

Illniyar · 6 days ago
This is mentioned a lot, but it's still true - people on HN are not representative of the majority of users for customer support.

The majority of support tickets are repetitive and answered by a simple formula the representative churns through without thinking, which makes them easily replaceable by chatbots.

Illniyar commented on Notion releases offline mode   notion.com/help/guides/wo... · Posted by u/ericzawo
tequila_shot · 8 days ago
I guess for a lot of users like myself, the Notion ship has sailed. Most of them have moved to Obsidian; with Obsidian's new database feature, and with it being free, I do not see why users would choose Notion over Obsidian.
Illniyar · 8 days ago
I've never heard of companies using Obsidian. Notion isn't really marketed or suitable for individuals (even if their free plan says otherwise).
Illniyar commented on "Privacy preserving age verification" is bullshit   pluralistic.net/2025/08/1... · Posted by u/Refreeze5224
const_cast · 12 days ago
Adultery was always a morality law, it's just that most morality laws are derived from religion.

Morality laws, by their nature, require an iron fist to enforce. Because they have no rational consequences or proven tangible harms, we have to police the mind, which is very difficult to do.

That's not to say that immoral things should always be legal. Murder is immoral too.

But murder isn't just immoral - that's the difference. It's also a real thing that does real harm we can measure and see.

Illniyar · 12 days ago
My point was that the reason adultery is no longer legally enforced was not concern about government overreach. Whether it's morality or religion that changed people's minds about enforcing adultery is tangential.
Illniyar commented on "Privacy preserving age verification" is bullshit   pluralistic.net/2025/08/1... · Posted by u/Refreeze5224
OkayPhysicist · 13 days ago
The key problem with this entire issue is that it's basically a morality law. There are classes of crimes that, over time, society has discovered simply do not have an enforcement mechanism less damaging than the harm they are seeking to prevent.

An example is adultery. Most people will agree that it is morally wrong to cheat on your spouse. The reason civilized countries no longer have adultery laws is not that a majority of people support the crime, it's that the level of control a government needs to exercise over its citizenry to actually enforce such a law is repugnant. The state must prescribe definitions of infidelity (human sexuality being the mess it is, this alone is a massive headache), then engage the state apparatus to surveil people's intimate lives, and then provide a legal apparatus that prevents abuse via allegation. And for what? So that people's feelings are a little less hurt?

The juice simply is not worth the squeeze.

So it goes for age restrictions. Age verification creates massive potential for invasion of privacy, blackmail, censorship, and more, necessitating a massive state censorship apparatus to block foreign content, and for what? So that little Timmy's forced back into trading nudie mags at the bus stop? To save parents the onerous effort of telling their kids "no"?

It's simply not worth it.

Illniyar · 13 days ago
I think that's a bit of rationalizing. I don't think there's much evidence that adultery is no longer a criminal offense because people were concerned about privacy or government control.

It's that people became more secular, adultery came to be considered a sin rather than a crime, and modern countries instituted a separation between religious and secular law.

Illniyar commented on Why We Migrated from Neon to PlanetScale   blog.opensecret.cloud/why... · Posted by u/anthonyronning
mooreds · 15 days ago
> the more common/sane thing is cheaper unit pricing as you hit scale.

Depends on the provider's business model.

Many devtools want to make it trivial to get started, and zero/low prices facilitate that. They know that once you are set up with the tool, the barrier to moving is high. They also know that devs are tinkerers who may take a free product discovered in their free time and introduce it to a workplace that will pay for it.

But someone has to pay for all those free users/plans (they aren't using zero resources). With this business model, the payer is the person/org with some level of success who is forced up into a more expensive plan.

This is a valid strategy for two reasons:

- such users/orgs are less likely to move because they already have working code using the system and moving introduces risk

- if they have high levels of traffic, they may (not certainly, but may) be a profit-making enterprise and will do the cold hard calculus of "it costs me $50/100 GB but would take a dev N hours to move and will have X opportunity cost" and decide to keep paying (a rough sketch of that calculus follows below)

The successful "labor of love" project is an unfortunate casualty.

Illniyar · 15 days ago
It's definitely a business model. Just like a dark pattern is a pattern :)

The counter to that argument is that it creates an adverse effect on your most profitable customers, giving them an incentive to move to offerings that don't have free tiers (or where the free tier doesn't meaningfully add to your own costs).

If your free tier is so costly that you need to 25x prices for paying customers, then your free tier is too expansive and you need to tone it down until the economics make sense.
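To put rough numbers on that, here's a minimal sketch of the free-tier burden - again, every figure is invented for illustration, not any provider's actual costs:

    # How much free-tier cost each paying customer ends up carrying (assumed numbers).
    free_users = 100_000
    cost_per_free_user = 0.50       # assumed monthly infrastructure cost per free user, in dollars
    paying_customers = 500

    burden = free_users * cost_per_free_user / paying_customers
    print(f"Each paying customer absorbs ~${burden:.0f}/month of free-tier cost")
    # ~$100/month here; if covering that means 25x-ing paid prices, the tier is too expansive.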

Illniyar commented on The Subway Game (1980)   gricer.com/subway_game/su... · Posted by u/Lammy
yapyap · 24 days ago
This seems really simple, no? Like, just look at the subway map?
Illniyar · 24 days ago
The game suggests Claremont Parkway to 13 Av. Here's the map from 1964 - https://www.nycsubway.org/perl/show?/img/maps/calcagno-1967-...

I had to use Google just to find Claremont Parkway on the map :) (I did find 13 Av. I imagine people visiting NY for the first time would probably also confuse it with 13 St.)

And from the article it seems like navigation inside the subway was also very hard - it isn't intuitive even now (some entrances only allow travel in one direction, for example, so you have to go out and back in again).

Illniyar commented on Persona vectors: Monitoring and controlling character traits in language models   anthropic.com/research/pe... · Posted by u/itchyjunk
vessenes · 24 days ago
Actually, Anthropic has put out some research showing that hallucination is a thing their models know they do; similar weights are activated for ‘lying’ and ‘hallucinating’ in the Claude series. Implication - Claude knows - at least mostly - when it's hallucinating.

I think the current state of the art is that hallucination is at least partly a bug created by the very nature of training - you're supposed to at least put something out there during training to get a score - and not necessarily a result of the model itself. Overall I think that's hopeful!

EDIT: Update, getting downvoted here... Interesting! Here's a link to the summary of the paper: https://www.anthropic.com/research/tracing-thoughts-language...

Illniyar · 24 days ago
That's interesting! I guess the question is how did they detect or simulate a model hallucinating in that regard?

Do you have a link to that article? I can't find anything of that nature with a shallow search.

Illniyar commented on Persona vectors: Monitoring and controlling character traits in language models   anthropic.com/research/pe... · Posted by u/itchyjunk
Illniyar · 24 days ago
I can see this working with "evil" and "sycophantic" personas. These seem like traits that respond to the input and would thus be detectable by manipulating the input.

But hallucination is an inherent property of LLMs - you cannot make it hallucinate less by telling it not to hallucinate, or hallucinate more by telling it to make facts up (because if you tell it to make stuff up and it does, it's not hallucinating, it's working as instructed - just like telling it to write fiction for you).

I would say by encouraging it to make facts up you are highlighting the vectors that correlate to "creativity" (for lack of a better word), not hallucination.
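My rough mental model of how such a trait direction would be extracted - purely a hedged sketch, not Anthropic's actual code, and get_hidden_states is a hypothetical helper that returns one activation vector per prompt:

    import numpy as np

    def trait_vector(elicit_prompts, suppress_prompts, get_hidden_states):
        # Contrast mean activations between prompts that encourage the trait and
        # prompts that discourage it; the difference is the candidate trait direction.
        elicit = np.mean([get_hidden_states(p) for p in elicit_prompts], axis=0)
        suppress = np.mean([get_hidden_states(p) for p in suppress_prompts], axis=0)
        return elicit - suppress

    def trait_score(activation, vector):
        # Project a new activation onto that direction to monitor how strongly
        # the trait is expressed during generation.
        return float(np.dot(activation, vector) / np.linalg.norm(vector))

Which is why I suspect a "hallucination" direction found this way is really closer to a "make things up on request" direction.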

Illniyar commented on Seven replies to the viral Apple reasoning paper and why they fall short   garymarcus.substack.com/p... · Posted by u/spwestwood
Illniyar · 2 months ago
I find it weird that people are taking the original paper to be some kind of indictment of LLMs. It's not like LLMs failing at the Tower of Hanoi problem at higher levels is new; the paper took an existing method that had been used before.

It was simply comparing the effectiveness of reasoning and non-reasoning models on the same problem.
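For context on why "higher levels" are mechanically hard: the optimal Tower of Hanoi solution grows exponentially with the number of disks, so the required output gets very long very fast. A quick sketch:

    # The optimal Tower of Hanoi solution takes 2**n - 1 moves for n disks,
    # so the required answer grows exponentially with the "level".
    def hanoi(n, src="A", dst="C", aux="B"):
        if n == 0:
            return []
        return (hanoi(n - 1, src, aux, dst)
                + [(src, dst)]
                + hanoi(n - 1, aux, dst, src))

    for n in (3, 7, 10, 15):
        print(n, "disks ->", len(hanoi(n)), "moves")   # 7, 127, 1023, 32767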

u/Illniyar

Karma: 4293 · Cake day: February 17, 2013
About
If you want to contact me you can send an email to me.at.alonbd.dot.com