Readit News
qustrolabe commented on Ask HN: Should "I asked $AI, and it said" replies be forbidden in HN guidelines?    · Posted by u/embedding-shape
qustrolabe · 10 days ago
HN has a very primitive comment layout that gives too much prominence to long responses and to the first, most-upvoted post with all its replies. Just because of that, I think it's better to do something about long responses with little value. I'd rather people just share a link to the conversation
qustrolabe commented on Screenshots from developers: 2002 vs. 2015 (2015)   anders.unix.se/2015/12/10... · Posted by u/turrini
qustrolabe · 12 days ago
the last one with XMonad is the only one that looks even remotely good to me compared to what we have today
qustrolabe commented on The Free Software Foundation Europe deleted its account on X   fsfe.org/news/2025/news-2... · Posted by u/latexr
bgwalter · 15 days ago
Ok, now how about a GPL-4 that forbids "AI" training?
qustrolabe · 14 days ago
bad idea
qustrolabe commented on The Free Software Foundation Europe deleted its account on X   fsfe.org/news/2025/news-2... · Posted by u/latexr
qustrolabe · 14 days ago
"the current platform direction and climate combined with an algorithm that prioritises hatred, polarisation, and sensationalism, alongside growing privacy and data protection concerns"

While I agree that this platform has a lot of hateful people, with some basic internet hygiene it's definitely possible to end up with a nice recommendations feed, especially after forming a solid following list of good people: no hatred, no politics, only the coolest people doing cool things. I like it there. It's the place where things happen that you read about on other sites only weeks later, in some twisted form

qustrolabe commented on Google Antigravity just deleted the contents of whole drive   old.reddit.com/r/google_a... · Posted by u/tamnd
camillomiller · 18 days ago
Now, with this realization, assess the narrative that every AI company is pushing down our throats and tell me how in the world we got here. The reckoning can’t come soon enough.
qustrolabe · 18 days ago
What narrative? I'm too deep into it all to understand what narrative is being pushed onto me.
qustrolabe commented on Don't push AI down our throats   gpt3experiments.substack.... · Posted by u/nutanc
qustrolabe · 18 days ago
What examples of AI integrations annoy you? Because I keep having a wonderful time randomly discovering AI integrations where they actually fit nicely:

1) marimo's documentation has an ask button to quickly get help, kind of like a much smarter RAG;

2) Postman has an AI that can write little scripts to visualize responses however you want (for example, I turned a bunch of user IDs into profile links so I could visit all of them);

3) the Grok button on each Twitter post is just amazing for quickly figuring out what a post even references and talks about;

4) Google's AI Mode has saved me many clicks, and even just Gemini being able to quickly look up when a certain TV show airs and set a reminder is amazing

qustrolabe commented on Indie game developers have a new sales pitch: being 'AI free'   theverge.com/entertainmen... · Posted by u/01-_-
qustrolabe · 22 days ago
That's "GMO-free" kind of marketing, not a good thing
qustrolabe commented on Gemini 3   blog.google/products/gemi... · Posted by u/preek
qustrolabe · a month ago
Out of all the companies, Google provides the most generous free access so far. I bet this gives them plenty of data to train even better models
qustrolabe commented on OpenAI may not use lyrics without license, German court rules   reuters.com/world/german-... · Posted by u/aiz0Houp
loudmax · a month ago
Simon Willison had an analysis of Claude's system prompt back in May. One of the things that stood out was the effort they put into avoiding copyright infringement: https://simonwillison.net/2025/May/25/claude-4-system-prompt...

Everyone knows that these LLMs were trained on copyrighted material, and as next-token prediction models, LLMs are strongly inclined to reproduce text they were trained on.

qustrolabe · a month ago
Post-trained models are strongly inclined to produce responses similar to whatever got them a high RL score; it's slightly wrong to keep thinking of LLMs as just next-token prediction from the dataset's probability distribution, as if they were some Markov chain
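For contrast, here is a minimal sketch of what a literal "next-token prediction from the dataset's probability distribution" would look like: a toy bigram Markov chain that samples each next token purely from counts of what followed the previous token in a tiny corpus. The corpus and helper names below are invented for illustration; the point of the comment is that a post-trained LLM instead conditions on the entire context and on RL-shaped preferences, not just these local counts.

```python
# Toy bigram Markov chain: next token depends only on the previous token,
# sampled from the corpus's empirical distribution. Illustration only.
import random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which tokens followed which in the corpus.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def sample_next(token: str) -> str | None:
    """Sample the next token from the empirical bigram distribution."""
    counts = bigrams.get(token)
    if not counts:
        return None  # dead end: this token never had a successor in the corpus
    tokens, weights = zip(*counts.items())
    return random.choices(tokens, weights=weights)[0]

token = "the"
generated = [token]
for _ in range(8):
    token = sample_next(token)
    if token is None:
        break
    generated.append(token)

print(" ".join(generated))  # e.g. "the cat sat on the mat the cat ate"
```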

u/qustrolabe

Karma: 54 · Cake day: December 26, 2023