Readit News
kozikow commented on Trump's new visa fees spur offshoring talks, hiring turmoil   reuters.com/sustainabilit... · Posted by u/alephnerd
kozikow · 3 months ago
The H-1B program was already broken by the lottery. This new fee just solidifies the L-1 visa as the real high-skilled pipeline. More L-1 visas are already approved annually than new H-1Bs, and this policy only widens that gap.

In addition to the L-1, the O-1 is also often gamed. The $100K H-1B fee is mostly "posturing" at this point, since voters don't know about the other options.

kozikow commented on AI overviews cause massive drop in search clicks   arstechnica.com/ai/2025/0... · Posted by u/jonbaer
oezi · 5 months ago
The tricky thing for Google will be to do this and not kill their cash cow ad business.
kozikow · 5 months ago
Ads inside LLMs (e.g. paying to boost your product in LLM recommendations) are going to be a big thing.

My guess is that Google and OpenAI are eyeing each other, waiting to see who does this first.

Why would that work? It's a proven business model. Example: I use LLMs for product research (e.g. which washing machine to buy). A retailer pays to have a link to their website included in the results. Don't want to pay? Then the LLM redirects the user to buy it on Walmart instead of Amazon.
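A minimal sketch of that paid-boost idea. Everything here is hypothetical: the function names, the bid-to-score weight, and the scores themselves are made up for illustration, not any vendor's actual ranking logic.

```python
# Hypothetical sketch: re-ranking an LLM's retailer recommendations
# with a paid boost. All names, scores, and weights are invented.

def rank_retail_links(candidates, sponsor_bids):
    """Re-rank (retailer, relevance) pairs, adding a boost for sponsors.

    candidates: list of (retailer, relevance_score) from the LLM/search step.
    sponsor_bids: dict mapping retailer -> bid amount in dollars.
    """
    def score(item):
        retailer, relevance = item
        boost = sponsor_bids.get(retailer, 0.0) * 0.1  # arbitrary bid-to-score weight
        return relevance + boost

    return sorted(candidates, key=score, reverse=True)

candidates = [("amazon.com", 0.90), ("walmart.com", 0.85)]
ranked = rank_retail_links(candidates, sponsor_bids={"walmart.com": 2.0})
# Walmart's $2 bid outranks Amazon's slightly higher relevance.
```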

kozikow commented on LLM Inevitabilism   tomrenner.com/posts/llm-i... · Posted by u/SwoopsFromAbove
pavlov · 5 months ago
Compare these positive introductory experiences with two technologies that were pushed extremely hard by commercial interests in the past decade: crypto/web3 and VR/metaverse.

Neither was ever able to offer this kind of instant usefulness. With crypto, it’s still the case that you create a wallet and then… there’s nothing to do on the platform. You’re expected to send real money to someone so they’ll give you some of the funny money that lets you play the game. (At this point, a lot of people reasonably start thinking of pyramid schemes and multi-level marketing which have the same kind of joining experience.)

With the “metaverse”, you clear out a space around you, strap a heavy thing on your head, and shut yourself into an artificial environment. After the first oohs and aahs, you enter a VR chat room… And realize the thing on your head adds absolutely nothing to the interaction.

kozikow · 5 months ago
> And realize the thing on your head adds absolutely nothing to the interaction.

There are some nice effects - simulating sword fighting, shooting, etc.

It's just that the costs still outweigh the benefits. Getting to "good enough" for most people just isn't possible in the short or medium term.

kozikow commented on Duolingo CEO tries to walk back AI-first comments, fails   htxt.co.za/2025/05/duolin... · Posted by u/_p2zi
lolinder · 7 months ago
My wife quit Duolingo the week before this announcement after years of watching Duolingo prioritize attention manipulation over learning. She had a nearly 6-year streak and was on the paid version at the time, but realized that it wasn't actually helping her learn any more: she'd at some point begun maintaining a streak just for the sake of maintaining a streak.

The best documentation for Duolingo's decline is this article from a few years ago [0]. It's a piece by Duolingo's CPO (who was a former Zynga employee) where he discusses at length how Duolingo started using streaks and other gamification techniques to optimize their numbers. He has a lot to say about manipulating users into spending more time with them, but in the entire piece he barely even gives a token nod to the supposed mission of the company to help people learn. The date he cites for the beginning of their efforts to optimize numbers pretty closely correlates to my sense for when my wife began to complain about Duolingo feeling more and more manipulative and less and less useful.

This past month they finally jumped the shark and she decided to quit after 6+ years. The subsequent announcement that they'd be using AI to churn out even more lackluster content gave us a good laugh but was hardly surprising: they'd given up on prioritizing learning a long while ago.

[0] https://news.ycombinator.com/item?id=34977435

kozikow · 7 months ago
Mindless optimization of a basic "attention grab" metric is why the whole internet feels like a slot machine. Be it Reddit, Facebook, YouTube, or any Google result.

Thankfully this won't happen with LLMs: compute is too expensive, so execs can't just take the easy way out and optimize for the number of questions asked.

kozikow commented on The Leaderboard Illusion   arxiv.org/abs/2504.20879... · Posted by u/pongogogo
ekidd · 8 months ago
Also, I've been hearing a lot of complaints that Chatbot Arena tends to favor:

- Lots of bullet points in every response.

- Emoji.

...even at the expense of accurate answers. And I'm beginning to wonder if the sycophantic behavior of recent models ("That's a brilliant and profound idea") is also being driven by Arena scores.

Perhaps LLM users actually do want lots of bullets, emoji and fawning praise. But this seems like a perverse dynamic, similar to the way that social media users often engage more with content that outrages them.

kozikow · 8 months ago
More than that - at this point it feels to me that arenas are getting too focused on fitting user preferences rather than measuring actual model quality.

In reality I prefer different models for different things, and quite often it's because model X is tuned to return more of my preference - e.g. Gemini tends to be the best in non-English, ChatGPT works better for me personally for health questions, ...

kozikow commented on Grafana: Why observability needs FinOps, and vice versa   grafana.com/blog/2025/02/... · Posted by u/StratusBen
kozikow · 10 months ago
I am a big fan of "cost monitoring".

In my previous company I had a good setup for cost monitoring - including release-to-release comparisons, drill-downs, statistics, etc.

After each release I looked at this data. It saved a lot of money through simple fixes like "why are we calling this API twice?".

It also caught quite a few issues that weren't strictly customer-facing but weren't apparent from other types of data (you will always have some "unknown unknowns" in your monitoring, and cost data seems to be a pretty wide net for catching some of those).
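The release-to-release comparison part can be sketched in a few lines, assuming you already export per-endpoint cost totals per release. The endpoint names, dollar figures, and 20% threshold below are all invented for illustration:

```python
# Minimal sketch of release-to-release cost regression detection.
# Input: {endpoint: total_cost_in_dollars} snapshots for two releases.

def cost_regressions(prev, curr, threshold=0.2):
    """Flag endpoints whose cost grew by more than `threshold` (a fraction)
    since the previous release."""
    flagged = {}
    for endpoint, cost in curr.items():
        baseline = prev.get(endpoint)
        if baseline and cost > baseline * (1 + threshold):
            flagged[endpoint] = (baseline, cost)
    return flagged

prev = {"GET /search": 120.0, "POST /geocode": 40.0}
curr = {"GET /search": 125.0, "POST /geocode": 81.0}  # geocode roughly doubled,
                                                      # e.g. the API is now called twice
print(cost_regressions(prev, curr))  # flags only POST /geocode
```

A drill-down would then break the flagged endpoint's cost down further (per caller, per feature) to find the duplicate call.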

kozikow commented on OpenAI says it has evidence DeepSeek used its model to train competitor   ft.com/content/a0dfedd1-5... · Posted by u/timsuchanek
kozikow · a year ago
ChatGPT content is getting pasted all over the web. Now, for anyone crawling the web, it's hard not to include some ChatGPT output.

So even if you put some "watermarks" in your AI generations, it's a plausible defense to find publicly posted content carrying those watermarks.

Maybe it's explained in the article, but I can't access it, as it's paywalled.

kozikow commented on Using generative AI as part of historical research: three case studies   resobscura.substack.com/p... · Posted by u/benbreen
cyrillite · a year ago
Now the question is how can I, someone without a PhD in history but currently a PhD candidate in another discipline, use these tools to reliably interrogate topics of interest and produce at least a graduate level understanding of them?

I know this is possible, but the further away I get from my core domains, the harder it is for me to use these tools in a way that doesn’t feel like too much blind faith (even if it works!)

kozikow · a year ago
> the harder it is for me to use these tools in a way that doesn’t feel like too much blind faith (even if it works!)

I tend to ask multiple models, and if they all give me roughly the same answer, then it's probably right.
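That cross-checking habit can be sketched as a simple majority vote. The model callables below are stubs (real OpenAI/Anthropic/Google clients all differ), and the naive string normalization stands in for the semantic comparison you'd actually need:

```python
# Rough sketch of cross-checking an answer across several models:
# trust it only if a majority agree. Model calls are stubbed out.
from collections import Counter

def cross_check(question, ask_fns, min_agreement=2):
    """Return the majority answer, or None if models disagree too much.

    ask_fns: callables mapping a question to a short answer string.
    Answers are compared by naive string normalization here;
    real use would need semantic comparison.
    """
    answers = [fn(question).strip().lower() for fn in ask_fns]
    best, count = Counter(answers).most_common(1)[0]
    return best if count >= min_agreement else None

# Stubbed "models" for illustration:
models = [lambda q: "1648", lambda q: "1648", lambda q: "1748"]
print(cross_check("When was the Peace of Westphalia signed?", models))  # → 1648
```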

kozikow commented on South Korean president declares martial law, parliament votes to lift it   apnews.com/article/south-... · Posted by u/Inocez
bloomingkales · a year ago
Orwell being so right about governments using the constant threat of a virtual enemy has got to be one of the all time top on the money predictions ever.

Up there with gravity and shit. I wish we could do something with this information, but alas, knowledge isn't power.

Cersei Lannister: Power is power.

kozikow · a year ago
> Cersei Lannister: Power is power.

Knowledge is a necessary, but not sufficient, component of power.

Or in other words: observability is a necessary, but not sufficient, component of optimization.

kozikow commented on Hey, wait – is employee performance Gaussian distributed?   timdellinger.substack.com... · Posted by u/timdellinger
deepnet · a year ago
My takeaway ( and an indication of who actually needs a performance review [ e.g. the manager ])

“ It’s my opinion that the biggest factor in an employee's performance – perhaps bigger than the employee’s abilities and level of effort – is whether their manager set them up for success “

kozikow · a year ago
Or the other way around: in a bigcorp (or a startup), choosing what to work on has a much bigger impact than the work you do.

At a very low level it's up to your manager. As time goes on, even as an IC you have a lot of agency. It's not just company selection and team selection, but also which part of the project you work on and how you approach solving it.

Of course "if everyone does this, who will fix the bugs". However, the quickest promoted people I've seen are the people who were excellent at politics-izing (and sometimes foresight) the best work assigned to them.

u/kozikow

Karma: 1288 · Cake day: February 16, 2011
About
Co-founder of a deep learning computer vision venture - tensorflight.com. Previously Google Search.