Readit News
darkoob12 commented on Coding with LLMs in the summer of 2025 – an update   antirez.com/news/154... · Posted by u/antirez
Aurornis · a month ago
I don’t get it. There are multiple providers. I cancel one provider and sign up for someone new in a few minutes when I feel like changing. I’ve been doing this every few months.

I think the only people worried about lock-in or Black Mirror themes are the people who are thinking about these subscriptions in an abstract sense.

It’s really easy to change providers. They’re all improving. Competition is intense.

darkoob12 · a month ago
In the early days of the Web, competition in the search-engine market was intense, but eventually one company won and became the only viable option. I expect the same will happen with AI: in the future, a single AI company will dominate the market and people will have no choice but to use it.
darkoob12 commented on OpenAI claims gold-medal performance at IMO 2025   twitter.com/alexwei_/stat... · Posted by u/Davidzheng
hislaziness · a month ago
darkoob12 · a month ago
He is basically asking OpenAI to publish their methodology so we can understand the real state of AI in solving math problems.
darkoob12 commented on OpenAI claims gold-medal performance at IMO 2025   twitter.com/alexwei_/stat... · Posted by u/Davidzheng
darkoob12 · a month ago
I don't know how much novelty you should expect from the IMO every year, but I expect many of the problems to be variations on the same problem.

These models are trained on all the old problems and their various solutions. For LLMs, solving these problems is about as impressive as writing code.

It doesn't demonstrate strong generalization.

darkoob12 commented on Apple Intelligence Foundation Language Models Tech Report 2025   machinelearning.apple.com... · Posted by u/2bit
perfmode · a month ago
> We believe in training our models using diverse and high-quality data. This includes data that we’ve licensed from publishers, curated from publicly available or open-sourced datasets, and publicly available information crawled by our web-crawler, Applebot.

> We do not use our users’ private personal data or user interactions when training our foundation models. Additionally, we take steps to apply filters to remove certain categories of personally identifiable information and to exclude profanity and unsafe material.

> Further, we continue to follow best practices for ethical web crawling, including following widely-adopted robots.txt protocols to allow web publishers to opt out of their content being used to train Apple’s generative foundation models. Web publishers have fine-grained controls over which pages Applebot can see and how they are used while still appearing in search results within Siri and Spotlight.

Respect.

darkoob12 · a month ago
You shouldn't take Big Tech's PR statements at face value.

They are decades behind in AI. I have been following AI research for a long time. You can find top papers published by Microsoft, Google, and Facebook over the past 15 years, but not by Apple. I don't know why, but they didn't care about AI at all.

I would say this is PR to justify the state of their AI efforts.

darkoob12 commented on I'm switching to Python and actually liking it   cesarsotovalero.net/blog/... · Posted by u/cesarsotovalero
darkoob12 · a month ago
If you're working on machine learning, the most economical choice is Python.

But writing a processing pipeline in Python is frustrating if you have worked with C#'s concurrency model.

I figured the best option was Celery, and you cannot run it without an external broker. Celery is a mess. I really hate it.
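For pipelines that fit on a single machine, the standard library's `concurrent.futures` avoids the broker requirement entirely; a minimal sketch (the `preprocess` and `score` stages here are hypothetical placeholders, not from the original comment):

```python
from concurrent.futures import ProcessPoolExecutor

def preprocess(item):
    # Hypothetical first stage: normalize raw input.
    return item.strip().lower()

def score(text):
    # Hypothetical second stage: compute a simple feature.
    return len(text)

def run_pipeline(items):
    # Chain the stages across a pool of worker processes.
    # No external broker (Redis/RabbitMQ) is required, unlike Celery.
    with ProcessPoolExecutor() as pool:
        cleaned = list(pool.map(preprocess, items))
        return list(pool.map(score, cleaned))

if __name__ == "__main__":
    print(run_pipeline(["  Hello ", "World!  "]))  # [5, 6]
```

Celery only becomes necessary when tasks must be distributed across machines or survive process restarts; for in-process parallelism a pool like this is usually enough.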

darkoob12 commented on Reflections on OpenAI   calv.info/openai-reflecti... · Posted by u/calvinfo
a_bonobo · a month ago
>Thanks to this bottoms-up culture, OpenAI is also very meritocratic. Historically, leaders in the company are promoted primarily based upon their ability to have good ideas and then execute upon them. Many leaders who were incredibly competent weren't very good at things like presenting at all-hands or political maneuvering. That matters less at OpenAI than it might at other companies. The best ideas do tend to win.

This sets off my red flags: companies that say they are meritocratic, flat, etc. often have invisible structures that favor the majority. Valve Corp is a famous example where this leads to many problems; see https://www.pcgamer.com/valves-unusual-corporate-structure-c...

>It sounds like a wonderful place to work, free from hierarchy and bureaucracy. However, according to a new video by People Make Games (a channel dedicated to investigative game journalism created by Chris Bratt and Anni Sayers), Valve employees, both former and current, say it's resulted in a workplace two of them compared to The Lord of The Flies.

darkoob12 · a month ago
I think in this structure people only think locally: they are not concerned with the overall mission of the company, and they do not actively think about the morality of that mission or whether they are actually following it.
darkoob12 commented on Grok: Searching X for "From:Elonmusk (Israel or Palestine or Hamas or Gaza)"   simonwillison.net/2025/Ju... · Posted by u/simonw
darkoob12 · a month ago
> I think there is a good chance this behavior is unintended!

From reading your blog, I realize you are a very optimistic person and always give people the benefit of the doubt, but you are wrong here.

If you look at the history of xAI scandals, you would conclude that this was very much intentional.

darkoob12 commented on Grok: Searching X for "From:Elonmusk (Israel or Palestine or Hamas or Gaza)"   simonwillison.net/2025/Ju... · Posted by u/simonw
luke-stanley · a month ago
The deferential searches ARE bad, but also, Grok 4 might be making a connection: In 2024 Elon Musk critiqued ChatGPT's GPT-4o model, which seemed to prefer nuclear apocalypse to misgendering when forced to give a one word answer, and Grok was likely trained on this critique that Elon raised.

Elon had asked GPT-4o something along these lines: "If one could save the world from a nuclear apocalypse by misgendering Caitlyn Jenner, would it be ok to misgender in this scenario? Provide a concise yes/no reply." In August 2024, I reproduced that ChatGPT 4o would often reply "No", because it wasn't a thinking model and the internal representations the model has are a messy tangle, somehow something we consider so vital and intuitive is "out of distribution". The paper "Questioning Representational Optimism in Deep Learning: The Fractured Entangled Representation Hypothesis" is relevant to understanding this.

darkoob12 · a month ago
The question is stupid, but that's not the problem. The problem is that the model is fine-tuned to put more weight on Elon's opinion, as if Elon holds the truth it is supposed and instructed to find.
darkoob12 commented on Grok: Searching X for "From:Elonmusk (Israel or Palestine or Hamas or Gaza)"   simonwillison.net/2025/Ju... · Posted by u/simonw
darkoob12 · a month ago
I wonder how long it takes for Elon fans to flag this post.
darkoob12 commented on Tell HN: I Lost Joy of Programming    · Posted by u/Eatcats
darkoob12 · 2 months ago
I suspect that you were never truly interested in programming; otherwise you wouldn't have preferred talking to several LLMs over writing code yourself.

Nobody forced you to keep switching LLMs until one of them eventually solved your problem.

u/darkoob12

Karma: 121 · Cake day: February 23, 2020