Readit News
photon_lines commented on Ask HN: What LLM are you all using for coding assistance right now?    · Posted by u/anon567812
photon_lines · 15 days ago
My favorite LLMs ranked:

  DeepSeek (with thinking & reasoning & search turned on)
  Claude Code
  QWEN Coder (with thinking & reasoning & search turned on)
  ChatGPT
  Google Gemini
Each one has its own strengths and I use each one for different tasks:

  - DeepSeek: excellent at coming up with solutions and churning out prototypes / working code with Reasoning mode turned on.
  - Claude Code: I use this with Cursor to quickly put together overviews / READMEs for repos and new code I'm browsing, and to make quick changes to the code-base (I only use it for simple tasks and don't usually trust it with more advanced features).
  - QWEN Coder: similar to DeepSeek but much better at working with visual / image datasets.
  - ChatGPT: usually use it for simple answers / finding bugs in code / resolving issues.
  - Google Gemini: catching up to the other models on coding and more advanced tasks, but it still produces code that is a bit too verbose for my taste. Still, solid progress since its initial release, and it will most likely catch up to the other models on most coding tasks soon.

photon_lines commented on Claude Sonnet 4 now supports 1M tokens of context   anthropic.com/news/1m-con... · Posted by u/adocomplete
sdesol · 15 days ago
I'm not sure I would say human reasoning is 'probabilistic' unless you take a very far step back and say that, based on how a person has lived, they have ingrained biases (weights) that dictate how they reason. I don't know if LLMs have the built-in scepticism that humans do, which plays a significant role in reasoning.

Regardless of whether you believe LLMs are probabilistic or not, I think what we are both saying is that context is king, and what it (the LLM) says is dictated by the context (either learned through training or introduced by the user).

photon_lines · 15 days ago
'I don't know if LLMs have a built in scepticism like humans do' - humans don't have 'in-built scepticism' -- we learn it through experience and through being taught how to 'reason' in school (and it takes a very long time). You believe this is ingrained, but you may have forgotten having to slog through most of how the world works and being tested, when you went to school and when your parents taught you these things. On the context component: yes, context is vitally important (just as it is with humans) -- you can't produce a great solution unless you understand the 'why' behind it and how the current solution works, so I 100% agree with that.
photon_lines commented on Claude Sonnet 4 now supports 1M tokens of context   anthropic.com/news/1m-con... · Posted by u/adocomplete
didibus · 15 days ago
You seem possibly more knowledgeable than me on the matter.

My impression is that LLMs predict the next token based on the prior context. They do that by having learned a probability distribution from tokens -> next-token.

Then as I understand, the models are never reasoning about the problem, but always about what the next token should be given the context.

Chain of thought just rewards them so that the next token isn't predicting the final answer directly, but instead predicting the reasoning that leads to the solution.

Since human language in the dataset contains text describing many concepts and offering many solutions to problems, it turns out that predicting the text that describes the solution to a problem often ends up being the correct solution to that problem. That this was true was a kind of lucky accident, and it's where all the "intelligence" comes from.
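To make the "predict the next token from a learned distribution" idea concrete, here is a toy sketch of the autoregressive sampling loop. The `next_token_distribution` table is entirely made up for illustration -- a real LLM learns this mapping from data over a huge vocabulary, it isn't a hard-coded lookup:

```python
import random

def next_token_distribution(context):
    # Toy stand-in for a trained model: maps a context (tuple of tokens)
    # to a probability distribution over the next token.
    table = {
        ("the",): {"cat": 0.6, "dog": 0.4},
        ("the", "cat"): {"sat": 0.9, "ran": 0.1},
        ("the", "cat", "sat"): {"<end>": 1.0},
    }
    # Unseen contexts just end generation in this toy example.
    return table.get(tuple(context), {"<end>": 1.0})

def generate(prompt, max_tokens=10):
    tokens = list(prompt)
    for _ in range(max_tokens):
        dist = next_token_distribution(tokens)
        # Sample the next token in proportion to its probability --
        # this is the "probabilistic" part people refer to.
        choices, weights = zip(*dist.items())
        token = random.choices(choices, weights=weights)[0]
        if token == "<end>":
            break
        tokens.append(token)
    return tokens

print(generate(["the"]))  # e.g. ['the', 'cat', 'sat'] -- varies run to run
```

The whole "reasoning vs. next-token prediction" debate in this thread is about what this loop ends up computing once the distribution is learned from trillions of tokens, not about the loop itself.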

photon_lines · 15 days ago
So - in the pre-training step you are right -- they are simple 'statistical' predictors, but there are more steps involved in their training which turn them from simple predictors into models able to capture patterns and reason. I tried to give an intuitive overview of how they do this in the write-up, and I'm not sure I can give you a simple explanation here, but I would recommend you play around with DeepSeek and other more advanced 'reasoning' or 'chain-of-thought' models and ask them to perform tasks for you: they are not simply statistically combining information. Many times they are able to reason through a problem and come up with extremely advanced working solutions. To me this indicates that they are not 'accidentally' stumbling upon solutions based on statistics -- they actually are able to 'understand' what you are asking them to do and to produce valid results.
photon_lines commented on Claude Sonnet 4 now supports 1M tokens of context   anthropic.com/news/1m-con... · Posted by u/adocomplete
sdesol · 15 days ago
LLMs (in their current implementation) are probabilistic, so they really need the actual code to predict the most likely next tokens. Now, loading the whole code base can be a problem in itself, since other files may negatively affect next-token prediction.
photon_lines · 15 days ago
Sorry -- I keep seeing this claim, but I'm not entirely sure how it differs from most human thinking. Most human 'reasoning' is probabilistic as well, and we rely on 'associative' networks to ingest information. In a similar manner, LLMs use association -- and not only that, they are capable of figuring out patterns from examples (just like humans are) -- read this paper for context: https://arxiv.org/pdf/2005.14165. In other words, they are capable of grokking patterns from simple data (just like humans are).

I've given various LLMs my requirements and they produced working solutions for me by simply 1) including all of the requirements in my prompt and 2) asking them to think through and 'reason' through their suggestions, and the products have always been superior to what most humans have produced. The 'LLMs are probabilistic predictors' comments keep appearing on threads, though, and I'm not quite sure I understand them -- yes, LLMs don't have 'human context', i.e. the data needed to understand human beings, since they have not directly been fed human experiences, but for the most part LLMs are not the simple 'statistical predictors' everyone brands them as. You can see a thorough write-up I did of what GPT is / was here if you're interested: https://photonlines.substack.com/p/intuitive-and-visual-guid...
photon_lines commented on My 2.5 year old laptop can write Space Invaders in JavaScript now (GLM-4.5 Air)   simonwillison.net/2025/Ju... · Posted by u/simonw
simonw · a month ago
I scanned the code and understood what it was doing, but I didn't spend much time on it once I'd seen that it worked.

If I'm writing code for production systems using LLMs I still review every single line - my personal rule is I need to be able to explain how it works to someone else before I'm willing to commit it.

I wrote a whole lot more about my approach to using LLMs to help write "real" code here: https://simonwillison.net/2025/Mar/11/using-llms-for-code/

photon_lines · a month ago
This is why I love using the DeepSeek chain-of-thought output ... I can actually go through and read what it's 'thinking' to validate whether it's basing its solution on valid facts / assumptions. Either way, thanks for all of your valuable write-ups on these models -- I really appreciate them, Simon!
photon_lines commented on LIGO detects most massive black hole merger to date   caltech.edu/about/news/li... · Posted by u/Eduard
lkuty · a month ago
Thanks. Little typo "Let’s inflate Earth once again to its regular size and see what impact placing a 10 kg weight on it has." Should be 1kg.
photon_lines · a month ago
Thank you - I will correct the mistake!
photon_lines commented on LIGO detects most massive black hole merger to date   caltech.edu/about/news/li... · Posted by u/Eduard
mkw5053 · a month ago
I think I'm going to answer my own question by saying both momentum and energy are conserved. The momentum of the entire system was zero before and after the collision. Energy must also be conserved, and since the final object is at rest, all the kinetic energy gets converted into rest mass energy, minus what is radiated away as gravitational waves.
photon_lines · a month ago
Correct. If you're curious about the 'essence' of what black holes are I actually just did a write-up on them which you can find here: https://photonlines.substack.com/p/an-intuitive-guide-to-bla...
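The conservation argument above can be written out explicitly. In the center-of-momentum frame (a sketch of the standard bookkeeping, with $T_1, T_2$ the initial kinetic energies and $E_{\mathrm{GW}}$ the energy radiated in gravitational waves):

```latex
% Momentum: zero before the merger, so zero after (final object at rest).
\[
\vec{p}_1 + \vec{p}_2 = \vec{0} = \vec{p}_f
\]
% Energy: rest mass energy plus kinetic energy in, rest mass energy of the
% remnant plus gravitational-wave energy out.
\[
m_1 c^2 + T_1 + m_2 c^2 + T_2 = M_f c^2 + E_{\mathrm{GW}}
\]
% So the final rest mass gains the kinetic energy that was converted,
% minus what was radiated away:
\[
M_f = m_1 + m_2 + \frac{T_1 + T_2 - E_{\mathrm{GW}}}{c^2}
\]
```

For real black hole mergers $E_{\mathrm{GW}}$ dominates, which is why the remnant mass LIGO reports is less than the sum of the component masses.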

u/photon_lines

Karma: 684 · Joined: January 5, 2019