mdnahas commented on Linux Reaches 5% Desktop Market Share in USA   ostechnix.com/linux-reach... · Posted by u/marcodiego
mdnahas · a month ago
Anecdote: I’ve run Linux and macOS. I switched to Linux when Apple hardware got stupidly expensive (soldered memory and hard drive) and Dell offered laptops with Linux installed. My brother was always saying “Apple just works … less hassle”. But he’s now worried about expense and privacy and is moving to Linux.

I feel like the geeky bleeding edge is leaving Apple for Linux.

mdnahas commented on To be a better programmer, write little proofs in your head   the-nerve-blog.ghost.io/t... · Posted by u/mprast
hiAndrewQuinn · a month ago
Oh, I have a relevant and surprisingly simple example: Binary search. Binary search and its variants leftmost and rightmost binary search are surprisingly hard to code correctly if you don't think about the problem in terms of loop invariants. I outlined the loop invariant approach in [1] with some example Python code that was about as clear and close to plain English as I could get.

Jon Bentley, the writer of Programming Pearls, gave the task of writing an ordinary binary search to a group of professional programmers at IBM, and found that 90% of their implementations contained bugs. The most common one seems to have been accidentally running into an infinite loop. To be fair, this was back in the day when integer overflows had to be explicitly guarded against - but even then, it's a surprising number.

[1]: https://hiandrewquinn.github.io/til-site/posts/binary-search...

mdnahas · a month ago
I read about this and started using binary search as my interview question. It worked well - about 2/3rds of highly credentialed applicants could not write a working implementation in 20 minutes. Most failures went into an infinite loop on simple cases! The ones who could write it usually did so quickly.

I think part of the reason is that most people were taught a bad interface. Even the code on Wikipedia says “Set L to 0 and R to n-1”. That is, R is an inclusive bound. But we’ve learned that for most string algorithms, it is better when your upper bound is exclusive, that is, n.

I’ve wanted to do an experiment testing that hypothesis: ask a large number of people to write it with different function prototypes and initial calls, and see how many buggy implementations I get with an inclusive upper bound vs. an exclusive upper bound vs. a length.
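The exclusive-upper-bound version argued for above can be sketched in a few lines of Python (my sketch with the loop invariant spelled out, not the exact code from the linked post or the interview prototype):

```python
def binary_search(a, target):
    """Leftmost binary search over sorted list `a`, using an
    exclusive upper bound.
    Invariant: a[:lo] < target and a[hi:] >= target."""
    lo, hi = 0, len(a)          # hi is exclusive: len(a), not len(a) - 1
    while lo < hi:              # strict <, so the loop terminates
        mid = (lo + hi) // 2    # lo <= mid < hi, so the range always shrinks
        if a[mid] < target:
            lo = mid + 1        # everything up to mid is < target
        else:
            hi = mid            # a[mid] >= target stays in range
    return lo                   # first index with a[i] >= target, or len(a)
```

With the exclusive bound, `hi = mid` (not `mid - 1`) preserves the invariant, and the `lo < hi` test cannot spin forever; the inclusive-bound variant invites exactly the off-by-one and infinite-loop bugs described above.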

mdnahas commented on Linda Yaccarino is leaving X   nytimes.com/2025/07/09/te... · Posted by u/donohoe
motorest · 2 months ago
> Consider truth social :-) I am amazed people agree to call the messages there 'truths', and reposts, 'retruths'. So embarrassing.

The most Orwellian shit ever.

mdnahas · 2 months ago
The official newspaper of Russian Communists was also called Truth (Pravda).

https://en.m.wikipedia.org/wiki/Pravda

mdnahas commented on Mercury: Ultra-fast language models based on diffusion   arxiv.org/abs/2506.17298... · Posted by u/PaulHoule
mike_hearn · 2 months ago
A good chance to bring up something I've been flagging to colleagues for a while now: with LLM agents we are very quickly going to become even more CPU bottlenecked on testing performance than today, and every team I know of today was bottlenecked on CI speed even before LLMs. There's no point having an agent that can write code 100x faster than a human if every change takes an hour to test.

Maybe I've just got unlucky in the past, but in most projects I worked on a lot of developer time was wasted on waiting for PRs to go green. Many runs end up bottlenecked on I/O or availability of workers, and so changes can sit in queues for hours, or they flake out and everything has to start again.

As they get better coding agents are going to be assigned simple tickets that they turn into green PRs, with the model reacting to test failures and fixing them as they go. This will make the CI bottleneck even worse.

It feels like there's a lot of low-hanging fruit in most projects' testing setups, but for some reason I've seen nearly no progress here for years. It feels like we kinda collectively got used to the idea that CI services are slow and expensive, then stopped trying to improve things. If anything, CI got a lot slower over time as people tried to make builds fully hermetic (so no inter-run caching) and moved them from on-prem dedicated hardware to expensive cloud VMs with slow I/O, which haven't got much faster over time.

Mercury is crazy fast and in a few quick tests I did, created good and correct code. How will we make test execution keep up with it?

mdnahas · 2 months ago
We don’t. We switch to proven-correct code. Languages like Lean, Coq, and Idris allow proofs of correctness for code. The LLM can generate proofs for most of the correctness conditions.

CI is still needed for performance, UI testing, etc. but it can have a much smaller role than it does now.
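As a toy illustration of the idea (my sketch, not from the comment), in a language like Lean 4 the correctness condition is the theorem statement itself, and the kernel checks the proof at compile time, so it never needs to run in CI:

```lean
-- Specification: reversing a list preserves its length.
-- `simp` finds the proof from the library lemma `List.length_reverse`;
-- the kernel then verifies it mechanically. No test run is needed.
theorem reverse_preserves_length (xs : List Nat) :
    xs.reverse.length = xs.length := by
  simp
```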

mdnahas commented on Jane Street barred from Indian markets as regulator freezes $566M   cnbc.com/2025/07/04/india... · Posted by u/bwfan123
mdnahas · 2 months ago
I was a quantitative trader for an options trading firm in the early 2010s. We were a very technical firm and hedged options trades within minutes. So, a trade with us would shift the stock price quickly. Even with that, we got scammed by traders doing this kind of thing on single stocks. My boss said that he complained to the broker and SEC, but nothing happened. We wrote code to limit our losses from this kind of scam.

We were probably able to find it because we did hedge quickly. Hedging costs money (trading fees, 1/2 spread), so some firms did it less often. We heard that Bear Stearns only did it once per day (around 4pm, when spreads were small and overnight movement risk was nigh). They wouldn’t have caught this scam.

mdnahas commented on Ask HN: What Are You Working On? (June 2025)    · Posted by u/david927
ruieduardolopes · 2 months ago
I am a PhD student and for a while now I've been designing and developing a distributed network protocol, which I call Rank, that enables dynamic resource allocation across heterogeneous nodes. It's designed to handle computational, network, and temporal resources in fully distributed environments without central controllers, but it can also handle a centralized environment. Rank implements four core functions: discovery (finding paths between nodes), estimation (evaluating resource availability), allocation (reserving resources), and sharing (allowing multiple services to use the same resources). What I think makes it unique is its ability to operate in completely decentralized environments with heterogeneous nodes, making it particularly valuable for edge computing, cloud gaming, distributed content delivery, vehicular communications, and grid computing scenarios. The protocol uses a bidding system where nodes evaluate their capability to fulfill resource requests on a scale from 0 to 1, enabling dynamic path selection based on current resource availability. I've implemented it in C++ and created a testing framework to validate its performance across different network topologies. This is still a work in progress and I am eager to publish results someday!
mdnahas · 2 months ago
This sounds interesting. If you want to discuss it or just want a proofreader, please email me. I have done research in both distributed algorithms and economics. hackernews@mike.nahasmail.com

If you don’t know about these already, read about “self-stabilizing algorithms”. They are fault-tolerant (under a certain definition), which is important in large distributed algorithms. I used one to build virtual networks with 10,000 nodes.
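For readers new to the idea, here is a minimal sketch (the code and simulation are mine, not from the comment) of Dijkstra's classic K-state self-stabilizing token ring: from any starting state, including a corrupted one, the ring converges to exactly one "privileged" node, i.e. one circulating token:

```python
def privileged(states):
    """Indices of privileged nodes (nodes currently holding a token)."""
    n = len(states)
    privs = [0] if states[0] == states[-1] else []   # node 0: equal to predecessor
    privs += [i for i in range(1, n) if states[i] != states[i - 1]]
    return privs                                     # always non-empty

def fire(states, i, k):
    """One move by privileged node i under a central daemon."""
    new = list(states)
    if i == 0:
        new[0] = (states[-1] + 1) % k    # node 0 increments mod k
    else:
        new[i] = states[i - 1]           # other nodes copy their predecessor
    return new

# Start from an arbitrary (faulty) state; Dijkstra requires k >= number of nodes.
states, k = [3, 1, 4, 1, 5], 7
for _ in range(100):
    states = fire(states, privileged(states)[0], k)
# After convergence, exactly one node is privileged, no matter the start state.
```

The fault tolerance here is exactly the "certain definition" caveat: the ring recovers from any transient corruption of its state, though not from, say, permanently crashed nodes.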

mdnahas commented on Ask HN: Who wants to be hired? (July 2025)    · Posted by u/whoishiring
mdnahas · 2 months ago
Location: Austin, Texas, USA
Remote: only short jobs
Willing to relocate: yes
Technologies: economics, financial prediction (bonds, stocks, real estate), distributed algorithms, multi-threaded algorithms, linear algebra, formal proof, many languages (C, C++, Java, JavaScript, Coq, OCaml, …)
Résumé/CV: https://www.linkedin.com/in/michael-nahas-1012232
Email: hackernews@mike.nahasmail.com

Designer of .par2 file format.

Successful quantitative trader (saved company $20k per day). Employee #7 for successful startup. Been analyzing real estate data for YIMBY non-profit.

I take hard problems and find the right math tools to tame them, then write the code to make it work in production.

mdnahas commented on My AI skeptic friends are all nuts   fly.io/blog/youre-all-nut... · Posted by u/tabletcorry
paradox242 · 3 months ago
I like Thomas, but I find his arguments include the same fundamental mistake I see made elsewhere. He acknowledged that the tools need an expert to use properly, and as he illustrated, he refined his expertise over many years. He is of the first and last generation of experienced programmers who learned without LLM assistance. How is someone just coming out of school going to get the encouragement and space to independently develop the experience they need to break out of the "vibe coding" phase? I can almost anticipate an interjection along the lines of "well we used to build everything with our hands and now we have tools etc, it's just different" but this is an order of magnitude different. This is asking a robot to design and assemble a shed for you, and you never even see the saw, nails, and hammer being used, let alone understand enough about how the different materials interact to get much more than a "vibe" for how much weight the roof might support.
mdnahas · 3 months ago
I wonder if that will make this the great generation of human coders. Some of our best writers came from the generation that spanned oral education and the mass production of books. Later generations read and wrote rather than memorized and spoke. I think that was Shakespeare’s genius. Maybe our best coders will be supercharged by AI, and subsequent ones enfeebled by it.

Shakespeare was also popular because he was published as books became popular. Others copied him.

mdnahas commented on My five-year experiment with UTC   timestripe.com/magazine/b... · Posted by u/adamci
mdnahas · 3 months ago
I’m convinced we need to move to a world time. The Internet is part of life and we have people coordinating around the globe. Time is time everywhere.

Sunlight is different everywhere. But not everything timed is tied to sunlight. There is no reason that the time should be 12:00 at peak sunlight locally.

UTC is an issue near the International Date Line. I could see residents of Pacific islands not wanting the date to change mid-day. We should think about adding hours 24, 25, 26, etc. to Wednesday that are equivalent to hours 0, 1, 2, etc. on Thursday.
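The extra-hours idea above can be sketched in a few lines of Python (a hypothetical normalization I wrote for illustration, not any real standard): an extended-hour label like Wednesday 25:30 names the same instant as Thursday 01:30.

```python
from datetime import datetime, timedelta

def extended_hour_to_instant(year, month, day, hour, minute=0):
    """Map an extended-hour label (hour may exceed 23) to the
    standard datetime naming the same instant."""
    return datetime(year, month, day) + timedelta(hours=hour, minutes=minute)

# Wednesday 2025-07-09 at hour 25:30 is the instant Thursday 2025-07-10 at 01:30.
extended_hour_to_instant(2025, 7, 9, 25, 30)
```

Islanders near the date line could then label evening instants with hours 24+ of the old date, while everyone agrees on which instant is meant.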

(And as a quant who did a lot of time programming for financial markets, fuck Daylight Saving Time and the leap second.)

mdnahas commented on The world could run on older hardware if software optimization was a priority   twitter.com/ID_AA_Carmack... · Posted by u/turrini
dahart · 3 months ago
The dumbest and most obvious of realizations finally dawned on me after trying to build a software startup that was based on quality differentiation. We were sure that a better product would win people over and lead to viral success. It didn’t. Things grew, but so slowly that we ran out of money after a few years before reaching break even.

What I realized is that lower costs, and therefore lower quality, are a competitive advantage in a competitive market. Duh. I’m sure I knew and said that in college and for years before my own startup attempt, but this time I really felt it in my bones. It suddenly made me realize exactly why everything in the market is mediocre, and why high quality things always get worse when they get more popular. Pressure to reduce costs grows with the scale of a product. Duh. People want cheap, so if you sell something people want, someone will make it for less by cutting “costs” (quality). Duh. What companies do is pay the minimum they need in order to stay alive & profitable. I don’t mean it never happens, sometimes people get excited and spend for short bursts, young companies often try to make high quality stuff, but eventually there will be an inevitable slide toward minimal spending.

There’s probably another name for this, it’s not quite the Market for Lemons idea. I don’t think this leads to market collapse, I think it just leads to stable mediocrity everywhere, and that’s what we have.

mdnahas · 3 months ago
These economic forces exist in math too. Almost every mathematician publishes informal proofs. These contain just enough discussion in English (or another human language) to convince a few other mathematicians in the same field that the idea is valid. But it is possible to make errors. There are other techniques: formal step-by-step proof presentations (e.g. by Leslie Lamport) or computer-checked proofs that would be more reliable. But almost no mathematician uses these.

u/mdnahas

Karma: 213 · Cake day: April 7, 2020