esjeon commented on Emailing a one-time code is worse than passwords   blog.danielh.cc/blog/pass... · Posted by u/max__dev
anonymars · 20 days ago
The email is coming from the legitimate service; it's a man-in-the-middle attack.

How does this scheme stop you from putting a legitimate code from a legitimate sender into an illegitimate website?

esjeon · 18 days ago
Ah, sorry, I did get that part; my idea just goes a bit further, but somehow I thought I had written enough.

One thing is that this problem occurs because we have two independent channels that we must verify independently. I’m pretty sure this is a whack-a-mole game that will never be fully fixable.

Another thing is that, since we don’t trust emails, we hesitate to send links over email. However, this problem is easy to avoid if services send login links directly to the user and those emails are automatically authenticated by the system.

esjeon commented on Emailing a one-time code is worse than passwords   blog.danielh.cc/blog/pass... · Posted by u/max__dev
esjeon · 20 days ago
The actual weak link here is not the procedure itself. It’s the fact that your email service will happily accept phishing emails into your inbox.

I’m pretty sure we can prevent this by issuing some kind of proof of agreement (with sender and recipient info) through email services. Joining a service becomes submitting a proof to the service, and any attempt to contact the user from the service side must be sealed with that proof. Mix in some signing and HMAC, and this should be doable. I mean, IF we really want to extend the email standard.
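
A rough sketch of the idea in Python (every name, field, and the shared key below is hypothetical; this assumes nothing about how a real email-standard extension would look):

```python
import hashlib
import hmac
import json

# Hypothetical sketch: at signup, the user's email provider issues a
# "proof of agreement" binding the sender (service) and recipient (user).
# Later mail from the service must carry a seal derived from that proof,
# or the provider refuses to deliver it to the inbox.

PROVIDER_KEY = b"secret held by the user's email provider"  # illustrative only


def issue_proof(service_domain: str, user_address: str) -> dict:
    """Issued once, when the user joins the service."""
    agreement = {"sender": service_domain, "recipient": user_address}
    payload = json.dumps(agreement, sort_keys=True).encode()
    seal = hmac.new(PROVIDER_KEY, payload, hashlib.sha256).hexdigest()
    return {**agreement, "seal": seal}


def mail_is_sealed(mail_from: str, mail_to: str, seal: str) -> bool:
    """Checked by the provider for every incoming mail claiming that sender."""
    payload = json.dumps({"sender": mail_from, "recipient": mail_to},
                         sort_keys=True).encode()
    expected = hmac.new(PROVIDER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, seal)


proof = issue_proof("example-service.com", "user@example.org")
assert mail_is_sealed("example-service.com", "user@example.org", proof["seal"])
assert not mail_is_sealed("phisher.example", "user@example.org", proof["seal"])
```

In practice the seal would presumably be asymmetric (the service signs, the provider verifies a public key) rather than a shared HMAC key, but the shape of the check is the same.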

esjeon commented on Replacing tmux in my dev workflow   bower.sh/you-might-not-ne... · Posted by u/elashri
esjeon · a month ago
Interestingly enough, suckless folks took the opposite approach with their terminal:

> Goals … Do not reimplement tmux and his comrades.

( From https://st.suckless.org/goals/ )

esjeon commented on Zig Interface Revisited   williamw520.github.io/202... · Posted by u/ww520
hmry · a month ago
I strongly dislike "this feature could be abused, so we won't add it" as reasoning for language design decisions. It just doesn't sit right with me. I think designing to avoid "misuse" (i.e. accidentally shooting yourself in the foot) is great, but avoiding "abuse" just reads as imposing your taste onto all users of your language: "I don't like this, so nobody should be able to do it."

But oh well, if you're using Zig (or any other language using auteur-driven development like Odin or Jai or C3) you've already signed up for only getting the features that the benevolent dictator thinks are useful. You take the good (tightly designed with no feature bloat) and the bad ("I consider this decision unlikely to be reversed").

esjeon · a month ago
> avoiding "abuse" just reads as imposing your taste onto all users of your language.

I believe languages are all about bias. A language must represent the preference of the community using it.

We should all learn from the case of Lisp, which is the simplest language and likely the most expressive one. Its community suffered from serious fragmentation, driven by the sheer simplicity of the language and tons of NIH syndrome. It took them 30 years to get a standard Common Lisp, and another 20 years to get a de facto standard package repository (Quicklisp).

esjeon commented on OpenAI claims gold-medal performance at IMO 2025   twitter.com/alexwei_/stat... · Posted by u/Davidzheng
esjeon · a month ago
I get the feeling that modern computer systems are so powerful that they can solve almost any well-explored closed problem with a properly tuned model. The problem lies in efficiency, reliability, and cost. Increasing efficiency and reliability would require an exponential increase in cost. QC might address the cost part, and symbolic reasoning models would significantly boost both efficiency and reliability.
esjeon commented on Evaluating publicly available LLMs on IMO 2025   matharena.ai/imo/... · Posted by u/hardmaru
esjeon · a month ago
> For Problem 5, models often identified the correct strategies but failed to prove them, which is, ironically, the easier part for an IMO participant. This contrast ... suggests that models could improve significantly in the near future if these relatively minor logical issues are addressed.

Interesting, but I'm not sure this is really due to "minor logical issues". It sounds like a failure caused by a lack of actual understanding (the world-model problem). The models' actual answers might hold some hints, but I can't find them.

(EDIT: ooops, found the output on the main page of their website. Didn't expect that.)

> Best-of-n is Important ... the models are surprisingly effective at identifying the relative quality of their own outputs during the best-of-n selection process and are able to look past coherence to check for accuracy.

Yes, it's always easier to be a backseat driver.

esjeon commented on I avoid using LLMs as a publisher and writer   lifehacky.net/prompt-0b95... · Posted by u/tombarys
tombarys · a month ago
I am a book publisher & I love technology. It can empower people. I have been using LLM chatbots since they became widely available. I regularly test machine translation at our publishing house in collaboration with our translators. I have just completed two courses in artificial intelligence and machine learning at my alma mater, Masaryk University, and I am training my own experimental models (for predicting bestsellers :). I consider machine learning to be a remarkable invention and catalyst for progress. Despite all this, I have my doubts.
esjeon · a month ago
I know a publisher who translates books (English to Korean). He works alone these days. Using GPT, he can produce a decent-quality first draft within a day or two. His later steps are also vastly accelerated because GPT reliably catches typos and grammar errors. It doesn't take more than a month to translate and print a book from scratch. Marvelous.

But I still don't like that the same model struggles w/ my projects...

esjeon commented on I avoid using LLMs as a publisher and writer   lifehacky.net/prompt-0b95... · Posted by u/tombarys
metalrain · a month ago
A pretty similar view to what others have expressed, in the vein of "LLMs can be good, just not at my [area of expertise]".
esjeon · a month ago
I'm pretty sure they were generally (if not completely) correct when they said that.

Either the tech is advancing so quickly that many people can't keep up, or the cost of adapting simply outweighs the potential payoff over the rest of their careers, even when taking the new tech into account.

esjeon commented on I deleted my second brain   joanwestenberg.com/p/i-de... · Posted by u/MrVandemar
esjeon · 2 months ago
I've done the same several times with different media. I've used notebooks, wikis, post-its, Obsidian, etc, to organize my thoughts and ideas, but in the end, I've rarely revisited them.

Don't get me wrong - it's still critical to keep track of important information in one way or another. But your own thoughts usually aren't part of that. You are always you, so given the same situation, your future self will likely come up with the same idea you have now (or something even better). That's why keeping track of quick ideas rarely bears fruit.

What you really need to track is unusual information:

- something not from you

- something you can't easily reproduce

- something that sparks new ideas you wouldn't have on your own

In other words, keep the sources of your ideas, not the ideas themselves. This leads to a much better signal-to-noise ratio, because you're more likely to consume well-formulated information, which is at least much better written than your scattered quick notes.

esjeon commented on We accidentally solved robotics by watching 1M hours of YouTube   ksagar.bearblog.dev/vjepa... · Posted by u/alexcos
jjangkke · 2 months ago
Very good point! This area faces a similar misalignment of goals, in that it tries to be a generic one-size-fits-all solution, a problem that is rampant with today's LLMs.

We made a sandwich, but it cost 10x more than it would for a human and was slower. It might slowly become faster and more efficient, but by the time the system gets really good at it, the skill simply isn't transferable unless the model can genuinely make the leap into other domains the way humans naturally do.

I'm afraid this is where the barrier between general intelligence and human intelligence lies. With enough of these geospatial motor-skill databases, we might get something that mimics humans very well but still runs into problems at the edges, and this last-mile problem really is a hindrance in so many domains where we come close but never finish.

I wonder if this will change with some shift in computing, as well as in how we interface with digital systems (without a mouse or keyboard); that might close the "last mile" gap.

esjeon · 2 months ago
Note that the username here is a Korean derogatory term for Chinese people.
