Readit News
lordofmoria commented on OpenAI to buy AI startup from Jony Ive   bloomberg.com/news/articl... · Posted by u/minimaxir
lordofmoria · 3 months ago
I don't understand all the pessimism and incredulity about the valuation. This is an acquisition to take on and disrupt Apple.

Ive + Altman is perceived as a viable successor to the Ive + Jobs partnership that made Apple successful.

Apple is weak and doesn't seem capable of innovating anymore, nor do they seem to understand how to build AI into products.

There's an opportunity to build an Apple-sized hardware wearables company with AI at its core, just as Altman built ChatGPT and disrupted Google-sized search.

"Apple-sized" more than justifies a 5B valuation.

lordofmoria commented on I stopped using AI code editors   lucianonooijen.com/blog/w... · Posted by u/kiyanwang
sceptic123 · 5 months ago
What are the safety checks you talk about here?
lordofmoria · 5 months ago
Sorry, I probably phrased that poorly: when you're coding with AI, you should get in the habit of spending more time checking for security mistakes. Not sanitizing inputs, not scoping access properly, and so on. These are the same mistakes a junior or mid-level engineer would make, but unlike them, the AI will not doubt itself and flag particular code it wrote, asking "is this right?". So you need to develop the habit of being careful.
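To make the "not sanitizing" point concrete, here's a minimal sketch (the names and the commented-out `conn` call are hypothetical, just for illustration) of the classic mistake AI-generated code will happily make: interpolating user input straight into a query.

```ruby
# Simulated attacker-controlled input.
user_input = "x'; DROP TABLE users; --"

# Unsafe: string interpolation puts the payload directly into the SQL.
unsafe_sql = "SELECT * FROM users WHERE name = '#{user_input}'"

# Safe alternative (sketch): pass the value as a bound parameter so the
# driver escapes it, e.g. with a hypothetical connection object:
#   conn.exec_params("SELECT * FROM users WHERE name = $1", [user_input])

puts unsafe_sql.include?("DROP TABLE")  # => true: the payload made it into the query
```

An AI assistant will often emit the first form without comment, which is exactly why the habit of reviewing for this matters.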
lordofmoria commented on I stopped using AI code editors   lucianonooijen.com/blog/w... · Posted by u/kiyanwang
lordofmoria · 5 months ago
What? Note for any juniors reading this: DO NOT TRY THIS AT HOME.

Does the author enjoy writing code primarily because they enjoy typing?

Do they lack the mental discipline to think and problem-solve while using the living heck out of AI autocomplete?

What's the fun in manually typing out code that has literally been written, copied, copied again, and then re-copied so many times before that the LLM can predict it?

Isn't it more dangerous to not learn the new patterns / habits / research loops / safety checks that come with AI coding? Surely their future career success will depend on it, unless they are currently working in a very, very niche area that is guaranteed to last the rest of their career.

I'm sorry, this is a truly unnatural and absurd reaction to a very natural feeling of being out of our comfort zone because technology has advanced, which we are currently all feeling.

lordofmoria commented on Are LLMs able to notice the “gorilla in the data”?   chiraaggohel.com/posts/ll... · Posted by u/finding_theta
forgotusername6 · 7 months ago
I had a recent similar experience with ChatGPT and a gorilla. I was designing a rather complicated algorithm, so I wrote out all the steps in words and then asked ChatGPT to verify that it made sense. It said it was well thought out, logical, etc. My colleague didn't believe it was really reading it properly, so I inserted a step in the middle, "and then a gorilla appears", and asked it again. Sure enough, it again came back saying it was well thought out, etc. When I questioned it on the gorilla, it merely replied that it thought the step was meant to be there, that it was a technical term or a codename for something...
lordofmoria · 7 months ago
The fundamental problem here is lack of context - a human at your company reading that text would immediately know that Gorilla was not an insider term, and it’d stick out like a sore thumb.

But imagine a new employee eager to please - you could easily imagine them OK’ing the document and making the same assumption the LLM did - “why would you randomly throw in that word if it wasn’t relevant”. Maybe they would ask about it though…

Google search has the same problem as LLMs: some meanings of a query cannot be disambiguated from the context in the query itself, but the algorithm has to best-guess anyway.

The cheaper input tokens for LLMs get, and the larger the context window grows, the more context you can throw in the prompt, and the more often these ambiguities can be resolved.

Imagine, in your gorilla-in-the-steps example, that the LLM was given the steps but you also included the full text of your Slack, Notion, and Confluence as reference in the prompt. It might succeed. I do think this is a weak point in LLMs, though: they seem to really, really not like correcting you unless you display a high degree of skepticism, and then they swing to the opposite extreme and make up problems just to please you. I'm not sure how the labs are planning to solve this...

lordofmoria commented on Apple Passwords’ generated strong password format   rmondello.com/2024/10/07/... · Posted by u/tambourine_man
chrisshroba · 10 months ago
71 bits of entropy feels rather low...

It seems like many recommendations are to use at least 75-100, or even 128. Being fairly conservative, if you had 10k hosts hashing 1B passwords a second, it would take 7.5 years worst case to crack [1]. If a particular site neglects to use a slow hash function and salting, it's easy to imagine bad actors precomputing rainbow tables that would make attacks relatively easy.

You can rebut that that's still a crazy amount of computation needed, but since it's reusable, I find it easy to believe it's already being done. For comparison, if the passwords have 100 bits of entropy, it would take those same 10k servers over 4 billion years to crack the password.

[1]: 2^71 / 1e9 / 10000 / (60*60*24*365) ≈ 7.5
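For what it's worth, the footnote's arithmetic checks out. A quick sanity check in Ruby:

```ruby
# 2^71 worst-case guesses, spread over 10,000 hosts at 1e9 hashes/sec each,
# converted from seconds to years.
years = 2.0**71 / 1e9 / 10_000 / (60 * 60 * 24 * 365)
puts years.round(1)  # => 7.5
```

The same formula with 100 bits gives roughly 4 billion years, matching the comparison above.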

lordofmoria · 10 months ago
I think the assumption is that this is going into a somewhat modern hashing algorithm like Argon2, bcrypt (created in 1999, a quarter-century ago), or scrypt, with a salt. With those assumptions, the calculations aren't reusable, and they definitely don't run at 1B passwords/second.

If that's not true and the password is being stored using MD5 (something NIST has disallowed for this purpose for over a decade), then honestly all bets are off, and even 128 bits of entropy might not be enough.
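The slow-hash assumption changes the arithmetic by orders of magnitude. A sketch, where the ~10k hashes/sec/host figure for a bcrypt-class hash is an assumed ballpark, not a benchmark:

```ruby
SECONDS_PER_YEAR = 60.0 * 60 * 24 * 365

# Worst-case brute-force time for a given entropy and attack rate.
def years_to_crack(bits, hosts:, hashes_per_sec:)
  2.0**bits / (hosts * hashes_per_sec) / SECONDS_PER_YEAR
end

fast = years_to_crack(71, hosts: 10_000, hashes_per_sec: 1e9)  # unsalted fast hash
slow = years_to_crack(71, hosts: 10_000, hashes_per_sec: 1e4)  # bcrypt-class, assumed rate

puts format("fast hash: %.1f years", fast)  # ≈ 7.5
puts format("slow hash: %.0f years", slow)  # ≈ 750,000
```

Salting separately kills the rainbow-table reuse argument, since each precomputed table only works against one salt.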

lordofmoria commented on What's New in Ruby on Rails 8   blog.appsignal.com/2024/1... · Posted by u/amalinovic
rgbrgb · a year ago
you might mean shopify, not spotify. I think spotify is python/go, whereas shopify was started by a rails core contributor and probably has the biggest rails deployment in the world
lordofmoria · a year ago
Yes, edited!
lordofmoria commented on What's New in Ruby on Rails 8   blog.appsignal.com/2024/1... · Posted by u/amalinovic
noobermin · a year ago
People just don't talk about Ruby anymore. For those who don't do webdev, has it just stabilised and people take it for granted now, or was it always hype and people have long since moved on?
lordofmoria · a year ago
Rails went through a down period 2014-2020 due to several reasons:

1. React burst on the scene in 2014

2. the hyperscale FANG companies were dominating the architecture meta with microservices, tooling, etc., which worked for them at 500+ engineers but made no sense for smaller companies.

3. there was a growing perception that "Rails doesn't scale" as selection bias kicked in: companies that successfully used Rails to grow eventually got big enough to justify migrating off to microservices, or whatever else.

4. Basecamp got caught up in the DEI battles and got a ton of bad press at the height of it.

5. Ruby was legitimately seen as slow.

The big companies that stuck with Rails (GitHub, Shopify, GitLab, etc.) did a ton of work to fix Ruby perf, and it shows. Shopify in particular deserves an enormous amount of credit for keeping Ruby and Rails going. Their continued existence proves that Rails does, in fact, scale.

Also the meta - tech-architecture and otherwise - seems to be turning back to DHH's favor, make of that what you will.

lordofmoria commented on What's New in Ruby on Rails 8   blog.appsignal.com/2024/1... · Posted by u/amalinovic
cdiamand · a year ago
Anybody have any opinions on moving away from Redis for cables/caching/jobs?

I supposed it'd be nice to have one less thing to manage, but I'm wondering if there are any obvious gotchas to moving these features over to sqlite and postgresql.

lordofmoria · a year ago
I had a bad experience with Action Cable + Redis (extremely high memory overhead, tons of dead Redis connections), so it's a bit "fool me once" with regard to Action Cable.

The main argument for caching in the DB (the slight increase in latency going from in-memory to DB is more than countered by the DB's cheap cache space, letting you hold a far bigger cache) is one of those brilliant ideas I'd like to try at some point.
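The tradeoff can be sketched with expected read latency; every number below is an illustrative assumption, not a benchmark:

```ruby
# Expected latency = hit_rate * cache_hit_cost + (1 - hit_rate) * miss_cost,
# where a miss means recomputing the value via real DB queries.
def expected_latency_ms(hit_rate:, cache_ms:, miss_ms:)
  hit_rate * cache_ms + (1 - hit_rate) * miss_ms
end

# Small in-memory cache: very fast hits, but limited space means more misses.
small_cache = expected_latency_ms(hit_rate: 0.80, cache_ms: 0.3, miss_ms: 20)

# Big disk/DB-backed cache: slower hits, but cheap space means far fewer misses.
big_cache = expected_latency_ms(hit_rate: 0.98, cache_ms: 1.5, miss_ms: 20)

puts format("small in-memory cache: %.2f ms", small_cache)  # => 4.24
puts format("large DB-backed cache: %.2f ms", big_cache)    # => 1.87
```

Under these assumed numbers, the bigger-but-slower cache wins on average, which is the whole pitch.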

Solid Queue: I'm just 100% happy with Sidekiq at this point, and I don't understand why I'd switch and introduce potential instability/issues.

lordofmoria commented on Jensen's Inequality as an Intuition Tool (2021)   blog.moontower.ai/jensens... · Posted by u/sebg
lordofmoria · a year ago
This was interesting, especially the DCF example at the end. It's pertinent to business sell decisions (assuming your ownership structure allows you to make one): should I sell at an 8x multiple of revenue, or hold at an X% growth rate and Y% cash flow? What's my net after 10 years?

The point of Jensen’s inequality if I understand correctly is that you’d underestimate the value of holding using a basic estimate approach, because you’ll underestimate the compounding cash flow from growth?

lordofmoria commented on Human drivers are to blame for most serious Waymo collisions   understandingai.org/p/hum... · Posted by u/2bluesc
fragmede · a year ago
The raw data is available for download and you can compare not getting into any accidents to their number of accidents per however many hundreds of thousands of miles.

https://waymo.com/safety/impact/#downloads

There isn't much to the data available for download, but it looks like 0.00001207261588 accidents per mile, or ~1.2 accidents per 100,000 miles (268/22199000). Figuring your father drives 15k miles per year, times 30 years and rounding up to 500k miles, Waymo has a recorded 6 accidents to your father's 0.

Not sure why that's an interesting comparison, however.

Assuming your dad is good at not driving when he shouldn't (tired/drunk/angry), he's not on the road when it's worrisome. I don't worry about getting into accidents with drivers who aren't on the road, I worry about the tired/drunk/angry drivers I do have to share the road with. Waymo at 2:15am after the bars let out is much less worrisome than any other car at that time, because I have no idea who's in that other car. Your father could be the safest driver ever, but I have no idea if it's him in the other car, or if that driver is totally blacked out and shouldn't be driving.
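The quoted rate follows directly from the downloaded figures:

```ruby
# From the Waymo safety-impact download cited above.
accidents = 268
miles = 22_199_000

per_100k = accidents.to_f / miles * 100_000
puts per_100k.round(2)  # => 1.21 accidents per 100,000 miles

# Over ~500k lifetime miles (15k miles/year for 30 years, rounded up):
puts (per_100k * 5).round  # => 6 expected accidents
```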

lordofmoria · a year ago
Thanks for doing the math and making this concrete!

I think it’s interesting because:

1) It gives Waymo a higher target to shoot for: it hasn't "solved" self-driving just because it's safer than the average driver. I am so impressed by Waymo, but some of this article smacked of premature "mission accomplished" vibes; accepting the comparison to the average driver without caveat is an example of that.

2) As a matter of policy, everyone can agree that a Waymo ride home for the tipsy is good, but where policy will have trouble is convincing good drivers such as my dad to take Waymos everywhere. Not to mention most drivers irrationally think they're way better than average, and that will affect policy in a real way.

u/lordofmoria

Karma: 913 · Cake day: December 11, 2012