lukeramsden commented on Show HN: Llama 3.3 70B Sparse Autoencoders with API access   goodfire.ai/papers/mappin... · Posted by u/trq_
lukeramsden · a year ago
Why are AI researchers constantly handicapping everything they do under the guise of "safety"? It's a bag of data and some math algorithms that generate text...
lukeramsden commented on Updates to H-1B   uscis.gov/newsroom/news-r... · Posted by u/sul_tasto
addicted · a year ago
Looks like that's going through the legal system, so it's already illegal?

You can’t make a rule that says “hey don’t break the rules”.

That seems logically fallacious.

lukeramsden · a year ago
Can't read the full article due to the paywall, but ostensibly it's due to rules on racial bias and not visa rules? The fact that visa abuse is being backstopped by unrelated rules doesn't mean the visa rules themselves shouldn't be fixed.
lukeramsden commented on Test Driven Development (TDD) for your LLMs? Yes please, more of that please   blog.helix.ml/p/building-... · Posted by u/lewq
bdangubic · a year ago
I just mock the answers and assert on the mock of the answer - never fails!
lukeramsden · a year ago
You joke but I have seen this far too many times... in very highly valued startups...

100% line coverage though!

lukeramsden commented on I Followed the Official AWS Amplify Guide and Was Charged $1,100   elliott-king.github.io/20... · Posted by u/thunderbong
soup10 · a year ago
Spend limits are such an obvious and necessary feature that the only reason they don't have them is shady business practices.
lukeramsden · a year ago
Not really. Do you think this is trivial at AWS scale? What do you do when people hit their hard spend limit? Start shutting down their EC2 instances and deleting their data? I can see the argument that just because it's "hard" doesn't mean they shouldn't do it, but it's disingenuous to say they're shady because they don't.
lukeramsden commented on DJI ban passes the House and moves on to the Senate   dronedj.com/2024/06/14/dj... · Posted by u/huerne
chme · 2 years ago
Any concrete examples you are referring to?
lukeramsden · 2 years ago
You can start with the number of unicorns in the USA vs Europe, especially when you take population into account https://www.failory.com/unicorns
lukeramsden commented on Why new hires often get paid more than existing employees   bloomberry.com/why-new-hi... · Posted by u/altdataseller
torginus · 2 years ago
Well, congrats to your org for having integrity - but the math is simple: let's say the median employee has been at the company for 2 years, during which his market rate has jumped by 20%.

To hire a new employee at market rate without underpaying the existing team, you have to give your team of, let's say, 10 employees that same raise, which is equivalent to hiring another 2 people for 'nothing'.

lukeramsden · 2 years ago
If you can’t afford to pay the current employees that raise, you certainly can’t afford new people - the market rate is the market rate.
lukeramsden commented on It's not microservice or monolith; it's cognitive load   fernandovillalba.substack... · Posted by u/DevOpsy
jiggawatts · 2 years ago
Something I’ve wanted for a while now is a language / framework that behaves like networked micro services but without the network overheads.

E.g.: the default hosting model might be to have all of the services in a single process with pass-by-copy messages. One could even have multiple instances of a service pinned to CPU cores, with hash-based load balancing so that L2 and L3 caches could be efficiently utilised.

The “next tier” could be a multi-process host with shared memory. E.g.: there could be permanent “queue” and “cache” services coupled to ephemeral Web and API services. That way, each “app” could be independently deployed and restarts wouldn’t blow away terabytes of built up cache / state. One could even have different programming languages!

Last but not least, scale out clusters ought to use RDMA instead of horrifically inefficient JSON-over-HTTPS.

Ideally, the exact same code ought to scale to all three hosting paradigms without a rewrite (but perhaps a recompile).

Some platforms almost-but-not-quite work this way, such as EJB hosts — they can short circuit networking for local calls. However they’re not truly polyglot as they don’t support non-JVM languages. Similarly Service Fabric has some local-host optimisations but they’re special cases. Kubernetes is polyglot but doesn’t use shared memory and has no single-process mode.

lukeramsden · 2 years ago
The best way to do this is message passing. My current way of doing it is using Aeron[0] + SBE[1] to pass messages very efficiently between "services" - you can then configure it either to use local shared memory (/dev/shm) or to replicate the log buffer over the network to another machine (rough sketch below).

[0]: https://aeroncookbook.com/aeron/overview/ [1]: https://aeroncookbook.com/simple-binary-encoding/overview/
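
A minimal sketch of the pattern being described, assuming the standard Aeron Java client; the channel string, stream ID, and payload here are illustrative, not the commenter's actual setup. The point is that moving from same-host shared memory to the network is just a change to the channel string, while the publish/subscribe code stays identical. In a real system the payload would be SBE-encoded rather than a raw ASCII string.

```java
import io.aeron.Aeron;
import io.aeron.Publication;
import io.aeron.Subscription;
import io.aeron.driver.MediaDriver;
import io.aeron.logbuffer.FragmentHandler;
import org.agrona.concurrent.UnsafeBuffer;

import java.nio.ByteBuffer;

public class AeronIpcSketch {
    public static void main(String[] args) {
        // Embedded media driver; its log buffers live in shared memory (/dev/shm on Linux).
        try (MediaDriver driver = MediaDriver.launchEmbedded();
             Aeron aeron = Aeron.connect(new Aeron.Context()
                     .aeronDirectoryName(driver.aeronDirectoryName()))) {

            // Same-host, shared-memory transport. To replicate over the network instead,
            // swap the channel for e.g. "aeron:udp?endpoint=other-host:40123" -
            // everything below stays the same.
            String channel = "aeron:ipc";
            int streamId = 1001;

            try (Publication pub = aeron.addPublication(channel, streamId);
                 Subscription sub = aeron.addSubscription(channel, streamId)) {

                // Illustrative payload; a real service would write an SBE-encoded message here.
                UnsafeBuffer buffer = new UnsafeBuffer(ByteBuffer.allocateDirect(256));
                int length = buffer.putStringAscii(0, "price-update|ACME|42.00");

                // offer() returns a negative value on back pressure or before the
                // subscriber connects, so retry until the log buffer accepts the message.
                while (pub.offer(buffer, 0, length) < 0) {
                    Thread.yield();
                }

                FragmentHandler handler = (buf, offset, len, header) ->
                        System.out.println("received: " + buf.getStringAscii(offset));

                // Poll until the single fragment has been delivered.
                while (sub.poll(handler, 10) == 0) {
                    Thread.yield();
                }
            }
        }
    }
}
```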

lukeramsden commented on The big TDD misunderstanding (2022)   linkedrecords.com/the-big... · Posted by u/WolfOliver
MoreQARespect · 2 years ago
I'm fully aware of the idea that TDD is a "design practice" but I find it to be completely wrongheaded.

The principle that tests which couple to low-level code give you feedback about tightly coupled code is true, but only because low-level/unit tests couple too tightly to your code - i.e. because they too are bad code!

Have you ever refactored working code into working code and had a slew of tests fail anyway? That's the child of test driven design.

High-level/integration TDD doesn't give "feedback" on your design; it just tells you whether your code matches the spec. This is actually more useful. It then lets you refactor bad code with a safety harness and gives you failures that actually mean failure, rather than just "the code changed".

I keep wishing for the idea of test-driven design to die. Writing tests which break on working code is an inordinately uneconomical way to detect design issues compared to developing an eye for it and fixing it under a test harness that has no opinion on your design.

So, yes, this - high-level test-driven development - is TDD, and moreover it has a better cost/benefit trade-off than test-driven design.

lukeramsden · 2 years ago
Part of test-driven design is using the tests to drive out a sensible, easy-to-use interface for the system under test, and to make it testable from the get-go (not too much non-determinism, threading issues, and so on). It's well known that you should likely _delete these tests_ once you've written higher-level ones that test behaviour rather than implementation! But the best and quickest way to get to high-quality _behaviour_ tests is to start with "implementation tests" to make sure you have an easily testable system, and then go from there.
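
A toy JUnit-style sketch of that distinction (PriceCalculator and its methods are hypothetical names, not from the thread). The first test pins an intermediate calculation that exists for the implementation's convenience - useful while driving out the design, and a candidate for deletion once the second, behaviour-level test exists, because it will break on internal refactors even when observable behaviour is unchanged.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

// Hypothetical system under test: the small, dependency-light shape
// that writing tests first tends to push you towards.
class PriceCalculator {
    private final double vatRate;

    PriceCalculator(double vatRate) { this.vatRate = vatRate; }

    double netOf(double gross) { return gross / (1 + vatRate); }      // internal step
    double vatOn(double gross) { return gross - netOf(gross); }       // internal step
    double totalWithVat(double net) { return net * (1 + vatRate); }   // what callers care about
}

class PriceCalculatorTest {
    // "Implementation test": couples to an intermediate step of the calculation.
    // Handy while shaping the design; delete once the behaviour below is covered.
    @Test
    void vatIsGrossMinusNet() {
        PriceCalculator calc = new PriceCalculator(0.20);
        assertEquals(20.0, calc.vatOn(120.0), 1e-9);
    }

    // "Behaviour test": states only what callers can observe, so it keeps
    // passing when the internals are refactored.
    @Test
    void addsTwentyPercentVatToTheNetPrice() {
        PriceCalculator calc = new PriceCalculator(0.20);
        assertEquals(120.0, calc.totalWithVat(100.0), 1e-9);
    }
}
```
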
lukeramsden commented on Why scalpers can get tickets   404media.co/why-scalpers-... · Posted by u/bookofjoe
TeMPOraL · 2 years ago
I'm honestly not sure anymore whether scalping should be outlawed. I'm increasingly convinced that it's only a problem because everyone is in on it, with artists using scalpers as "heat sinks" to get more money while pretending they're the victims, and everyone is well compensated for the role they play in the scheme. If that's the case, then the entire structure is the problem, and it starts with artists exploiting an audience looking for moral/spiritual role models.
lukeramsden · 2 years ago
I did some research into this this year, in the context of maybe trying to start a business to solve it - and this was the conclusion I came to. There are lots of threads here on HN about it too. It's a structural, market-wide issue where the primary service Ticketmaster provides is reputation laundering, and in return, large agents and promoters agree to keep using Ticketmaster despite their reputation.

I don’t know what the solution is.

lukeramsden commented on I don’t buy “duplication is cheaper than the wrong abstraction” (2021)   codewithjason.com/duplica... · Posted by u/Akronymus
lvncelot · 2 years ago
> Except in very minor cases, duplication is virtually always worth fixing.

I disagree with the severity of this, and would posit that there are duplications that can't be "fixed" by an abstraction.

There are many instances I've encountered where two pieces of code happened to look similar at a certain point in time. As the codebase evolved, so did the two pieces of code, their usage and their dependencies, until the similarity was almost gone. An early abstraction that would've grouped those coincidentally similar pieces of code would then have to stretch to cover both evolutions.

A "wrong abstraction" in that case isn't an ill-fitting abstraction where a better one was available, it's any (even the best possible) abstraction in a situation that has no fitting generalization, at all.

lukeramsden · 2 years ago
> There are many instances I've encountered where two pieces of code happened to look similar at a certain point in time. As the codebase evolved, so did the two pieces of code, their usage and their dependencies, until the similarity was almost gone

https://connascence.io/
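
A toy sketch of that kind of coincidental similarity (the class and method names are hypothetical, not from the article): two checks that are textually identical today but exist for unrelated reasons. Extracting a shared "over 100" abstraction would create connascence between a marketing decision and a tax rule, forcing them to change together even though they evolve independently.

```java
// Hypothetical example: two rules that happen to share a shape today.
class ShippingPolicy {
    // Promotional threshold - changes whenever marketing runs a new campaign.
    boolean qualifiesForFreeShipping(double orderTotal) {
        return orderTotal >= 100.0;
    }
}

class TaxPolicy {
    // Statutory threshold - changes only when the legislation does.
    boolean attractsLuxuryTax(double itemPrice) {
        return itemPrice >= 100.0;
    }
}

class Demo {
    public static void main(String[] args) {
        System.out.println(new ShippingPolicy().qualifiesForFreeShipping(120.0)); // true
        System.out.println(new TaxPolicy().attractsLuxuryTax(80.0));              // false
    }
}
```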

u/lukeramsden

Karma: 569 · Cake day: May 24, 2018
About
[ my public key: https://keybase.io/lukeramsden; my proof: https://keybase.io/lukeramsden/sigs/yOmLLQNb5tgyWumuoGoSbYceXwfNgFxR13YBPpJyUVg ]