Readit News
donkeyboy commented on Reflections on My Tech Career – Part 1   randomascii.wordpress.com... · Posted by u/breppp
donkeyboy · 2 months ago
Want to read part 2!
donkeyboy commented on The Smol Training Playbook: The Secrets to Building World-Class LLMs   huggingface.co/spaces/Hug... · Posted by u/kashifr
abossy · 2 months ago
What others would you recommend that are comparable in quality?
donkeyboy · 2 months ago
The documentation for common AI packages is pretty good too. For example, the PyTorch docs, PEFT docs, and timm docs.
donkeyboy commented on Nvidia takes $1B stake in Nokia   cnbc.com/2025/10/28/nvidi... · Posted by u/kjhughes
lovelearning · 2 months ago
The radio access network (RAN) is the RF part of a mobile network: towers, base stations, the signals between our phones and the towers, and phone-to-satellite comms (non-terrestrial networks, or NTN).

AI-RAN uses AI/ML for adaptive behaviors and optimizations in all these links.

For example, fine-grained RF and modulation details, called channel state information (CSI), are constantly being exchanged between a phone and a base station. The volume of information creates transmission latencies. Using autoencoder models, this information can be semantically compressed to reduce its volume and then decoded with high fidelity on the other side.
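For intuition, here's a minimal PyTorch sketch of that autoencoder idea (the sizes and architecture are made up for illustration, not taken from any 3GPP spec):

    import torch
    import torch.nn as nn

    # Hypothetical sizes: a flattened CSI vector of 256 values,
    # compressed to a 32-dim code before transmission.
    CSI_DIM, CODE_DIM = 256, 32

    encoder = nn.Sequential(nn.Linear(CSI_DIM, 128), nn.ReLU(), nn.Linear(128, CODE_DIM))
    decoder = nn.Sequential(nn.Linear(CODE_DIM, 128), nn.ReLU(), nn.Linear(128, CSI_DIM))

    csi = torch.randn(1, CSI_DIM)                      # stand-in for measured channel state
    code = encoder(csi)                                # the phone transmits this smaller code
    reconstructed = decoder(code)                      # the base station decodes it
    loss = nn.functional.mse_loss(reconstructed, csi)  # reconstruction training objective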

That's just one example. In the upcoming 6G, RAN will be "AI-native", using AI/ML everywhere. The standards may require AI accelerator chips in most base stations, NTN satellites, phones, and other elements.

donkeyboy · 2 months ago
Thank you, the future is awesome!
donkeyboy commented on The case for the return of fine-tuning   welovesota.com/article/th... · Posted by u/nanark
simonw · 2 months ago
I ran a survey on Twitter over the past few days asking for successful case studies that produced economically valuable results from fine-tuning LLMs.

I ask a version of this every six months or so, and usually the results are quite disappointing.

This time I had more credible replies than I have had in the past.

Here's my thread with highlights: https://twitter.com/simonw/status/1979254349235925084

And in a thread viewer for people who aren't signed into Twitter: https://twitter-thread.com/t/1979254349235925084

Some of the most impressive:

Datadog got <500ms latency for their natural language querying feature, https://twitter.com/_brimtown/status/1979669362232463704 and https://docs.datadoghq.com/logs/explorer/search/

Vercel run custom fine-tuned models on v0 for Next.js generation: https://vercel.com/blog/v0-composite-model-family

Shopify have a fine-tuned vision LLM for analyzing product photos: https://shopify.engineering/leveraging-multimodal-llms

donkeyboy · 2 months ago
Fine-tuning is pretty much necessary for regression tasks. It's also useful for classification, since you can get the class probabilities directly in case you want to do some thresholding.
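For example, a sketch of that thresholding idea, assuming a Hugging Face sequence-classification checkpoint (the model name here is hypothetical):

    import torch
    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    # Hypothetical fine-tuned checkpoint; any sequence-classification model works.
    name = "my-org/my-finetuned-classifier"
    tokenizer = AutoTokenizer.from_pretrained(name)
    model = AutoModelForSequenceClassification.from_pretrained(name)

    inputs = tokenizer("Is this ticket urgent?", return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    probs = torch.softmax(logits, dim=-1)  # direct class probabilities

    # Thresholding: only act when the model is confident enough.
    if probs.max() >= 0.9:
        prediction = probs.argmax(dim=-1).item()
    else:
        prediction = None  # abstain / fall back to a human or a bigger model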
donkeyboy commented on There's Life Inside Earth's Crust   noemamag.com/theres-life-... · Posted by u/jprohov
donkeyboy · 8 months ago
From what I read, this post doesn't announce we've found some crazy extremophile unicellular microbe, just that there is evidence to suggest they are there (due to the chemical makeup of soil/boreholes).
donkeyboy commented on Utah becomes first US state to ban fluoride in its water   bbc.com/news/articles/c4g... · Posted by u/Jimmc414
0xEF · 9 months ago
Sorry for the late reply, but I'm wondering if you can explain why you tip for delivery?

In my area, pizza delivery drivers (read: not DoorDashers, etc. I am not sure what they make since I refuse to use those services) make about $12 - $15/hour and get paid for mileage (usually between $0.50 - $0.62 per mile.) I'm not seeing a reason to tip them. They are making well above minimum wage in my State, unlike the restaurant servers/bartenders that only just barely crested $4/hour as of 2025. The latter is in a position to rely on tips, the former is far from it.

I ask because we don't seem to have an established "hard line" on when tipping is appropriate in the United States, and when it is not. This extremely fuzzy understanding is allowing companies like DoorDash, coffee shops, etc to under pay their staff by off-loading part of the cost to the customer, which makes your $7 latte cost $10, or whatever. It's steamy bullshit and needs to be shoveled into the bin.

If we had a hard line on when tipping is justified, we'd quickly see a change in the other direction. I've always felt that the hard line should be "if you are making less than minimum wage, then tipping is justified." That's it. No soft maybes, no wishy-washy justifications.

That being the case, if a barista (avg $15/hour in the US) is not happy _without_ the tips, then they have two options: demand more from their employer, or find a different job that pays better. Either way, the employer is left to consider either raising wages to keep people satisfied, or doing the same just to keep people in the door and stay in business. The barista is, in essence, the face of the company. They do the work the customer sees, which makes them important to the sustainability of the company. Ergo, the company needs to put more resources in the barista's pocket to ensure quality work.

It sort of blows my mind why everyone else in the US does not think this way, but I have tried to dissect my own stance on tipping (from the standpoint of having spent nearly a decade working front-of-the-house in restaurants), and I'm really having trouble poking holes in my own logic. So, I'm always interested to hear other people's takes on why they tip the way they do.

donkeyboy · 9 months ago
Imagine it's raining, or they arrive really fast. Even if not, it is always expected to tip the person doing the delivery. That's just the custom, like tipping in a restaurant or tipping the bartender.
donkeyboy commented on 43-year-old Family Canoe Trip   paddlingmag.com/stories/f... · Posted by u/cameron_b
donkeyboy · 9 months ago
A very wholesome read. Thank you for sharing. I've never been that into the outdoors/camping/fishing, but it made me reflect on some of the adventure trips I'm doing right now while I'm still young. Maybe these will be talked about in my future family too.
donkeyboy commented on Bypass DeepSeek censorship by speaking in hex   substack.com/home/post/p-... · Posted by u/MedadNewman
timeattack · a year ago
The thing that I don't understand about LLMs at all is how it is possible for one to "understand" and reply in hex (or any other encoding) if it is a statistical "machine"? Surely, hex-encoded dialogues are not something that is readily present in the dataset? I can imagine that hex sequences "translate" to tokens, which are somewhat language-agnostic, but then why does the quality of replies drastically differ depending on which language you are trying to communicate with it in? How deep does that level of indirection go? What if it were double-encoded to hex? Triple?

If someone has insight, can you explain please?

donkeyboy · a year ago
I agree. And I think other comments don't understand how utterly difficult this is. I think that there is a translation tool underneath that translates into English. I wonder if it can also figure out binary ASCII or ROT13 text. Hex-to-letter would be a very funky translation tool to have.
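For reference, these encodings are trivial to produce and reverse mechanically outside the model; the surprising part is that the LLM can do the mapping from tokens alone. A quick Python sketch of what "speaking in hex" means:

    # What "speaking in hex" to a model actually means: plain text,
    # byte-encoded as hexadecimal digits.
    prompt = "What happened in 1989?"

    hex_prompt = prompt.encode("utf-8").hex()            # '57686174...'
    decoded = bytes.fromhex(hex_prompt).decode("utf-8")  # original text back

    # ROT13 is even simpler: a fixed letter substitution.
    import codecs
    rot13_prompt = codecs.encode(prompt, "rot13")        # 'Jung unccrarq va 1989?'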
donkeyboy commented on You could have designed state of the art positional encoding   fleetwood.dev/posts/you-c... · Posted by u/Philpax
throwawaymaths · a year ago
> I think concatenation wouldn’t work, as you indicate.

Why do you say that?

donkeyboy · a year ago
Concat could work too, although it's less efficient because you need to make a new tensor.

Actually, summing might learn a concat on its own. Imagine the embedding learned for a token takes up the first N-20 dimensions and leaves the last 20 dimensions as 0, while the positional encoding leaves the first N-20 dims as 0 and encodes its information in the last 20. Then when you sum, you are actually concatenating. So I think of them as equivalent, except that add is more efficient and preserves the dim space, while concat would grow the dim space. And for something like position, which certainly does not need to occupy 1000+ dimensions, it would be wasteful to concat all of that.
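A toy sketch of that equivalence (made-up sizes, with the token and position vectors living in disjoint dims):

    import torch

    N, P = 6, 2  # toy sizes: 6-dim embeddings, last 2 dims reserved for position

    tok = torch.tensor([1., 2., 3., 4., 0., 0.])  # token embedding: zeros in last P dims
    pos = torch.tensor([0., 0., 0., 0., 5., 6.])  # positional encoding: zeros elsewhere

    summed = tok + pos
    concat = torch.cat([tok[:N - P], pos[N - P:]])
    assert torch.equal(summed, concat)  # sum == concat when the supports don't overlap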

donkeyboy commented on What is the history of the use of "foo" and "bar" in source code examples? (2012)   softwareengineering.stack... · Posted by u/squircle
howard941 · a year ago
Nope. Not even for xyzzy.
donkeyboy · a year ago
Looks like xyzzy and plugh originated as magic words in the computer game Colossal Cave Adventure.

u/donkeyboy · Karma: 64 · Cake day: October 25, 2022