concats commented on Bitchat – A decentralized messaging app that works over Bluetooth mesh networks   github.com/jackjackbits/b... · Posted by u/ananddtyagi
moneywaters · 2 months ago
I’ve been toying with a concept inspired by Apple’s Find My network: Imagine a decentralized, delay-tolerant messaging system where messages hop device-to-device (e.g., via Bluetooth, UWB, Wi-Fi Direct), similar to how “Find My” relays location via nearby iPhones.

Now add a twist: • Senders pay a small fee to send a message. • Relaying devices earn a micro-payment (could be tokens, sats, etc.) for carrying the message one hop further. • End-to-end encrypted, fully decentralized, optionally anonymous.

Basically, a “postal network” built on people’s phones, without needing a traditional internet connection. Works best in areas with patchy or no internet, or under censorship.

Obvious challenges: • Latency and reliability (it’s not real-time). • Abuse/spam prevention. • Power consumption and user opt-in. • Viable incentive structures.

What do you think? Is this viable? Any real-world use cases where this might be actually useful — or is it just a neat academic toy?
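The fee-per-hop scheme above can be sketched as a toy store-and-forward loop (all names, fee values, and the budget unit are hypothetical, just to make the incentive mechanics concrete):

```python
from dataclasses import dataclass, field

@dataclass
class Message:
    """An end-to-end encrypted payload carried hop by hop."""
    payload: bytes   # ciphertext; relays never see the plaintext
    budget: int      # remaining postage, e.g. in sats
    hop_fee: int     # micro-payment earned per relay hop
    path: list = field(default_factory=list)  # relay ids (sketch only)

def relay(msg: Message, relay_id: str) -> bool:
    """Carry the message one hop further if the budget covers the fee."""
    if msg.budget < msg.hop_fee:
        return False             # out of postage: drop (delay-tolerant, not guaranteed)
    msg.budget -= msg.hop_fee    # this relay earns hop_fee
    msg.path.append(relay_id)
    return True

msg = Message(payload=b"...", budget=10, hop_fee=3)
for device in ["phone-a", "phone-b", "phone-c", "phone-d"]:
    if not relay(msg, device):
        break  # message stalls three hops in: budget 10 only pays for 3 hops of fee 3
```

The sketch also surfaces the open problem: the sender must guess a budget large enough to reach the recipient without knowing the hop count in advance.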

concats · 2 months ago
Sounds like a solution looking for a problem.
concats commented on A non-anthropomorphized view of LLMs   addxorrol.blogspot.com/20... · Posted by u/zdw
xtal_freq · 2 months ago
Not that this is your main point, but I find this take representative: “do you believe there's anything about humans that exists outside the mathematical laws of physics?” There are things “about humans”, or at least things that our words denote, that are outside physics' explanatory scope. For example, the experience of the colour red cannot be known, as an experience, by a person who only sees black and white. This is the case no matter what empirical propositions, or explanatory system, they understand.
concats · 2 months ago
Perhaps. But I can't see a reason why they couldn't still write endless—and theoretically valuable—poems, dissertations, or blog posts, about all things red and the nature of redness itself. I imagine it would certainly take some studying for them, likely interviewing red-seers, or reading books about all things red. But I'm sure they could contribute to the larger red discourse eventually, their unique perspective might even help them draw conclusions the rest of us are blind to.

So perhaps the fact that they "cannot know red" is ultimately irrelevant for an LLM too?

concats commented on AI is coming for agriculture, but farmers aren’t convinced   theconversation.com/shit-... · Posted by u/lr0
collinmcnulty · 2 months ago
Offshore oil rigs beg to differ. For almost any set of circumstances, there’s a salary that will entice people to fill the role. They just don’t want to shell out the mid six figure salary that would be required. It’s only a “breakdown” because we collectively feel entitled to have people fill the role but don’t want to actually pay what it costs.
concats · 2 months ago
Human entitlement really is the bane of game theory.
concats commented on Introducing Gemma 3n   developers.googleblog.com... · Posted by u/bundie
jwr · 2 months ago
I'd genuinely like to know how these small models are useful for anyone. I've done a lot of experimenting, and anything smaller than 27B is basically unusable, except as a toy. All I can say for smaller models is that they sometimes produce good answers, which is not enough for anything except monkeying around.

I solved my spam problem with gemma3:27b-it-qat, and my benchmarks show that this is the size at which the current models start becoming useful.

concats · 2 months ago
There are use cases where even low accuracy is enough. I can't predict future products, but here are two that are already in place today:

- On iPhone keyboards, some sort of tiny language model suggests what it thinks are the most likely follow-up words as you write. You only have to pick a suggested next word if it matches what you were planning on typing.

- Speculative decoding is a technique which uses smaller models to speed up inference for bigger models.

I'm sure smart people will invent other future use cases too.
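The speculative decoding idea above can be sketched with toy stand-ins for the draft/target pair (the "models" here are fake; only the accept-the-agreeing-prefix control flow is the real technique):

```python
TEXT = list("the quick brown fox")  # the token stream both models are trying to produce

def draft_propose(pos, k=4):
    """Small model: cheaply guess the next k tokens (here: errs on its last guess)."""
    guess = TEXT[pos:pos + k]
    if guess:
        guess = guess[:-1] + ["?"]  # simulate the draft model being imperfect
    return guess

def target_accept(pos, proposed):
    """Big model: verify all proposals in one parallel pass, keep the longest
    agreeing prefix, then emit one token of its own."""
    accepted = []
    for i, tok in enumerate(proposed):
        if pos + i < len(TEXT) and tok == TEXT[pos + i]:
            accepted.append(tok)
        else:
            break
    if pos + len(accepted) < len(TEXT):
        accepted.append(TEXT[pos + len(accepted)])  # big model's own next token
    return accepted

out, big_model_calls = [], 0
while len(out) < len(TEXT):
    out.extend(target_accept(len(out), draft_propose(len(out))))
    big_model_calls += 1
```

The output is guaranteed identical to what the big model alone would produce; the speedup is that it generates 19 tokens in only 5 big-model passes instead of 19.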

concats commented on Meta's Llama 3.1 can recall 42 percent of the first Harry Potter book   understandingai.org/p/met... · Posted by u/aspenmayer
concats · 2 months ago
That's a clickbait title.

What they are actually saying: Given one correct quoted sentence, the model has 42% chance of predicting the next sentence correctly.

So, assuming you start with the first sentence and tell it to keep going, it has odds of 0.42^n of staying on track, where n is the sentence number.

It seems to me that if they didn't keep correcting it over and over again with real quotes, it wouldn't even get to the end of the first page without descending into wild fanfiction territory, with errors accumulating and growing as the text progressed.

EDIT: As the article states, for an entire 50-token excerpt to be correct, the probability of each output has to be fairly high. So perhaps it would be more accurate to view it as 0.985^n, where n is the token number. Still the same result in the long term: unless every token is correct, it will stray further and further from the source.
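The decay is easy to check numerically (0.985 per token and 0.42 per sentence are the figures above; the values of n are just illustrative):

```python
p_sentence, p_token = 0.42, 0.985

# probability of reproducing n consecutive sentences / tokens verbatim
for n in (1, 10, 50, 250):
    print(f"n={n:4d}  sentences: {p_sentence**n:.2e}  tokens: {p_token**n:.3f}")

# 0.42**10 is already ~1.7e-4, and 0.985**250 is ~0.023 -- without constant
# re-anchoring on the real text, the model drifts off-source fast.
```

So even per-token accuracy of 98.5% gives only about a 2% chance of staying verbatim for a single page's worth of tokens.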

concats commented on My AI skeptic friends are all nuts   fly.io/blog/youre-all-nut... · Posted by u/tabletcorry
pier25 · 3 months ago
I'm mostly skeptical about AI capabilities but I also think it will never be a profitable business. Let's not forget AI companies need to recoup a trillion dollars (so far) just to break even [1].

VCs are already doubting if the billions invested into data centers are going to generate a profit [1 and 2].

AI companies will need to generate profits at some point. Would people still be optimistic about Claude etc if they had to pay say $500 per month to use it given its current capabilities? Probably not.

So far the only company generating real profits out of AI is Nvidia.

[1] https://www.goldmansachs.com/insights/articles/will-the-1-tr...

[2] https://www.nytimes.com/2025/06/02/business/ai-data-centers-...

concats · 3 months ago
What about the free open weights models then? And the open source tooling to go with them?

Sure, they are perhaps 6 months behind the closed-source models, and the hardware to run the biggest and best models isn't really consumer-grade yet. (How many years could it be before regular people have GPUs with 200+ gigabytes of VRAM? That's merely one order of magnitude away.)

But they're already out there. They will only ever get better. And they will never disappear due to the company going out of business or investors raising prices.

I personally only care about the closed-source proprietary models insofar as they let me get a glimpse of what I'll soon have access to freely and privately on my own machine. Even if all of them went out of business today, LLMs would still have a permanent effect on our future and how I'd be working.

concats commented on GitHub Copilot Coding Agent   github.blog/changelog/202... · Posted by u/net01
shepherdjerred · 3 months ago
> You can "Save" 1,000 hours every night, but you don't actually get those 1,000 hours back.

What do you mean?

If I have some task that requires 1000 hours, and I'm able to shave it down to one hour, then I did just "save" 999 hours -- just in the same way that if something costs $5 and I pay $4, I saved $1.

concats · 3 months ago
I think one issue is that you won't always be able to invoice those extra 999 hours to your customer. Sometimes you'll still only be able to get paid for 1 hour, depending on the task and contract.

But the LLM bill will always invoice you for all the saved work regardless.

concats commented on Gemini 2.5 Pro Preview   developers.googleblog.com... · Posted by u/meetpateltech
sirstoke · 4 months ago
I’ve been thinking about the SWE employment conundrum in a post-LLM world for a while now, and since my livelihood (and that of my loved ones’) depends on it, I’m obviously biased. Still, I would like to understand where my logic is flawed, if it is. (I.e I’m trying to argue in good faith here)

Isn’t software engineering a lot more than just writing code? And I mean like, A LOT more?

Informing product roadmaps, balancing tradeoffs, understanding relationships between teams, prioritizing between separate tasks, pushing back on tech debt, responding to incidents, it’s a feature and not a bug, …

I’m not saying LLMs will never be able to do this (who knows?), but I’m pretty sure SWEs won’t be the only role affected (or even the most affected) if it comes to this point.

Where am I wrong?

concats · 4 months ago
The way I see it:

* The world is increasingly run on computers.

* Software/Computer Engineers are the only people who actually truly know how computers work.

Thus it seems to me highly unlikely that we won't have a job.

What that job entails I do not know. Programming like we do today might not be something we spend a considerable amount of time on in the future, just like most people today don't spend much time handling punched cards or replacing vacuum tubes. But there will still be other work to do, I don't doubt that.

concats commented on Gemini 2.5 Pro Preview   developers.googleblog.com... · Posted by u/meetpateltech
jstummbillig · 4 months ago
> no amount of prompting will get current models to approach abstraction and architecture the way a person does

I find this sentiment increasingly worrisome. It's entirely clear that every last human will be beaten on code design in the upcoming years (I am not going to argue if it's 1 or 5 years away, who cares?)

I wished people would just stop holding on to what amounts to nothing, and think and talk more about what can be done in a new world. We need good ideas and I think this could be a place to advance them.

concats · 4 months ago
I won't deny that in a context with perfect information, a future LLM will most likely produce flawless code. I too believe that is inevitable.

However, in real-life work situations, that 'perfect information' prerequisite will be a big hurdle, I think. Design can depend on any number of vague agreements and lots of domain-specific knowledge, things a senior software architect has only learnt because they've been at the company for a long time. It will be very hard for an LLM to take all the correct decisions without that knowledge.

Sure, if you write down a summary of each and every meeting you've attended for the past 12 months, as well as attach your entire company confluence, into the prompt, perhaps then the LLM can design the right architecture. But is that realistic?

More likely I think the human will do the initial design and specification documents, with the aforementioned things in mind, and then the LLM can do the rest of the coding.

Not because it would have been technically impossible for the LLM to do the code design, but because it would have been practically impossible to craft the correct prompt that would have given the desired result from a blank sheet.

concats commented on Claude's system prompt is over 24k tokens with tools   github.com/asgeirtj/syste... · Posted by u/mike210
EGreg · 4 months ago
Can someone explain how to use Prompt Caching with LLAMA 4?
concats · 4 months ago
Depends on what front end you use. But for text-generation-webui for example, Prompt Caching is simply a checkbox under the Model tab you can select before you click "load model".

u/concats

Karma: 30 · Cake day: April 8, 2025
About
30-something European. I believe in: Fitness, Philosophy, and future optimism.