concats commented on GPT Image 1.5   openai.com/index/new-chat... · Posted by u/charlierguo
echelon · a day ago
> Somehow it feels like we’re moving backwards.

I don't understand why everyone isn't in awe of this. This is legitimately magical technology.

We've had 60+ years of being able to express our ideas with keyboards: Steve Jobs' "bicycle for the mind". But in all this time we've had a really tough time expressing ourselves visually. Only highly trained people can use Blender, Photoshop, Illustrator, etc., whereas almost everyone on earth can use a keyboard.

Now we're turning the tide and letting everyone visually articulate themselves. This genuinely feels like experiencing computing for the first time all over again. I'm so unbelievably happy. And it only gets better from here.

Every human should have the ability to visually articulate themselves. And it's finally happening. This is a major win for the world.

I'm not the biggest fan of LLMs, but image and video models are a creator's dream come true.

In the near future, the exact visions in our head will be shareable. We'll be able to iterate on concepts visually, collaboratively. And that's going to be magical.

We're going to look back at pre-AI times as primitive. How did people ever express themselves?

concats · a day ago
“I've come up with a set of rules that describe our reactions to technologies:

1. Anything that is in the world when you’re born is normal and ordinary and is just a natural part of the way the world works.

2. Anything that's invented between when you’re fifteen and thirty-five is new and exciting and revolutionary and you can probably get a career in it.

3. Anything invented after you're thirty-five is against the natural order of things.”

― Douglas Adams

concats commented on AI is a front for consolidation of resources and power   chrbutler.com/what-ai-is-... · Posted by u/delaugust
friendzis · a month ago
The moment your code departs from the typical patterns in the training set (or "agentic environment"), LLMs fall over at best (i.e. they can't even find the thing) or do some random nonsense at worst.

IMO LLMs are still at the point where they require significant handholding: showing them exactly what to do, exactly where. Otherwise, it's constant review of randomly applied patterns, which may or may not satisfy the requirements, goals, and invariants.

concats · a month ago
I don't think anyone disagrees with that. But now is a good time to learn, to jump on the train and follow the progress.

It will give the developer a leg up in the future when the mature tools are ready. Just like the people who surfed the 90s internet seem to do better with advanced technology than the youngsters who've only seen the latest sleek modern GUI tools and apps of today.

concats commented on Wanted to spy on my dog, ended up spying on TP-Link   kennedn.com/blog/posts/ta... · Posted by u/kennedn
jraph · 3 months ago
I've been blocking bigger media files by default with uBlock Origin to avoid needless resource usage. Cover images are typically blocked, and they are usually useless anyway.

It's too bad people spend energy generating them now.

concats · 3 months ago
>> It's too bad people spend energy generating them now.

How do you mean?

Some quick back of the napkin math.

Creating a 'throwaway' banner image by hand in Photoshop, maybe 15 minutes on a 100W machine:

  15 minutes human work time + 0.025 kWh (100W*0.25h)
Creating a 'throwaway' banner image with Stable Diffusion on a 600W GPU (in reality it's probably less than 20 seconds to generate, but let's round it up to one full minute of compute time):

  5 minutes human work time + 0.01 kWh (600W*(1/60)h)
The way I see it, generation uses less energy, regardless of whether you're talking about human energy or electrical energy. What's the issue here exactly?
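For what it's worth, the same arithmetic as a few lines of Python (the wattages and durations are the assumptions above, not measurements):

  # Energy = power (kW) x time (h), using the assumed figures above.
  photoshop_kwh = 0.100 * (15 / 60)  # 100W machine for 15 minutes
  diffusion_kwh = 0.600 * (1 / 60)   # 600W GPU for a generous full minute
  print(photoshop_kwh)  # 0.025 kWh
  print(diffusion_kwh)  # 0.01 kWh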

concats commented on Bitchat – A decentralized messaging app that works over Bluetooth mesh networks   github.com/jackjackbits/b... · Posted by u/ananddtyagi
moneywaters · 5 months ago
I’ve been toying with a concept inspired by Apple’s Find My network: Imagine a decentralized, delay-tolerant messaging system where messages hop device-to-device (e.g., via Bluetooth, UWB, Wi-Fi Direct), similar to how “Find My” relays location via nearby iPhones.

Now add a twist:

• Senders pay a small fee to send a message.
• Relaying devices earn a micro-payment (could be tokens, sats, etc.) for carrying the message one hop further.
• End-to-end encrypted, fully decentralized, optionally anonymous.

Basically, a “postal network” built on people’s phones, without needing a traditional internet connection. Works best in areas with patchy or no internet, or under censorship.

Obvious challenges:

• Latency and reliability (it’s not real-time).
• Abuse/spam prevention.
• Power consumption and user opt-in.
• Viable incentive structures.

What do you think? Is this viable? Any real-world use cases where this might be actually useful — or is it just a neat academic toy?
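To make the incentive side concrete, here is a minimal Python sketch of one way the per-hop accounting could work. Everything in it (Message, HopReceipt, relay, the sats amounts) is hypothetical, invented purely for illustration:

  # Hypothetical sketch: the sender prepays a fee budget, and each
  # relaying device records a receipt for one hop until it runs out.
  import time
  from dataclasses import dataclass, field

  @dataclass
  class HopReceipt:
      relay_id: str      # device that carried the message one hop
      timestamp: float   # when the hop happened
      reward_sats: int   # micro-payment owed for this hop

  @dataclass
  class Message:
      ciphertext: bytes     # end-to-end encrypted payload
      fee_budget_sats: int  # prepaid by the sender
      hops: list = field(default_factory=list)

      def relay(self, relay_id: str, reward_sats: int) -> bool:
          # A device only agrees to carry the message if budget remains.
          spent = sum(h.reward_sats for h in self.hops)
          if spent + reward_sats > self.fee_budget_sats:
              return False
          self.hops.append(HopReceipt(relay_id, time.time(), reward_sats))
          return True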

concats · 5 months ago
Sounds like a solution looking for a problem.

concats commented on A non-anthropomorphized view of LLMs   addxorrol.blogspot.com/20... · Posted by u/zdw
xtal_freq · 5 months ago
Not that this is your main point, but I find this take representative: “do you believe there's anything about humans that exists outside the mathematical laws of physics?” There are things “about humans”, or at least things that our words denote, that are outside physics' explanatory scope. For example, the experience of the colour red cannot be known, as an experience, by a person who only sees black and white. This is the case no matter what empirical propositions, or explanatory system, they understand.

concats · 5 months ago
Perhaps. But I can't see a reason why they couldn't still write endless (and theoretically valuable) poems, dissertations, or blog posts about all things red and the nature of redness itself. I imagine it would certainly take some studying for them: interviewing red-seers, or reading books about all things red. But I'm sure they could contribute to the larger red discourse eventually; their unique perspective might even help them draw conclusions the rest of us are blind to.

So perhaps the fact that they "cannot know red" is ultimately irrelevant for an LLM too?

concats commented on AI is coming for agriculture, but farmers aren’t convinced   theconversation.com/shit-... · Posted by u/lr0
collinmcnulty · 5 months ago
Offshore oil rigs beg to differ. For almost any set of circumstances, there’s a salary that will entice people to fill the role. They just don’t want to shell out the mid-six-figure salary that would be required. It’s only a “breakdown” because we collectively feel entitled to have people fill the role but don’t want to actually pay what it costs.

concats · 5 months ago
Human entitlement really is the bane of game theory.

concats commented on Introducing Gemma 3n   developers.googleblog.com... · Posted by u/bundie
jwr · 6 months ago
I'd genuinely like to know how these small models are useful for anyone. I've done a lot of experimenting, and anything smaller than 27B is basically unusable, except as a toy. All I can say for smaller models is that they sometimes produce good answers, which is not enough for anything except monkeying around.

I solved my spam problem with gemma3:27b-it-qat, and my benchmarks show that this is the size at which the current models start becoming useful.

concats · 6 months ago
There are use cases where even low accuracy could be useful. I can't predict future products, but here are two that are already in place today:

- On the iPhone keyboard, some sort of tiny language model suggests what it thinks are the most likely follow-up words as you write. You only have to pick a suggested word when it matches what you were planning to type.

- Speculative decoding is a technique that uses smaller models to speed up inference for bigger models (see the sketch after this list).

I'm sure smart people will invent other future use cases too.
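As a rough illustration of the speculative decoding idea, here is a toy greedy sketch in Python. draft_next and target_next are stand-ins for real model calls, and production implementations verify the draft in one batched pass with rejection sampling rather than this simplified loop:

  def speculative_step(prompt, draft_next, target_next, k=4):
      # 1. The small draft model cheaply guesses k tokens in a row.
      guesses, ctx = [], list(prompt)
      for _ in range(k):
          tok = draft_next(ctx)
          guesses.append(tok)
          ctx.append(tok)

      # 2. The big target model checks the guesses, keeping them
      #    only up to the first mismatch.
      accepted, ctx = [], list(prompt)
      for tok in guesses:
          if target_next(ctx) != tok:
              break
          accepted.append(tok)
          ctx.append(tok)

      # 3. Always emit one token from the target model itself, so the
      #    output matches what the big model alone would have produced.
      accepted.append(target_next(ctx))
      return accepted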

concats commented on Meta's Llama 3.1 can recall 42 percent of the first Harry Potter book   understandingai.org/p/met... · Posted by u/aspenmayer
concats · 6 months ago
That's a clickbait title.

What they are actually saying: given one correct quoted sentence, the model has a 42% chance of predicting the next sentence correctly.

So, assuming you start with the first sentence and tell it to keep going, it has 0.42^n odds of staying on track through n sentences.

It seems to me that if they didn't keep correcting it over and over again with real quotes, it wouldn't even get to the end of the first page without descending into wild fanfiction territory, with errors accumulating and growing as the text progresses.

EDIT: As the article states, for an entire 50-token excerpt to be correct, the probability of each output token has to be fairly high. So perhaps it would be more accurate to view it as 0.985^n, where n is the n-th token. Still the same result long term: unless every token is correct, it will stray further and further from the correct source.
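A quick sanity check of that compounding in Python:

  p_token = 0.985
  print(p_token ** 50)   # ~0.47, close to the reported 42% per 50-token excerpt
  print(p_token ** 500)  # ~0.0005: ten excerpts in, it has almost surely strayed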

concats commented on My AI skeptic friends are all nuts   fly.io/blog/youre-all-nut... · Posted by u/tabletcorry
pier25 · 6 months ago
I'm mostly skeptical about AI capabilities but I also think it will never be a profitable business. Let's not forget AI companies need to recoup a trillion dollars (so far) just to break even [1].

VCs are already doubting if the billions invested into data centers are going to generate a profit [1 and 2].

AI companies will need to generate profits at some point. Would people still be optimistic about Claude etc. if they had to pay, say, $500 per month to use it, given its current capabilities? Probably not.

So far the only company generating real profits out of AI is Nvidia.

[1] https://www.goldmansachs.com/insights/articles/will-the-1-tr...

[2] https://www.nytimes.com/2025/06/02/business/ai-data-centers-...

concats · 6 months ago
What about the free open-weights models then? And the open-source tooling to go with them?

Sure, they are perhaps 6 months behind the closed-source models, and the hardware to run the biggest and best models isn't really consumer-grade yet (how many years could it be before regular people have GPUs with 200+ gigabytes of VRAM? That's merely one order of magnitude away).

But they're already out there. They will only ever get better. And they will never disappear due to a company going out of business or investors raising prices.

I personally only care about the closed-source proprietary models insofar as they let me glimpse what I'll soon have access to freely and privately on my own machine. Even if all of them went out of business today, LLMs would still have a permanent effect on our future and on how I'd be working.

concats commented on GitHub Copilot Coding Agent   github.blog/changelog/202... · Posted by u/net01
shepherdjerred · 7 months ago
> You can "Save" 1,000 hours every night, but you don't actually get those 1,000 hours back.

What do you mean?

If I have some task that requires 1000 hours, and I'm able to shave it down to one hour, then I did just "save" 999 hours -- just in the same way that if something costs $5 and I pay $4, I saved $1.

concats · 7 months ago
I think one issue is that you won't always be able to invoice those extra 999 hours to your customer. Sometimes you'll still only be able to get paid for 1 hour, depending on the task and contract.

But the LLM bill will always invoice you for all the saved work regardless.

u/concats

Karma: 33 · Cake day: April 8, 2025
About
30-something European. I believe in: Fitness, Philosophy, and future optimism.