Readit News
summerlight commented on Gemini 2.5 Flash Image   developers.googleblog.com... · Posted by u/meetpateltech
fariszr · 8 hours ago
This is the GPT-4 moment for image editing models. Nano Banana, aka Gemini 2.5 Flash Image, is insanely good. It made a 171 Elo point jump on LMArena!

Just search "nano banana" on Twitter to see the crazy results. An example: https://x.com/D_studioproject/status/1958019251178267111

summerlight · 5 hours ago
I wonder what the creative workflow will look like when this kind of model is natively integrated into digital image tools. Imagine fine-grained control over each layer and their composition, with semantic understanding of the full picture.
summerlight commented on Google's Liquid Cooling   chipsandcheese.com/p/goog... · Posted by u/giuliomagnifico
m463 · a day ago
I wonder what the economics of water cooling really is.

Is it because chips are getting more expensive, so it is more economical to run them faster by liquid cooling them?

Or is it that data center footprint is more expensive, so denser liquid cooling makes more sense?

Or is it that wiring distances (1 ft ≈ 1 nanosecond) make dense computing faster and more efficient?

summerlight · a day ago
Not sure about classical computing demands, but I think wiring distances definitely matter for TPU-like, memory-heavy computation.
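
(As a rough back-of-the-envelope check of the "1 ft ≈ 1 ns" figure: the sketch below assumes the vacuum speed of light for the ideal case and a ~0.7c velocity factor for copper traces, a typical textbook value rather than anything from this thread.)

    # Sanity check of the "1 ft ~ 1 ns" rule of thumb.
    # Assumption: ~0.7c is an illustrative velocity factor for copper traces.
    C = 299_792_458        # speed of light in vacuum, m/s
    FOOT = 0.3048          # meters per foot

    def propagation_delay_ns(distance_m, velocity_factor=1.0):
        """One-way signal propagation delay in nanoseconds."""
        return distance_m / (C * velocity_factor) * 1e9

    print(propagation_delay_ns(FOOT))         # ~1.02 ns: 1 ft at the speed of light
    print(propagation_delay_ns(FOOT, 0.7))    # ~1.45 ns: 1 ft over copper
    print(2 * propagation_delay_ns(10, 0.7))  # ~95 ns round trip over 10 m,
                                              # comparable to a DRAM access

So shaving meters off the wiring saves latency on the same order as a memory access, which is roughly the point about dense, memory-heavy computation.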
summerlight commented on DeepMind releases Lyria 2 music generation model   deepmind.google/discover/... · Posted by u/velcrobeg
eucryphia · 4 months ago
You know what the biggest problem with pushing all-things-AI is? Wrong direction.

I want AI to do my laundry and dishes so that I can do art and writing, not for AI to do my art and writing so that I can do my laundry and dishes.

- Joanna Maciejewska

You could add music to that.

summerlight · 4 months ago
Well, you know, all those fun and creative parts of software engineering have been taken over by vibe coding and now humans are supposed to do only the tedious work, so probably the same thing applies here.
summerlight commented on Gemini 2.5 Flash   developers.googleblog.com... · Posted by u/meetpateltech
andrei_says_ · 4 months ago
Not the same logic because interns can make meaning out of the data - that’s built-in error correction.

They also remember what they did - if you spot one misunderstanding, there’s a chance they’ll be able to check all similar scenarios.

Comparing the mechanics of an LLM to human intelligence shows deep misunderstanding of one, the other, or both - if done in good faith of course.

summerlight · 4 months ago
Not sure why you're trying to drag intellectual capability into this and complicate the argument. The problem layout is the same: you delegate work to someone, so you cannot understand all the details. This creates a fundamental tension between trust and confidence. The parameters might differ with intellectual capability, but whomever you delegate to, you cannot evade this trade-off.

BTW, not sure if you've had the experience of delegating work to human interns or new grads and being rewarded with disastrous results? I've done that multiple times and don't trust anyone too much. This is why we typically develop review processes, guardrails, etc.

summerlight commented on Gemini 2.5 Flash   developers.googleblog.com... · Posted by u/meetpateltech
deanmoriarty · 4 months ago
Genuine naive question: when it comes to Google, HN generally has a negative view of it (pick any random story on Chrome, ads, search, the web, working at FAANG, etc., and this should be obvious from the comments), yet when it comes to AI there is a somewhat notable “cheering effect” for Google to win the AI race that goes beyond a conventional appreciation of a healthy competitive landscape, which may appear to be a bit of a double standard.

Why is this? Is it because OpenAI is seen as such a negative player in this ecosystem that Google “gets a pass on this one”?

And bonus question: what do people think will happen to OpenAI if Google wins the race? Do you think they’ll literally just go bust?

summerlight · 4 months ago
Because now it has brought real competition to the field. GPT was the king and Claude had been the only meaningful challenger for a while, but OpenAI didn't care about Anthropic and was just obsessed with Google. Gemini took quite some time to set up its pipeline, so the initial versions were not enough to push the frontier; you remember the days when Google released a new model and OpenAI responded within a day with some old model sitting in its silo, just to crush it. That does not happen anymore, and they're forced to develop better models.
summerlight commented on Gemini 2.5 Flash   developers.googleblog.com... · Posted by u/meetpateltech
jdthedisciple · 4 months ago
> thousands of points of nasty unstructured client data

What I always wonder in these kinds of cases is: what makes you confident the AI actually did a good job, since presumably you haven't looked at the thousands of client data points yourself?

For all you know it made up 50% of the result.

summerlight · 4 months ago
Though the same logic can be applied everywhere, right? Even if it's done by human interns, you need to audit everything to be 100% confident, or just place some trust in them.
summerlight commented on DolphinGemma: How Google AI is helping decode dolphin communication   blog.google/technology/ai... · Posted by u/alphabetting
summerlight · 4 months ago
I wonder what the status quo is on the non-LLM side; are we able to manually decode sound patterns to recognize dolphin communication to some degree? If that's the case, I guess this may have a chance.
summerlight commented on Google is winning on every AI front   thealgorithmicbridge.com/... · Posted by u/vinhnx
mike_hearn · 4 months ago
They must very much compete with others. All these chips are being fabbed at the same facilities in Taiwan and capacity trades off against each other. Google has to compete for the same fab capacity alongside everyone else, as well as skilled chip designers etc.

> The revenue delta from this is more than enough to pay off the entire investment history for TPU.

Possibly; such statements were common when I was there too but digging in would often reveal that the numbers being used for what things cost, or how revenue was being allocated, were kind of ad hoc and semi-fictional. It doesn't matter as long as the company itself makes money, but I heard a lot of very odd accounting when I was there. Doubtful that changed in the years since.

Regardless the question is not whether some ads launches can pay for the TPUs, the question is whether it'd have worked out cheaper in the end to just buy lots of GPUs. Answering that would require a lot of data that's certainly considered very sensitive, and makes some assumptions about whether Google could have negotiated private deals etc.

summerlight · 4 months ago
> They must very much compete with others. All these chips are being fabbed at the same facilities in Taiwan and capacity trades off against each other.

I'm not sure what point you're trying to make here. Following your logic, even if you have a fab you still need to compete for rare metals, ASML capacity, etc. That's logic for its own sake. In the real world, it is much easier to compete outside Nvidia's allocation once you get rid of that critical bottleneck. And Nvidia has every incentive to control supply to maximize its own profit, not to meet demand.

> Possibly; such statements were common when I was there too but digging in would often reveal that the numbers being used for what things cost, or how revenue was being allocated, were kind of ad hoc and semi-fictional.

> Regardless the question is not whether some ads launches can pay for the TPUs, the question is whether it'd have worked out cheaper in the end to just buy lots of GPUs.

Of course everyone can build their own narrative in favor of their launch, but I've been involved in some of those ads quality launches and can say pretty confidently that most of them would not have been launchable at all without TPUs. This was especially true in the early days of TPU, when the supply of datacenter GPUs was extremely limited and immature.

Could more GPUs solve this? Companies are talking about 100k~200k H100s as a massive cluster, and Google already has much larger TPU clusters with compute capacity in a different order of magnitude. The problem is, you cannot simply buy more computation even if you have lots of money. I've been pretty clear about how relying on Nvidia's supply could be a critical limiting factor from a strategic point of view, but you're trying to shift the point. Please don't.

summerlight commented on Google is winning on every AI front   thealgorithmicbridge.com/... · Posted by u/vinhnx
mike_hearn · 4 months ago
TPUs aren't necessarily a pro. They go back 15 years and don't seem to have yielded any kind of durable advantage. Developing them is expensive but their architecture was often over-fit to yesterday's algorithms which is why they've been through so many redesigns. Their competitors have routinely moved much faster using CUDA.

Once the space settles down, the balance might tip towards specialized accelerators but NVIDIA has plenty of room to make specialized silicon and cut prices too. Google has still to prove that the TPU investment is worth it.

summerlight · 4 months ago
Not sure how familiar you are with the internal situation... but from my experience I think it's safe to say that TPU basically multiplies Google's computation capability by 10x, if not 20x. Also, they don't need to compete with others to secure expensive Nvidia chips. If this is not an advantage, I don't see what would count as one. The entire point of vertical integration is to secure full control of your stack so your capability won't be limited by potential competitors, and TPU is one of the key components of that strategy.

Also worth noting that the Ads division is the largest, heaviest user of TPU. Thanks to it, Ads can afford to run a bunch of different expensive models that you cannot realistically run on GPUs. The revenue delta from this is more than enough to pay off the entire investment history of TPU.

summerlight commented on Google will let companies run Gemini models in their own data centers   cnbc.com/2025/04/09/googl... · Posted by u/jonbaer
miohtama · 4 months ago
Is Gemini tied to / benefitting from Google TPU hardware? Because you need hardware in the data center to run this, and I feel it is somewhat specialised.
summerlight · 4 months ago
The model itself is likely built on their own open source framework, JAX, so it should be usable on Nvidia hardware. Of course, cost efficiency is going to be a different story.
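
(As an illustration of that portability claim, a toy sketch and nothing Gemini-specific: a jitted JAX function is compiled through XLA for whichever backend is present, so the same code runs on TPU, Nvidia GPU, or CPU without changes.)

    # Toy example of JAX backend portability; the layer is a made-up stand-in.
    import jax
    import jax.numpy as jnp

    @jax.jit
    def dense_layer(params, x):
        # A toy dense layer + GELU standing in for a real model block.
        w, b = params
        return jax.nn.gelu(x @ w + b)

    key = jax.random.PRNGKey(0)
    w = jax.random.normal(key, (512, 512))
    b = jnp.zeros((512,))
    x = jax.random.normal(key, (8, 512))

    print(jax.devices())                 # e.g. CUDA devices on an Nvidia box,
                                         # TPU devices on a TPU host
    print(dense_layer((w, b), x).shape)  # (8, 512) on any backend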

u/summerlight

Karma: 3421 · Cake day: June 17, 2013