I work at Google on the Gemma team, and while not on the core team for this model, I participated a bit in this project.
I personally was happy to see this project get built. The dolphin researchers have been doing great science for years, and from the computational/mathematics side it was quite neat to see how that was combined with the Gemma models.
It's great that dolphins are getting audio decoders in language models first; does the Gemma team intend to roll that out for human speech at some point too?
This sounds very cool at a conceptual level, but the article left me in the dark about what they're actually doing with DolphinGemma. The closest to an answer is:
>By identifying recurring sound patterns, clusters and reliable sequences, the model can help researchers uncover hidden structures and potential meanings within the dolphins' natural communication — a task previously requiring immense human effort.
But this doesn't really tell me anything. What does it mean to "help researchers uncover" this stuff? What is the model actually doing?
As far as I can tell, it hasn't actually done anything yet.
The article reads like the press releases you see from academic departments, where an earth shattering breakthrough is juuuuust around the corner. In every single department, of every single university.
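To make the complaint concrete: "identifying recurring sound patterns, clusters and reliable sequences" presumably means something like sequence mining over discretized vocalizations. A minimal sketch of that idea, assuming the audio has already been segmented and labeled (the label names and the sequence below are invented for illustration):

```python
from collections import Counter

def recurring_ngrams(tokens, n=2, min_count=2):
    """Count length-n subsequences that recur in a token stream."""
    grams = Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    return {g: c for g, c in grams.items() if c >= min_count}

# Hypothetical discretized dolphin vocalizations (invented labels).
seq = ["whistle_a", "click", "whistle_b", "whistle_a", "click",
       "whistle_b", "burst", "whistle_a", "click"]
print(recurring_ngrams(seq, n=2))
```

The hard, unexplained part is everything before this step: how raw hydrophone audio gets turned into discrete units worth counting. The article says nothing about that.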
Cool to see the use of consumer phones as part of the setup. Having a suite of powerful sensors, processing, display, and battery in a single, compact, sealed package must be immensely useful for science.
Tangential, but this brings up a really interesting question for me.
LLMs are multi-lingual without really trying, assuming the languages in question are sufficiently well-represented in their training corpus.
I presume their ability to translate comes from the fact that there are lots of human-translated passages in their corpus; the same work in multiple languages, which lets them figure out the necessary mappings between semantic points (words).
But I wonder about the translation capability of a model trained on multiple languages but with completely disjoint documents (no documents that were translations of another, no dictionaries, etc).
Could the emerging latent "concept space" of two completely different human languages be similar enough that the model could translate well, even without ever seeing examples of how a multilingual human would do a translation?
I don't have a strong intuition here but it seems plausible. And if so, that's remarkable because that's basically a science-fiction babelfish or universal translator.
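For intuition, here is a toy sketch of how that could work, loosely in the spirit of unsupervised embedding alignment: if two languages' internal similarity structure lines up, words can be paired using only each space's own geometry, with no parallel data at all. Everything below (the words, the vectors, the rotation) is invented for illustration.

```python
from math import dist, sqrt

def cos(u, v):
    """Cosine similarity of two vectors."""
    num = sum(a * b for a, b in zip(u, v))
    return num / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

def signature(word, space):
    """Sorted similarities of `word` to every other word in its own space."""
    return sorted(cos(space[word], v) for w, v in space.items() if w != word)

def match(space_a, space_b):
    """Pair each word in A with the B word whose relational signature is closest."""
    pairs = {}
    for wa in space_a:
        sig_a = signature(wa, space_a)
        pairs[wa] = min(space_b, key=lambda wb: dist(sig_a, signature(wb, space_b)))
    return pairs

# Toy "concept spaces": the second is the first rotated 90 degrees, simulating
# two languages whose internal similarity structure happens to line up.
en = {"dog": (1.0, 0.0), "cat": (0.9, 0.1), "car": (0.0, 1.0)}
es = {"perro": (0.0, 1.0), "gato": (-0.1, 0.9), "coche": (-1.0, 0.0)}
print(match(en, es))
```

The matching succeeds here because rotation preserves all the pairwise angles. Whether real languages' latent spaces are similar enough for this to work at scale is exactly the open question.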
Check out this recent benchmark MTOB (Machine Translation from One Book) -- relevant to your comment, though the book does have parallel passages so not exactly what you have in mind: https://arxiv.org/pdf/2309.16575
In the case of non-human communication, I know there has been some fairly well-motivated theorizing about the semantics of individual whale vocalizations. You could imagine a first pass at something like this if the meaning of (say) a couple dozen vocalizations could be characterized with a reasonable degree of confidence.
Super interesting domain that's ripe for some fresh perspectives imo. Feels like at this stage, all people can really do is throw stuff at the wall. The interesting part will begin when someone can get something to stick!
> that's basically a science-fiction babelfish or universal translator
Ten years ago I would have laughed at this notion, but today it doesn't feel that crazy.
I'd conjecture that over the next ten years, this general line of research will yield some non-obvious insights into the structure of non-human communication systems.
Increasingly feels like the sci-fi era has begun -- what a time to be alive.
>lots of human-translated passages in their corpus
Yes. I remember reading that the EU parliamentary proceedings in particular are used to train machine translation models. Unfortunately, I can't remember where I read that. I did find the dataset: https://paperswithcode.com/dataset/europarl
Languages encode similar human experiences, so their conceptual spaces probably have natural alignments even without translation examples. Words for common objects or emotions might cluster similarly.
But without seeing actual translations, a model would miss nuances, idioms, and how languages carve up meaning differently. It might grasp that "dog" and "perro" relate to similar concepts without knowing they're direct translations.
To agree and extend, that's actually how human language works too. The cultural connotations of "dog" in English might be quite different from "perro".
And it gets even more complex because the connotations of "dog" in the USA in 2025 are unquestionably different from "dog" in England in 1599. I can only assume these distinctions also hold across languages. They're not a direct translation.
Let alone extreme cultural specificities... To follow the same example, how would one define "doge" now?
Wow, there's a lot of cynicism in this thread, even for HN.
Regardless of whether or not it works perfectly, surely we can all relate to the childhood desire to 'speak' to animals at one point or another?
You can call it a waste of resources or someone's desperate attempt at keeping their job if you want, but these are marine biologists. I imagine cross-species communication would be a major achievement, and it seems like a worthwhile endeavor to me.
I'm as cynical as the next guy, or more so - but it seems to me that being able to communicate with animals has high utility for humans. Partly from an emotional or companionship perspective, as we've been doing with dogs for a long time, but maybe even on purely utilitarian grounds.
If we want to know something about what's going on in the ocean, or high on a mountain or in the sky or whatever - what if we can just ask some animals about it? What about for things that animals can naturally perceive that humans have trouble with - certain wavelengths of light or magnetic fields for example? How about being able to recruit animals to do specific tasks that they are better suited for? Seems like a win for us, and maybe a win for them as well.
Not sure what else, but history suggests that the more people have been able to communicate with each other, the better the outcomes. I assume this holds true more broadly as well.
I was just reading how the fishing industry's longlines have caught many dolphins and other animals as bycatch. It would be great to be able to give them warnings, or even better, to ask them to keep other big animals away from the longlines.
I am all for the Disney utopian fantasy of us living with animals.
However, if universal communication were achieved, don't you think animals are going to be pretty pissed to discover what we have done with their kingdom?
"Hi Mr Dolphin, how's the sea today?" "Wet and a plastic bottle cap got lodged in my blowhole last night..."
It's not even about the communication! Just having more insight into the brains and communication of other mammals has a ton of scientific value in its own right.
Sometimes it's good just to know things. If we needed to find a practical justification for everything before we started exploring it, we'd still be animals.
I for one am simply happy to see us trying to apply LLMs to something other than replacing call centers... humankind SHOULD be exploring and learning sometimes even when there isn't an ROI.
"virtue signalling" really is one of those words/turns of phrase that needs to be put on a high shelf.
Plenty of people genuinely dislike the concentration of economic and computing power that big tech represents. Their expressing this is not "virtue signaling"; it is an authentic moral position they hold.
Plenty of people genuinely dislike the disregard for labor and intellectual property rights that anything Gen AI represents. Again, an authentic moral position.
"Virtue signaling" is for example, when a corporate entity doesn't authentically support diversity through any kind of consequential action but does make sure to sponsor the local pride event (in exchange for their logo being everywhere) and swaps their social media logos to rainbow versions.
Calling something "trendy" is a great way to try to dismiss it without actually providing any counterargument. The deep suspicion of anything Google does is extremely well justified IMHO.
Google is where great technology and innovation goes to die.
Please give me one example in the last decade where Meta or Google research has led to actual products or open-sourced technologies, and not just expensive proprietary experiments shelved after billions were spent on them?
Regardless of your or my feelings on this specific topic, "virtue signalling" is good, because virtue is good and signalling to others that we ought to be good is also good. The use of that term as a pejorative is itself toxic cynicism.
Don’t understand the cynicism either. Is this not way cooler than the latest pre-revenue series F marketing copy slop bot startup?
To me this task looks less like next token prediction language modeling and more like translating a single “word” at a time into English. It’s a pretty tractable problem. The harder parts probably come from all the messiness of hearing and playing sounds underwater.
I would imagine adapting to new vocabulary would be pretty clunky in an LLM-based system. It would be interesting if it were able to add new words in real time.
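To illustrate why it's clunky: a genuinely new token needs a new row in the embedding table, which is typically initialized from existing vectors (e.g. the mean of related or subword tokens) and is not meaningful until further training. A minimal sketch of just the initialization step (all names and vectors here are invented):

```python
def add_token(embeddings, new_token, related):
    """Initialize a new token's embedding as the mean of related tokens' vectors.
    The model still needs fine-tuning before this new row means anything."""
    dims = zip(*(embeddings[t] for t in related))
    embeddings[new_token] = tuple(sum(d) / len(related) for d in dims)
    return embeddings

# Hypothetical 2-d embedding table for two known vocalization tokens.
table = {"whistle": (0.2, 0.8), "click": (0.6, 0.4)}
add_token(table, "whistle_click", ["whistle", "click"])
print(table["whistle_click"])
```

The "real time" part is the catch: the averaging is instant, but making the model actually use the new word well is not.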
Ah this is different. The Nytimes article is about identifying/classifying dolphins from audio. This new model is about communicating with dolphins from generated audio.
The difference between recognizing someone from hearing them, and actually talking to them!
Gemini supposedly allows for conversational speech w/your data. Have you tried it? We have; it's laughably bad and can't get the most basic stuff right from a well-crafted datamart.
If it can't do the most basic stuff, please explain to me how in the fuck it is going to understand dolphin language, and why we should believe its results anyway?
This crowd seems to be cross pollinated with the sci-fi / space exploration set. Communication with cetaceans seems like such an obvious springboard for developing frameworks and techniques for first contact with E.T. /If/ you believe they're out there... And if not, what an incredible opportunity in its own right.
But, since context is so important to communication, I think this would be easier to accomplish with carefully built experiments with captive dolphin populations first. Beginning with wild dolphins is like dropping a guy from New York City into rural Mongolia and hoping he'll learn the language.
I wonder what the status quo is on the non-LLM side; are we able to manually decode sound patterns to recognize dolphins' communication to some degree? If that's the case, I guess this may have a chance.
The experiment design sounds pretty cool. I hope they see some solid results. It would be remarkable if humans could talk to another intelligent creature here on Earth. This is certainly a step on the way there.
It's more PR fluff than substance.
The cynicism on display here is little more than virtue signalling and/or upvote farming.
Sad to see such thoughtless behaviour has reached even this bastion of reason.
https://www.nytimes.com/2017/12/08/science/dolphins-machine-...
It's rather unsound reasoning, but you certainly can.