radicaldreamer · 2 years ago
Why? Attach seasoned product teams from Google to the research org and have experienced PMs paired with experienced EMs launch products.

Trying to transition a research org into a product org is going to be needlessly painful, especially since the research org needs to be firing on all cylinders in this hyper-competitive space.

TaylorAlexander · 2 years ago
Yep. I joined Google X Robotics (which became Everyday Robots, which got canceled) just as the org was winding down a big R&D push and moving to product development. Engineers were palpably hurt by their various projects being canceled, and in my opinion they never had a viable product strategy beyond “let’s see if we can find a consumer use for this robot”. This strategy ultimately failed and they canceled the project, let go a bunch of the people, and now the robots are being used for AI research. So the whole shift from R&D to product development was a failure. They could have saved a lot of grief and money if they just continued as an R&D org, and in my opinion they would have left open some important doors which would have really helped with AI research.
astromaniak · 2 years ago
I'm afraid this is going to be another Everyday Robots or Boston Dynamics. Google is ruled by managers, not visionaries. Strategy is 'try and see if it works in 4 years. if not cancel'. They followed it in many cases. So, DeepMind's cancellation is long overdue. A couple of years back one of Google's top managers talking about AGI said it's most likely to happen in DM. But current LLM boom happened elsewhere. Likely managers are disappointed in DM.
inamberclad · 2 years ago
Same experience at EDR. It was a great research platform, but nowhere near ready for the real world. Compute, power, functional safety... The "real" working robots of today are completely different. A successful product from EDR would have looked completely different.

That said, I love your farm robots.

advael · 2 years ago
God, yeah. I've flipped between R&D and product development a few times in my career, sometimes at the same company, and it's a really rough transition even for an individual experienced in both. I can't imagine flipping a whole research team to making products is going to go well, especially when the products are getting a ton of well-deserved bad press and a lot of those researchers came from academia rather than elsewhere in industry in the first place.
stephen_cagle · 2 years ago
I tend to agree. I'm curious whether this is Deepmind saying "I think we could do things better, let's do this ourselves" or the leadership of Alphabet saying "Get these ivy league intellectuals to prioritize productionizing products!"

Would seem far more sensible to allow Deepmind to continue to release hit after hit in the ML research world, and simply embed "fly on the wall" PMs into their org who can independently productionize any golden nuggets they happen to create.

TeMPOraL · 2 years ago
I feel it's more of leadership of Alphabet saying, "Our people literally invented transformers, so how come we're at the bottom of the AI race instead of at the top? A random non-profit took our research and ran with it, and they're the hottest company in the world now. This is unacceptable![0]".

Bad idea. People good or lucky enough to land in R&D like doing R&D. Force them to be product people, I expect most of them will leave.

--

[0] - Like Muffin from Bluey, https://youtu.be/hZVlBQXVtZA?t=8.

gaogao · 2 years ago
Agreed. As an example of that, Meta's kept its research org, FAIR, still doing fundamental research. Research orgs are great at demos, but actual productionalization takes a different mindset.
nomad_horse · 2 years ago
FAIR is considerably downscaled from what it was before, e.g. in 2022.
fooker · 2 years ago
Fun anecdote: some folks from FAIR reached out to a friend, asked her to apply, and then rejected her without an interview!
boyka · 2 years ago
Likely that some McKinsey type consultants or ex-consultants in senior management deem this to be absolutely necessary and the only way to go.
hotstickyballs · 2 years ago
That would require Google to have enough "seasoned product teams"
michaelt · 2 years ago
Google has cancelled more products than most companies have ever launched.

You'd think they'd have plenty of spare people with product launch experience.

oivey · 2 years ago
More than likely for the sorts of products they want to make you still need very deep research expertise that random product teams won’t have.
lupire · 2 years ago
Deep research expertise into tuning an LLM into a user friendly product?

Who do you think has that expertise? The people working on the model or the people studying users?

pixiemaster · 2 years ago
well actually i don’t think google has good PMs - besides adwords and android, there are no really successful products.
lenerdenator · 2 years ago
> Why?

Must transfer value, and the guy in charge of the company is not good at allocating the company's resources to do that with an eye on long-term results.

whywhywhywhy · 2 years ago
> Why?

Because the current way they were working squandered over a decade lead in the space. Deep Dream was 2015... Google Magenta was 2017...

hot_gril · 2 years ago
Maybe that's what they're doing. The article just says they're combining the two labs, not much further detail.

dylan604 · 2 years ago
Isn't this the mindset that has PoCs released to production?
lupire · 2 years ago
The researchers aren't suddenly building products.
Vt71fcAqt7 · 2 years ago
>seasoned product teams from Google

.

kevindamm · 2 years ago
One thing that classic Google did right was embed researchers into product groups. It's true there were always some teams that were pure research but for the most part researchers were working within a product group.

Then some acquisitions and internal musical chairs and it became less like that. Now I'm not all doom and gloom like this article (although with DeepMind why not leave well enough alone? They do excellent research). But, it does seem suboptimal to pivot all the way to AI Product Factory... were there no other existing product factories they could have turned instead?

flakiness · 2 years ago
The heads of Google Research once wrote about this. They called it a "Hybrid Approach to Research" (as other comments pointed out).

https://static.googleusercontent.com/media/research.google.c...

moandcompany · 2 years ago
The "Hybrid Approach to Research" paper describes how "Google Research" first started when Google was mostly, if not entirely, about Search and "Research" was part of the Search organization.

In those days, there were no "pure research" roles, nor was there a formal "research scientist" career ladder at Google. There were "SWEs" and, in some cases, "Members of Technical Staff."

Since then, "Research" became its own organization or "Product Area" at Google (i.e. the equivalent of a company division). "Google Brain" was also created. Deepmind was acquired. All of these existed simultaneously, however Deepmind remained an organizationally separate entity. In this era, the "Research Scientist" role was created, which generally existed exclusively within "Google Research." A large span of this era had John Giannandrea ("JG") at the helm of the Google Research org (note: Giannandrea left a few years ago to head and build Apple's "AI/ML" organization, which includes Siri).

After JG's departure, Google Brain and Google Research were brought together under the common leadership of Jeff Dean, as an organization still called "Google Research" with a branch still called "Google Brain." For perspective, the staff headcount of "Google Research" at this point measured in the several thousands. This configuration existed for the last few years, with the latest change being the merging of "Google Research" and Deepmind into "Google Deepmind."

-

I am a Xoogler, formerly from this product area. One of the things I and at least a few others observed was that "Research" was becoming de facto synonymous with "Machine Learning / AI," yet not all of Google's storied research accomplishments, or problem areas, are limited to Machine Learning and AI.

In the last few years, Google made its public statements of being an "AI-first," previously "mobile-first," company in recognition that it would be incorporating and leveraging ML and AI technology across all of its products and services.

This raised a significant question: what should "Google Research," or research at Google, be if product areas across Google began fully incorporating AI/ML technology and methods in their products? What if they built their own AI/ML teams? If Google truly succeeded at becoming AI-first, how should "Google Research" define and focus its organizational purpose and research portfolio, and show its value, when Moonshots/X also exists within Alphabet? Over time, many parts of "Google Research," and research at large across Alphabet, came to feel that their purpose, or at least their individual reason for joining, was to do "pure research," yet that is not how the organizations started. Many researchers and teams also knew that for practical reasons (e.g. promotion) they generally needed to present and align their work with things like product launches with partner organizations.

I suppose we are seeing some of the answer to this with Google DeepMind stating that they will be aligning more strongly with creating AI products, but in addition to the question of what happens to foundational research (for AI), what happens to foundational research in non-AI areas for Google and Alphabet?

curious_cat_163 · 2 years ago
I agree: hybrid teams with a diversity of product/research skills at the team level is the way to go. It is thinkers and doers that need to come together.

It is way easier said than done, though. You need true buy in from a ton of stakeholders — employees being the primary ones. And people get set in their ways.

I do like a product bias though. Not because it is more valuable somehow but because it provides the applied scientists deeper exposure to the problem space, early and often.

ethbr1 · 2 years ago
> Hassabis says that he’s learning more about introducing products and that Google’s product teams, in turn, are dealing with the novel challenges of generative AI, which has the potential to behave unusually when placed in the hands of the general public.

This part isn't rocket science.

Step 1) Post on 4chan and SomethingAwful "What is the worst thing you could do with genAI? Go."

Step 2) Test your beta product against all the answers you get.

Barrin92 · 2 years ago
the tricky part with systems like these isn't to find and fix the worst things that people can come up with on 4chan because they're by definition obvious. The much trickier part is finding the little problems that way more people run into and that most people might not even immediately recognize or report.

And that is a very complicated science, particularly with something as fuzzy and opaque as generative AI.

eitland · 2 years ago
This has been going on since long before[1] the recent AI craze.

IMNSHO it seems Google just cannot miss an opportunity to mess up the basics in the quest for amazing, and then fail at amazing or cancel it just as they are about to achieve it. Ironically, this has transformed their search engine from unbeatable leader in its field into something much closer to what it replaced.

[1]:This is from 5 years ago: https://erik.itland.no/more-fun-with-google-mixing-images-fr...

robertlagrant · 2 years ago
> the worst things that people can come up with on 4chan because they're by definition obvious

If they're finding your flag through star patterns and flying drones at it to set it on fire, I think they go a bit deeper than "obvious".

cloudking · 2 years ago
I think the challenge is you can't QA every edge case, because there's unlimited edge cases.
ethbr1 · 2 years ago
There's also a fairly standard distribution of things most people will think of.

"Generate photos of Nazis" wouldn't have been my first use, but in retrospect it does seem like something that of course The Internet is going to try.

That Google didn't even identify that sort of low-hanging fruit as a QA case is what points to a process in need of external input.

HeatrayEnjoyer · 2 years ago
The Control Problem.
curious_cat_163 · 2 years ago
> While no one is getting as much computing power as they want, the supply is tighter for teams engaged in pure research, say the former employee and others familiar with the lab.

Good! Maybe they will focus on researching how to make these things more compute efficient.

caycep · 2 years ago
true. I wonder how much energy in food/farming to develop a 6 yr old human is required vs. the amount required to run a hojillion GPUs running the latest generative algorithms
ericd · 2 years ago
Most people can probably answer some subset of questions better than GPT-4 can, but I don't think there's a human alive who could answer nearly as competently on >50% of the questions it gets asked. So I don't know why you'd benchmark it against a 6 year old. If you compare the carbon impact of training one of these to the carbon footprint of the average American family, it's an incredible deal in terms of utility.
dontlikeyoueith · 2 years ago
Assuming dollar cost is a relevant metric, the 6yr old human is far cheaper.
Izikiel43 · 2 years ago
But can a 6 year old answer millions upon millions of queries?

You need a lot of 6 year olds.

whamlastxmas · 2 years ago
The training costs of the 6 year old are millions of years of evolution and suffering of billions of people.
a_bonobo · 2 years ago
Looking back, I feel like the AlphaFold 3 launch a month ago was a precursor to this move. The public-facing side of AlphaFold 3 ('AlphaFold Server') is severely constrained; if you want the novel parts around drug binding prediction you need to pay Isomorphic Labs instead.

https://blog.google/technology/ai/google-deepmind-isomorphic...

I expect other developments to follow suit: a bit of R&D with a lot of hype and commercialisation.

nelsonic · 2 years ago
Let's not forget that Demis Hassabis (DeepMind CEO) created Theme Park so he knows how to create products. I have full confidence in his leadership. Buy more Alphabet (GOOG) shares!

Ref: https://en.wikipedia.org/wiki/Demis_Hassabis#Bullfrog

pixelpoet · 2 years ago
Saw him briefly at my first job at Lionhead Studios, and also worked with Alex Evans (Media Molecule cofounder, coauthor of InstantNGP, legendary demoscener, ...) there. Pretty amazing how much talent was buzzing around there.
stephen_cagle · 2 years ago
...and (burying the lede) he wrote it while under 18! But that is one of the odder takes for why he would be good at building a product.
nelsonic · 2 years ago
Demis understands the “customer” and can use everything he has learned in the last 20 years to build something incredible. If he can build a great/successful game with low resources, he will smash a consumer product with unlimited resources and excellent people.
therobots927 · 2 years ago
Hadn’t these two AI orgs within Google been fighting over resources for a long time? At the end of the day that’s just counterproductive. A merger was all but guaranteed and it’s clear given current stock market sentiment why the product team was chosen. Doesn’t mean I don’t feel bad for the Deepmind researchers impacted. The genAI hype is sparing no one, not even the foremost AI labs in the country.