I was also at Google when the AOL deal was signed and remember the decision and outcome quite differently. It was definitely a high-risk deal, deliberately so, but in no way was it the case that "the best-case scenario had us breaking even", nor that "all of our models were wrong". The product manager in charge of ads at the time had a clear understanding of exactly how the deal could be hugely profitable for Google, because of the value of the extra advertisers attracted to the Google platform by the added AOL inventory. It was by no means a sure thing, but it was a likely outcome. Fortunately the decision makers believed him, they took the risk, and it paid off enormously.
That's what I've heard from other early Googlers who were there at the time. It was a calculated risk: if the deal went badly, it would bankrupt the company, but the best estimates from existing revenue numbers and likely ad inventory increases had it being hugely profitable for Google. Their internal numbers supported that, but few people outside the company believed it, hence the need for a guaranteed revenue clause. It certainly wasn't a shot in the dark, though.
Seeing as this happened 20 years ago and she was quoting from memory, I think we can give her the benefit of the doubt. It could also be that her view at the time differed from that of the OP / the person who made the deal, and she recalls her own opinion as "how it was".
Such things have in fact happened to me (even though I'm a bit younger than Mayer and my career started a few years later than hers). Human memory is really unreliable, and a lot of the time you remember your feelings and translate them into facts upon recollection, even when that wasn't quite the case.
And unless the people who were at Google at the time were in the meetings, they are merely speculating.
I was at Netscape and was actually in meetings that marca and others were not in, and it's funny to hear what they say happened when they weren't in the room, nor were they consulted. Even the people at the top can be guilty of speculation.
Can you speak at all to what sort of modeling work is done around this sort of major partnership? I'm really curious how many people are involved in that sort of exercise at that level, how clean the data is, what sorts of data are used, etc.
I do this kind of work! I don't know how it worked at Google, but typically there will be some amount of third-party due diligence for deals over a certain size / risk profile. Some companies have an internal strategic deals group, which I would assume Google has (but probably didn't back then).
If both companies' data needs to be combined and analyzed, they usually bring in an outside deals consulting firm. Those teams tend to be very small due to the sensitive nature of the discussions involved — usually 2 or 3 people (backed by a large shared support staff and tooling) over the course of a few weeks.
Often the data used is a combination of proprietary data from both companies, commercially sourced data or proprietary data platforms built by the consulting companies. Deals are a big, sensitive, relationship-driven business.
It's often one guy with a spreadsheet, one or two people to review the spreadsheet, and a few business people to validate the assumptions. The modeling is easy; getting the assumptions right is hard.
To be honest, if you're skilled, very simple models that you can do in your head or in a few minutes usually give you a perfectly good answer. The more complex, exhaustive models are usually there to make sure you didn't overlook something, or that 20 small inputs, all cross-multiplied, didn't throw your answer off.
Getting the answer exactly right also doesn't matter: whether you make $90m or $100m in profit off a deal, you're going to do it. What you are most concerned about is making sure that you don't lose money, and which factors would push you into that.
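As a rough illustration of the "one guy with a spreadsheet" style of model described above, here is a minimal sketch in Python. Every number and variable name is invented for illustration; none of it reflects the actual Google/AOL terms. The point is simply: compute profit under base-case assumptions, then flip one assumption at a time to a pessimistic value and see which one turns the deal into a loss.

```python
# Hypothetical back-of-envelope deal model; all numbers are made up.
def deal_profit(queries_per_year, revenue_per_query, revenue_share, guaranteed_payment):
    gross = queries_per_year * revenue_per_query
    # The partner gets either its revenue share or the guaranteed minimum,
    # whichever is larger -- the guarantee is where the downside risk lives.
    payout = max(gross * revenue_share, guaranteed_payment)
    return gross - payout

base = dict(queries_per_year=10e9,      # partner traffic
            revenue_per_query=0.02,     # ad dollars per query
            revenue_share=0.70,         # partner's share of gross
            guaranteed_payment=150e6)   # minimum promised to the partner

print("base case: $%.0fm" % (deal_profit(**base) / 1e6))

# Crude sensitivity check: one pessimistic assumption at a time.
for key, downside in [("queries_per_year", 5e9),
                      ("revenue_per_query", 0.01),
                      ("revenue_share", 0.85)]:
    scenario = dict(base, **{key: downside})
    print("%-20s low -> $%.0fm" % (key, deal_profit(**scenario) / 1e6))
```

With these made-up numbers the base case clears about $50m, but halving either the traffic or the per-query monetization turns it into a roughly $50m loss because of the guarantee clause, which is exactly the "what factors would push you to lose money" question.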
Inventory is slots to put ads in. AOL had the eyeballs; Google had the platform to turn eyeballs into money, better than AOL was doing by itself. Google provided a guaranteed rate (CPM? CPC?) so that AOL would have confidence it wasn't a bait and switch.
See also the Yahoo/Bing deal, which didn't work out as well. Microsoft didn't end up actually hitting the targets and convinced Yahoo to take less; Yahoo also didn't reduce employee count anywhere near plan on the search/advertising side, so they missed targets on revenue, cost, and user experience.
Marissa described possibly the most thorough and analytical job-search process I've heard from anyone when she was talking about how she joined Google. I really liked her reflection, in hindsight, on how being overly analytical is dangerous; it's something I try to remind myself of when I'm in danger of overthinking a decision:
"I think this is a common thing that very analytical people trip themselves up with. They look at things as if there’s a right answer and a wrong answer when, the truth is, there’s often just good choices, and maybe a great choice in there."
Too much focus on utility functions, not enough focus on novelty functions, even though it's been proven that utility functions decline in usefulness as a search space expands. Given an infinite search space, a utility function can only find local optima; there is no global optimum. In such situations, a novelty function that finds a path from one happy local optimum to another happy local optimum is a better bet than using a utility function.
The above paragraph is rational, and yet people who consider themselves hyper rational often ignore the truth of this. And the irony is that some of them do this for an emotional reason: they want the security that comes from believing that there is an absolute right answer. They are irrationally rational.
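To make the utility-versus-novelty contrast above concrete, here is a small toy sketch (an invented one-dimensional landscape and made-up parameters, not anyone's production search code): a greedy climber that only accepts utility improvements tends to park on the nearest small bump, while a walk that only rewards visiting unexplored territory can wander across valleys and stumble onto much better regions.

```python
import math
import random

def utility(x):
    # Invented bumpy landscape: one tall bump near x = 0 plus small ripples.
    return math.exp(-x * x / 50.0) + 0.4 * math.sin(x)

def hill_climb(start, steps=400, step_size=1.0):
    """Pure utility maximization: only accept moves that improve utility."""
    x = start
    for _ in range(steps):
        candidate = x + random.uniform(-step_size, step_size)
        if utility(candidate) > utility(x):
            x = candidate
    return x

def novelty_walk(start, steps=400, step_size=1.0, pool_size=15):
    """Novelty-driven search: always move to the candidate farthest from
    anything visited so far, and merely remember the best utility seen."""
    visited = [start]
    best = start
    for _ in range(steps):
        pool = [random.choice(visited) + random.uniform(-step_size, step_size)
                for _ in range(pool_size)]
        most_novel = max(pool, key=lambda c: min(abs(c - v) for v in visited))
        visited.append(most_novel)
        if utility(most_novel) > utility(best):
            best = most_novel
    return best

random.seed(0)
start = 12.0  # begin far from the tall bump
hc = hill_climb(start)
nv = novelty_walk(start)
print(f"greedy hill climb : x = {hc:6.2f}, utility = {utility(hc):.2f}")
print(f"novelty walk best : x = {nv:6.2f}, utility = {utility(nv):.2f}")
```

On a typical run the greedy climber stalls on a nearby ripple (utility around 0.4), while the novelty walk usually wanders far enough to sample the tall bump near x ≈ 1 (utility above 1); the exact numbers vary with the seed, but that coverage-versus-exploitation gap is the point being made above.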
Here's a simple counterexample to what I understand your theorem to say: consider an infinite search space, ℝ^∞, and the utility function u(x) = 1 - |x|. There's a single global optimum at the origin (0, 0, ...), and the gradient of the utility function would find it quickly.
I understand the point you're making, but these gross assumptions aren't how the world works. Reminds me of econ models with ridiculous assumptions that don't pan out when reality is a constraint.
As someone who feels like they're on the outside looking in (I left college to work and still ended up in technology, but without a traditional college education), one of the most frustrating things is watching folks I perceive as traditionally trained CS and similar people go hyper-analytical... and then REALLY commit to the result as the best choice above all others because of whatever analysis they made.
Now granted, there are times to hunker down and commit, but sometimes all that data doesn't really tell you anything and you're still facing an unknown no matter how much work you do, and it might be worth revisiting the decision after taking a few steps down that road and gaining some experience. It's not uncommon to come across a variable (or variables) that plays a far stronger role than any other, only AFTER you've tried doing something.
For hyper-analytical folks, it sometimes seems the data on hand is the hammer for every nail.
Even without unknowns, elevating rationality that way would, followed through to its consequences, just be horrible for everyone. It shouldn't be too hard to see if you follow the reasoning through.
Hard data also suggests how often the allegedly rational result later turned out to be wrong. The rational conclusion should then be to dial down the hubris, shouldn't it? Nope...
I wonder if they recognize that luck plays a significant role. Right place, right time sometimes matters more than anything you can predict or control.
> "I think this is a common thing that very analytical people trip themselves up with. They look at things as if there’s a right answer and a wrong answer when, the truth is, there’s often just good choices, and maybe a great choice in there."
This absolutely drives me crazy in design/engineering decisions. Very commonly there are a lot of good solutions and one great one, and the good ones are good enough. Yet all the brilliant intellectuals want to find the VERY BEST METHOD EVER instead of just getting stuff done.
> Yet all the brilliant intellectuals want to find the VERY BEST METHOD EVER instead of just getting stuff done.
For many mathematical and CS problems, it _does_ help to think very hard to find the very best solution, sometimes irrationally hard. I do agree that we operate in the real world, and the facts of running a business mean that you can't spend all your time trying to figure out the best one.
However, it was only by thinking very, very deeply about these problems that many of the technological improvements have been possible. MapReduce, AI, ML, Cloud Computing... all started as ideas at companies where people dedicated quite a bit of thought to how to solve some basic problems.
I'll be honest: I am glad that I can reap the fruits of the labor of all these smart people, and that they have enabled me to change the way computing is done, to make it easier for anyone to get started and to generate value very quickly, using the building blocks they created after thinking about and working on this for so long.
Do an anesthesiology residency. I love how, when residents with engineering backgrounds as undergrads run up against the immutable fact of 5 minutes of hypoxia = brain death, they quickly abandon their old way of thinking in which finding the optimal solution is paramount in favor of whatever works, however kludgy.
This used to drive me nuts too. Now I just look at it as an opportunity to outmaneuver folks who are too wedded to making their solution ‘perfect.’ (Whatever that means.)
Sheryl Sandberg described a similarly thorough weighing of her decision to join Google.
Mayer: "I had a long analytical evening with a friend of mine where we looked at all the job offers I had received. We created a giant matrix with one row per job offer and one column per value. We compared everything from the basics like cash and stock to where I'd be living, happiness factor, and trajectory factor—all of these different elements. And so we went to work analyzing this problem."
Sandberg: "After a while I had a few offers and I had to make a decision, so what did I do? I am MBA trained, so I made a spreadsheet. I listed my jobs in the columns and my criteria in the rows, and compared the companies and the missions and the roles."
It's a fun bit of trivia that Sandberg put the criteria in the rows, which enables sorting the criteria - a nice way to see the upsides and downsides of each choice.
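For anyone who wants to try the same exercise, a minimal sketch of such a decision matrix is just a weighted sum per offer. The offers, criteria, scores, and weights below are all made up for illustration; they are not from either quote.

```python
# Toy weighted decision matrix: score each (hypothetical) offer against
# each criterion on a 1-10 scale, then combine with criterion weights.
offers = ["Startup A", "BigCo B", "Lab C"]
criteria = {                 # criterion -> (weight, scores per offer)
    "cash":       (0.2, [4, 9, 5]),
    "equity":     (0.2, [9, 3, 2]),
    "location":   (0.1, [6, 8, 7]),
    "happiness":  (0.3, [8, 5, 9]),
    "trajectory": (0.2, [9, 6, 7]),
}

for i, offer in enumerate(offers):
    total = sum(weight * scores[i] for weight, scores in criteria.values())
    print(f"{offer:10s} weighted score = {total:.1f}")
```

Putting criteria in the rows, as Sandberg did, just means each entry of that dict becomes a spreadsheet row you can sort, which makes it easy to eyeball where each offer is strong or weak.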
"I had a long analytical evening with a friend of mine where we looked at all the job offers I had received. We created a giant matrix with one row per job offer and one column per value"
I think this is a luxury problem. How many people have competing job offers that are even close to each other in attractiveness?
Agreed. I think for most people here the risk is indecision rather than the wrong decision. For a lot of people there's a deathly fear of ending up under a bridge, and while that unfortunately does happen to many people, I'd wager most people here have a lot more runway than they think.
It's funny, this is the nugget of wisdom that stood out to me as well. I waste so much time in my daily life trying to make the "perfect" decision, when choosing something good and moving on would be a much better use of my time.
A good plan, violently executed now, is better than a perfect plan next week. - George S. Patton
I think about this quote a lot. It's so easy to get trapped in analysis paralysis, which is really just procrastinating on a decision. Like most things, there is a balance. Notice he says a 'good' plan, not any plan.
But that's an entirely analytical way of thinking about it! You're looking at the decision process and asking if the marginal return on investing another unit of time in it, in terms of the improvement of the goodness of the selected outcome, is greater than the return on using that unit of time in some other way.
The very term "overthinking" implies that there's a right amount of thinking for any decision, so your real problem is working out how much thinking to do.
I've said something similar when mentoring engineering managers about how to let go of certain decision making. 90% of the decisions a team makes will have very little impact on the success of the project, but the other 10% do. You only learn from experience which decisions are the 90% and which decisions are the 10%.
It's how leaders need to operate to survive if they want to avoid micromanaging, honestly.
I'm going to reveal myself as entirely too geeky here, but my primary complaint about this approach is that it relies on a linear scale for evaluating utility, when much research suggests that utility curves are frequently logarithmic. (Example: Going from earning $20k to $100k per year is a huge difference with substantial implications for financial security, but $1m to $1.08m has a substantially lower impact.)
One could argue this article looks at a restricted range where the log behaves more linearly, but if we're going to apply mathematical modeling to our life choices, ... :-)
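A quick worked version of that example, assuming the simple log-utility model u(income) = ln(income) (just one common modeling choice, not anything from the article): the same $80k increment buys very different amounts of utility at the two income levels.

```python
import math

# Utility gain under u(income) = ln(income): the gain depends on the
# ratio of new to old income, not on the absolute dollar difference.
def log_utility_gain(old, new):
    return math.log(new) - math.log(old)

print("20k  -> 100k :", round(log_utility_gain(20_000, 100_000), 3))      # ~1.609
print("1.0m -> 1.08m:", round(log_utility_gain(1_000_000, 1_080_000), 3)) # ~0.077
```

Both jumps are $80k in absolute terms, but the first multiplies income by 5 while the second multiplies it by 1.08, so the log-utility gain is about twenty times larger in the first case.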
Yes, I agree - it was nice to hear her articulate that. I have learnt the exact same thing in my two decades as an adult: in the end it doesn't really matter what decision you make; it's how you make it work (and you do have to work at it).
Seems like one of those in-group type biases, where the more similar the options are, the more we obsess over the differences. Presumably because we can relate to a lot more of the information.
Somewhat ironically, being irrational can actually be a good way to decide between unknown but largely equal options, because at least you picked something with conviction rather than having analyzed the situation incorrectly.
Of course, for a lot of us the good choices aren't the problem so much as the downside. I remember someone made an online calculator for how many times one would most likely see their parents before they died.
"I realized that, while I had a very deep understanding of artificial intelligence, I did not yet have some of the basics down. I knew how a database worked. I knew how an operating system worked. I knew how a compiler worked. But I hadn't taken classes on those topics, so I went back for my master's and took the rest of the AI offerings as well as a lot of programming basics. That way, I could actually go and market myself as a software engineer and say, “I've written a compiler. I've written an operating system. I've written a database. I know how they work"
I am confused. Is she talking about foundational CS courses like OS and database systems, or AI courses?
Keep in mind that she is a queen of self-promotion. When she was talking about 140-hour work weeks she conveniently left out the fact that she was paying someone else to do her domestic work [0]. Or that most of her days involved meetings, lunches, and dinners.
I am reminded of the story of Henry IV, who stood barefoot in the snow for three days and, through the grace of God, did not get frostbite. We have come so far when we no longer believe you need God for acts like this.
>When she was talking about 140 hour work weeks she conveniently left out the fact she was paying someone else to do her domestic work [0]
This is such an odd post. Who expects someone working 140-hour weeks to do all of their own housework? Sure, they're working 20 hours a day, but they order Chinese takeout and drop off their laundry!
Then, you link to child care as an example of "her domestic work"? Who complains about someone leaving their kid in daycare?
Got to be honest, if someone told me they worked 80 hour workweeks, I'd assume they paid someone to do their housework. That's just comparative advantage at play. In fact, I wouldn't expect them to even bring up their housework. I honestly don't think that's dishonest.
As I understand it, foundational CS classes are not a requirement for that degree. Although I do know people who majored in Symbolic Systems and completed such courses in undergrad, I assume they were electives rather than requirements.
A university here has a cognitive science degree. Many people do it and then a master's in CS to get at least a bit more practical with all the AI stuff they learned.
I did a CS undergrad and skipped compilers, OS, DBs, and many others because I just took as many cross-listed math/CS electives as possible (at least a theorem- or math-heavy course like automata if I couldn't do better), then the minimum CS requirements to graduate.
> I've written a compiler. I've written an operating system. I've written a database.
Yeah... I did both a Bachelor's and Master's degree in CS, and I've never written a complete working compiler, OS or database - I've written and tested "toy" versions of such, but that claim seems to be a bit hyperbolic. Maybe she did do all of those things, but none of those were coursework.
Considering how Mayer absolutely cratered Yahoo I'd take her advice with a grain of salt. Quarterly operating profit dropped by more than 50% during her tenure and she was the driving force behind the acquisition of dozens of worthless companies leading to the write-off of billions of dollars in goodwill value.
"When management with a reputation for brilliance tackles a business with a reputation for bad economics, it is the reputation of the business that remains intact." -- Warren Buffett.
Did Marissa Mayer have any accomplishments at either Google or Yahoo? It looks like she just /was/ there, but she didn't make any significant decisions which can be undoubtedly attributed to her.
"Our final question: Why isn't everyone happy all the time?
I don't know. Overall, I'm a pretty happy person, and one of my theories in life is that people fundamentally want to be happy. So, if you ever find a moment when you aren't happy, you should just wait. Something is likely to change in the scenario. Someone else will change what they're doing, or you'll get motivated to change what you're doing, so the situation will change overall for the better."
Sort of in a slump and I don't have anything smart to say but this made me feel a little better. I feel like I have far more ability than my company utilizes but I cannot quit because I need this job. I don't have the balls to start a company because I don't have a great idea. I just write code. So I will wait. Something will give eventually.
You will wait forever, dude. Don't take advice from someone who won the lottery. Try taking action if you can, and put yourself in areas (or companies) where the chance to encounter good ideas and possibly build upon them yourself is greater. It is incredibly different to work with motivated people who love what they do versus folks who just clock in and clock out.
I would say being patient is what she is talking about here, but are there other jobs that would be more fulfilling for you? Happy to chat and help. It gets better!
Was it really necessary to put in the phrase - "..bought with my babysitting money"? Does she mean her job as a babysitter or is it just an adjective? How much did babysitting pay that you could afford a computer with that stuff?
AOL had ad inventory and Google had to get enough eyeballs?
There's no such thing.
It is also a very good example of exactly what the previous post refers to: overthinking stuff.
Well done.
And this is already a stereotype for softies...
https://putanumonit.com/2017/03/12/goddess-spreadsheet/
[0] https://www.businessinsider.com.au/marissa-mayer-who-just-ba...
Someone asked a technical question, and you managed to turn the topic into accusations against Marissa Mayer (and ones that at least invoke sexism).
There's a time and a place for criticism, but it's not every time someone asks a question about her.
HER domestic work? There is absolutely no reason why she should or would mention this other than your sexist expectations.
Do you expect male CEOs to mention they pay someone to do their laundry and are abdicating their "domestic responsibilities"? Seriously?!
In her master's she got to cover those off. I think her majoring in Symbolic Systems rather than CS meant she missed out on compilers, DBs, etc.
Hope that helps.