I initially thought that this was an announcement for a new pledge and thought, "they're going to forget about this the moment it's convenient." Then I read the article and realized, "Oh, it's already convenient."
Google is a megacorp, and while megacorps aren't fundamentally "evil" (for some definitions of evil), they are fundamentally unconcerned with goodness or morality, and any appearance that they are is purely a marketing exercise.
> while megacorps aren't fundamentally "evil" (for some definitions of evil),
I think megacorps being evil is universal. It tends to be corrupt cop evil vs serial killer evil, but being willing to do anything for money has historically been categorized as evil behavior.
That doesn’t mean society would be better or worse off without them, but it would be interesting to see a world where companies pay vastly higher taxes as they grow.
You're talking about pre-Clinton consumerism. That system is dead. It used to dictate that the company who could offer the best value deserved to take over most of the market.
That's old thinking. Now we have servitization. Now the business who can most efficiently offer value deserves the entire market.
Basically, iterate until you're the only one left standing and then never "sell" anything but licenses ever again.
Historically, unchecked corporate power tends to mirror the flaws of the systems that enable it. For example, the Gilded Age robber barons exploited weak regulations, while tech giants thrive on data privacy gray areas. Maybe the problem isn’t size itself, but the lack of guardrails that scale with corporate influence (e.g., antitrust enforcement, environmental accountability, or worker protections), but what do I know!
I guess corrupt cop vs serial killer is like amorality (profit-driven systems) vs immorality (active malice)? A company is a mix of stakeholders, some of whom push for ethical practices. But when shareholders demand endless growth, even well-intentioned actors get squeezed.
Agreed, I think part of it boils down to the concept of 'limited liability' itself which is a euphemism for 'the right to carry out some degree of evil without consequence.'
Also, scale plays a significant part as well. Any high-exposure organization which operates on a global scale has access to an extremely large pool of candidates to staff its offices... And such candidate pools necessarily include a large number of any given persona... Including large numbers of ethically-challenged individuals and criminals. Without an interview process which actively selects for 'ethics', the ethically-challenged and criminal individuals have a significant upper hand in getting hired and then later wedging themselves into positions of power within the company.
Criminals and ethically-challenged individuals have a bigger risk appetite than honest people so they are more likely to succeed within a corporate hierarchy which is founded on 'positive thinking' and 'turning a blind eye'. On a global corporate playing field, there is a huge amount of money to be made in hiding and explaining away irregularities.
A corporate employee can do something fraudulent and then hold onto their job while securing higher pay, simply by signaling to their employer that they will accept responsibility if the scheme is exposed; the corporate employer is happy to maintain this arrangement and feign ignorance while extracting profits so long as the scheme is kept under wraps... Then, if the scheme is exposed, the corporation will swiftly throw the employee under the bus in accordance with the 'unspoken agreement'.
The corporate structure is extremely effective at deflecting and dissipating liability away from itself (and especially its shareholders) and onto citizens/taxpayers, governments and employees (as a last layer of defense). The shareholder who benefits the most from the activities of the corporation is fully insulated from the crimes of the corporation. The scapegoats are lined up, sandwiched between layers of plausible deniability in such a way that the shareholder at the end of the line can always claim complete ignorance and innocence.
Most suggestions of this nature fail to explain how they will deal with the problem of people just seeing there’s no point in trying for more. On a personal level, I’ve heard people from Norway describe this problem for personal income tax—at some point (notably below a typical US senior software engineer’s earnings) the amount of work you need to put in for the marginal post-tax krone is so high it’s just demotivating, and you either coast or emigrate. Perhaps that’s not entirely undesirable, but I don’t know if people have contemplated the consequences of the existence of such a de facto ceiling seriously.
As scale grows, so does the moral ambiguity. Megacorps default to "evil" because, across a large number of circumstances and a large number of events, some of their actions inevitably will be, particularly when economic factors are motivating behavior (implicitly or explicitly). Essentially, being "non-evil" becomes more expensive than the value it adds. There is always someone on the other end of a transaction, by definition.
Right! I was going to say something like that. Google is in all honesty, corrupt. Then again, most big corporations are this way. Google and Microsoft seem to be a bit more than others, though.
My problem with this take is that you forget that corporations are made up of people, so in order for the corporation to be evil you have to take into account the aggregate desires and decision-making of the employees and shareholders and, frankly, call them all evil. Calling them evil is kind of a silly thing to do anyway, but you cannot divorce the actions of a company from those who run and support it, and I would argue you can't divorce those actions from those who buy the products the company puts out either.
So in effect you have to call the employees and shareholders evil. Well those are the same people who also work and hold public office from time to time, or are shareholders, or whatever. You can't limit this "evilness" to just an abstract corporation. Not only is it not true, you are setting up your "problem" so that it can't be addressed because you're only moralizing over the abstract corporation and not the physical manifestation of the corporation either. What do you do about the abstract corporation being evil if not taking action in the physical world against the physical people who work at and run the corporation and those who buy its products?
I've noticed similar behavior with respect to climate change advocacy and really just "government" in general. If you can't take personal responsibility, or even try to change your own habits, volunteer, work toward public office, organize, etc. it's less than useless to rail about these entities that many claim are immoral or need reform if you are not personally going to get up and do something about it. Instead you (not you specifically) just complain on the Internet or to friends and family, those complaints do nothing, and you feel good about your complaining so you don't feel like you need to actually do anything to make change. This is very unproductive because you have made yourself feel good about the problem but haven't actually done anything.
With all that being said, I'm not sure how paying vastly higher taxes would make Google (or any other company) less evil or more evil. What if Google pays more taxes and that tax money does (insert really bad thing you don't like)? Paying taxes isn't like a moral good or moral bad thing.
> while megacorps aren't fundamentally "evil" (for some definitions of evil)
A couple of years ago, my state banned single-use plastic bags. The very moment they did, all of my local Walmarts switched to heavier plastic bags that technically weren't single-use. They still gave them away for free, just as they did with the first ones. (These were good-quality bags, and I was frustrated that Walmart didn't just give them away by default.) Eventually my state banned those too, and like clockwork, Walmart was giving away paper bags, decent-quality ones, too. Though I still really liked the thicker plastic ones, since I could use them for other things.
This made me realize that no corporation would do anything slightly better for the environment unless forced. I think this is the case for anything a corporation would do, including evil things. I think they just follow the money, no ethics, and it's up to the government to provide those ethics.
What is Googs going to do, leave money on the table?
And if Googs doesn't do it, someone else will, so it might as well be them that makes money for their shareholders. Technically, couldn't activist shareholders come together and claim that by not going after this market, the leadership should be replaced with those who would? After all, share price is the only metric that matters.
What you are saying is: optimising for commercial success is incompatible with morality. The conclusion is that publicly traded megacorps must inevitably trend towards amorality.
So yes, they aren't "evil" but I think amorality is the closest thing to "evil" that actually exists in the real world.
I don't buy that argument. There are things Google does better than competitors, so them doing an evil thing means they are doing it better. Also, they could be spending those resources on something less evil.
A megacorp is made up of people. So it's people who are fundamentally evil.
The main thing here I think is anonymity through numbers and complexity. You and thousands of others just want to see the numbers go up. And that desire is what ultimately influences decisions like this.
If google stock dropped because of this then google wouldn't do it. But it is the actions of humans in aggregate that keeps it up.
Megacorporations are scapegoats when in actuality they are just a set of democratic rules. The corporation is just a window into the true nature of humanity.
You're half right. Corporations are just made of people. But, they're more than the sum of their parts. The numbers and complexity do more than provide anonymity: they provide a mechanism where individuals can work in concert to accomplish bad things in the aggregate, without (necessarily) requiring any particular individual to violate their conscience. It just happens through the power of incentives and specialization. If you're in upper management, the complexity also makes it easier to turn a blind eye to what is happening down below.
>A megacorp is made up of people. So it's people who are fundamentally evil.
That is to make a mistake of composition. An entity can have properties that none of its parts have: a ball made out of bricks can be round, but none of the bricks are round. You might be evil, but your cells aren't evil.
It's often the case that institutions are out of alignment with their members. It can even be the case that all participants in an organization are evil, but the system still functions well (this is usually one of the arguments for markets, which are one such system). When creating an organization, that is effectively the most basic task: how to structure it such that even when its individual members are up to no good, the functioning of the organization is improved.
Not a useful framing in my view. People follow private incentives. Private incentives are by default not perfectly aligned with external stakeholders. That leads to "evil" behavior. But it's not the people or the org, it's the incentives. You can substitute other people into the same system and get the same outcome.
> they are fundamentally unconcerned with goodness or morality
No, no. Call a spade a spade. This behavior and attitude is evil. Corporations under modern American capitalism must be evil. That's how capitalism works.
You succeed in capitalism not by building a better mousetrap, but by destroying anyone who builds a better mousetrap than you. You litigate, acquire, bribe, and rewrite legislation to ensure yours is the best and only mousetrap available to purchase, with a token 'competitor' kept on life support so you can plausibly deny anticompetitive practices.
If you're a good company trying to do good things, you simply can't compete. The market just does not value what is good, just, or beneficial. The market only wants the number to go up, and to go up right now at any cost. Amazon will start pumping out direct clones of your product for pennies. What are you gonna do, sue Amazon? Best of luck.
"The market" is just a lot of people making decisions about what to do with their money. If you want the market to behave differently, be the change you want to see, and teach others to do the same.
A paperclip-maximizing robot making the excuse that it's just maximizing paperclips, that's what it was designed to do; there's even a statute saying that robots must do only what they were designed to do, so it's not evil, just amoral.
Weird thing is for corporations, it's humans running the whole thing.
> they’re amoral and are designed to maximize profits
Isn't that a contradiction? Morality is fundamentally a sense of "right and wrong". If they reward anything that maximizes short term profit and punish anything that works against it then it appears to me that they have a simple, but clearly defined sense of morality centered around profit.
> they are fundamentally unconcerned with goodness or morality,
I would argue that is fundamentally evil, because evil pays the best. It's like drunk driving: on an empty road it can only harm you, but we live in a society full of other people.
Ethical pledges from corporations, especially ones as large as Google, are PR tools first and foremost. They last only as long as they align with strategic and financial interests.
I guess a question becomes, how does dropping these self-imposed limitations work as a marketing exercise? Probably most of their customers or prospective customers won't care, but will a cheery multi-colored new product land a little differently? If Northrop Grumman made a smart home hub, you might be reluctant to put it in your living room.
They are dropping these pledges to avoid securities lawsuits. “Everything is securities fraud” and presumably if they have a stated corporate pledge to do something, and knowingly violate it, any drop in the stock price could use this as grounds.
Being a defense contractor isn't a problem that a little corporate rearrangement can't fix. Put the consumer division under a new subsidiary with a friendly name and you're golden. Even among the small percentage who know the link, it's likely nobody will really care. For certain markets ("tacticool" gear, consumer firearms) being a defense contractor is even a bonus.
A megacorp is amoral. It has no concern for an individual, any more than a human has concern for an ant, because individuals simply don't register to it. The ant may regard the human as pure evil for the destruction it rains upon its colony, but the ants are not even a thought in the human's mind most of the time.
"We won't use your dollars and efforts for bad and destructive activities, until we accumulate enough of your dollars and efforts that we no longer care about your opinions".
> they are fundamentally unconcerned with goodness or morality, and any appearance that they are is purely a marketing exercise
This is flatly untrue. Corporations are made up of humans who make decisions. They are indeed concerned with goodness and/or morality. Saying otherwise lets them off the hook for the explicit decisions they make every day about how to operate their company. It's one reason why there are shareholder meetings, proxy votes, activist investors, Certified B-Corporations, etc.
Google is a special case because they specifically removed the "Don't Be Evil" clause, therefore, I can only assume they are in fact fundamentally "evil"
Not evil, perhaps, but run by Moloch[1] -- which is possibly just as bad. Their incentives are set up to throw virtually all human values under the bus because even if they don't, they will be out-marginal-profited by someone that does.
Well, the US gov blew away its opportunity to break down Google and other mega-corps and restore any sense of decency. Google just entered the Trump bandwagon, which means the monopoly lawsuit will go nowhere, and in exchange Google will do Trump's bidding.
This is a very important point to remember when assessing ideas like "Is it good to build swarms of murderbots to mow down rioting peasants angry over having expenses but no jobs?" Most people might answer "no," but if the people with money answer "yes," that becomes the market's objective. Then the incentives diffuse through the economy and you don't just get the murderbots, you also get the news stations explaining how the violent peasants brought this on themselves and the politicians making murderbots tax deductible and so on.
It is partially the market's fault. If they were demonized for this, there'd at least be a veneer of trying to look moral. Instead they can simply go full mask-off. That's why you shouldn't tolerate the intolerant.
I have full faith that the market[1] will direct the trolley onto the morally optimal track. Its invisible hand will guide mine when I decide for or against pulling the lever. Either way, I can be sure that the result is maximally beneficial to the participants, myself included.
Being unconcerned with goodness and morality is literally the definition of evil. Megacorps are sociopathic and evil by design. The only thing that matters is shareholder value, not ethics or morals. Morals and ethics only seem to have value if they result in increased value for the shareholder, which again is the only thing that these sociopathic entities are concerned with.
This, but broader. Goodness and morality are subjective and, more importantly, relative measures, making them useless in many situations (such as this one).
While knowing this seems useless, it's actually the missing intrinsic compass, and the cause of a lot of bad and stupid behavior (by the definition that something is stupid if it is chosen knowing it will cause negative consequences for the doer).
Everything should primarily be measured based on its primary goal. For "for-profit" companies that's obvious in their name and definition.
That there's nothing that should be assumed beyond what's stated is the premise of any contract whether commercial, public or personal (like friendship) is a basic tool for debate and decision making.
I want to be upset over this in an exasperated, oddly naive "why can't we all get along?" frame of mind. I want to, because I know how I would like the world to look, but as a species we, including myself, continually fail to disappoint when it comes to nearly guaranteed self-destruction.
I want to get upset over it, but I sadly recognize the reality of why this is not surprising to anyone. We actually have competitors in that space who will do that and more. We have already seen some of the more horrifying developments in that area... and, when you think about it, those are the things that were allowed to be shown publicly. All the fun stuff is happening behind closed doors, away from social media.
A vague “stuff is happening behind closed doors” isn’t enough of a reason to build AI weapons. If you shared a specific weapon that could only be countered with AI weapons, that might make me feel differently. But right now I can’t imagine a reason we’d need or want robots to decide who to kill.
When people talk about AI being dangerous, or possibly bringing about the end of the world, I usually disagree. But AI weapons are obviously dangerous, and could easily get out of control. Their whole point is that they are out of control.
The issue isn’t that AI weapons are “evil”. It’s that value alignment isn’t a solved problem, and AI weapons could kill people we wouldn’t want them to kill.
Have a look at what explosive drones are doing in the fight for Ukraine.
Now tell me how you counter a thousand small EMP-hardened autonomous drones intent on delivering an explosive payload to one target, without AI of some kind.
How about 30k drones coming from a shipping vessel in the port of Los Angeles that start shooting at random people? Inserting a human into the loop (somehow rapidly waking up, moving, and logging in hundreds of people to make the kill/no-kill decision per target) would mean accepting way more casualties.
What if some of the 30k drones were manned?
The timeframes of battles are drastically reduced with the latest technology to where humans just can't keep up.
I guess there's a lot missing in semantics, is the AI specifically for targeting or is a drone that can adapt to changes in wind speed using AI considered an AI weapon?
At the end of the day though, the biggest use of AI in defense will always be information gathering and processing.
I agree. I don't think there's really a case for the US developing any offensive weapons. Geographically, economically and politically, we are not under any sort of credible threat. Maybe AI based missile defense or something, but we already have a completely unjustified arsenal of offensive weapons and a history of using them amorally.
> AI weapons are obviously dangerous, and could easily get out of control.
The real danger is when they can't. When they, without hesitation or remorse, kill one or millions of people with maximum efficiency, or "just" exist with that capability, to threaten them with such a fate. Unlike nuclear weapons, in case of a stalemate between superpowers they can also be turned inwards.
Using AI for defensive weapons is one thing, and maybe some of those would have to shoot explosives at other things to defend; but just going with "eh, we need to have the ALL possible offensive capability to defend against ANY possible offensive capability" is not credible to me.
The threat scenario is supposed to be masses of enemy automated weapons, not huddled masses; so why isn't the objective to develop weapons that are really good at fighting automated weapons but literally can't/won't kill humans, because that would remain something only human soldiers do? Quite the elephant on the couch IMO.
People try to cope and say others are guided by lies. In the US, people knew exactly what they were getting, and I'm sure the same is true in other "democracies".
The path we're on was inevitable the second man discovered fire.
No matter which way you look at it, we live on a planet where resources are scarce. Which means there will be competition. Which means there will be innovation in weaponry.
That said, we've had nukes for decades, and have collectively decided to not use them for decades. So there is some room for optimism.
It took the use of poison gas to get countries on board, and some will still use it. Just more carefully.
Would China, Russia, or Iran agree to such a preemptive AI weapons ban? Doubtful, it’s their chance to close the gap. I’m onboard if so, but I don’t see anything happening on that front until well after they start dominating the landscape.
Military robots like drones are ironic because they are created essentially to force humans to work like robots in an industrialized social order. Why not just create industrial robots to do the work instead?
Nuclear weapons are ironic because they are about using space age systems to fight over oil and land. Why not just use advanced materials as found in nuclear missiles to make renewable energy sources (like windmills or solar panels) to replace oil, or why not use rocketry to move into space by building space habitats for more land?
Biological weapons like genetically-engineered plagues are ironic because they are about using advanced life-altering biotechnology to fight over which old-fashioned humans get to occupy the planet. Why not just use advanced biotech to let people pick their skin color, or to create living arkologies and agricultural abundance for everyone everywhere?
These militaristic socio-economic ironies would be hilarious if they were not so deadly serious. ...
Likewise, even United States three-letter agencies like the NSA and the CIA, as well as their foreign counterparts, are becoming ironic institutions in many ways. Despite probably having more computing power per square foot than any other place in the world, they seem not to have thought much about the implications of all that computer power and organized information to transform the world into a place of abundance for all. Cheap computing makes possible just about cheap everything else, as does the ability to make better designs through shared computing. ...
There is a fundamental mismatch between 21st century reality and 20th century security thinking. Those "security" agencies are using those tools of abundance, cooperation, and sharing mainly from a mindset of scarcity, competition, and secrecy. Given the power of 21st century technology as an amplifier (including as weapons of mass destruction), a scarcity-based approach to using such technology ultimately is just making us all insecure. Such powerful technologies of abundance, designed, organized, and used from a mindset of scarcity could well ironically doom us all whether through military robots, nukes, plagues, propaganda, or whatever else... Or alternatively, as Bucky Fuller and others have suggested, we could use such technologies to build a world that is abundant and secure for all. ...
The big problem is that all these new war machines and the surrounding infrastructure are created with the tools of abundance. The irony is that these tools of abundance are being wielded by people still obsessed with fighting over scarcity. So, the scarcity-based political mindset driving the military uses the technologies of abundance to create artificial scarcity. That is a tremendously deep irony that remains so far unappreciated by the mainstream.
We the people need to redefine security in a sustainable and resilient way. Much current US military doctrine is based around unilateral security ("I'm safe because you are nervous") and extrinsic security ("I'm safe despite long supply lines because I have a bunch of soldiers to defend them"), which both lead to expensive arms races. We need as a society to move to other paradigms like Morton Deutsch's mutual security ("We're all looking out for each other's safety") and Amory Lovins's intrinsic security ("Our redundant, decentralized local systems can take a lot of pounding, whether from storm, earthquake, or bombs, and would still keep working"). ...
Still, we must accept that there is nothing wrong with wanting some security. The issue is how we go about it in a non-ironic way that works for everyone. ...
-----
Here is something I posted to the Project Virgle mailing list in April 2008 that in part touches on the issue of Google's identity as a scarcity vs. post-scarcity organization:
"A Rant On Financial Obesity and an Ironic Disclosure"
https://pdfernhout.net/a-rant-on-financial-obesity-and-Proje...
"Look at Project Virgle and "An Open Source Planet" ... Even just in jest some of the most financially obese people on the planet (who have built their company with thousands of servers all running GNU/Linux free software) apparently could not see any other possibility but seriously becoming even more financially obese off the free work of others on another planet (as well as saddling others with financial obesity too :-). And that jest came almost half a century after the "Triple Revolution" letter of 1964 about the growing disconnect between effort and productivity (or work and financial fitness)...Even not having completed their PhDs, the top Google-ites may well take many more decades to shake off that ideological discipline. I know it took me decades (and I am still only part way there. :-) As with my mother, no doubt Googlers have lived through periods of scarcity of money relative to their needs to survive or be independent scholars or effective agents of change. Is it any wonder they probably think being financially obese is a good thing, not an indication of either personal or societal pathology? :-( ..."
Last April, inspired by some activities a friend was doing, I asked an LLM AI (chatpdf) to write a song about my sig, using the prompt 'Please make a song about "The biggest challenge of the 21st century is the irony of technologies of abundance in the hands of those still thinking in terms of scarcity."'. Then that friend made the results into an AI-generated song:
"Challenge to Abundance"
https://suno.com/song/d3d8c296-c2c4-46c6-80fb-ca9882c5e00a
"(Verse 1) In the 21st century, we face a paradox so clear,
Technologies of abundance, yet scarcity we fear,
Irony in our hands, what will we choose to see,
A world of endless possibilities or stuck in scarcity?
(Chorus) The biggest challenge we face, it's plain to see,
Embracing abundance or stuck in scarcity,
Let's break free from old ways, embrace what could be,
The irony of our times, let's set our minds free. ..."
I hope Googlers and others eventually get the perspective shift that comes with recognizing the irony of what they and many others are doing by weaponizing and otherwise competitizing AI...
Also on that larger theme by Alfie Kohn:
"No Contest: The Case Against Competition"
https://www.alfiekohn.org/contest/
"No Contest, which has been stirring up controversy since its publication in 1986, stands as the definitive critique of competition. Drawing from hundreds of studies, Alfie Kohn eloquently argues that our struggle to defeat each other — at work, at school, at play, and at home — turns all of us into losers. Contrary to the myths with which we have been raised, Kohn shows that competition is not an inevitable part of “human nature.” It does not motivate us to do our best (in fact, the reason our workplaces and schools are in trouble is that they value competitiveness instead of excellence.) Rather than building character, competition sabotages self-esteem and ruins relationships. It even warps recreation by turning the playing field into a battlefield. No Contest makes a powerful case that “healthy competition” is a contradiction in terms. Because any win/lose arrangement is undesirable, we will have to restructure our institutions for the benefit of ourselves, our children, and our society. ..."
Most of the early research into computers was funded for military applications. There is a reason why Silicon Valley became a hub for technological development.
Is this more or less ethical than OpenAI getting a DoD contract to deploy models on the battlefield less than a year after saying that would never happen, with the excuse being well we only meant certain kinds of warfare or military purposes, obviously. I guess my question is, isn't there something more honest about an open heel-turn, like Google has made, compared to one where you maintain the fiction that you're still trying to do the right thing?
I think it's unfair to bring up OpenAI's commitment to its own principles as any sort of bar of success for anyone else. That's a bit like saying "Yes, this does look like they're yielding to foreign tyrants, but is this more or less ethical than Vidkun Quisling's tenure as head of Norway?"
At least Google employees will sign petitions and do things that follow a moral code.
OpenAI is sneaky and slimy, and headed by a psycho narcissist. Makes Pichai look like a saint.
Ethically, it’s the same. But if someone was pointing a gun at me I’d rather have someone with some empathy behind the trigger rather than the personification of a company that bleeds high level execs and… insert many problems here
> At least Google employees will sign petitions and do things that follow a moral code.
It hardly matters what employees think anymore when the executives are weather-vanes who point in the direction of wealth and power over all else (just like the executives at their competitors).
In case you missed it, a few days back Google asked all employees who don't believe in their "mission" to voluntarily resign.
One of my chief worries about LLMs for intelligence agencies is the ability to scale textual analysis. Previously there at least had to be an agent taking an interest in you; today an LLM could theoretically read all text you've ever touched and flag anything from legal violations to political sentiments.
This was already possible long before LLMs came along. I also doubt that an LLM is the best tool for this at scale; if you're talking about sifting through billions of messages, it gets too expensive very fast.
It's only expensive if you throw all data directly at the largest models that you have. But the usual way to apply LMs to such large amounts of data is by staggering them: you have very small & fast classifiers operating first to weed out anything vaguely suspicious (and you train them to be aggressive - false positives are okay, false negatives are not). Things that get through get reviewed by a more advanced model. Repeat the loop as many times as needed for best throughput.
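The staggered approach described above can be sketched in a few lines of Python. Everything here is illustrative: the classifier functions, keywords, and thresholds are invented stand-ins for real models, not an actual pipeline.

```python
def cheap_keyword_filter(msg: str) -> float:
    """Stage 1: near-free heuristic; returns a rough suspicion score."""
    keywords = {"password", "transfer", "protest"}  # placeholder terms
    hits = sum(1 for w in msg.lower().split() if w in keywords)
    return min(1.0, hits / 2)

def small_model_score(msg: str) -> float:
    """Stage 2: stand-in for a small, fast classifier model."""
    return 0.9 if "transfer" in msg.lower() else 0.1  # toy stand-in

def large_model_review(msg: str) -> bool:
    """Stage 3: stand-in for an expensive LLM pass; rarely invoked."""
    return "password" in msg.lower() and "transfer" in msg.lower()

def cascade(messages, t1=0.3, t2=0.5):
    # Early-stage thresholds are deliberately lenient: false positives are
    # cheap (the next stage filters them), false negatives are unrecoverable.
    flagged = []
    for msg in messages:
        if cheap_keyword_filter(msg) < t1:
            continue  # the vast majority of traffic stops here
        if small_model_score(msg) < t2:
            continue
        if large_model_review(msg):
            flagged.append(msg)
    return flagged

msgs = ["lunch at noon?", "send the transfer", "password for the transfer"]
print(cascade(msgs))  # only the last message survives all three stages
```

The point of the design is that the expensive model only ever sees the tiny fraction of traffic that the cheap stages let through, which is what makes billion-message scale affordable.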
No, OP is right. We are truly at the dystopian point where a sufficiently rich government can track the loyalty of its citizens in real time by monitoring all electronic communications.
Also, "expensive" is relative. When you consider how much US has historically been willing to spend on such things...
LLMs can do more than whatever we had before. Sentiment analysis and keyword searches only worked so well; LLMs understand meaning and intent. Cost and scale are not bottlenecks for long.
But now, instead of a human going "yes yes, after a few hours of work I have chosen the target," they can go "we did more processing on who to best blow away, and it chose 100 more names than any human ever could! Efficiency!"
I feel like we’re just in that period of Downton Abbey where everyone is waiting for World War I to start. Everyone can feel that it’s coming and no one can do anything about it.
The reality is, in a war between the West and Russia/Iran/North Korea/China (whomever we end up fighting), we’re going to do whatever we can so that Western civilization and its soldiers survive and win.
Ultimately, Google is a Western company, and if war breaks out, not supporting our civilization/military would be wildly unpopular and would turn them into a pariah; anything to the contrary was never going to happen.
The reason war may be coming is because the West is falling apart. The US is isolating itself and bullying its allies. Alternative powers wanting to do something expansive never had a better moment in time to do so.
There was no war forthcoming between an integrated West and any other power. War is coming because there no longer is a West.
The reasons are not the main focus here. The fact is that China's aggressive stance on Taiwan, Russia's invasion of Ukraine, and Russia's alignment with China, North Korea, and Iran are leading to military buildups and alliances worldwide. Google, being a company founded and controlled by Americans, is likely to support the effort if a war occurs, rather than remain passive while their friends' and families' children are dying.
Today people have differing views of nuclear weapons, but people who fought near Japan and survived believe the bomb saved their lives.
It's easy to pretend you don't have a side when there is peace, but in this environment Google is going to take a side.
So... when the Russian tanks start rolling toward Berlin and Chinese troops are marching along that nice new (old) road they finished fixing up on their way to Europe (if that happens, which looks possible), you think there will be no West?
If the world is to be divided Europe is the lowest hanging and sweetest fruit.
I think there will still be a West even if there is a King in the US demanding fealty to part of it. We are the same as they are; it's ridiculous to pretend otherwise.
Ideology is one thing, survival of people and culture is another.
There is no such thing as "our" or "their" civilization. We have only one.
Maybe such a concept still had some grounding a few centuries ago, but by now the idea that "we" are significantly different from "them" is a dangerous fantasy for most people.
A country that now threatens the annexation of Greenland and advocates for a complete resettlement of all Palestinians to Jordan and Egypt certainly needs weapons for crowd control.
These weapons could also come in handy domestically if people find out that both parties screw them all the time.
I wonder why people claim that China is a threat outside of economics. Has China tried to invade the US? Has Russia tried to invade the EU? The answer is no. The only current threats to the EU come from the orange man.
The same person who also revoked the INF treaty. The US now installs intermediate range nuclear missiles in Europe. Russia does so in Belarus.
So both great powers have convenient whipping boys to be nuked first, after which they will get second thoughts.
It is beyond ridiculous that both the US and Russia constantly claim that they are in danger, when all international crises in the last 40 years have been started by one of them.
"Russia hasn't tried to invade the EU" is quite weasel-word-y. They certainly have invaded countries in Europe, specifically Ukraine; the only reason they didn't invade countries in the European Union itself is that would trigger a war that they would face massive casualties from and inevitably lose, in part due to NATO alliances.
Military power is what has kept the EU safe, and countries without strong enough military power — such as Ukraine, which naively gave up its nuclear arsenal in the 90s in exchange for Russian promises to not invade — are repeatedly battered by the power-hungry.
Isn’t China building a large modern sea fleet and increasing military pressure on many of our allies? I would not call that threat illusory. Also, their economic policies are very predatory: they support other countries in exchange for things which cannot be taken back. Why invade when you can just take what you need?
The orange man is completely ineffectual on both fronts. Will not spend the money on the military and too inept to make a deal that doesn’t cost in the long run.
It is interesting how these companies shift with the political winds
Just like Meta announced some changes around the time of the inauguration, I'm sure Google management has noticed the AI announcements, and they don't want to be perceived in a certain way by the current administration
I think the truth is more in the middle (there is tons of disagreement within the company), but they naturally care about how they are perceived by those in power
I would say it's natural. Their one and only incentive isn't, as they try to tell you, to "make the world a better place" or some similar awkward corpo charade, but to make a profit. That's the purpose for which companies are created, and they always follow it.
Sure, but I'd also say that the employee base has a line that is different than the government's, and that does matter for making profit. Creative and independent employees generally produce more than ones who are just following what the boss says
Actually, this reminds me of when Paul Graham came to Google, around 2005. Before that, I had read an essay or two, and thought he was kind of a blowhard.
But I actually thought he was a great speaker in person, and that lecture changed my opinion. He was talking about "Don't Be Evil", and he also said something very charming about how "Don't Be Evil" is conditional upon having the luxury to live up to that, which is true.
That applies to both companies and people:
- If Google wasn't a money-printing machine in 2005, then "don't be evil" would have been less appealing. And now in 2020, 2021, .... 2025, we can see that Google clearly thinks about its quarterly earnings in a way that it didn't in 2005, so "don't be evil" is too constraining, and was discarded.
- For individuals, we may not pay much attention to "don't be evil" early in our careers. But it is more appealing when you're more established, and have had a couple decades to reflect on what you did with your time!
I see it as the natural extension of the Chomsky "manufacturing consent" propaganda model. The people in key positions of power and authority know who their masters are, and everyone below them falls into line.
Google is a megacorp, and while megacorps aren't fundamentally "evil" (for some definitions of evil), they are fundamentally unconcerned with goodness or morality, and any appearance that they are is purely a marketing exercise.
I think megacorps being evil is universal. It tends to be corrupt cop evil vs serial killer evil, but being willing to do anything for money has historically been categorized as evil behavior.
That doesn’t mean society would be better or worse off without them, but it would be interesting to see a world where companies pay vastly higher taxes as they grow.
That's old thinking. Now we have servitization. Now the business who can most efficiently offer value deserves the entire market.
Basically, iterate until you're the only one left standing and then never "sell" anything but licenses ever again.
I guess corrupt cop vs serial killer is like amorality (profit-driven systems) vs immorality (active malice)? A company is a mix of stakeholders, some of whom push for ethical practices. But when shareholders demand endless growth, even well-intentioned actors get squeezed.
Also, scale plays a significant part as well. Any high-exposure organization which operates on a global scale has access to an extremely large pool of candidates to staff its offices... And such candidate pools necessarily include a large number of any given persona, including large numbers of ethically-challenged individuals and criminals. Without an interview process which actively selects for 'ethics', the ethically-challenged and criminal individuals have a significant upper hand in getting hired and then later wedging themselves into positions of power within the company.
Criminals and ethically-challenged individuals have a bigger risk appetite than honest people so they are more likely to succeed within a corporate hierarchy which is founded on 'positive thinking' and 'turning a blind eye'. On a global corporate playing field, there is a huge amount of money to be made in hiding and explaining away irregularities.
A corporate employee can do something fraudulent and then hold onto their job while securing higher pay, simply by signaling to their employer that they will accept responsibility if the scheme is exposed; the corporate employer is happy to maintain this arrangement and feign ignorance while extracting profits so long as the scheme is kept under wraps... Then, if the scheme is exposed, the corporation will swiftly throw the employee under the bus in accordance with the 'unspoken agreement'.
The corporate structure is extremely effective at deflecting and dissipating liability away from itself (and especially its shareholders) and onto citizens/taxpayers, governments and employees (as a last layer of defense). The shareholder who benefits the most from the activities of the corporation is fully insulated from the crimes of the corporation. The scapegoats are lined up, sandwiched between layers of plausible deniability in such a way that the shareholder at the end of the line can always claim complete ignorance and innocence.
Even megacorps will do categorically good things if it helps their bottom line.
So in effect you have to call the employees and shareholders evil. But those are the same people who also hold public office from time to time, or are shareholders, or whatever. You can't limit this "evilness" to just an abstract corporation. Not only is it untrue, you are also setting up your "problem" so that it can't be addressed, because you're only moralizing over the abstract corporation and not its physical manifestation. What do you do about the abstract corporation being evil, if not taking action in the physical world against the physical people who work at and run the corporation and those who buy its products?
I've noticed similar behavior with respect to climate change advocacy, and really "government" in general. If you won't take personal responsibility, change your own habits, volunteer, work toward public office, organize, etc., it's less than useless to rail about these entities that many claim are immoral or need reform. Instead you (not you specifically) just complain on the Internet or to friends and family. Those complaints do nothing, but the complaining makes you feel good, so you don't feel like you need to actually do anything to make change. This is very unproductive: you have made yourself feel good about the problem without actually doing anything about it.
With all that being said, I'm not sure how paying vastly higher taxes would make Google (or any other company) less evil or more evil. What if Google pays more taxes and that tax money does (insert really bad thing you don't like)? Paying taxes isn't a morally good or bad thing in itself.
A couple years ago, my state banned single-use plastic bags. The very moment it did, all of my local Walmarts switched to heavier plastic bags that technically weren't single-use. They still gave them away for free, just as they did with the first ones. (These were good-quality bags, and I was frustrated that Walmart didn't just give them away by default.) Eventually my state banned those too, and like clockwork, Walmart was giving away paper bags, decent-quality ones too. Though I still really liked the thicker plastic ones, since I could use them for other things.
This made me realize that no corporation would do anything even slightly better for the environment unless forced. I think this is the case for anything a corporation does, including evil things. They just follow the money, with no ethics of their own, and it's up to the government to provide those ethics.
And if Googs doesn't do it, someone else will, so it might as well be them that makes money for their shareholders. Technically, couldn't activist shareholders come together and claim that, by not going after this market, the leadership should be replaced by those that would? After all, share price is the only metric that matters.
What you are saying is: optimising for commercial success is incompatible with morality. The conclusion is that publicly traded megacorps must inevitably trend towards amorality.
So yes, they aren't "evil" but I think amorality is the closest thing to "evil" that actually exists in the real world.
Seems fundamentally evil.
The main thing here I think is anonymity through numbers and complexity. You and thousands of others just want to see the numbers go up. And that desire is what ultimately influences decisions like this.
If google stock dropped because of this then google wouldn't do it. But it is the actions of humans in aggregate that keeps it up.
Megacorporations are scapegoats when in actuality they are just a set of democratic rules. The corporation is just a window into the true nature of humanity.
That is a fallacy of composition. An entity can have properties that none of its parts have. A ball made out of bricks is round, but none of the bricks are round. You might be evil; your cells aren't evil.
It's often the case that institutions are out of alignment with their members. It can even be the case that all participants in an organization are evil, but the system still functions well (usually one of the arguments for markets, which are one such system). When creating an organization, that is effectively the most basic task: how to structure it so that even when its individual members are up to no good, the organization still functions well.
No, no. Call a spade a spade. This behavior and attitude is evil. Corporations under modern American capitalism must be evil. That's how capitalism works.
You succeed in capitalism not by building a better mousetrap, but by destroying anyone who builds a better mousetrap than you. You litigate, acquire, bribe, and rewrite legislation to ensure yours is the best and only mousetrap available to purchase, with a token 'competitor' kept on life support so you can plausibly deny anticompetitive practices.
If you're a good company trying to do good things, you simply can't compete. The market just does not value what is good, just, or beneficial. The market only wants the number to go up, and to go up right now at any cost. Amazon will start pumping out direct clones of your product for pennies. What are you gonna do, sue Amazon? Best of luck.
Weird thing is for corporations, it's humans running the whole thing.
This is a meme that needs to die; for 99% of cases out there, the line between good and bad is very clear-cut.
Dumb nihilists keep the world from moving forward with regards to human rights and lawful behavior.
Most people consider neglect evil in my experience.
Isn't that a contradiction? Morality is fundamentally a sense of "right and wrong". If they reward anything that maximizes short term profit and punish anything that works against it then it appears to me that they have a simple, but clearly defined sense of morality centered around profit.
Seems it would be informative to many of the people posting on this thread.
I would argue that is fundamentally evil, because evil pays the best. It's like drunk driving: on an empty road it can only harm you, but we live in a society full of other people.
This is flatly untrue. Corporations are made up of humans who make decisions. They are indeed concerned with goodness and/or morality. Saying otherwise lets them off the hook for the explicit decisions they make every day about how to operate their company. It's one reason why there are shareholder meetings, proxy votes, activist investors, Certified B-Corporations, etc.
[1]: https://slatestarcodex.com/2014/07/30/meditations-on-moloch/
... or at least that's what these people have to be telling themselves at all times.
This is a very important point to remember when assessing ideas like "Is it good to build swarms of murderbots to mow down rioting peasants angry over having expenses but no jobs?" Most people might answer "no," but if the people with money answer "yes," that becomes the market's objective. Then the incentives diffuse through the economy and you don't just get the murderbots, you also get the news stations explaining how the violent peasants brought this on themselves and the politicians making murderbots tax deductible and so on.
1. https://drakelawreview.org/wp-content/uploads/2015/01/lrdisc...
While knowing this seems useless, it's actually the missing intrinsic compass, and the cause of a lot of bad and stupid behavior (by the definition that something is stupid if it is chosen knowing it will cause negative consequences for the doer).
Everything should primarily be measured based on its primary goal. For "for-profit" companies that's obvious in their name and definition.
That nothing should be assumed beyond what's stated is the premise of any contract, whether commercial, public, or personal (like friendship), and it's a basic tool for debate and decision making.
I want to get upset over it, but I sadly recognize the reality of why this is not surprising to anyone. We actually have competitors in that space who will do that and more. We have already seen some of the more horrifying developments in that area... and, when you think about it, those are the things that were allowed to be shown publicly. All the fun stuff is happening behind closed doors, away from social media.
When people talk about AI being dangerous, or possibly bringing about the end of the world, I usually disagree. But AI weapons are obviously dangerous, and could easily get out of control. Their whole point is that they are out of control.
The issue isn’t that AI weapons are “evil”. It’s that value alignment isn’t a solved problem, and AI weapons could kill people we wouldn’t want them to kill.
Now tell me how you counter a thousand small EMP-hardened autonomous drones intent on delivering an explosive payload to one target, without AI of some kind?
I guess a lot hinges on semantics: is the AI specifically for targeting, or is a drone that can adapt to changes in wind speed using AI considered an AI weapon?
At the end of the day though, the biggest use of AI in defense will always be information gathering and processing.
The real danger is when they can't. When they, without hesitation or remorse, kill one or millions of people with maximum efficiency, or "just" exist with that capability, to threaten them with such a fate. Unlike nuclear weapons, in case of a stalemate between superpowers they can also be turned inwards.
Using AI for defensive weapons is one thing, and maybe some of those would have to shoot explosives at other things to defend; but just going with "eh, we need to have ALL possible offensive capability to defend against ANY possible offensive capability" is not credible to me.
The threat scenario is supposed to be masses of enemy automated weapons, not huddled masses; so why isn't the objective to develop weapons that are really good at fighting automated weapons but literally can't/won't kill humans, so that killing would remain something only human soldiers do? Quite the elephant on the couch, IMO.
Lies run the planet, and it stinks.
Successful politicians and sociopaths are experts in double meanings.
"I will not drop bombs on Acmeland." Instead, I will send missiles.
"At this point in time, we do not intend to end the tariffs." The intent will change when conditions change, which is forecast next week.
"We are not in negotations to acquire AI Co for $1B." We are negotiating for $0.9B.
"Our results show an improvement for a majority of recipients." 51% saw an improvement of 1%, 49% saw a decline of 5%...
No matter which way you look at it, we live on a planet where resources are scarce. Which means there will be competition. Which means there will be innovation in weaponry.
That said, we've had nukes for decades, and have collectively decided to not use them for decades. So there is some room for optimism.
Would China, Russia, or Iran agree to such a preemptive AI weapons ban? Doubtful, it’s their chance to close the gap. I’m onboard if so, but I don’t see anything happening on that front until well after they start dominating the landscape.
Now that's off the table, I think America should have AI weapons because everyone else will be developing them as quickly as possible.
From there:
-----
Military robots like drones are ironic because they are created essentially to force humans to work like robots in an industrialized social order. Why not just create industrial robots to do the work instead?
Nuclear weapons are ironic because they are about using space age systems to fight over oil and land. Why not just use advanced materials as found in nuclear missiles to make renewable energy sources (like windmills or solar panels) to replace oil, or why not use rocketry to move into space by building space habitats for more land?
Biological weapons like genetically-engineered plagues are ironic because they are about using advanced life-altering biotechnology to fight over which old-fashioned humans get to occupy the planet. Why not just use advanced biotech to let people pick their skin color, or to create living arkologies and agricultural abundance for everyone everywhere?
These militaristic socio-economic ironies would be hilarious if they were not so deadly serious. ...
Likewise, even United States three-letter agencies like the NSA and the CIA, as well as their foreign counterparts, are becoming ironic institutions in many ways. Despite probably having more computing power per square foot than any other place in the world, they seem not to have thought much about the implications of all that computer power and organized information to transform the world into a place of abundance for all. Cheap computing makes possible just about cheap everything else, as does the ability to make better designs through shared computing. ...
There is a fundamental mismatch between 21st century reality and 20th century security thinking. Those "security" agencies are using those tools of abundance, cooperation, and sharing mainly from a mindset of scarcity, competition, and secrecy. Given the power of 21st century technology as an amplifier (including as weapons of mass destruction), a scarcity-based approach to using such technology ultimately is just making us all insecure. Such powerful technologies of abundance, designed, organized, and used from a mindset of scarcity could well ironically doom us all whether through military robots, nukes, plagues, propaganda, or whatever else... Or alternatively, as Bucky Fuller and others have suggested, we could use such technologies to build a world that is abundant and secure for all. ...
The big problem is that all these new war machines and the surrounding infrastructure are created with the tools of abundance. The irony is that these tools of abundance are being wielded by people still obsessed with fighting over scarcity. So, the scarcity-based political mindset driving the military uses the technologies of abundance to create artificial scarcity. That is a tremendously deep irony that remains so far unappreciated by the mainstream.
We the people need to redefine security in a sustainable and resilient way. Much current US military doctrine is based around unilateral security ("I'm safe because you are nervous") and extrinsic security ("I'm safe despite long supply lines because I have a bunch of soldiers to defend them"), which both lead to expensive arms races. We need as a society to move to other paradigms like Morton Deutsch's mutual security ("We're all looking out for each other's safety") and Amory Lovins's intrinsic security ("Our redundant decentralized local systems can take a lot of pounding whether from storm, earthquake, or bombs and would still keep working"). ...
Still, we must accept that there is nothing wrong with wanting some security. The issue is how we go about it in a non-ironic way that works for everyone. ...
-----
Here is something I posted to the Project Virgle mailing list in April 2008 that in part touches on the issue of Google's identity as a scarcity vs. post-scarcity organization: "A Rant On Financial Obesity and an Ironic Disclosure" https://pdfernhout.net/a-rant-on-financial-obesity-and-Proje... "Look at Project Virgle and "An Open Source Planet" ... Even just in jest some of the most financially obese people on the planet (who have built their company with thousands of servers all running GNU/Linux free software) apparently could not see any other possibility but seriously becoming even more financially obese off the free work of others on another planet (as well as saddling others with financial obesity too :-). And that jest came almost half a century after the "Triple Revolution" letter of 1964 about the growing disconnect between effort and productivity (or work and financial fitness)...Even not having completed their PhDs, the top Google-ites may well take many more decades to shake off that ideological discipline. I know it took me decades (and I am still only part way there. :-) As with my mother, no doubt Googlers have lived through periods of scarcity of money relative to their needs to survive or be independent scholars or effective agents of change. Is it any wonder they probably think being financially obese is a good thing, not an indication of either personal or societal pathology? :-( ..."
Last April, inspired by some activities a friend was doing, I asked an LLM AI ( chatpdf ) to write a song about my sig, using the prompt 'Please make a song about "The biggest challenge of the 21st century is the irony of technologies of abundance in the hands of those still thinking in terms of scarcity."'. Then that friend made the results into an AI-generated song: "Challenge to Abundance" https://suno.com/song/d3d8c296-c2c4-46c6-80fb-ca9882c5e00a
"(Verse 1) In the 21st century, we face a paradox so clear, Technologies of abundance, yet scarcity we fear, Irony in our hands, what will we choose to see, A world of endless possibilities or stuck in scarcity?
(Chorus) The biggest challenge we face, it's plain to see, Embracing abundance or stuck in scarcity, Let's break free from old ways, embrace what could be, The irony of our times, let's set our minds free. ..."
I hope Googlers and others eventually get the perspective shift that comes with recognizing the irony of what they and many others are doing by weaponizing and otherwise competitizing AI...
Also on that larger theme by Alfie Kohn: "No Contest: The Case Against Competition" https://www.alfiekohn.org/contest/ "No Contest, which has been stirring up controversy since its publication in 1986, stands as the definitive critique of competition. Drawing from hundreds of studies, Alfie Kohn eloquently argues that our struggle to defeat each other — at work, at school, at play, and at home — turns all of us into losers. Contrary to the myths with which we have been raised, Kohn shows that competition is not an inevitable part of “human nature.” It does not motivate us to do our best (in fact, the reason our workplaces and schools are in trouble is that they value competitiveness instead of excellence.) Rather than building character, competition sabotages self-esteem and ruins relationships. It even warps recreation by turning the playing field into a battlefield. No Contest makes a powerful case that “healthy competition” is a contradiction in terms. Because any win/lose arrangement is undesirable, we will have to restructure our institutions for the benefit of ourselves, our children, and our society. ..."
So what? Can't Google find other sources of revenue than building weapons?
Who's paying for that though? The same dumbasses who get spied on. I don't see cost as a reason why it wouldn't happen; cash is unlimited.
Reality is in a war between the West vs Russia/Iran/North Korea/China whomever we end up fighting, we’re going to do whatever we can so the Western civilization and soldiers survive and win.
Ultimately Google is a western company and if war breaks out not supporting our civilization/military is going to be wildly unpopular and turn them into a pariah and anything to the contrary was never going to happen.
There was no war forthcoming between an integrated West and any other power. War is coming because there no longer is a West.
Today people have differing views of nuclear weapons, but people who fought near Japan and survived believe the bomb saved their life.
It's easy to pretend you don't have a side when there is peace, but in this environment Google is going to take a side.
So... when the Russian tanks start rolling on the way to Berlin, and Chinese troops are marching along that nice new (old) road they finished fixing up on their way to Europe (if that happens, which looks possible), you think there will be no West?
If the world is to be divided, Europe is the lowest-hanging and sweetest fruit.
I think there will still be a West even if there is a King in the US demanding fealty to part of it. We are the same as they are; it's ridiculous to pretend otherwise.
Ideology is one thing, survival of people and culture is another.
There is no such thing as "our" or "their" civilization. We have only one. Maybe such a concept had some grounding a few centuries ago, but by now the idea that "we" are significantly different from "them" is a dangerous fantasy for most people.
These weapons could also come in handy domestically if people find out that both parties screw them all the time.
I wonder why people claim that China is a threat outside of economics. Has China tried to invade the US? Has Russia tried to invade the EU? The answer is no. The only current threats to the EU come from the orange man.
The same person who also revoked the INF treaty. The US now installs intermediate range nuclear missiles in Europe. Russia does so in Belarus.
So both great powers have convenient whipping boys to be nuked first, after which they will get second thoughts.
It is beyond ridiculous that both the US and Russia constantly claim that they are in danger, when all international crises in the last 40 years have been started by one of them.
Military power is what has kept the EU safe, and countries without strong enough military power — such as Ukraine, which naively gave up its nuclear arsenal in the 90s in exchange for Russian promises to not invade — are repeatedly battered by the power-hungry.
Would you say that the chances / motives / possibilities of invading Ukraine are remotely comparable with those for any other European country?
And no, Turkey for example is not a European country.
The orange man is completely ineffectual on both fronts. He will not spend the money on the military and is too inept to make a deal that doesn't cost more in the long run.
Just like Meta announced some changes around the time of the inauguration, I'm sure Google management has noticed the AI announcements, and they don't want to be perceived in a certain way by the current administration.
I think the truth is more in the middle (there is tons of disagreement within the company), but they naturally care about how they are perceived by those in power.
Companies technically have disproportionate power.
It's better that they shift according to the will of the people.
The alternative, that companies act according to their own will, could be much worse.
Actually, this reminds me of when Paul Graham came to Google, around 2005. Before that, I had read an essay or two, and thought he was kind of a blowhard.
But I actually thought he was a great speaker in person, and that lecture changed my opinion. He was talking about "Don't Be Evil", and he also said something very charming about how "Don't Be Evil" is conditional upon having the luxury to live up to that, which is true.
That applies to both companies and people:
- If Google hadn't been a money-printing machine in 2005, then "don't be evil" would have been less appealing. And now in 2020, 2021, .... 2025, we can see that Google clearly thinks about its quarterly earnings in a way that it didn't in 2005, so "don't be evil" is too constraining, and was discarded.
- For individuals, we may not pay much attention to "don't be evil" early in our careers. But it is more appealing when you're more established, and have had a couple decades to reflect on what you did with your time!