"You can only turn off this setting 3 times a year."
Astonishing. They clearly feel their users have no choice but to accept this onerous and ridiculous requirement. As if users wouldn't understand that they'd have to go way out of their way to write the code which enforces this outcome. All for a feature which provides me dubious benefit. I know who the people in my photographs are. Why is Microsoft so eager to also be able to know this?
Privacy legislation is clearly lacking. This type of action should bring the hammer down swiftly and soundly upon these gross and inappropriate corporate decision makers. Microsoft has needed that hammer blow for quite some time now. This should make that obvious. I guess I'll hold my breath while I see how Congress responds.
It's hilarious that they actually say that right on the settings screen. I wonder why they picked 3 instead of 2 or 4. Like, some product manager actually sat down and thought about just how ridiculous they could be and have it still be acceptable.
My guess is the number was arbitrary and the limit exists because toggling triggers a mass scan of photos. Depending on whether they purge old data when the feature is turned off, flipping the switch could tell Microsoft's servers to re-scan every photo in your (possibly very large) library.
Odd choice and poor optics (just limit the number of times you can enable it and add a warning screen), but I wouldn't assume this was intentional bad faith.
3 is the smallest odd prime number. 3 is a HOLY number. It symbolizes divine perfection, completeness, and unity in many religions: the Holy Trinity in Christianity, the Trimurti in Hinduism, the Three Treasures in Taoism (and half a dozen others).
The number seems likely to be a limit that could be raised someday for those willing to pay above the minimal baseline tier.
Right now it doesn't say if these are supposed to be three different "seasons" of the year that you are able to opt-out, or three different "windows of opportunity".
Or maybe it means your allocation is limited to three non-surveillance requests per year. Which should be enough for average users. People aren't so big on privacy any more anyway.
Now would these be on a calendar year basis, or maybe one year after first implementation?
And what about rolling over from one year to another?
> Why is Microsoft so eager to also be able to know this?
A database of pretty much all Western citizens' faces? That's a massive sales opportunity for all oppressive and wanna-be oppressive governments. Also, ads.
I agree with you, but there's nothing astonishing about any of this, unfortunately; it was bound to happen. Almost all cautionary statements about AI abuse fall on the deaf ears of HN's overenthusiastic and ill-informed rabble, stultified by YC tech lobbyists.
The worst part is that all the people fretting about ridiculous threats, like chatbots turning into Skynet, sucked the oxygen out of the room for the more realistic corporate threats.
Actually, most users probably don't understand that this ridiculous policy takes extra effort to implement. They just blindly follow whatever MS prescribes and have long given up on making sense of the digital world.
Can someone explain to me why the immediate perception is that this is some kind of bad, negative, evil thing? I don't understand it.
My assumption is that when this feature is on and you turn it off, they end up deleting the tags (since you've revoked permission for them to tag them). If it gets turned back on again, I assume that means they need to rescan them. So in effect, it sounded to me like a limit on how many times you can toggle this feature to prevent wasted processing.
Their disclaimer already suggests they don't train on your photos.
This is Microsoft. They have a proven record of turning these toggles back on automatically without your consent.
So you can opt out of them taking all of your most private moments and putting them into a data set that will be leaked, but you can only opt out 3 times. What are the odds a "bug" (feature) turns it on 4 times? Anything less than 100% is an underestimate.
And what does a disclaimer mean, legally speaking? They won't face any consequences when they use it for training purposes. They'll simply deny that they do it. When it's revealed that they did it, they'll say sorry, that wasn't intentional. When it's revealed to be intentional, they'll say it's good for you so be quiet.
If that was the case, the message should be about a limit on re-enabling the feature n times, not about turning it off.
Also, if they are concerned about processing costs, the default for this should be off, NOT on. The default for any feature that uses customers' personal data should be OFF at any company that respects its customers' privacy.
> You are trying to reach really far out to find a plausible
This behavior tallies up with other things MS have been trying to do recently to gather as much personal data as possible from users to feed their AI efforts.
Their spokesperson also avoided answering why they are doing this.
On the other hand, your comment seems to be reaching really far to portray this as normal behavior.
Yeah exactly. Some people have 100k photo collections. The cost of scanning isn’t trivial.
They should limit the number of times you turn it on, not off. Some PM probably overthought it and insisted you need to tell people about the limit before turning it off and ended up with this awkward language.
If it were that simple, there would be no practical reason to limit that scrub to three (and in such a confusion-inducing way). If I want to waste my time scrubbing, that should be up to me -- assuming it is indeed just scrubbing tagged data, because if anything should have been learned by now, it is that:
worst possible reading of any given feature must be assumed to the detriment of the user and benefit of the company
Honestly, these days, I do not expect much of Microsoft. In fact, I recently thought to myself, there is no way they can still disappoint. But what do they do? They find a way damn it.
> Their disclaimer already suggests they don't train on your photos.
We know all major GenAI companies trained extensively on illegally acquired material, and they were hiding this fact. Even the engineers felt this wasn't right, but there were no whistleblowers. I don't believe for a second it would be different with Microsoft. Maybe they'd introduce the plan internally as a kind of CSAM scanning, but, as opposed to Apple, they wouldn't inform users. The history of their attitude towards users is very consistent.
Then you would limit the number of times the feature can be turned on, not turned off. Turned off uses less of their resources, while turned on potentially continues using them. Also, I doubt they actually remove data that required processing to obtain; I wouldn't expect them to delete it until they're actually required to do so, especially considering the metadata obtained is likely insignificant in size compared to the average image.
It's an illusion of choice. For over a decade now, companies have either spammed you with modals/notifications until you give up and agree to settings that compromise your privacy, or "accidentally" turned these on and pretended the change happened by mistake or due to a bug.
The language used is deceptive and comes with "not now" or "later" options, never a permanent "no". Any disagreement is followed by some form of "we'll ask you again later" message.
Companies are deliberately removing users' control over software through dark patterns to achieve their own goals.
An advanced user may not want their data scanned, for whatever reason, and with this setting they cannot control the software, because the vendor decided it's just 3 times and after that the setting goes permanently "on".
And considering all the AI push within Windows and other Microsoft products, it is rather impossible to assume that MS will not be interested in training their algorithms on their customers'/users' data.
---
And I really don't know how else you can interpret this whole talk with an unnamed "Microsoft's publicist" when:
> Microsoft's publicist chose not to answer this question
and
> We have nothing more to share at this time
but as hostile behavior. Of course they won't admit they want your data, but they want it and will have it.
It sounds like you have revoked their permission to tag (verb) the photos; why should this interfere with what tag (noun) the photo already has?
But really, I know nothing about the process. I was going to make an allegory about how it would be the same as Adobe deleting all your drawings after you let your Photoshop subscription lapse, but realized that this is exactly the computing future these sorts of companies want, and my allegory is far from the proof by absurdity I wanted it to be. Sigh, now I am depressed.
Honestly, I hated when they removed automatic photo tagging. It was handy as hell when uploading hundreds of pictures from a family event, which is about all I use it for.
"You can only turn off this setting 3 times a year."
I look forward to getting a check from Microsoft for violating my privacy.
I live in a state with better-than-average online privacy laws, and scanning my face without my permission is a violation. I expect the class action lawyers are salivating at Microsoft's hubris.
I got $400 out of Facebook because it tagged me in the background of someone else's photo. Your turn, MS.
If you don't trust Microsoft but need to use Onedrive, there are encrypted volume tools (e.g. Cryptomator) specifically designed for use with Onedrive.
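The idea behind such tools is that files are encrypted on your machine before they ever reach OneDrive, so any server-side scanner only sees ciphertext. A toy sketch of that principle (this hash-based XOR keystream is NOT real cryptography and is only for illustration; Cryptomator itself uses authenticated AES encryption):

```python
import hashlib

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Derive a pseudo-random byte stream by hashing key + nonce + a block counter.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def toy_encrypt(key: bytes, nonce: bytes, plaintext: bytes) -> bytes:
    # XOR the plaintext with the keystream; the cloud only ever stores this output.
    ks = keystream(key, nonce, len(plaintext))
    return bytes(p ^ k for p, k in zip(plaintext, ks))

# XOR is symmetric, so decryption is the same operation with the same key/nonce.
toy_decrypt = toy_encrypt
```

Only the ciphertext gets synced into the OneDrive folder; the key never leaves your machine, so a cloud-side face scanner sees nothing but noise.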
You seem to be implying that users won't accept this. But users have accepted all the other bullshit Microsoft has pulled so far. It genuinely baffles me why anyone would choose to use their products, yet many do, and keep making excuses for why alternatives are not viable.
They could have avoided the negative press by changing the requirement to be that you can’t re-enable the feature after switching it off 3 times per year.
It’s not hard to guess the problem: Steady state operation will only incur scanning costs for newly uploaded photos, but toggling the feature off and then on would trigger a rescan of every photo in the library. That’s a potentially very expensive operation.
If you’ve ever studied user behavior you’ve discovered situations where users toggle things on and off in attempts to fix some issue. Normally this doesn’t matter much, but when a toggle could potentially cost large amounts of compute you have to be more careful.
For the privacy sensitive user who only wants to opt out this shouldn’t matter. Turn the switch off, leave it off, and it’s not a problem. This is meant to address the users who try to turn it off and then back on every time they think it will fix something. It only takes one bad SEO spam advice article about “How to fix _____ problem with your photos” that suggests toggling the option to fix some problem to trigger a wave of people doing it for no reason.
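A minimal sketch of the mechanics being guessed at in this subthread. Everything here is an assumption, not Microsoft's actual implementation: the purge-on-opt-out behavior, the rolling 365-day window, and all names are hypothetical.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

OPT_OUT_LIMIT = 3  # per the settings-screen wording: 3 opt-outs a year

@dataclass
class FaceScanSetting:
    enabled: bool = True
    opt_out_times: list = field(default_factory=list)  # timestamps of past opt-outs

    def turn_off(self, now: datetime) -> None:
        # Count only opt-outs within the last 365 days (rolling window assumed).
        recent = [t for t in self.opt_out_times if now - t < timedelta(days=365)]
        if len(recent) >= OPT_OUT_LIMIT:
            raise PermissionError("You can only turn off this setting 3 times a year.")
        self.opt_out_times = recent + [now]
        self.enabled = False
        self.purge_scan_data()  # privacy-law-style purge: delete all existing tags

    def turn_on(self) -> None:
        self.enabled = True
        self.rescan_entire_library()  # the expensive part: every photo re-processed

    def purge_scan_data(self) -> None:
        pass  # placeholder for deleting the face-tag database

    def rescan_entire_library(self) -> None:
        pass  # placeholder for the costly full rescan
```

Under this reading, the limit is on the opt-out only because the opt-out is what forces a purge and, on re-enable, a full rescan.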
I cancelled Facebook in part due to a tug-of-war over privacy defaults. They kept getting updated with some corporate pablum about how opting in benefited the user. It was just easier to permanently opt out via account deletion rather than keep toggling the options. I have no doubt Microsoft will do the same. I'm wiping my Windows partition and loading Steam OS or some variant and dual booting into some TBD Linux distro for development.
When I truly need Windows, I have an ARM VM in Parallels. Right now it gets used once a year at tax time.
But tomorrow they’ll add a new feature, with a different toggle, that does the same thing but will be distinct enough. That toggle will default on, and you’ll find it in a year and a half after it’s been active.
Control over your data is an illusion. The US economy is built upon corporations mining your data. That’s why ML engineers got to buy houses in the 2010s, and it’s why ML/AI engineers get to buy houses in the 2020s.
> - "Scan photos I upload" yes/no. No batch processing needed, only affects photos from now on.
This would create a situation where some of the photos have tags and some don’t. Users would forget why the behavior is different across their library.
Their solution? Google it and start trying random suggestions. Toggle it all on and off. Delete everything and start over with rescanning. This gets back to the exact problem they’re trying to avoid.
> - "Scan all missing photos (1,226)" can only be done 3x per year
There is virtually no real world use case where someone would want to stop scanning new photos but also scan all photos but only when they remember to press this specific button. The number of users who would get confused and find themselves in unexpected states of half-scanned libraries would outweigh the number of intentional uses of this feature by 1000:1 or more.
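The split-controls proposal being debated here could be sketched roughly as below. All names and the 365-day window are hypothetical; the point is that only the expensive full-rescan button needs a rate limit, while opting out stays unrestricted.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

@dataclass
class PhotoScanSettings:
    """Models the commenter's proposed controls (names are made up)."""
    scan_new_uploads: bool = False              # "Scan photos I upload"
    library: set = field(default_factory=set)   # all photo IDs in the account
    scanned: set = field(default_factory=set)   # photo IDs with face tags
    rescan_history: list = field(default_factory=list)

    def delete_all_scans(self) -> int:
        # "Delete all scans" -- cheap, so always allowed.
        n = len(self.scanned)
        self.scanned.clear()
        return n

    def scan_all_missing(self, now: datetime) -> int:
        # "Scan all missing photos" -- expensive, so rate-limited to 3x per year.
        recent = [t for t in self.rescan_history if now - t < timedelta(days=365)]
        if len(recent) >= 3:
            raise PermissionError("Full rescan allowed only 3 times a year")
        self.rescan_history = recent + [now]
        missing = self.library - self.scanned
        self.scanned |= missing
        return len(missing)
```

This keeps the costly operation throttled without ever restricting the privacy-relevant action (deleting scans), which is the inversion critics above are asking for.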
Tell you what, Microsoft: turn it off, leave it off, remove it, fire the developers who made it, forget you ever had the idea. Bet that would save some processing power?
Most of us wouldn't mind if the limitation was that you can't opt IN more than 3 times/year, but of course Microsoft dark patterned it to limit the opt outs.
> It’s not hard to guess the problem: toggling the feature off and then on would trigger a rescan of every photo in the library.
That would be a wild way to implement this feature.
I mean it's Microsoft so I wouldn't be surprised if it was done in the dumbest way possible but god damn this would be such a dumb way to implement this feature.
This would be because of the legal requirement to purge (erase) all the previous scan data once a user opts out. So the only way to re-enable is to scan everything again — unless you have some clever way I’ve not thought of?
And not just advertising. If ICE asks Microsoft to identify accounts of people who have uploaded a photo of "Person X", do you think they're going to decline?
They'd probably do it happily even without a warrant.
I'd bet Microsoft is doing this more because of threats from USG than because of advertising revenue.
> They'd probably do it happily even without a warrant
I'm old enough to remember when companies were tripping over themselves after 9/11 trying to give the government anything they could to help them keep an eye on Americans. They eventually learned to monetize this, and now we have the surveillance economy.
> and follow Microsoft's compliance with General Data Protection Regulation
Not in a million years. See you in court. As is often the case, just because a press statement says something doesn't mean it's true; it may only be intended to defuse public perception.
Truly bizarre. I'm so glad I detached from Windows a few years back, and now when I have to use it or another MS product (eg an Xbox) it's such an unpleasant experience, like notification hell with access control checks to read the notifications.
The sad thing is that they've made it this way, as opposed to Windows being inherently deficient; it used to be a great blend of GUI convenience with ready access to advanced functionality for those who wanted it, whereas MacOS used to hide technical things from a user a bit too much and Linux desktop environments felt primitive. Nowadays MS seems to think of its users as if they were employees or livestock rather than customers.
Insider here, in M365 though not OneDrive. It did change, but not because of Satya; because of rules, legislation, and bad press. Privacy and security are taken very seriously (at least by the people who care to follow internal rules), not because "we're nice", but because:
- EU governments keep auditing us, so we gotta stay on our toes, do things by the book, and be auditable
- it's bad press when we get caught doing garbage like that. And bad press is bad for business
In my org, doing anything with customers' data that doesn't directly bring them value is theoretically not possible. You can't deliver anything that isn't approved by privacy.
Don't forget that this is a very big company. It's composed of people who actually care and want to do the right thing, people who don't really care and just want to ship and would rather not be impeded by compliance processes, and people who are actually trying to bypass these processes because they'd sell your soul for a couple bucks if they could. For the little people like me, the official stance is that we should care about privacy very much.
Our respect for privacy was one of the main reasons I'm still there. There was a good period of time when the actual sentiment was "we're the good guys", especially compared to Google and Facebook. A solid part of that was that our revenue was driven by subscriptions rather than ads. I guess the appeal of taking customers' money and exploiting their data too is too big. That's the kind of shit that will get me to leave.
Do we even think that was real? I think social media has been astroturfed for a long time now. If enough people make those claims, it starts to feel true even without evidence to support it.
Did they ever open source anything that really made you think "wow"? The best I could see was them "embracing" Linux, but embrace, extend, extinguish was always a core part of their strategy.
They are like a shitty Midas: everything they touch turns into a pile of crap. However, people still buy their products. They think the turd is tasty, because billions of flies can't be wrong...
Meanwhile, Apple is applying a different set of toxic patterns: lack of interoperability with other OSes, apps that try to store data mainly on iCloud, an iPhone with no headphone jack, etc.
This week I have received numerous reminders from Microsoft to renew my Skype credit.
Everything I see from that company is farcical. Massive security lapses, lazy AI features with huge privacy flaws, steamrolling OS updates that add no value whatsoever, and heavily relying on their old playbook of just buying anything that looks like it could disrupt them.
P.S. The skype acquisition was $8.5B in 2011 (That's $12.24B in today's money.)
I don't understand how this is "losing their mind". Toggling this setting is expensive on the backend: opting in means "go and rescan all the photos"; opting out means "delete all the scanned information for this user". As a user, just make up your mind and set the setting. They let you opt in, they let you opt out; they just don't want to let you trigger tons of work every minute.
With each passing day since I switched from Windows to Linux at home, and with decreasing friction, I am increasingly happy that I took the time to learn Linux and stuck with it. This is not a come-to-Linux call, because I know it is easier said than done for most non-technical folks. But it is testimony that if you do, the challenges will eventually be worth it. Because at this point, Microsoft is just openly insulting their captive users.
This is such a norm in society now: PR tactics take priority over any notion of accountability, and most journalists and publishers act as stenographers, because challenging or even characterizing the PR line is treated as an unjustified attack and met with inflated claims of bias.
Just as linking to original documents, court filings etc. should be a norm in news reporting, it should also be a norm to summarize PR responses (helpful, dismissive, evasive or whatever) and link to a summary of the PR text, rather than treating it as valid body copy.
People need to treat PR like they do AIs. "You utterly failed to answer the question, try again and actually answer the question I asked this time." I'd love to see corporate representatives actually pressed to answer. "Did you actually do X, yes or no, if you dodge the question I'll present you as dodging the question and let people assume the worst."
They take people for idiots. This can work a few times, but even someone who isn't the brightest will eventually put two and two together when they get screwed again and again and again.
The worst part of all this is that even respectable news organisations like the BBC publish so many articles that are just the company's PR response verbatim. Even worse when it's like:
- Victim says: hi, this thing is messed up and people need to know about it
- Company says: "bla bla bla" legal speak, we don't recognise an issue, "bla bla bla"
End of article, instead of adding "this comment doesn't seem to reflect the situation" or otherwise pointing out what anybody with a brain can see: that the two statements are not equal in evidence or truth.
Or is it use it or lose it?
Enquiring minds want to know ;)
That would be a limit on how many times you can enable the setting, not preventing you from turning it off.
I bet you "have nothing to hide".
We work with computers. Everything that gets in the way of working wastes time and nerves.
Did you read it all? They also suggest that they care about your privacy. /s
This was pre-AI hype, perhaps 15 years ago. It seems Microsoft feels it is normalised now. More and more, you are their product. It strikes me as great insecurity.
Presumably it can be used for filtering as well - find me all pictures of me with my dad, etc.
Assuming that it doesn't mysteriously (due to some error or update, no doubt) move back to the on position by itself.
- "Scan photos I upload" yes/no. No batch processing needed, only affects photos from now on.
- "Delete all scans (15,101)" if you are privacy conscious
- "Scan all missing photos (1,226)" can only be done 3x per year
"But users are dummies who cannot understand anything!" Not with that attitude they can't.
If disabling the feature kept the data, that would be a real problem.
I don’t know why you think it’s dumb that they purge the data when you turn a feature off. That’s what you want.
The privacy violations they are racking up are very reminiscent of prior behavior we've seen from Facebook and Google.
https://www.pcmag.com/news/the-10-most-disturbing-snowden-re...
...and build them a nice portal to submit their requests and get the results back in real time.
They are a hard-nosed company, focused with precision on their own dominance.
I don't know what this Microsoft thing is that you speak of. I only know a company called Copilot Prime.
Oh what time does to things!
Looks like nothing has changed.
They are exactly where I left them 20 years ago.
It's very sad that I can't stop using them again for doing this.
Should have just said 'link to a screenshot of the PR text', apologies for the confusion