Readit News
Posted by u/xslowzone 2 years ago
Tell HN: t.co is adding a five-second delay to some domains
Go to Twitter and click on a link going to any URL on "NYTimes.com" or "threads.net" and you'll see a roughly 5-second delay before t.co forwards you to the right address.

Twitter won't ban domains they don't like but will waste your time if you visit them.

I've been tracking the NYT delay ever since it was added (8/4, roughly noon Pacific time), and the delay is so consistent it's obviously deliberate.

epistasis · 2 years ago
This is what I have come to expect from every person who calls themselves a "free speech absolutist." What they actually believe is that they should be able to say whatever they want and do whatever they want, personally, without any consequences for themselves. There is no grander principle than "my ability to do what I want and exert power over others however I want, without critique or criticism."

I really wish the term hadn't been polluted this way.

epistasis · 2 years ago
Update: hours after being exposed and publicized in the Washington Post, the behavior has stopped:

> On Tuesday afternoon, hours after this story was first published, X began reversing the throttling on some of the sites, dropping the delay times back to zero. It was unknown if all the throttled websites had normal service restored.

https://archive.is/2023.08.15-210250/https://www.washingtonp...

rhaksw · 2 years ago
I still see a roughly 2-second delay on the first request. The second is immediate.
FireBeyond · 2 years ago
Disclaimer: I am not comparing Twitter to warlords, dictators or genocide. But this quote (from Lord of War) really encapsulates a lot of what you say:

> Yuri Orlov: [Narrating] Every faction in Africa calls themselves by these noble names - Liberation this, Patriotic that, Democratic Republic of something-or-other... I guess they can't own up to what they usually are: the Federation of Worse Oppressors Than the Last Bunch of Oppressors. Often, the most barbaric atrocities occur when both combatants proclaim themselves Freedom Fighters.

Deleted Comment

julianozen · 2 years ago
History truly is a circle
hn1986 · 2 years ago
Time for news orgs to boycott Twitter, just like NPR did.
hn1986 · 2 years ago
Even worse, it's had this 5-second delay for Threads for a month. https://www.threads.net/@jank0/post/CuV_5fprO3z/?igshid=NTc4...
raxxorraxor · 2 years ago
I call myself a free speech absolutist (or advocate at least, absolutist is more of a slur). False compromises belong in the past. What X is doing isn't free speech at all: they have stated that advertisers will dictate what content will be seen, so there is no commitment to freedom of speech at all.

But at least I can hold them responsible for violating their own stated values. The former Twitter leadership just hid content that didn't fit their own or third parties' sensibilities and told me they are doing me a favor.

Restricting speech is always in the interests of those who have the power to shape discussions, so limiting speech is always counterproductive.

epistasis · 2 years ago
> The former Twitter leadership just hid content that didn't fit their own or third parties' sensibilities and told me they are doing me a favor.

The former Twitter leadership was very clear about what sort of content would be hidden. And it was based entirely on the type of content, spelled out ahead of time. Critiquing this sort of content policy is like saying that newspapers should not be allowed to have clear standards for what is publishable in classified ads.

All claims of "I'm being oppressed" by Twitter policies have been absolutely ridiculous, and discrediting to supposed free speech advocate/absolutist positions.

Similarly discrediting is the silence on Musk's attacks on the free web and attempts at censorship of specific dispreferred news outlets.

We all see what gets fought against and what is not fought against, and the answer is clear: the right to attack and intimidate groups with threatening behavior is defended, but actual censorship of reasonable discourse is tolerated.

semi-extrinsic · 2 years ago
> Restricting speech is always in the interests of those who have the power to shape discussions, so limiting speech is always counterproductive.

This is not true. Restricting hate speech is an obvious counterexample.

oneeyedpigeon · 2 years ago
> advocate at least, absolutist is more of a slur

Those two are enormously different, though. I'd consider myself an advocate, just as anyone who believes in a fair and free democracy should. But I am very far from being an absolutist — and I have a secret suspicion that nobody actually is. Musk certainly isn't.

singleshot_ · 2 years ago
When a company provides a coherent speech product, its editorial decisions are made according to how they will affect the goal of user growth. The obvious result of a “free speech absolutist” social media platform coupled with the rules of network effects is one enormous, undifferentiated social network.

It probably goes without saying that this would be an extremely unpleasant place, but there would be nowhere else to go once the last platform won.

What we have today is a number of smaller social networks, each with a different strategy to shape the conversation. It may very well be true that the creators of a platform choose editorial methods and goals that resonate with them personally, but what’s important to the dynamic of the platforms and free speech is that, until we are all on that one terrible platform, the methods used to moderate your speech are nothing more than a company’s efforts to differentiate its product from others.

Restricting speech is in the interest of product differentiation. This, of course, is in the interest of the owner of the product, but it is always also in the interest of the consumer who wants a rich speech market to choose from, and who loathes the idea of a global 4chan style megasite to the exclusion of all other social media. This is why failure to limit speech in the context of a coherent speech product is always counterproductive.

hlandau · 2 years ago
Worth pointing out that t.co has always been an instance of an annoying and seemingly unjustified practice I named "nonsemantic redirect". Rather than legitimately redirecting with an HTTP Location header, it serves an HTML page with a META refresh tag on it.

You don't see this with curl/wget because t.co uses user-agent sniffing. If it doesn't think you're a browser it _will_ give you a Location header. To see it, capture a request in Firefox developer tools, right click on the request, copy as cURL. (You may need to remove the Accept-Encoding header and add -i to see the headers.)
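For example, something like this shows both behaviors (an untested sketch; the t.co link is borrowed from elsewhere in this thread, and the exact responses may have changed since):

  # Non-browser UA: a real redirect, status 301 plus a Location header.
  curl -sI https://t.co/DzIiCFp7Ti | grep -i '^\(HTTP\|location\)'

  # Browser UA: expect a 200 with no Location; the redirect only lives in the HTML body.
  curl -sI -A 'Mozilla/5.0 (X11; Linux x86_64; rv:109.0) Gecko/20100101 Firefox/117.0' https://t.co/DzIiCFp7Ti | grep -i '^\(HTTP\|location\)'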

ShamelessC · 2 years ago
Could you explain what the intended/expected outcome is for this? What is accomplished by doing that?
yahelc · 2 years ago
The purpose is so that Twitter is seen as the source of the traffic. A lot of Twitter-sourced traffic comes from native apps, so when people click links from tweets, they usually don’t send referrer information.

If the redirects were server-side (setting the Location header), a blank referrer remains blank. Client-side redirects will set the referrer value.

From Twitter’s POV, there’s value in more fully conveying how much traffic they send to sites, even if it minorly inconveniences users.
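You can see the client-side mechanism with something like this (hypothetical command, borrowing a t.co link from elsewhere in the thread):

  # With a browser UA, the body should contain the META refresh the browser
  # follows - and that in-page navigation is what sets the t.co referrer.
  curl -s -A 'Mozilla/5.0 (X11; Linux x86_64; rv:109.0) Gecko/20100101 Firefox/117.0' https://t.co/DzIiCFp7Ti | grep -io 'http-equiv="refresh"[^>]*'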

dalore · 2 years ago
Crawlers and tools will get the right location http header but browsers and users will get the delay.
spiderfarmer · 2 years ago
Cookies?
mzs · 2 years ago
No, in fact now t.co even returns an empty body with its 301 response:

  % curl -vgsSw'< HTTP/size %{size_download}\n' https://t.co/DzIiCFp7Ti 2>&1 | grep '^< \(HTTP/\)\|\(location: \)'
  < HTTP/2 301 
  < location: https://www.threads.net/@chaco_mmm_room
  < HTTP/size 0

mlyle · 2 years ago
You didn't read the second paragraph of the comment you replied to, which explained this exact issue before you replied "no":

> You don't see this with curl/wget because t.co uses user-agent sniffing. If it doesn't think you're a browser it _will_ give you a Location header. To see it, capture a request in Firefox developer tools, right click on the request, copy as cURL.

flutas · 2 years ago
Firefox:

    <head><noscript><META http-equiv="refresh" content="0;URL=https://www.threads.net/@chaco_mmm_room"></noscript><title>https://www.threads.net/@chaco_mmm_room</title></head><script>window.opener = null; location.replace("https:\/\/www.threads.net\/@chaco_mmm_room")</script>

kens · 2 years ago
I can confirm. NYT shows a five-second redirect delay: "wget https://t.co/4fs609qwWt". It redirects to gov.uk immediately: "wget https://t.co/iigzas6QBx"
craftkiller · 2 years ago
Oddly enough the delay is reduced to 1 second by using curl's user-agent string (wget --user-agent='curl/8.2.1' https://t.co/4fs609qwWt)
ilikehurdles · 2 years ago
Seeing this makes me wonder if it's some sort of server-side header bidding ad server gone haywire, rather than something nefarious. Why would they only delay browser agents otherwise?
pityJuke · 2 years ago
Could this be explained by the UA derived redirect behaviour described in this other comment on the thread? https://news.ycombinator.com/item?id=37130982
mbernstein · 2 years ago
Agree/confirmed - just recorded a number of different nytimes URLs that pass through t.co, all 4.7s+. Various cnbc and google articles through t.co were ~130-200ms response time from t.co specifically (not total redirect -> page load).
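If anyone wants to repeat the measurement, here's a rough sketch using the two t.co links posted in this thread (one NYT, one gov.uk; no -L, so it times t.co itself rather than the full redirect chain):

  for u in https://t.co/4fs609qwWt https://t.co/iigzas6QBx; do
    curl -so /dev/null -A 'Mozilla/5.0 (X11; Linux x86_64)' -w "$u -> %{time_total}s\n" "$u"
  done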
jquery · 2 years ago
I almost didn't believe OP, because it's so comically inept and petty. But, I can also confirm in some private testing there is a deliberate delay.
adhesive_wombat · 2 years ago
Considering how common it is to deliberately break websites for non-logged-in mobile users (not just with a "you log in now" modal, but by not loading content, spinning forever, breaking layouts, etc.), that's exactly how petty I imagine them to be. Twitter and Reddit do it, and Imgur comes and goes, so I can't decide if for them it's deliberate or just incompetence.
TheRealSteel · 2 years ago
"because it's so comically inept and petty"

This is precisely why I did believe OP. This is Elon Musk we're talking about.

_a9 · 2 years ago
I'm not getting the same time delay with curl:

- `time wget https://t.co/4fs609qwWt` -> `0m5.389s`

- `time curl -L https://t.co/4fs609qwWt` -> `0m1.158s`

djvdq · 2 years ago
And now add browser user-agent to the curl request and watch how slow it gets.

- `time curl -A "Mozilla/5.0 (X11; Linux x86_64; rv:60.0) Gecko/20100101 Firefox/81.0" -L https://t.co/4fs609qwWt` -> 4.730 total

- `time curl -L https://t.co/4fs609qwWt` -> 1.313 total

Same request, the only difference is user-agent.

yohannesk · 2 years ago
Is there some cache going on? On my first attempt, there is a 5-second delay. When I try it a second time immediately, it works without the 5-second delay. But if I try again after an hour, 5-second delay again!
hrrsn · 2 years ago
Safari seems to be caching it for me, but I can reproduce the delay every time with curl - so long as the user agent doesn't include the string "curl".
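A quick way to check that, using the NYT link from upthread (results may vary):

  # Same request each time; only the UA changes. "curl" anywhere in it seems to skip the delay.
  for ua in 'Mozilla/5.0 (X11; Linux x86_64)' 'Mozilla/5.0 curl'; do
    curl -so /dev/null -A "$ua" -w "'$ua' -> %{time_total}s\n" https://t.co/4fs609qwWt
  done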
praisewhitey · 2 years ago
I tested some substack.com links and there's a delay on those too.
snake_doc · 2 years ago
Could it just be rotting infrastructure? I.e., there is some logic on the most-visited domains to allow ease of moderation; that logic is read-heavy and is now buckling under skew.

Or even like some junior dev removed an index

joecool1029 · 2 years ago
A few years ago I remember their URL shortener on the Android app directing somewhere that my hostfile adblocker would catch (like an analytics domain or something). This made it so the first click on certain twitter links would fail, but if I clicked again it would go through successfully. Ultimately I never researched it deeply enough, but my guess is they had some sort of handler that would log whether loading their analytics service failed and serve up the direct link on the second attempt.
zagrebian · 2 years ago
It’s about 4.5 seconds for me

https://imgur.com/a/qege0O9

praisewhitey · 2 years ago
4521ms according to curl

  curl -A "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:109.0) Gecko/20100101 Firefox/117.0" -I "https://t.co/4fs609qwWt"
  x-response-time: 4521

lamontcg · 2 years ago
Do any DNS resolver libraries have a 4.5-second timeout? Maybe their infrastructure is just rotting.
PenguinCoder · 2 years ago
Now do additional testing by adding and setting the HTTP Referer to t.co or twitter. Is it Twitter, or is it NYTimes doing this?
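Something like this, hypothetically, hitting the nyti.ms destination (taken from elsewhere in the thread) directly, with and without a t.co Referer:

  time curl -so /dev/null -A 'Mozilla/5.0 (X11; Linux x86_64)' -e 'https://t.co/' -L https://nyti.ms/453cLzc
  time curl -so /dev/null -A 'Mozilla/5.0 (X11; Linux x86_64)' -L https://nyti.ms/453cLzc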
kens · 2 years ago
You can do that if you want; I don't take orders.
mzs · 2 years ago
I don't see it:

  % curl -gsSIw'foo %{time_total}\n' -- https://t.co/4fs609qwWt https://t.co/iigzas6QBx | grep '^\(HTTP/\)\|\(location: \)\|\(foo \)'
  HTTP/2 301 
  location: https://nyti.ms/453cLzc
  foo 0.119295
  HTTP/2 301 
  location: https://www.gov.uk/government/news/uk-acknowledges-acts-of-genocide-committed-by-daesh-against-yazidis
  foo 0.037376

lapcat · 2 years ago
I think Twitter, err, X, just turned off the delay now that it's getting big media attention. I could reproduce it over and over again a little earlier, but now I can't anymore: https://news.ycombinator.com/item?id=37138161

[Edit:] I'm still seeing it with threads.net:

  curl -v -A 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/16.6 Safari/605.1.15' https://t.co/DzIiCFp7Ti

XCSme · 2 years ago
They both load instantly for me.
ActivePattern · 2 years ago
It's already been reverted.
graybeardhacker · 2 years ago
The solution to X (Twitter) sucking is to stop using it. It will either get fixed or go out of business and be replaced.

It seems we've become a society that rewards bad practices with attention, which is all any company on the web is trying to get: your attention.

hackerlight · 2 years ago
> It seems we've become a society that rewards bad practices with attention

I have a very different way of looking at this. It's not us that gives attention. It is them that take it via exploiting our evolved inflexible cognitive systems for attention/reward/desire/anger/lust. We are moths to a flame. The moth's free will isn't to blame for its inability to avoid it. Our cognitive systems are fixed, we can't just turn them off. If a sufficiently powerful dopamine-inducing technology is made, you can't just "opt out". It is not as simple as that. Any individual variation in the ability to opt out likely comes down to variation in genetics or other extraneous factors not inside one's immediate control.

This is where regulation needs to come in. Once you accept the reality that opting out is a comforting yet false illusion, you can then do something about it.

altacc · 2 years ago
Tim Wu makes a similar point in his book The Attention Merchants. Humans are interested in things and throughout time various people and media (which tends to be controlled by a small number of people) have been working to capture our attention. It is very hard to totally opt out of something that is so pervasive, like fish trying to ignore water.
ravetcofx · 2 years ago
Mastodon has been a breath of fresh air and you can get a really interesting feed going when you follow the right people and hashtags
RMPR · 2 years ago
gsuuon · 2 years ago
Which people and hashtags? Trying to check it out but struggling to find relevant content. Is there a tech community somewhere? The ones I found appeared to be dead.
mutant_glofish · 2 years ago
I think that HN itself also shadow flags submissions from a list of domains it doesn't like.

Try submitting a URL from the following domains, and it will be automatically flagged (but you can't see it's flagged unless you log out):

  - archive.is
  - watcher.guru
  - stacker.news
  - zerohedge.com
  - freebeacon.com
  - thefederalist.com
  - breitbart.com

dang · 2 years ago
Well, yes, many sites are banned on HN. Others are penalized (see e.g. https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...). None of this is secret, though we don't publish the lists themselves.

Edit: about 67k sites are banned on HN. Here's a random selection of 10 of them:

  vodlockertv.com
  biggboss.org
  infoocode.com
  newyorkpersonalinjuryattorneyblog.com
  moringajuice.wordpress.com
  surrogacymumbai.com
  maximizedlivingdrlabrecque.com
  radio.com
  gossipcare.com
  tecteem.com

rhaksw · 2 years ago
It is a secret if the system does not inform the poster that their submission has been penalized.
mdp2021 · 2 years ago
Understandable, but I think there should be some discriminating system for another class of sites, the "you can submit but not discuss" ones.

For example, a recent submission (of mine):

"Luis Buñuel: The Master of Film Surrealism"

it had no discussion space because (I guess) it comes from fairobserver.com. Now, I understand that fairobserver.com may have been a hive of dubious publishing historically, but it makes little sense that we cannot discuss Buñuel...

Maybe a rough discriminator (function approximator, Bayesian, etc.) could try to decide, based at least on the title, whether a submission from "weak editorial board" sites is material worth allowing posts on.

dredmorbius · 2 years ago
pg posted an early version of the list back in March 2009, when it included only 2,096 sites:

<https://news.ycombinator.com/item?id=498910>

That grew fairly rapidly; it was at 38,719 by 30 Dec 2012:

<https://news.ycombinator.com/item?id=4984095> (a random 50 are listed).

I suspect that overwhelmingly the list continues to reflect the characteristics of its early incarnations.

bravogamma · 2 years ago
Do you have get-out-of-jail or N-strikes-and-you're-out policies? What if someone's legitimate website gets caught in this? I've also long wondered about user specific shadow bans. Can you please shed light on this?
zoky · 2 years ago
Well that explains why all those links I posted to maximizedlivingdrlabrecque.com never got any traction…
joenot443 · 2 years ago
That's a lot of domains! Did you source that from some other list, or is that a result of 67k individual entries? Either way, I appreciate it.

Out of curiosity, what's the rationale for blocking archive.is? Legal reasons I assume?

amadeuspagel · 2 years ago
Maybe "major media" should include tech media like The Register, Ars Technica, Tech Dirt, etc.. Unlike with media like the NYT, Bloomberg or Reuters, I've never seen a story for which these sites were the best source and much of what they publish is blogspam summarizing stories that have already been posted on HN, usually with a votebait title.
quickthrower2 · 2 years ago
SEO optimized domains, so 2010 :-)
fnord77 · 2 years ago
radio.com looks legit, what is wrong with it?
rpruiz · 2 years ago
So, is there an algorithm to be featured on the front page, other than upvotes? If a site can be banned, can another one be promoted?
jemmyw · 2 years ago
Would be nice if the lists were published, though, with a link to the list from the submission form.
networkchad · 2 years ago
Why not publish the list? Users would know what not to submit in that case. Except maybe you’re worried about the list being heavily curated a certain way…
elcano · 2 years ago
I can't believe you all fell for the whataboutism.
mutant_glofish · 2 years ago
[flagged]
mutant_glofish · 2 years ago
And you don't see that as censorship?
pavlov · 2 years ago
The difference is that HN is explicitly heavily moderated while Twitter pretends to be an equitable free speech platform.
gmerc · 2 years ago
Unless you disagree with Elon
chasing · 2 years ago
Good.

Hacker News isn't an open-ended political site for people to post weird propaganda.

mutant_glofish · 2 years ago
How's archive.is "weird propaganda"?

Deleted Comment

archo · 2 years ago
> Try submitting a URL from the following domains, and it will be automatically flagged (but you can't see it's flagged unless you log out): - archive.is

I can assure you that is not the case with HN for posting archive.is URLs. Proof?

Look at my comment postings: https://news.ycombinator.com/threads?id=archo

Is it possible you have been shadow-banned for poor compliance with the Guidelines [1] & FAQs [2]?

[1] : https://news.ycombinator.com/newsguidelines.html

[2] : https://news.ycombinator.com/newsfaq.html

mutant_glofish · 2 years ago
> I can assure you that is Not the case with HN: on posting archive.is URL's, proof?

It's not banned in comments, but it is banned in submissions. @dang (HN's moderator) confirms that here: https://news.ycombinator.com/item?id=37130177

janandonly · 2 years ago
I must admit that I've never really delved into the rules on what/how to post on HN.

For example, I've linked to my work, but it never occurred to me to use "Show HN".

Maybe this is no big deal? Or perhaps for new signups, it would be good to “soft force” them to read the FAQ?

jahsome · 2 years ago
The assertion is about submissions, not comments.
janandonly · 2 years ago
Isn't blocking Stacker.news a petty move?

It's basically HN, but you can earn small tips for submissions and comments.

joshstrange · 2 years ago
> It's basically HN, but you can earn small tips for submissions and comments.

Guesses it's crypto bullshit

goes to website

Yep, exactly as expected. Karma alone can mess with incentives; I cannot imagine that adding a monetary incentive does anything but make it worse. Also, crypto has the reverse Midas touch in everything I've experienced first-hand or read, so adding that into the mix is just another black mark.

arijun · 2 years ago
It could be because they saw they were getting low quality links from there. In any case, since HN prefers original sources, it’s less likely that a news aggregator would be a good source (the occasional Reddit comment notwithstanding)
iguana_lawyer · 2 years ago
Yeah. HN bans your favorite white supremacist blogs. I don’t see a problem with that.
jquery · 2 years ago
the wise man bowed his head solemnly and spoke: "theres actually zero difference between good & bad things." -- @dril
fortran77 · 2 years ago
There are other domains that, while not algorithmically banned, have an army of obsessive people who will flag any story from them if they see it.
PenguinCoder · 2 years ago
Additional details I wrangled for this rabbit hole. I don't think it's t.co doing this intentionally, but rather poor handling of 'do you have our cookies or not'. Everyone in this thread is _proving things_ without taking into account the complexity of the modern web.

   man curl
       -b, --cookie <data|filename>
              (HTTP) Pass the data to the HTTP server in the Cookie header. It is supposedly the data previously received from the server in a "Set-Cookie:" line.
----

Add that option to your curl tests.

    ---
    $ time curl -s -b -A "curl/8.2.1" -e ";auto" -L https://t.co/4fs609qwWt -o /dev/null | sha256sum 
    eb9996199e81c3b966fa3d2e98e126516dfdd31f214410317f5bdcc3b241b6a2  -

    real    0m1.245s
    user    0m0.087s
    sys     0m0.034s
    ---

    $ time curl -s -b -e ";auto" -L https://t.co/4fs609qwWt -o /dev/null | sha256sum 
    eb9996199e81c3b966fa3d2e98e126516dfdd31f214410317f5bdcc3b241b6a2  -

    real    0m1.265s
    user    0m0.103s
    sys     0m0.023s
    ---

    $ time curl -s -b -A "Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101 Firefox/102.0" -e ";auto" -L https://t.co/4fs609qwWt -o /dev/null | sha256sum 
    eb9996199e81c3b966fa3d2e98e126516dfdd31f214410317f5bdcc3b241b6a2  -

    real    0m1.254s
    user    0m0.100s
    sys     0m0.018s
    ---

scientya · 2 years ago
Amazing that this poor handling of 'do you have our cookies or not' only affects newspapers and social media sites that Elon doesn't like! What a coincidence.
dymk · 2 years ago
If it's not intentional, why are people observing different behavior (no delay) for other domains, but a delay for NYT, bsky, etc.?
mzs · 2 years ago
oh boy... -b takes an argument, which in your examples is -A or -e; what follows is then interpreted as a URL, and you threw away the warnings:

  % curl -vgsSIw'> %{time_total}\n' -b -A "curl/8.2.1" https://t.co/DzIiCFp7Ti 2>&1 | grep '^\(* WARNING: \)\|\(Could not resolve host: \)\|>' 
  * WARNING: failed to open cookie file "-A"
  * Could not resolve host: curl
  curl: (6) Could not resolve host: curl
  * WARNING: failed to open cookie file "-A"
  > HEAD /DzIiCFp7Ti HTTP/2
  > Host: t.co
  > User-Agent: curl/8.1.2
  > Accept: */*
  > 
  > 0.013309
  > 0.112494

PenguinCoder · 2 years ago
Alright, thanks for explaining that. Here's what I see when explicitly setting the cookie jar:

    $ time curl -s -b cookies.txt -c cookies.txt -A "Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101 Firefox/102.0" -e ";auto" -L https://t.co/DzIiCFp7Ti

    [t.co meta refresh page src]

    real     0m4.635s
    user   0m0.004s
    sys     0m0.008s

    $ time curl -b cookies.txt -c cookies.txt -A "wget/1.23" -e ";auto" -L https://t.co/DzIiCFp7Ti
    curl: (7) Failed to connect to www.threads.net port 443: Connection refused
    real     0m4.635s
    user   0m0.011s
    sys     0m0.005s

    $ time curl -b cookies.txt -c cookies.txt -e ";auto" -L https://t.co/DzIiCFp7Ti
    curl: (7) Failed to connect to www.threads.net port 443: Connection refused
    real     0m0.129s
    user   0m0.000s
    sys     0m0.013s
The 'failed to connect' errors are likely threads.net blocking those user agents, but the timing is still there, and it differs from the first UA attempt.

ender7 · 2 years ago
I can replicate this behavior fairly easily in a browser.

  1. Open incognito window in Chrome
  2. Visit https://t.co/4fs609qwWt -> 5s delay
  3. Open a second tab in the same window -> no delay
  4. Close window, start a new incognito session
  5. Visit https://t.co/4fs609qwWt -> 5s delay returns

xslowzone · 2 years ago
The reason there isn't a delay on the second click is that the redirect is cached locally in your browser.

Your humble anonymous tipster would appreciate it if you did a little legwork.
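For example, curl doesn't cache, so back-to-back requests should both eat the delay (sketch using the NYT link from the top of the thread):

  time curl -so /dev/null -A 'Mozilla/5.0 (X11; Linux x86_64)' https://t.co/4fs609qwWt
  time curl -so /dev/null -A 'Mozilla/5.0 (X11; Linux x86_64)' https://t.co/4fs609qwWt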

PenguinCoder · 2 years ago
What is that attempting to prove or replicate?

Here's a simpler test I think replicates what I'm indicating in the GP comment, with regard to cookie handling:

Not passing a cookie to the next stage; pure GET request:

    $ time curl -s -A "Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101 Firefox/102.0" -e ";auto" -L https://t.co/4fs609qwWt > nocookie.html

    real    0m4.916s
    user    0m0.016s
    sys     0m0.018s

Using `-b` to pass the cookies _(same command as above, just adding `-b`)_

    $ time curl -s -b -A "Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101 Firefox/102.0" -e ";auto" -L https://t.co/4fs609qwWt > withcookie.html

    real    0m1.995s
    user    0m0.083s
    sys     0m0.026s
Look at the differences in the resulting files for 'with' and 'no' cookie. One redirect works in a timely manner; the other takes ~4-5 seconds to redirect.
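(Caveat: the bare -b above has the same parsing quirk flagged upthread, so the 'with cookie' run actually went out with curl's default user agent. A version that pins the UA and uses a real cookie jar would look something like this; run it twice so the second pass replays the stored cookies:)

    $ time curl -s -b jar.txt -c jar.txt -A "Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101 Firefox/102.0" -e ";auto" -L https://t.co/4fs609qwWt > withcookie.html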

ChrisArchitect · 2 years ago
Amen
ChrisArchitect · 2 years ago
Good work Penguin. I believe in you
pizza · 2 years ago
Remember when people were excoriating Google AMP for encouraging walled gardens? If true, this seems to be in much worse faith than that.
zx8080 · 2 years ago
Not worse. They are both as evil as it gets. Typical: take a public resource and use it for exclusive profit.

What happened to net neutrality? Could it be applied to this case?

pb7 · 2 years ago
Net neutrality has been dead since 2017.
88913527 · 2 years ago
The speed at which enshittification is being unleashed surprises me each and every day.
blowski · 2 years ago
Enshittification is different. It’s when companies destroy a product with hundreds of changes that prioritise internal politics above what end users want.

This is something else - just the ego of one rich guy petulantly satisfying his inner demons.

Terr_ · 2 years ago
"Porque no los dos?"

A five-second delay may be enough to cause a measurable increase in the "stickiness" of Twitter if some people wait <5 seconds before clicking or scrolling onwards to something else.

Then they spend more time generating ad-revenue for Twitter than if they had gone off to the New York Times or something and started browsing over there.

xg15 · 2 years ago
> just the ego of one rich guy petulantly satisfying his inner demons.

As that rich guy happens to be the CEO, how is this not the prime example of "prioritising internal politics above what end users want"?

jonny_eh · 2 years ago
> that prioritise internal politics

I thought it was about increasing short-term revenue.