I don't keep a "dick bar" that sticks to the top of the page to remind you which site you're on. Your browser is already doing that for you.
A variation of this is my worst offender, the flapping bar. Not only does it take space, it flaps every time I adjust my overscroll by pulling back, and it covers the text I was trying to adjust. The hysteresis before it hides again is usually too big, and that makes you potentially overscroll again.
Special place in hell for those who hide the flap on scroll-up but show it again when the scroll inertia ends, without even pulling back.
Can’t say here what I think about people who do the above, but you can imagine.
Another common problem with overlaid top bars is that when following fragment links within a page, the browser scrolls the page such that the target anchor is at the top of the window, which then means it's hidden by the top bar. For example, when jumping to a subsection, the subsection title (and the first lines of the following paragraph text) will often be obscured by the top bar.
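For sites that insist on keeping the bar, that particular symptom at least has a cheap fix; a one-line TypeScript sketch, where the 4rem offset is an assumed bar height:

    // Reserve space above anchor targets so fragment links land below the bar.
    // Equivalent CSS: html { scroll-padding-top: 4rem; }
    // The 4rem value is an assumed bar height, not a universal constant.
    document.documentElement.style.scrollPaddingTop = '4rem';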
Funnily enough, for years I would say the general consensus on HN was that it was a thoughtful alternative to having to scroll back to the top, especially back when it was a relatively new gimmick on mobile.
I remember arguing about it on HN back when I was in uni.
It can actually be done correctly, as e.g. Safari does it in top-URL-bar mode.
- When a user scrolls content-up in any way, the header collapses immediately (or you may just hide it).
- When a user scrolls content-down by pulling, without "a kick", then it stays collapsed.
- When a user "kick"-scrolls content-down, i.e. scrolls carelessly, in a way that a when finger lifts, scroll still has inertia -- then it gets shown again. Maybe with a short activation distance or inertia level to prevent ghost kicks.
As a result, adjusting text by pulling (including repeatedly) won't flap anything, and if a user kick-scrolls, then they can access the header, if it has any function to it. It sort of separates content-down scroll into two different gestures, which you just learn and use appropriately.
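For the curious, a rough TypeScript sketch of that gesture split (the class name, event handling, and kick threshold are my assumptions, not Safari's actual implementation):

    // Sketch: collapse on content-up scroll, stay collapsed on a slow pull,
    // reveal only on a "kick" (finger lifts while content still flings down).
    const header = document.querySelector<HTMLElement>('.site-header')!;
    const KICK_VELOCITY = 0.7; // px/ms; tune to filter out "ghost kicks"

    let touching = false;
    let prevScrollY = window.scrollY;
    let lastTouchY = 0;
    let lastTouchT = 0;
    let velocity = 0; // positive = finger (and content) moving down

    window.addEventListener('touchstart', (e) => {
      touching = true;
      velocity = 0;
      lastTouchY = e.touches[0].clientY;
      lastTouchT = performance.now();
    });

    window.addEventListener('touchmove', (e) => {
      const y = e.touches[0].clientY;
      const t = performance.now();
      velocity = (y - lastTouchY) / Math.max(t - lastTouchT, 1);
      lastTouchY = y;
      lastTouchT = t;
    });

    window.addEventListener('touchend', () => {
      touching = false;
      // The "kick": scroll still has inertia when the finger lifts.
      if (velocity > KICK_VELOCITY) header.classList.remove('collapsed');
    });

    window.addEventListener('scroll', () => {
      // Any content-up scroll collapses immediately; pulling content down
      // with the finger still on the glass never brings the header back.
      if (window.scrollY > prevScrollY) header.classList.add('collapsed');
      prevScrollY = window.scrollY;
    });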
But instead most sites implement the most clinical behavior, as described in the comment above. If a site does that, its DNS record should be immediately revoked and its owner put on probation, at the legislative level.
Most mobile browsers lack a "home" key equivalent (or bury it in a not-always-visible on-screen soft-keyboard). That's among the very few arguments in favour of a "Top" navigation affordance.
I still hate such things, especially when using a desktop browser.
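https://github.com/t-mart/kill-sticky
The bar collapses and then pops back up on iOS if you scroll content-up in a non-inertial way.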
The ACM's site has a bar like that, though it's thin enough that the issue is with the animations rather than the size: it expands then immediately collapses after even a pixel's worth of scrolling, so it's basically impossible to get at with the "hide distracting elements" picker.
I've yet to encounter a "dick bar" that doesn't jerk the page around when it collapses. Not smooth at all. I'm surprised that it hasn't been solved in 10 years.
I agree with most of this. If every website followed these, the web would be heaven (again)...
But why this one?
>I don't force you to use SSL/TLS to connect here. Use it if you want, but if you can't, hey, that's fine, too.
What is wrong with redirecting 80 to 443 in today's world?
Security wise, I know that something innocuous like a personal blog is not very sensitive, so encrypting that traffic is not that important. But as a matter of security policy, why not just encrypt everything? Once upon a time you might have cared about the extra CPU load from TLS, but nowadays it seems trivial. Encrypting everything arguably helps protect the secure stuff too, as it widens the attacker's search space.
These days, browsers are moving towards treating HTTP as a bug and throwing up annoying propaganda warnings about it. Just redirecting seems like the less annoying option.
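For reference, the port-80 side of that redirect is only a few lines; a minimal Node/TypeScript sketch, assuming TLS termination lives elsewhere (e.g. behind a reverse proxy):

    import { createServer } from 'node:http';

    // Answer every plain-HTTP request with a permanent redirect to HTTPS.
    createServer((req, res) => {
      const host = req.headers.host ?? 'example.com'; // fallback host is illustrative
      res.writeHead(301, { Location: `https://${host}${req.url ?? '/'}` });
      res.end();
    }).listen(80);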
Some old-enough browsers don't support SSL. At all.
Also, something I often see non-technical people fall victim to is that if your clock is off, the entirety of the secure web is inaccessible to you. Why should a blog (as opposed to say online banking) break for this reason?
Even older browsers that support SSL often lack up-to-date root certificates, which prevents them from establishing trust with modern SSL/TLS certificates.
When you force TLS/HTTPS, you are committing both yourself (server) and the reader (client) to a perpetual treadmill of upgrades (a.k.a. churn). This isn't a value judgement, it is a fact; it is a positive statement, not a normative statement. Roughly speaking, the server and client software need to be within, say, 5 years of each other, maybe 10 years at maximum - or else they are not compatible.
For both sides, you need to continually agree on root certificates (think of how the ISRG had to gradually introduce itself to the world - first through cross-signing, then as a root), protocol versions (e.g. TLSv1.3), and cipher suites.
For the server operator specifically, you need to find a certificate authority that works for you and then continually issue new certificates before the old one expires. You might need to deal with ordering a revocation in rare cases.
I can think of a few reasons for supporting unsecured HTTP: People using old browsers on old computers/phones (say Android 4 from 10 years ago), extremely outdated computers that might be controlling industrial equipment with long upgrade cycles, simple HTTP implementations for hobbyists and people looking to reimplement systems from scratch.
I haven't formed a strong opinion on whether HTTPS-only is the way to go or dual HTTP/HTTPS is an acceptable practice, so I don't really make recommendations on what other people should do.
For my own work, I use HTTPS only because exposing my services to needless vulnerabilities is dumb. But I understand if other people have other considerations and weightings.
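That's a fair point. HTTP changes more slowly. Makes sense for sites where you're aiming for longevity.
Maybe intranet sites. Everything else absolutely should.
https://doesmysiteneedhttps.com/
Unencrypted connections can be weaponized by things like China's Great Cannon.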
this is the statement of someone who wasn't around in 2013 when the snowden leaks happened and google's datacenters got owned. everyone switched to https shortly thereafter
Both Chrome and Firefox will get you to the HTTPS website even though the link starts with "http://", and it works, what more do you want?
You have to type "http://" explicitly, or use something that is not a typical browser, to get the unencrypted HTTP version. And if that's what you are doing, that's probably what you want. There are plenty of reasons why, some you may not agree with, but the important part is that the website doesn't try to force you.
That's the entire point of this article: users and their browsers know what they are doing, just give them what they ask for, no more, no less.
I also have a personal opinion that SSL/TLS played a significant part in "what's wrong with the internet today". Essentially, it is the cornerstone of the commercial web, and the commercial web, as much as we love to criticize it, brought a lot of great things. But also a few not so great ones, and for a non-commercial website like this one, I think having the option of accessing it the old (unencrypted) way is a nice thing.
I understand the thinking, backwards compatibility of course, and why encrypt something that is already freely available? But this means I can set up a public wifi that hijacks the website and displays whatever I want instead.
TLS is about securing your identity online.
I think with AI forgeries we will move more into each person online having a secure identity, starting with well-known personas and content creators.
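Let me explain it to you like this:
The NSA has recorded your receipt of this message.
Trust me, the NSA tracking what you read is MUCH WORSE than Google tracking what you read. Encryption helps defeat that.
She accepts http AND https requests. So it's your choice, you want to know who you're talking to, or you want speed :)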
HTTP/2 doesn't matter in this case, there are only 4 files to transfer: the webpage itself (html), the style sheet (css), the feed icon, and the favicon. You can make do with only the html; the css makes it look better, and the other two are not very important.
It means that HTTP/2 will likely degrade performance because of the TLS handshake, and you won't benefit from multiplexing because there is not much to load in parallel. The small improvement in header size won't make up for what TLS adds. And this is just about network latency and bandwidth. HTTP/2 takes a lot more CPU and RAM than plain HTTP/1.1. Same thing for HTTP/3.
Anyways, it matters even less here because this website isn't lacking SSL/TLS, it just doesn't force you to use it.
My first impulse is to scream obscenities at you, but I've seen this argument repeated so many times that I tend to just keep quiet. I don't think you can't understand; I think you refuse to.
You're basically saying "oh, _YOUR_ usecase is wrong, so let's take this away from everybody because it's dangerous sometimes"
But yeah, I have many machines which would work just fine online except they can't talk to the servers anymore, because the newer algorithms are unavailable even in the latest versions of their browsers (which DO support img tags, gifs and even pngs).
Text littered with hyperlinks on every sentence. Hyperlinks that do on-hover gimmicks like load previews or charts. Emojis or other distracting graphics (like stock ticker symbols and price indicators GOOG +7%) littered among the text.
Backgrounds and images that change with scrolling.
Popups asking to allow the website to send you notifications.
Page footers that are two pages high with 200 links.
Fine print and copyright legalese.
Cookie policy banners that have multiple confusing options and a list of 1000 affiliate third parties.
Traditional banner and text ads.
Many other dark patterns.
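I haven't seen one that shows charts, but I gotta admit, I miss the hover preview when not reading Wikipedia.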
It can be done tastefully. I think this commenter is talking about the brief period when it was fashionable to install plugins or code on your site that mindlessly slapped "helpful" tooltips on random strings. I always assumed it was some AdSense program or SEO that gave you some revenue or good-boy Google points for the number of external links on a page.
In the modern day we've come full circle. Jira uses AI to scan your tickets for non-English strings of letters and hallucinates a definition for the acronym it thinks it means, complete with a bogus "reference" to one of your documents that doesn't mention the subject. They also have RAINBOW underlines so it's impossible to ignore.
I really appreciate hyperlinks that serve as citations, like “here’s some prior art to back up what I’m saying,” or that explain some joke, reference, jargon, etc. that the reader might not be familiar with, but unfortunately a lot of sites don’t use them that way.
Another: "Related" interstitial elements scattered within an article.
Fucking NPR now has ~2--6 "Related" links between paragraphs of a story. I frequently read the site via w3m, and yes, will load the rendered buffer in vim (<esc>-e) to delete those when reading an article.
I don't know if it's oversensitisation or progressive cognitive decline, but even quite modest distracting cruft is increasingly intolerable.
If you truly have related stories, pile them at the end of the article, and put in some goddamned microcontent (title, description, publication date) for the article.
As I've mentioned previously, I have a "cnn-sanify" script which strips story links and headlines from CNN's own "lite" page, and restructures those into a section-organised, time-sorted presentation. Mostly for reading from the shell, though I can dump the rendered file locally and read it in a GUI browser as well.
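See: <https://news.ycombinator.com/item?id=42535359>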
My biggest disappointment: CNN's article selection is pretty poor. I'd recently checked against 719 stories collected since ~18 December 2024, and of the 111 "US" stories, 54% are relatively mundane crime. Substantive stories are the exception.
(The sense that few of the headlines really were significant was a large part of why I'd written the organisation script in the first place.)
> Fucking NPR now has ~2--6 "Related" links between paragraphs of a story.
Some sites even have media, like videos or photo carousels in or before an article, the content of which isn't related to the article at all. So you get this weird page where you're reading an article, but other content is mixed in around each paragraph, so you have no idea what belongs where.
Then add to that all the ads and references to other sections of "top stories", and the page becomes effectively unreadable without reader mode. You're then left with so little content that you start questioning whether you're missing important content or media... You're normally not.
I don't believe that these pages are meant for human consumption.
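https://text.npr.org/
Do you mean metadata?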
Case in point: in the Tom's Hardware article about AMD's Strix Halo (1), there's this sentence:
> AMD says this delivers groundbreaking capabilities for thin-and-light laptops and mini workstations, particularly in AI workloads. The company also shared plenty of gaming and content creation _benchmarks_. (emphasis mine)
I clicked on "benchmarks", expecting to see some, well, benchmarks for the new CPU, hoping to see some games like Cyberpunk that I might want to play. But no, it links to /tag/benchmark.
> Text littered with hyperlinks on every sentence.
This is the biggest hassle associated with reading articles online. I'm never going to click on those links because:
- the linked anchor text says nothing about the website it's linking to
- the link shows a 404 (common with articles 2+ years old)
- the link is probably paywalled
Very annoying that article writing guidelines are unchanged from the 2000s, when linkrot and paywalls were almost unheard of.
Something I wish more site owners would consider is that if you expose endpoints to the internet, you should expect users to interact with them however they choose. Instead of adding client-side challenges that disrupt the user experience, focus on building a secure backend. And please, stop shipping business logic to the frontend - especially if you're going to obfuscate it so badly that it ends up breaking on non-Chrome browsers because that's the only browser you test with.
Of course, there are exceptions. If you genuinely need to use a WAF or add client-side challenges, please test your settings properly. There are websites out there that completely break on Linux simply because they are using Akamai with settings that just don't match the real world and were only tested on Mac or Windows. A little more care in testing could go a long way toward making your site accessible to everyone.
My favorite experience was trying to file taxes on Linux in Germany.
Turns out the ELSTER backend had code along the lines of: if Chrome and Linux, then route to the test account. It wasn't possible to file taxes on Linux for over 6 months after it went online as a mandatory state-funded web service, until they fixed it. I can't even comprehend who writes code like that.
It also took me a very long while to explain to the BKA that I did not try to hack them, and that there are just very incompetent people working at DATEV.
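The government. Case in point...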
It sounds like the easiest solution would be to install another browser (e.g. Firefox) until they fixed the issue. If it is only the combination of Chrome and Linux that is the problem, that is.
It's annoying that every time "they" come up with a new antipattern, "we" have to add yet another extension to the list of mandatory things for each browser. And it also promotes browser monopoly because extensions get ported slowly to non-mainstream browsers.
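I use an extension called "Bar Breaker" that hides these when you scroll away from the top/bottom of the page.[0] More people should know about it.
[0] https://addons.mozilla.org/en-US/firefox/addon/bar-breaker/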
It would be better to have a single extension like uBlock Origin to handle the browser compatibility, and then release the countermeasures through that. In fact, uBlock already has "Annoyances" lists for things like cookie banners, but I don't think it includes the dick bar, unfortunately.
Incidentally, these bars are always on sites where the navbar takes 10% vertical space, cookie banner (full width of course) takes another 30% at the bottom, their text is overspaced and oversized, the left/right margins are huge so the text is like 50% of the width... Don't these people ever look at their own site? With many of these, I'm so confused how anyone could look at it and say it's good to go.
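1. JS disabled by default, only enabled on sites I choose
2. Filter to fix sites that mess with scrolling:
3. Filters for dick bars and other floating elements: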
The president was very insistent that we show popup ads at six different points in time, until he got home and got six popup ads, and said, "You know what? Maybe just two popups."
— Joel Spolsky, What is the Work of Dogs in this Country? (2001): <https://www.joelonsoftware.com/2001/05/05/what-is-the-work-o...>
Extensions are already there: ubo, stylebot. We just have to invent a way to share user-rule snippets across these. There will always be a gray zone between trusted adblock lists included by default and some preferential things.
Nice. This may be my pet peeve on the modern internet. Nearly EVERY site has a dick bar, and the reason I care is it breaks scrolling with spacebar, which is THE most comfortable way to read long content, it scrolls you a screen at a time. But a dickbar obscures the first 1 to…10? lines of the content, so you have to scroll back up. The only thing worse than the dickbar is the dickbar that appears and disappears depending on last direction scrolled, so that each move of the scrolling mechanism changes the viewport size. A pox on them all.
> Nearly EVERY site has a dick bar, and the reason I care is
that when reading on my laptop screen, it takes up valuable vertical space on a small display that is in landscape mode. I want to use my screen's real estate to read the freaking content, not look at your stupid branding bar.
And I don't need any on-page assistance to jump back to the top of the page and/or find the navigation. I have a "Home" key on my keyboard and use it frequently.
I often scroll with the space bar instead of more modern contrivances like arrow keys, scroll wheels, trackpoints, or trackpads. Sites with these header bars always seem to scroll the entire viewport y size instead of (y - bar_height), so after I hit space I have to up-arrow some number of times to see the next line of text that should be visible but is hidden under the bar.
I am usually the first old man to yell at any cloud, and I was overjoyed when someone invented the word "enshittening" for me to describe how the internet has gotten, but it surprised me a bit that people found that one annoying. I can see the problem of it sticking the top of the page with a logo (which is basically an ad and I hate those), but they usually have a menu there, so I always thought of them a bit like the toolbar at the top of an application window in a native desktop application. FWIW when I've built those, I've always de-emphasized the branding and focused on making the menus obvious and accessible.
I'm happy to learn something new about other people's preferences, though. If people prefer scrolling to the top, so be it!
EDIT: It occurs to me that this could be a preference setting. A few of the websites that have let me have my way, I've started generating CSS from a Django template and adding configuration options to let users set variables like colors--with really positive feedback from disabled users. At a fundamental level, I think the solution to accessibility is often configurability, because people with different disabilities often need different, mutually incompatible accommodations.
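A minimal sketch of that configurability idea (the storage key, property names, and defaults are hypothetical, not the actual Django setup described above):

    // Apply stored per-user preferences as CSS custom properties, so site
    // styles can be written once against var(--bg), var(--fg), etc.
    interface A11yPrefs {
      background?: string;
      foreground?: string;
      fontScale?: number;
    }

    const prefs: A11yPrefs = JSON.parse(localStorage.getItem('a11y-prefs') ?? '{}');
    const root = document.documentElement.style;

    root.setProperty('--bg', prefs.background ?? '#ffffff');
    root.setProperty('--fg', prefs.foreground ?? '#1a1a1a');
    root.setProperty('--font-scale', String(prefs.fontScale ?? 1));
    // Site CSS: body { background: var(--bg); color: var(--fg);
    //                  font-size: calc(1rem * var(--font-scale)); }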
Another thing to check with sticky headers is how they behave when the page is zoomed. Often the header increases in size proportionally, which can shrink the effective reading area quite a bit. Add in the frequent sticky chat button at the bottom, and users may be left with not a lot of screen to read text in.
There can be a logic to keeping the header at the top like a menu bar, and I applaud you if you take an approach that focuses on value to the user. Though I'd still say most sites that use this approach don't have a strong need for it, nor do they consider smaller viewports except for portrait mobile.
Configuration is great, though it quickly runs into discoverability issues. However it is the only way to solve some things - like you pointed out with colors. I know people who rely on high contrast colors and others that reduce contrast as much as they effectively can.
This is exactly what CSS was designed for: allowing you to define your personal style preferences in your browser, applying them across all websites. The term ‘cascading’ reflects this purpose.
Unfortunately, the web today has strayed far from its original vision. Yet, we continue to rely on the foundational technologies that were created for that very vision.
I prefer it because I read by scrolling down one line at a time. This means that when I want to go back and read the previous couple of lines, I have to scroll up. This shows a big stupid menu of unknown size and behaviour on top of the text I'm trying to re-read.
The biggest problem for me is the randomness between different sites. It's not a problem for Firefox to display a header when I scroll up, since I can predict its behaviour. My muscle memory adapts by scrolling up and then down again without conscious thought. It's a much bigger problem if every site shows its header slightly differently.
I think the key thing is that when I scroll up, 95% of the time I want to see the text up the page, and at most maaaaaaaybe 5% of the time I want to open the menu. This is especially true if I got to your website via a search engine. I don't give a damn what's hidden in your menu bar unless it's the checkout button for my shopping cart, and even then I'd prefer you use a footer for that.
I agree with a lot of the complaints in this article except, I think, two, and this is one of them. I think a sticky header is incredibly useful, and they're not something new. Books have sticky headers! Every page of a book will generally list the title and author at the top. I find it just a useful way to provide context and to help me remember who/what I'm reading. The colours/branding of the sticky header communicate that much better to me than the tiny text-only title/URL of my browser. And the favicon likewise doesn't contain enough detail for me to latch onto it.
But for UX: (1) Keep it small and simple! It shouldn't be more than ~2 lines of text. (2) Make it CSS-only; if you have to use custom JS to achieve a certain effect, be ready to spend a LOT of time to get the details right, or it'll feel janky. (3) Use `scroll-padding` in the CSS to make sure links to sections/etc work correctly.
this is exactly the sort of idealistic post that appeals to HN and nobody else. i dont have a problem with that apart from when technologists try to take these "back to basics" stuff to shame the substacks and the company blogs out there that have to be more powered by economics than by personal passion.
its -obvious- things are mostly "better"/can be less "annoying" when money/resources are not a concern. i too would like to spend all my time in a world with no scarcity.
the engineering challenge is finding alignments where "better for reader" overlaps with "better for writer" - as google did with doubleclick back in the day.
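I think a lot of people outside of HN would prefer that Internet way more than what we have now.
My first for-pay project was enhancing a Gopher server in 1993.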
This actually appeals to everyone. There are words and people can read them. It literally just works. With zero friction. This is peak engineering. It's how the web is supposed to work. It is objectively better. For everyone. Everyone except advertisers.
The only problem to be solved here is the fact advertisers are the ones paying the people who make web pages. They're the ones distorting the web into engagement maximizing content consumption platforms like television.
All the tracking stuff is better for advertisers than going without, and most writers are paid by advertisers. So transitively it would be reasonable to say that tracking is good for writers and bad for readers.
The author isn't trying to profit from the reader's attention; it's just a personal blog. An ad-based business would. Neither is right or wrong, but the latter is distinctly annoying.
I mostly agree with this. Commercial websites probably should track engagement and try to increase it. They should probably use secure http. They probably should not care about supporting browsers without JS. If they need sign in then signing in with Google is useful. There's no harm in having buttons to share on social media if that will help you commercially.
Where I think the post hits on something real is the horrible UI patterns. Those floating bars, weird scroll windows, moving elements that follow you around the site. I don't believe these have been AB tested and shown to increase engagement. Those things are going to lose you customers. I genuinely don't understand why people do this.
Your argument is that writers do this because of "economics", but to the detriment of readers. I don't see how this extends only to HN readers. It applies to all readers in general.
Let's be real. If you have a website where you are trying to sell something to your page visitors (not ad clicks or referral links), then each of these annoyances and hurdles increases the risk that a potential customer backs out of it.
If you give great customer service, you get great customers – and they don't mind paying a premium.
If you're coercing customers, then you get bad customers – and they are much more likely to give you trouble later.
Most business owners are your run of the mill dimwits, because we live in a global feudal economic system – and owning a business doesn't mean you are great at sales or have any special knowledge in your business domain. It usually just means you got an inheritance or that you have the social standing to be granted a loan.
Substack's UI is fairly minimal and does not appear to have many anti-patterns. My only complaint is that it is not easy to see just the people I am subscribed to.
On the first or second page view of any particular blog, the platform likes to greet you with a modal dialog to subscribe to the newsletter, and you have to find and click the "No thanks" text to continue.
Once you're on a page with text content, the header bar disappears when you scroll downward but reappears when you scroll upward. I scroll a lot - in both directions - because I skim and jump around, not reading in a rigidly linear way. My scrolling behavior is perfectly fine on static/traditional pages. It interacts badly with Substack's "smart" header bar, whose animation constantly grabs my attention, and also it hides the text at the top of the page - which might be the very text I wanted to read if it wasn't being covered up by the "smart" header bar.
Substack absolutely refuses to stop emailing you. It's simply not possible to subscribe to paid content and NOT have them either email you or force push notifications. Enough people have complained about this that it's pretty obvious this is intentional on their part.
You can disable this as a browser setting in some browsers (like Chrome). It was driving me nuts until I figured out I could just flip a global flag for it.
It's amazing to me what people tolerate, just because it doesn't seem like a human is doing it to us. If a door-to-door salesman was told to do the equivalent of this stuff, they'd be worried about being punched in the face.
The logic here is that it's you who comes to visit, not them. But the next issue is that everyone agrees it's not normal for a private service either, even if it's free, and it should be disallowed. But laws don't work like that. We simply have no legal systems that could manage that, anywhere on this planet.
If the world was a nightclub, the tech industry would be a creepy guy who goes up to everyone and says "You're now dating me. [Accept or Try Again Later]"