BLKNSLVR · 2 years ago
Totally useless commentary:

It makes me deeply happy to hear success stories like this for a project that's moving in the correct direction, opposite to that of the rest of the world.

Engildification. Of which there should be more!

My soul was also satisfied by the Sleeping At Night post which, along with the recent "Lie Still in Bed" article, makes for very simple options to attempt to fix sleep (discipline) issues.

sph · 2 years ago
It's a function of scale: the larger the team/company behind the product, the greater its enshittification factor/potential.

The author recently went full time on their Marginalia search engine. AFAIK it's a team size of 1, so it's the farthest away from any enshittification risk. Au contraire, like you say: it's at these sizes that you make jewels, where creativity, ingenuity and vision shine.

This comment is sponsored by the "quit your desk job and go work for yourself" gang.

Damogran6 · 2 years ago
This is something I tried doing 10-15 years ago, and while talent, drive, and a good message are all absolutely necessary, there's also some kind of catalyst for getting the idea in front of an audience that I was never able to crack. So yeah, small teams bring a greater possibility of a good product, but there's a little bit of lottery ticket to the exercise, too.
marcosdumay · 2 years ago
Capital structure may be more important than size.

A bootstrapped company can resist enshittification indefinitely, but if it gets any investment, resisting it becomes harder and harder, up to the point where a publicly traded one can't resist it at all.

grumblingdev · 2 years ago
The only issue is motivation. With a team size of 1, people don't realize how much they depend on having just one other person there. Some people can do it. I think a lot are in for a surprise.
scyzoryk_xyz · 2 years ago
I would wonder if there isn't an inherent survivorship bias that only makes it seem like most smaller projects are, as you call them, "jewels". The small bad ones never make it to you, but the big ones are "too big to fail", that sort of thing. Could extend that to big good ones seeming less surprising and more normal because they "just work".
paulsutter · 2 years ago
Yeah! Elon could have already landed on Mars if he’d just built the rockets himself /s
keyle · 2 years ago
It's a breath of fresh air to read of someone that

- cut his resource burn in half,

- is more productive with a smaller screen than before, and

- sleeps like a log at night

(his 3 last blog posts!)

namaria · 2 years ago
I believe we all suffer, in technology and society at large, from excess resources.

Too much processing power, too much memory and storage. Too many data centers, bandwidth. We lost all sense of traction and we're running wild trying to fill up all the extra space and burn up all the extra resources. Software is currently way more complicated than it needs to be, and it makes it overall excessively insecure as well.

I've said this before, and it's becoming official this fall: I'm getting off the wheel. I wish I had the competence of Marginalia's author to brave this storm, but my vessel is taking on water fast and I need to make landfall. Next month I embark on a new stint in academia, hoping to change direction for good.

tannhaeuser · 2 years ago
I can totally relate to the screen post. Mine is even 24" (after having messed around with 27" and 32"). I think it's something that's not talked about enough (i.e. manufacturers mostly produce crap monitors when it comes to smaller sizes, with the exception of LG/Apple and Dell, AFAICT), and it deserves an extra post.
FrankyHollywood · 2 years ago
I also liked this recent post https://news.ycombinator.com/item?id=37207791
brutusborn · 2 years ago
Not useless at all, thanks for posting!

I’ve been struggling with sleep this year and finding out what works for others is very useful. I wouldn’t have found it if not for your comment.

Link for others interested: https://www.marginalia.nu/log/86-sleep/

noman-land · 2 years ago
I like this term, engildification.
BLKNSLVR · 2 years ago
I was aiming for as opposite as possible to the overused (although often unfortunately appropriate) enshittification.
not_your_vase · 2 years ago

> Engildification. Of which there should be more!
There are. You will just never find them with Google.

ricardo81 · 2 years ago
TBF, it looks like this is the first place the word is mentioned. Bing seems to treat it as a spelling mistake for anglicization to an extent.

Great word though.

throwaway290 · 2 years ago
When I stay up late and go on a walk after morning coffee, I just walk like a zombie and then fall asleep for the rest of the day. I think it's rationalization: maybe something else changed for the better, and then you'd both want to go for a walk and also sleep better.

flexagoon · 2 years ago
By the way, Kagi, the paid search engine you might've seen on HackerNews as well, uses Marginalia as one of its data sources

https://help.kagi.com/kagi/search-details/search-sources.htm...

If you use the "non-commercial" lens, those results, along with results from Kagi's own index and a few other independent sources, will be prioritized.

gnyman · 2 years ago
On a side note inspired by this blog post.

I'm wondering if humans are mostly incapable of producing great things without (artificial) restrictions.

In this case, Marginalia is (ridiculously) efficient because Viktor (the creator) is intentionally restricting what hardware it runs on and how much RAM it has.

If he just caved in and added another 32 GiB it would work for a while, but the inefficient design would persist, the problem would just rear its head later, and by then there would be more complexity around that design and it might not be as easy to fix.

If the original thesis is correct, then I think it explains why most software is so bad (bloated, slow, buggy) nowadays. It's because very few individual pieces of software nowadays are hitting any limits (in isolation). So each individual piece is terribly inefficient, but with the latest M2 Pro and a gigabit connection you can just stay ahead of the curve where it becomes a problem.

Anyway, this turned into a rant; but the conclusion might be to limit yourself, and you (and everyone else) will be better off long term.

crote · 2 years ago
It is mostly a matter of priorities.

For most applications it simply does not make any sense to spend this much time on relatively small optimizations. If you can choose to either buy 32GiB of RAM for your server for less than $50 or spend probably over 40 hours of developer time at at least $20 / hour, it is quite obvious which one makes more sense from a business perspective. Not to mention that the website was offline for an entire week - that alone would've killed most businesses!

A lot of tech people really like doing such deep dives and would happily spend years micro-optimizing even the most trivial code, but endless "yak shaving" isn't going to pay any bills. When the code runs on a trivial number of machines, it probably just isn't worth it. Not to mention that such optimizations often end up in code which is more difficult to maintain.

In my opinion, a lot of "software bloat" we see these days for apps running on user machines comes from a mismatch between the developer machine and the user machine. The developer is often equipped with a high-end workstation as they simply need those resources to do their job, but they end up using the same machine to do basic testing. On the other hand, the user is running it on a five-year-old machine which was at best mid-range when they bought it.

You can't really sell "we can save 150MB of memory" to your manager, but you can sell "saving 150MB of memory will make our app's performance go from terrible to borderline for 10% of users".

dgb23 · 2 years ago
What if runtime performance and developer performance aren’t inversely proportional?

It might just be that, to a certain degree, we’re not actually getting any business efficiency from creating bloated and slow software.

A lot of things, especially in business IT, are built on top of outdated and misleading assumptions and are leaning on patterns and norms touted as best practices.

We sometimes get trapped in this belief that any form of performance improvement somehow costs us something. What if it’s baggage that we didn’t need in the first place?

Joeri · 2 years ago
> In my opinion, a lot of "software bloat" we see these days for apps running on user machines comes from a mismatch between the developer machine and the user machine. The developer is often equipped with a high-end workstation as they simply need those resources to do their job, but they end up using the same machine to do basic testing.

Incidentally, I think the reason they need those specs is the same: the people building the dev tools all have top end hardware, and what’s fast enough for them is good enough to ship. I don’t think the people building the dev tools at meta, or apple, or google are seriously considering the use case of a developer working on an old dual core 8 gb machine, but that’s the reality in large parts of the world.

vincnetas · 2 years ago
So I guess that is the point GP is making. From a practical perspective one would just spend $50 on RAM and forget about it. But you miss the opportunity to make something great, in terms of algorithm improvements for example, even if it costs you more.

So here the artificial constraint is that "you can't have more RAM", and so you need to find other, more creative solutions.

stavros · 2 years ago
> over 40 hours of developer time at at least $20

I think maybe you dropped a zero.

marginalia_nu · 2 years ago
Yeah this aligns with my view. Limitations breed ingenuity, and that isn't limited to demo scene outputs. You're going to run into scaling problems sooner or later, and they're a lot easier to deal with early than late. If your software runs well on a raspberry pi[1], it's going to be absurdly performant on a real server.

It's actually how we used to build software. It's why we could have an entire operating system, with most of what you'd expect today, perform well on a machine like a Pentium 1, while at the same time we have web pages that struggle to scroll smoothly on a smartphone with literally a thousand times more resources across every axis. The Word 95 team was constantly faced with limits and performance tradeoffs, and it was very clear when something worked and when it did not.

If I had just gone and added more RAM (or whatever), I would still have been stuck with an inferior design, and soon enough I would need to buy even more RAM. The crazy part about this change is that it isn't just reducing the resource utilization, it's actually making the system more capable, and faster because free RAM means more disk caching.

[1] e.g. this runs on a single pi, and is much faster than production wikipedia because it doesn't permit updates: https://encyclopedia.marginalia.nu/article/Hacker_News

pjerem · 2 years ago
Oh yes! That’s my pet theory too.

I think it’s why old computers felt good and also why old games were so good.

Maybe it has something to do with the complexity of the systems we deal with.

When you have a restricted amount of some resource (RAM, physical space, food, materials, time, money …) you have to plan how you will use it. You are forced to be smart.

When you have a virtually infinite resource, you can make whatever you feel like making, but you don’t really have to care about the final state; you just start, and you’ll see when it works.

I’m not exactly a true gamer, but I’ve always been amazed that humans were capable of storing so much emotion, adventure and time to enjoy in the good old cartridges with a few KB/MB of ROM. I mean, the Ocarina of Time ROM is just the size of the last 8 photos I took with my iPhone.

prox · 2 years ago
The guy who made VirtualDub (virtualdub.org) has a blog where he said essentially that. His video program is super small because it doesn’t use prepackaged libraries; he programmed everything against the hardware / OS interfaces directly.
MichaelZuo · 2 years ago
American Airlines ran SABRE, a sizeable airline ticketing and reservation system, in the mid-1970s on two System/360 mainframes that could only process a few tens of millions of instructions per second.

A Raspberry Pi 2 can do over 4 billion Dhrystone instructions per second, and a Pi 4 over 10 billion per second.

Of course by modern standards mid-1970s SABRE was pretty barebones for an airline's main system, but it's at least theoretically possible to run simplified systems for over 100 airlines simultaneously on a single Pi 2...

So yes, modern programs are very far from optimized. 1000x or 10,000x improvements are possible, less so for math-heavy stuff.
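
Back-of-the-envelope, using the figures quoted above (the SABRE number is an assumption standing in for "a few tens of millions"):

  sabre_ips = 40_000_000         # assumed: "a few tens of millions" of instructions/sec
  pi2_ips = 4_000_000_000        # Raspberry Pi 2: over 4 billion Dhrystone instructions/sec

  ratio = pi2_ips / sabre_ips
  print(f"A Pi 2 has roughly {ratio:.0f}x the instruction throughput of mid-70s SABRE")
  # -> roughly 100x, hence "over 100 airlines" as a purely theoretical upper bound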

nottheengineer · 2 years ago
Good point about not hitting limits individually.

I think Microsoft has a huge problem with this. Even $3,000 laptops from 5 years ago struggle to run a Teams call, a few Office instances, and a browser with 30 tabs at the same time without slowing down to unacceptable levels.

They test stuff individually and running one thing alone is fine, but that's not what people do.

I'd imagine that artificial limits in the form of run time on well-defined hardware that are only raised after an explicit decision could be the solution to this.

But then again I only write business software where the performance aspect comes down to "don't do stupid shit with the database and don't worry about the rest because the client won't pay for those worries", so I might be on the wrong track entirely.

eh8 · 2 years ago
This is especially true in UX design.

When I see a website...

- using a ridiculously thin or small font,

- relying on a high-end monitor to provide sufficient color contrast

- loading an unreasonable amount of resources only to provide laggy animations

...I'm wondering if the responsible designer(s) only have 32-inch Retina displays and the latest MacBooks to work with, because on any other combination of devices, the website looks and feels awful.

And I know this because I was formerly guilty of it!

rqtwteye · 2 years ago
> I'm wondering if the responsible designer(s) only have 32-inch Retina displays and the latest MacBooks to work with, because on any other combination of devices, the website looks and feels awful.

I think often it’s that they aren’t users themselves. They make it “pretty” but not functional.

famahar · 2 years ago
There are so many tools to see how a website looks on multiple screens/devices, even full-on emulation. I can see a designer making this oversight, but a UX designer doing it kind of makes the UX part of the title irrelevant.
roughly · 2 years ago
> I'm wondering if humans are mostly incapable of producing great things without (artificial) restrictions.

On the one hand, I agree with this. I think an awful lot of great art and great work comes from the enforced genius of operating within constraints, and there's a profound feeling that comes from recognizing that kind of brilliance.

I'll also say, though, that there's also something about seeing the results of absolutely turning every knob to 11 - about seeing the absolute unfettered apex of what particularly talented people can actually do with no constraints whatsoever. It's a very different experience, and I deeply respect the genius of making art under constraint, but sometimes you've just gotta put a dude on the moon, you know?

mfru · 2 years ago
> sometimes you've just gotta put a dude on the moon, you know?

In software development it looks like everyone and their grandmother is sending people to all the moons known to mankind, though.

tkgally · 2 years ago
> I'm wondering if humans are mostly incapable of producing great things without (artificial) restrictions.

Here is an opinion to support your hypothesis from a couple of different domains: poetry and music.

While some people prefer free verse and avant-garde music, what stays most in my mind, and what seems to endure longest overall, is poetry with regular rhyme and meter and music that follows standard patterns of melody, rhythm, and harmony. Having to force their creativity into those sometimes rigid frameworks seems to enable many artists to produce better works.

nottheengineer · 2 years ago
Maybe a counterexample to it, not sure if it can be applied:

I write software in ABAP, which is a weird and ridiculously complicated language that has inline SQL and type checking against the database and has never had a major version that breaks old stuff, so code from 30 years ago will (and does) still run.

I used to have fun working around the quirks of it and finding solutions that work within the limitations, but now I'm just frustrated by having to solve problems that haven't been around for the past 15 years in the rest of the world, and by looking at terrible code that can't be made any nicer because of those limitations, or because customers don't give a shit as long as it works most of the time.

maxweylandt · 2 years ago
There are definitely some artists who feel that way. Jack White comes to mind. He'll deliberately use restrictions (like writing music for only two instruments) and even physically obstruct things at live performances. See this (very good) interview with Conan: https://m.youtube.com/watch?t=890&v=AJgY9FtDLbs&feature=yout...
keyle · 2 years ago
This is very true.

And as a developer or a team, you're bound by how long development takes, not by the required resources.

You won't be asked by a business stakeholder "oh, and how much RAM does it take?" or "why is it $2,000 a month instead of $1,000?". These questions tend to come much later when profit needs to be ironed out.

arphox · 2 years ago
And later, when performance becomes important, it is often much harder to improve than early on. Especially with legacy db schemas with a lot of existing customer data.
cglee · 2 years ago
Interesting observation and aligns with my experience of really enjoying small focused tools and apps. This website is a good example.

Further, it feels like there's a corollary here to companies, where financially constrained companies who are smaller and more focused provide better customer experience than cash-flush competitors.

jancsika · 2 years ago
> I'm wondering if humans are mostly incapable of producing great things without (artificial) restrictions.

I think the real issue is that there isn't a programming language that produces a compiler error if the given code can exceed a maximum specified latency.

Even working on a program with soft-realtime scheduling, I've had to constantly push back against patches that introduce some obscure convenience without having measured worst case latency.

The problem is so bad I doubt most people realize it's there. I don't know what the answer is, but I have the feeling there's an intersection with timing attacks on software/hardware. Some kind of tooling that makes both worst-case times and variance as visible as the computed CSS in DevTools would probably help. Added to some kind of static analysis, it would perhaps let devs hack their way to decently responsive interfaces and services.
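
As a crude stopgap, worst-case latency can at least be made a test failure rather than a vibe. A minimal sketch, with a made-up budget and a stand-in function (runnable under pytest):

  import time

  LATENCY_BUDGET_MS = 5.0                   # hypothetical per-call budget

  def handle_request(payload):              # stand-in for the real code under test
      return sorted(payload)

  def test_worst_case_latency():
      worst = 0.0
      for _ in range(1_000):
          start = time.perf_counter()
          handle_request(list(range(10_000, 0, -1)))
          elapsed_ms = (time.perf_counter() - start) * 1000
          worst = max(worst, elapsed_ms)
      # Fail the build if the observed worst case blows the budget, so an
      # "obscure convenience" can't silently regress latency.
      assert worst <= LATENCY_BUDGET_MS, f"worst case {worst:.2f} ms > {LATENCY_BUDGET_MS} ms"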

globular-toast · 2 years ago
It would probably help if the whole "developer spec" thing went away. I never understood why people think they need 32GB of RAM and a top-of-the-line CPU to write code. If you're compiling a lot (especially C++ I guess), then you need a build server. I wonder how much better things would be if "developer spec" actually meant something close to a median or representative spec.
uoaei · 2 years ago
I completely agree. Every creative I've ever trusted has the same philosophy: freedom through constraints. I've found in my life, too, that I can focus more closely on elegant solutions when they become (perhaps artificially) necessary, rather than merely aesthetically pleasing. I'm actually having a similar experience of insane efficiency improvements in a personal project, much smaller in scope, that came down to using bit operations and as-branchless-as-possible methods for an Arduino Nano.
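
To give a flavour of the kind of branchless bit tricks involved (sketched in Python for readability; on the Nano it would of course be C/C++, and these are standard textbook tricks rather than the commenter's actual code):

  def branchless_abs(x: int) -> int:
      """Absolute value without a comparison; assumes 32-bit-range inputs."""
      mask = x >> 31                    # -1 if x is negative, 0 otherwise
      return (x ^ mask) - mask

  def branchless_min(a: int, b: int) -> int:
      """min(a, b) via bit operations instead of an if/else."""
      return b ^ ((a ^ b) & -(a < b))

  assert branchless_abs(-42) == 42
  assert branchless_min(3, 7) == 3
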
isaacremuant · 2 years ago
Without getting into efficiency and priorities, I think it's easy enough to claim that a great way to spark creativity or great solutions is to put constraints in place.

It's about specializing in the use of a few elements to achieve a goal, versus the paradox of choice or falling back on common, well-known patterns.

Jams can be great for this and people realize they can work so much more efficiently and focus on the core of their idea.

boredumb · 2 years ago
I very often use the high-speed 3G throttling option in the network tab when developing web UIs, to give myself some serious constraints instead of assuming everyone is on a developer workstation.

nicbou · 2 years ago
I always love seeing marginalia.nu updates here. You are a cherished user on this website, and I hope that you keep posting.
marginalia_nu · 2 years ago
Aww shucks.
anyfactor · 2 years ago
Oh, thank you. I have been doing a hobby project on search engines, and I kept searching for variations of "Magnolia" for some reason. "Marginalia", at least for me, is hard to remember. Currently, I am trying to figure my way around Searx.

Does Marginalia support "time filters" for search, like past day, past week, etc.? According to the special keywords list, the only search params accepted are based on years:

  year>2005 (beta) The document was ostensibly published in or after 2005
  year=2005 (beta) The document was ostensibly published in 2005
  year<2005 (beta) The document was ostensibly published in or before 2005

marginalia_nu · 2 years ago
The search index isn't updated more than once every month, so no such filters. The year-filter is pretty rough too. It's very hard to accurately date most webpages.
meithecatte · 2 years ago
It's the search engine for the niche stuff. Marginal stuff, if you will. The name makes sense to me.
mananaysiempre · 2 years ago
> In brief, every time an SSD updates a single byte anywhere on disk, it needs to erase and re-write that entire page.

Is that actually true for SSDs? For raw flash it’s not, provided you are overwriting “empty” all-ones values or otherwise only changing 1s to 0s. Writing is orders of magnitude slower than reading, but still a couple orders of magnitude faster than erasing (resetting back to “empty”), and only erases count against your wear budget. It sounds like an own goal for an SSD controller to not take advantage of that, although if the actual guts of it are log-structured then I could imagine it not being able to.
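
For raw flash, the "only 1s to 0s" rule can be modelled literally as a bitwise AND. A toy sketch of the semantics being described here, not of any real controller:

  # Toy model of raw-flash programming semantics (not an SSD controller):
  # a program operation can only turn 1s into 0s; only an erase brings bits back to 1.
  ERASED = 0xFF

  def program(cell: int, value: int) -> int:
      """Programming can only clear bits, so the result is the bitwise AND."""
      return cell & value

  cell = ERASED                          # freshly erased: 0b11111111
  cell = program(cell, 0b10110110)       # fine: only 1 -> 0 transitions
  cell = program(cell, 0b10110100)       # also fine: clears one more bit in place
  assert cell == 0b10110100
  # Turning any 0 back into a 1 would require erasing the whole block first.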

lelanthran · 2 years ago
>> In brief, every time an SSD updates a single byte anywhere on disk, it needs to erase and re-write that entire page.

> Is that actually true for SSDs? For raw flash it’s not, provided you are overwriting “empty” all-ones values or otherwise only changing 1s to 0s.

Maybe it depends. I wrote the driver for more than one popular flash chip (don't remember which ones now, but that employer had a policy of never using components that were not mainstream and available from multiple suppliers), and all the chips I dealt with did read and write exclusively via fixed-size pages.

Since SSDs are collections of chips, I'd expect each chip on the SSD to only support fixed-size paged I/O.

marginalia_nu · 2 years ago
In this scenario I was basically rewriting the entire drive in a completely random order, which is the worst-case scenario for an SSD.

Normally the controller will use a whole bunch of tricks (e.g. overprovisioning, buffering and reordering of writes) to avoid this type of worst case pattern, but that only goes so far.

gavinray · 2 years ago
I was under the following impressions:

1. Writable Unit: The smallest unit you can write to in an SSD is a page.

2. Erasable Unit: The smallest unit you can erase in an SSD is a block, which consists of multiple pages.

So if a write operation impacts only 1 byte within a page, the SSD cannot erase just that byte. However, it does not need to erase the entire block either.

The SSD can perform a "read-modify-write" type of operation:

- Read the full page containing the byte that needs to change into the SSD's cache buffer.

- Modify just the byte that needs updating in the page cache.

- Allocate a free page in an already-erased block.

- Write the modified page from cache to that free page.

- Update the FTL mapping tables to point to the updated page in its new location.

So, a page does need to be rewritten even if just 1 byte changes. Whole-block erasure is avoided until many pages within it need to be modified.
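
A toy flash-translation-layer sketch of that flow (structures, sizes and names invented for illustration; a real FTL is vastly more involved):

  PAGE_SIZE = 4096
  PAGES_PER_BLOCK = 64

  class ToyFTL:
      """Toy flash translation layer illustrating the read-modify-write flow above."""

      def __init__(self, num_blocks: int):
          self.flash = bytearray(b"\xff" * (PAGE_SIZE * PAGES_PER_BLOCK * num_blocks))
          self.mapping = {}              # logical page -> physical page
          self.next_free = 0             # next never-written physical page (no GC in this toy)

      def write_byte(self, logical_page: int, offset: int, value: int) -> None:
          # 1. Read the whole page containing the byte into a buffer.
          old_phys = self.mapping.get(logical_page)
          if old_phys is None:
              page = bytearray(b"\xff" * PAGE_SIZE)
          else:
              start = old_phys * PAGE_SIZE
              page = bytearray(self.flash[start:start + PAGE_SIZE])
          # 2. Modify just the one byte in the buffer.
          page[offset] = value
          # 3. Write the whole modified page to a fresh, already-erased physical page.
          new_phys = self.next_free
          self.next_free += 1
          self.flash[new_phys * PAGE_SIZE:(new_phys + 1) * PAGE_SIZE] = page
          # 4. Remap; the old physical page is now stale and gets erased later,
          #    a block at a time, by garbage collection.
          self.mapping[logical_page] = new_phys

  ftl = ToyFTL(num_blocks=4)
  ftl.write_byte(logical_page=0, offset=10, value=0x42)   # one byte in, one whole page rewritten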

mikehollinger · 2 years ago
> Is that actually true for SSDs?

Not precisely. The logical view of a page living at some address of flash is not the reality. Pages get moved around the physical device as writes happen. The drive itself maintains a map of what addresses are used for what purpose, their health and so on. It’s a sparse storage scheme.

There are even maintenance ops and garbage collection that happen occasionally or on command (like a TRIM).

In reality a “write” to a non-full drive is:

1. Figure out which page the data goes to.

2. Figure out if there’s data there or not. Read / modify / write if needed.

3. Figure out where to write the data.

4. Write the data. It might not go back where it started. In fact it probably won’t, because of wear leveling.

You’re right that the controller does a far more complex set of steps for performance. That’s why an empty/new drive performs better for a while (page cache aside) and then literally slows down, compared to a “full” drive that’s old, with no spare pages.

Source: I was chief engineer for a cache-coherent memory mapped flash accelerator. We let a user map the drive very very efficiently in user space Linux, but eventually caved to the “easier” programming model of just being another hard drive after a while.

Filligree · 2 years ago
> Is that actually true for SSDs?

It's completely false. Even the most primitive SSD controllers would make some attempt at mitigating this.

ricardo81 · 2 years ago
Just a shout-out to my boss at Mojeek, who has presumably had a very similar path to this (the post resonates a lot with past conversations). Mojeek started back in 2004 and for the most part has been a single developer who built the bones of it, and with that, pretty much all of the IR and infrastructure.

Limitations of finance and hardware, decisions about 32- vs 64-bit IDs, sharding, and the speed of updating all sound very familiar.

Reminds me of Google way back when and their 'Google dance' that updated results once a month, nowadays it's a daily flux. It's all an evolution, and great to see Marginalia offering another view point into the web beyond big tech.

aidenn0 · 2 years ago
Great to read this!

Lots of people treat optimization as some deep-black-magic thing[1], but most of the time it's actually easier than fixing a typical bug; all you have to do is treat excessive resource usage identically to how you would treat a bug.

I'm going to make an assertion: most bugs that you can easily reproduce don't require wizardry to fix. If you can poke at a bug, then you can usually categorize it. Even the rare bugs that reveal a design flaw tend to do so readily once you can reproduce it.

Software that nobody has taken a critical eye to performance on is like software with 100s of easily reproducible bugs that nobody has ever debugged. You can chip away at them for quite a while until you run into anything that is hard.

1: I think this attitude is a bit of a hold-out from when people would do things like set their branch targets so that the drum head would reach the target at the same time the CPU wanted the instruction, and when resources were so constrained that everything was hand-written assembly with global memory-locations having different semantics depending on the stage the program was in. In that case, really smart people had already taken a critical eye to performance, so you need to find things they haven't found yet. This is rarely true of modern code.

marginalia_nu · 2 years ago
I agree in general, but I think bugs are a lot easier to track down with divide and conquer strategies. If you're able to reproduce the bug by sending request X to service Y, gradually shrink the test case down until you've found the culprit.

Optimization is often an architectural problem. Sure there are cases where you're copying a thing where you could recycle a buffer, but you run out of those fairly quickly, and a profiler will tell you what you need to know.

A lot of the big performance wins are in changing the entire data logistics, possibly eliminating significant portions of the flow until the code does what it needs to in as few steps as possible.
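
For concreteness, the "recycle a buffer instead of copying" class of fix looks roughly like this (a sketch only; function names and chunk sizes are made up, not from Marginalia):

  def checksum_allocating(path: str) -> int:
      total = 0
      with open(path, "rb") as f:
          while chunk := f.read(64 * 1024):       # allocates a fresh bytes object per chunk
              total = (total + sum(chunk)) & 0xFFFFFFFF
      return total

  def checksum_recycling(path: str) -> int:
      total = 0
      buf = bytearray(64 * 1024)                  # one buffer, reused for every chunk
      view = memoryview(buf)
      with open(path, "rb") as f:
          while n := f.readinto(buf):             # fills the existing buffer in place
              total = (total + sum(view[:n])) & 0xFFFFFFFF
      return total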

aidenn0 · 2 years ago
This is a good point; bugs are less often an architectural problem, and fixing architectural problems (bugs or otherwise) is more difficult than fixing localized ones.