Webamp is quite amazing. I use it on my desktop environment website. I recently added playlist support (m3u/pls/asx) and integrated lazy(er) Butterchurn so it can run as the wallpaper. Winamp/Milkdrop forever!
I was only able to go one level deep. I could open the top-level browser and load the site, which allowed me to open another browser, but that one wouldn't load the site.
Properly coded, there is no reason to assume it would not work at arbitrary depth. Considering that this is on a website running JS, which in most implementations does not perform tail-call elimination, coding it properly probably involves externalizing the stack.
I thought that SQLite databases are not suitable for large multi-user websites. Something to do with only being able to handle one write transaction at a time. Isn't that right?
Sure, but you can have raised millions and acquired large numbers of customers before this is an issue. Having enough customers using your platform at the same time that their transactions actually end up waiting a meaningful amount of time requires a level of usage you often don't reach for a long, long time.
Define large. I'd guess SQLite can do 10-100K TPS, no problem.
You'll want a read cache to avoid repeatedly rendering the same HTML, etc., in the frontend, where most CPU is spent. That will incidentally reduce database load as well.
Assuming 10% writes (which is really, really write heavy), that'll get you to 1M page views per second. After that, you'll need to rearchitect.
(MySQL and Postgres are probably better choices, but I'd evaluate all three if I was setting this up for real.)
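The back-of-envelope math above, made explicit (the TPS ceiling and write share are the commenter's guesses, not measurements):

```typescript
// If the database can sustain ~100K transactions/sec, and only ~10% of
// page views need a write (reads served from the cache), the write budget
// alone supports ~1M page views/sec. All numbers are illustrative.
const dbTps = 100_000;   // assumed SQLite throughput ceiling
const writePercent = 10; // assumed share of page views that write

const pageViewsPerSec = (dbTps * 100) / writePercent;
console.log(pageViewsPerSec); // 1000000
```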
I think this is kind of their point. If you're starting out, you may not need to meet the same considerations as a "large multi user website". So you don't get stuck trying to do that - you start with what you need (appropriately) and adjust down the line.
Somewhere along the way, we seem to have forgotten that we have processors that can do 3 billion mathematical operations per second.
I'm currently trying to shame some people at work into acting on the fact that they wrote some code that should take about 1µs per call but is instead taking over 100µs. If you're stupid with cycles, then you get to be stupid with cores too, and then you get to be stupid with locking semantics.
Context: I see a bunch of people here recommending SQLite. I have a suggestion, try out LMDB. It's kind of like noSQL SQLite. It's a simple in-process K/V store with a few features like compound keys that allow you to model relational data well enough.
A lot of people here say "use SQLite for small projects". But even using SQLite can be significant over-engineering. Running migrations? Writing SQL? That's too much effort for me.
For example: in my application, people can leave comments on (what is effectively) a post. A SQL-native solution might have a table for comments, with foreign keys to post IDs. That's 10x more engineering than I want to do for an MVP. I just store all the comments as an array on the post. This means reads fetch all the comments, and writes require reading all the comments, appending, and then rewriting. That's totally fine, and will probably scale to 100x my current traffic.
LMDB-JS is great. It allows you to serialize arbitrary JS objects to LMDB using the MessagePack encoding. This makes for some super concise code.
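A minimal sketch of the read-append-rewrite pattern described above, with a plain `Map` standing in for an LMDB-style K/V store (lmdb-js would persist the same objects via MessagePack; all names here are illustrative, not from the actual codebase):

```typescript
type Comment = { author: string; body: string };
type Post = { title: string; comments: Comment[] };

// Stand-in for the K/V store; with lmdb-js this would be db.get/db.put.
const store = new Map<string, Post>();

function addComment(postId: string, comment: Comment): void {
  const post = store.get(postId);    // read the whole post...
  if (!post) throw new Error(`no post ${postId}`);
  post.comments.push(comment);       // ...append in memory...
  store.set(postId, { ...post });    // ...and rewrite the whole value
}

store.set("p1", { title: "hello", comments: [] });
addComment("p1", { author: "a", body: "first!" });
```

The whole data layer is just get, mutate, put, which is the point: no schema, no migrations, no SQL.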
> But even using SQLite can be significant over-engineering. Running migrations? Writing SQL? That's too much effort for me.
Key-value stores are not a replacement for SQL; they solve a simpler problem. If a key-value store actually suffices for you, I would argue a SQL table with two columns, key and value, with key as the primary key, will do just fine. Using it needs simple SELECT and 'insert on duplicate key update' statements. The effort for migrations won't be higher than whatever you had to configure for your key-value store.
However, if a key-value store doesn't actually suffice, you will end up writing a lot of the features SQL provides in application code: indices, joins, grouping, ordering, etc. You might not notice it at first, but you will blow up the complexity of your business logic reinventing existing SQL features, and chances are high you're doing it worse and less performantly than what e.g. PostgreSQL offers out of the box.
SQL is just such a powerful tool that is at the same time incredibly easy to use for simple scenarios.
Added benefit of using SQLite, you can migrate to a more powerful SQL database later without having to reengineer your whole data layer.
It is very cool that this is archived. I was the application skin admin over at deviantArt and Deskmod way back in the day (wild times, pre-Web 2.0 so everybody was just making it up as they go) but sadly none of these names ring a bell. I think Deskmod just piddled out and dA got bought and the original crew all left. I lived in the middle of nowhere so being exposed to that level of programming as well as passion for art was extremely inspiring. I even made a punk music spinoff called I Hate Music that introduced me to lifelong friends all over the world.
I have built a graph-based knowledge management system (https://github.com/brettkromkamp/contextualise) on top of SQLite. It runs great. Also, from a management point of view (e.g., deployments, backups) its ease of use is second to none. I migrated the application from PostgreSQL (which is also a great RDBMS) to SQLite and I haven’t looked back.
I love the webamp website. But in a world of multi-gigabyte RAM and multi-Gbps I/O speeds, 1.2GB isn't that much. Not sure if that's something novel for SQLite3, though.
Unlike a webserver setup maybe 20 years ago, it's now trivially easy to hold even a 30GB database entirely in RAM and never touch disk, for a vast performance increase.
Yeah, I have a project with about this much data, and literally read the whole file into memory, and rewrite the whole file on disk every time I need to modify it. It's hilarious that it works.
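The whole-file approach really can be this simple, sketched here with the Node standard library (path and data shape are illustrative):

```typescript
import { readFileSync, writeFileSync } from "node:fs";
import { tmpdir } from "node:os";
import { join } from "node:path";

const dbPath = join(tmpdir(), "tiny-db.json"); // illustrative location

// Seed the "database".
writeFileSync(dbPath, JSON.stringify({ visits: 0 }));

// Every modification: read the entire file into memory, change it,
// and rewrite the entire file back to disk.
const data = JSON.parse(readFileSync(dbPath, "utf8"));
data.visits += 1;
writeFileSync(dbPath, JSON.stringify(data));
```

For data measured in megabytes or low gigabytes on modern hardware, this round trip is fast enough that many projects never need anything more.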
https://dustinbrett.com/
It was interesting, seeing all of the clocks tick, one above another, second after second.
We really need this counterculture in software development that emphasizes simplicity over the "start with Kafka-on-k8s" madness.
What about a real-world workload? For example, I have 10 users a day on my new Next.js app, so I clearly need an RDS cluster for burst traffic.
A site like this which is mostly a read-only archive is a perfect use case for SQLite.
It is hard to do right.
The single allowed writer can also impact readers, depending on whether WAL mode is enabled.
For DSS uses, where a data store is published once a day, SQLite is wonderful. For OLTP, look elsewhere.
In this case, I doubt there are any writers at all.
I recently used lmdb for webhighlighter.com (specifically the wrapper: https://www.npmjs.com/package/node-lmdb), and it was a fantastic decision.
Here's my entire data layer: - Interface: https://github.com/vedantroy/grape-juice/blob/main/site/app/... - Implementation: https://github.com/vedantroy/grape-juice/blob/main/site/app/...
TL;DR -- I won't need SQLite for, I don't know, my first 10K users?
KV stores are fun but you end up writing a lot of queries and indexes manually
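For instance, a query like "all posts by an author" has to be indexed by hand in a K/V store, where SQL would give it to you with CREATE INDEX. A hedged sketch, again with a `Map` standing in for the store and purely illustrative names:

```typescript
type Post = { author: string; title: string };

const posts = new Map<string, Post>();
const byAuthor = new Map<string, Set<string>>(); // hand-rolled secondary index

function putPost(id: string, post: Post): void {
  posts.set(id, post);
  // You are responsible for keeping the index in sync on every write
  // (and for updating it on edits and deletes, not shown here).
  if (!byAuthor.has(post.author)) byAuthor.set(post.author, new Set());
  byAuthor.get(post.author)!.add(id);
}

putPost("1", { author: "alice", title: "hello" });
putPost("2", { author: "alice", title: "again" });
putPost("3", { author: "bob", title: "hi" });
```

Every additional access pattern means another index map and more sync logic, which is exactly the complexity the parent comments warn about.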
* https://github.com/LMDB/sqlightning
* https://github.com/LumoSQL/LumoSQL
SQLightning was the initial project combining SQLite3 with an LMDB backend. It seemed to be more of an experimental/proof-of-concept thing, and isn't maintained.
LumoSQL is an alternative (maintained) project providing a SQLite3 front end with various optional storage backends, one of which is LMDB.
Note - I'm not affiliated with either project, I just remembered they exist. :)
I think I saw a used x86-64 server with 512GB of RAM for sale for $1700 recently.
Where? Sounds like a terrific bargain, my old supermicro only has 384GB.