Who are you targeting this at? Why would I use this instead of Redis, or any of the many open-source and SaaS/IaaS key-value stores? I would want more control over important data, not to push it unencrypted to an opaque service over HTTPS.
Yep, valid point. I made it to use in cloud functions, so I can store data somewhere. I didn't want to install Redis, or MongoDB for example, as my functions' dependencies.
Basically, a directory tree where the files are JSON. Instead of Unix rm, ls, cat, echo/>>, we have rest endpoints.
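Taken literally, that model is easy to sketch: each record is a JSON file in a directory, and methods stand in for the REST endpoints. This toy class is my own illustration of the analogy, not nodb's actual implementation:

```python
import json
from pathlib import Path

class JsonDirStore:
    """Toy 'directory tree of JSON files' store: ls/cat/echo/rm as methods."""

    def __init__(self, root: str):
        self.root = Path(root)
        self.root.mkdir(parents=True, exist_ok=True)

    def put(self, key: str, value: dict) -> None:
        # echo '...' > key.json  ~  POST/PUT
        (self.root / f"{key}.json").write_text(json.dumps(value))

    def get(self, key: str) -> dict:
        # cat key.json  ~  GET one record
        return json.loads((self.root / f"{key}.json").read_text())

    def list(self) -> list[str]:
        # ls  ~  GET the collection
        return sorted(p.stem for p in self.root.glob("*.json"))

    def delete(self, key: str) -> None:
        # rm key.json  ~  DELETE
        (self.root / f"{key}.json").unlink()
```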
The main value prop of a DB is performance, moreso than the interface. How is this different from S3 or Firebase’s rest API?
> Nodb can easily accommodate an increase in the number of requests without breaking down or becoming slow. Your app remains responsive and reliable, even during periods of high demand.
From the limitations docs page:
> Number of read (GET) requests is limited to 10k per month
Filesystems are databases. Do we use filesystems because they're faster than doing direct i/o?
There isn't a value proposition for databases, because a value proposition is a business statement. If there were a business statement for databases, it would be a management interface for data. Nobody cares if a database is fast, they care if they can get their data in and out easier.
> There isn't a value proposition for databases, because a value proposition is a business statement. If there were a business statement for databases, it would be a management interface for data.
This is just a bunch of vacuous statements. Are you suggesting this product exists not to serve a business proposition or value?
> Nobody cares if a database is fast
Businesses do. There’s a reason people don’t attempt to build their own database.
Every serious database meant for production use takes performance into consideration. People would just roll their own database otherwise, but we pile into Postgres, or even MongoDB, because they're battle-tested and have had years of optimization. More than half of the work in database implementations is in engine and query optimization.
> they care if they can get their data in and out easier.
Then just store and read flat files from disk.
Strange to use a throwaway to make this comment. It seems like you are the author justifying a product that doesn't need to exist.
I'm assuming the limitation of 10K requests per month has more to do with their small cloud infrastructure budget than with the database itself. That said, I agree that intuitively a web server can be slower than a traditional database, but it would be interesting to see some benchmarks.
Hi, I made this API to help with your serverless applications. There's no package installation; you store JSON via HTTPS.
It works like this: you go to the dashboard at dash.nodb.sh and create apps and environments. Then, via the API, you can create your JSON models.
An API endpoint is structured like this:
/{appName}/{envName}/your-model/:id/your-model/:id/...?token={accessToken}
This way you can split your data between environments like "dev" or "prod". Every environment is protected by an access token, which nodb generates when you create a new environment in the dashboard. Only apps and environments have to be created in the dashboard behind a bearer token; your JSON models are protected by the access token, so you can make HTTP requests easily from your code.
I would like someone to share their thoughts on whether this would be useful when working on any app, web or mobile, or inside cloud functions. Everything (so far) about the API is described in the docs at docs.nodb.sh.
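To make the pattern concrete, here is a hypothetical helper that builds request URLs in that shape. The `nodb_url` name and the `base` host are my own placeholders (the real host is in the docs at docs.nodb.sh):

```python
from urllib.parse import quote

def nodb_url(app: str, env: str, token: str, *path_parts: str,
             base: str = "https://api.nodb.sh") -> str:
    """Build a URL following the /{appName}/{envName}/model/:id/... pattern.

    `base` is a placeholder host for illustration; see the nodb docs
    for the real endpoint. Each path segment is percent-encoded.
    """
    path = "/".join(quote(p, safe="") for p in (app, env, *path_parts))
    return f"{base}/{path}?token={quote(token, safe='')}"
```

For example, `nodb_url("shop", "dev", "SECRET", "groceries", "42")` targets the `groceries` model with id `42` in the `dev` environment of the `shop` app.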
Correct me if I'm wrong, but to me passing the access token in the GET URL is not good security practice. It increases the probability of unintentionally exposing the token: URLs can be logged by proxy servers and application logs, and a user can simply send the link with the token in a chat by accident, etc.
Usually in REST APIs the auth token is passed via some HTTP header.
Hi, I'm new to HN and didn't find the comments easily! Sorry for the late reply.
Having them in the query params was intended for sharing, indeed. I wouldn't expect someone to use it on the frontend, of course. I might switch to having them in headers instead, as it was initially; the idea was to use the service with minimal requirements.
Tokens can be set as READ_ONLY; these tokens are only meant to be used with GET requests, so you can share the link for use in some other app, for example. Again, headers might be better, but then we can't share them with a simple copy-paste.
Relevant: https://stackoverflow.com/a/499594
I don't think proxy servers or sniffers can see GET query strings; they're encrypted, assuming HTTPS of course. Server logs might be an issue. Browser history and accidental sharing are definitely a bigger issue, though that's less of a concern if the API is only used behind the scenes by apps.
Disclaimer: I'm not an auth expert!
First let me say that the minimal interface you've designed for the dashboard is lovely. But also consider that as you talk to potential customers for this, the hardest sell is "send your data to a third party" and the easiest sell is "here is something that will save your developer's time". The biggest problem it seems like most people are facing now is getting their data transported between services.
Thanks! And I fixed some bugs in the meantime. Yes, the API should maintain its simplicity without becoming super complex. I'm still considering moving the token to headers instead of query params, as suggested in one of the comments here; there are obvious pros and cons.
I'm a little confused after looking at your website. You're expecting someone to give you their personal data, there's no pricing information, there's no way to get their data back, and since they don't know anything about you, there's no restriction on what might be happening to their data. What I've written may or may not be true, but I'd expect anyone looking at the landing page to want those things to be addressed. You might have more luck if you offer a self-hosted version that folks can run themselves, and then once they trust you and decide they don't want the admin burden, they can pay you.
Disclaimer: I'm not your target audience, but I've investigated a lot of these services for my own things, and these are my feelings after an initial look at your website. I'd honestly not return for a second look under normal conditions.
Actually, feedback from non-tech folks (assuming you're one) is very welcome, so thank you!
The product is at an early stage and I will be adding more text of all sorts (like disclaimers) and more features. Data is truly not shared with anyone; it's on a dedicated server, but it is NOT in an encrypted database for now. It will be once that's developed.
All that is collected from users is their email address, since you can sign up only via Google and GitHub auth. Behind this auth one can create apps, environments, and access tokens. Afterwards, requests to the API are authorized by the access token.
The product is free for now, thus no pricing info at the moment.
Seems like a good idea. It would have been revolutionary 10-15 years ago, but the db space has gotten really crowded over the last couple of years, and everyone is focused on making it simpler.
You should try to find your target audience and see how you can make nodb better than whatever db/solution that audience is currently using instead of nodb. Good luck!
Thank you. You're right, it's overcrowded. I had a passion to build it for my own projects. I didn't want to "npm install some-db", since I can't use persistence in cloud functions that way. Solutions like Supabase exist; however, I imagined that separation of concerns into apps/environments, as in standard software projects, would be beneficial for my use case.
It is a minimal product. In order to be a minimum viable product, certain features may be needed:
1. Some data reliability guarantees (e.g. how does this compare with Amazon S3 [1], which provides 99.999999999% durability and 99.99% availability of objects over a given year)?
2. Backup and restore
3. User-generated keys to encrypt the data
4. User-specified indexing
> Number of write (POST, PUT, PATCH, DELETE) requests is limited to 2k per month
Many production applications use substantially more than this in 1 second. I think even most people developing a project will exceed that limitation before even releasing it.
You should put a disclaimer somewhere more obvious.
Second note: without indexing options, what's to distinguish you from S3, Dynamo, DocumentDB, Mongo, Rocks, Level, Foundation, Redis, Fauna, or Postgres? JSON isn't enough; what's the hook?
Hey! Thank you for the feedback. Yes, it's very limiting (for now), since the product is currently free to use.
Indexing is done internally because the data model is arbitrary. That's the key difference when we talk about plain persistence: you might be making an app about groceries, someone else about movies, and indexing will be based on your entities and sub-entity types.
This is the case for every database, but they still tell me what indexing options I have available.
With an RDBMS I know that I can use hash key indexes for fast lookups on an exact key. I can use B-Tree indexes for range or partial key lookups.
For S3 I know that I can do exact lookups, or prefix scans, and that scans occur in ascending order, but cannot be done in descending order, etc.
I know nothing of what lookup schemes are available to me in your application, and what kind of Big O profile to expect from each. It has nothing to do with whether it's a grocery app or a movie app.
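The contrast being drawn can be illustrated with in-memory stand-ins: a hash index (a dict) supports only O(1) exact-key lookups, while an ordered index (a sorted key list plus `bisect`) also supports prefix and range scans. These structures are illustrations of the index classes mentioned above, not anyone's actual storage engine:

```python
import bisect

# Ordered index: supports range/prefix scans, like a B-Tree.
keys = sorted(["apple", "apricot", "banana", "cherry"])

# Hash index: O(1) average exact-match lookups, no ordered scans.
hash_index = {k: i for i, k in enumerate(keys)}

def prefix_scan(sorted_keys: list[str], prefix: str) -> list[str]:
    """Prefix lookup on an ordered index: O(log n + number of matches)."""
    lo = bisect.bisect_left(sorted_keys, prefix)
    hi = bisect.bisect_left(sorted_keys, prefix + "\uffff")
    return sorted_keys[lo:hi]
```

The point is exactly the parent's: which of these lookup profiles a store offers matters to the application, regardless of whether the entities are groceries or movies.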
I actually like this, because it gets much closer to a simpler, unix-y model of computing. No, this thing is not a good idea for doing anything complex. But sometimes you don't need complex. This seems to be about as simple as you can get. (Excepting that you don't need JSON at all to exchange key=value data over HTTP, as it already has multiple ways to pass key=value built into the protocol)
I'm writing a small webapp and I'm hosting it on a PAAS which is stateless between deploys.
I merely need to persist a single string between deploys, which gets changed when the app is running.
And buying another service like S3, or hosting Postgres or another database, is overkill. So this seems promising!
Huh, that is a very interesting problem to have. It's certainly not the scale or complexity or security or reliability requirements that make storing your data difficult -- it's the choice of hosting technology!
That sounds dismissive but I don't mean it that way. I'm sure you've chosen wisely. I just find the problem intriguing because I don't know of any obvious solution and it's not one of the common problems people have with persistence.
I'm trying to think of what sort of stateful servers you might be using anyway that you can piggyback for this. Some ideas:
- SSH to any Unix box really
- Email
- Web-based pastebin service with an API
- Regular Twitter tweets
- Something stored on whatever you are using to conduct the deployment
- An image uploaded to imgur where the image data are the string you need
- DNS records
- Any web service where you have a user profile editable through an API; you could use one of the profile fields to store the string
The list is getting more stupid as I go so I should probably stop there.
But if you have written the service such that you can reject a deployment unless you're sure it worked, you are technically free to use quite stupid ways to store the data because worst case the deployment just fails and the old code chugs on.
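That "reject the deployment unless you're sure it worked" idea can be sketched as a pre-flight round-trip check. The `put`/`get` callables here are stand-ins for whatever stupid-but-sufficient backend gets chosen:

```python
import uuid

def storage_roundtrip_ok(put, get) -> bool:
    """Write a unique sentinel and read it back to verify the state store works."""
    sentinel = f"deploy-check-{uuid.uuid4()}"
    try:
        put("deploy_check", sentinel)
        return get("deploy_check") == sentinel
    except Exception:
        return False  # any store failure means: don't cut over

def deploy(put, get, do_deploy) -> None:
    """Run the deployment only if the state store round-trips; otherwise
    fail loudly so the old code keeps chugging on."""
    if not storage_roundtrip_ok(put, get):
        raise RuntimeError("state store failed round-trip; keeping old version")
    do_deploy()
```

With this gate in place, even a very fragile storage hack only costs you a failed deploy, never lost state in production.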
Thanks for considering my problem intriguing. I didn't find your comment dismissive at all and I appreciate your thorough response.
Also your ideas are interesting! The last one especially.
> But if you have written the service such that you can reject a deployment unless you're sure it worked, you are technically free to use quite stupid ways to store the data because worst case the deployment just fails and the old code chugs on.
I do and there's lots of error handling, so I'm not worried about doing a fun idea like editing a user profile via API.
It is 10 characters long max.
I haven't considered browser local storage before.
I want to use the string as a single source of truth, but I appreciate your suggestions; they have my gears turning!
If you want to also preserve history, this could be an option for you: https://klev.dev. It lets you persist changes, get the last value, or get all the values you've stored.
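The last-value/full-history model described here can be sketched with an in-memory stand-in (an illustration of the idea, not klev.dev's actual API):

```python
class ChangeLog:
    """Minimal append-only log: persist changes, read the last value
    or the full history of values."""

    def __init__(self):
        self._entries = []

    def append(self, value) -> None:
        # Every change is appended; nothing is overwritten.
        self._entries.append(value)

    def last(self):
        # The current state is simply the most recent entry.
        return self._entries[-1] if self._entries else None

    def all(self) -> list:
        # Full history, oldest first.
        return list(self._entries)
```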
> Usually in REST APIs the auth token is passed via some HTTP header.
Could you elaborate a bit?
Yes: via headers, or even as data in the body of a POST request.
> Simply, user can accidentally send the link with token in chat, etc.
Yep! Even more: the token is visible in the URL even if the user only sends a screenshot.