Not from a modern developer. You need to jump down the rabbit hole of the history of solid and proven solutions.
Indeed. I've noticed an increasing number of people who call themselves "developers" and appear to create software, but all they can do is follow step-by-step tutorials to glue together some massively bloated thing that they have absolutely no understanding of, much less the skills to debug it when something doesn't work perfectly as expected.
At some point the plane has flown so high above the clouds, and hasn't seen the ground for so long, that the passengers might think the clouds are the ground.
When "Hello World" is a cloud instance spun up and billed on your tuition.
That won't really happen, right? No Computer Science (CS) professor would assign special cloud Integrated Development Environments (IDEs) to students.
On the other hand, it would really resolve a lot of "doesn't work on my machine" issues.
To be fair, I would also say the mainstream IDEs have contributed to this situation to a certain extent, since they lower the barrier to entry. That's a very good thing, but sometimes also a bad one.
Here's what I mean. There's a difference between:
* Using an IDE when you know what you want to do, but you like the convenience. You know what the IDE is doing for you, but you want your tool to do it quickly and get out of the way. If the IDE does something you don't understand, you try to learn what it did.
* Using an IDE because you literally can't work without one. Not knowing your programming language and depending 100% on autocomplete to do most stuff. Not even knowing how to write the files that the IDE automatically updates when doing certain things.
It's a subtle difference, but the former is more likely to leave better comments in a web-based code review ("pull request") than the latter.
And if a project consists only of the latter type, it's unlikely that there will be comments like:
* "Use this other class with the same name but different package because this one is buffered" (because depending on autocomplete without knowing the language prevented them from even caring about which import to use, or the difference between those, as long as it works).
* "Would this change affect this other file that's not in the PR but calls this function a thousand times per request?" (because, not knowing their own project's general structure due to depending on their IDE for navigation, they're more likely NOT to notice a file that isn't mentioned in the PR).
So if a project has only people that depend on an IDE (instead of using an IDE for convenience), it's easy to end up with stuff that technically works, but copies 40MB of strings around multiple times, and thrashes your performance due to reading one character at a time without buffering.
Granted, this technically is not related to IDEs, but in my experience (emphasis on "my"), most of the people I've worked with who use an IDE are of the "can't program without it" type, not the "I use it because it's convenient" type.
Not all of them. But most. Even if there's a learning curve at play here, I'd say there's usually overlap between "use an IDE because it's convenient" and "actually bothering to put effort into learning".
I understand this is an inevitable consequence of lowering the barrier to entry and making programming more accessible, and it doesn't matter whether we're talking about npm, IDEs, programming languages, or anything else.
You might not want them on your team, but isn't it really cool that we've reached a state where you can actually do this? Democratization of software engineering is a good thing!
Dunno if it actually is democratizing software engineering. Looks more like a cargo cult to me. You get a second class of developers that don't quite get it. Not because they aren't capable, but because nobody shared the secrets with them and they had to piece together everything from flotsam.
Democratization of software engineering means giving everyone the opportunity to learn. Not handing them a black box with a few instructions then sending them off to plow head first into a brick wall of insurmountable knowledge gaps so they give up.
does this, but more production-ready, honors cache headers, sets ETag, etc.
Also:
caddy respond
can be used to hard-code specific responses if you're testing an HTTP client. It can even spin up on a whole port range and supports templates in the response text right there on the command line:
caddy respond --listen :2000-2004 "I'm server {{.N}} on port {{.Port}}"
Caddy excels at making simple [1] use-cases trivial - either with no config, or 3-5 lines of config.
It might seem jarring coming from Apache or nginx, but I think a lot of it comes from the developers having distilled a lot of common use-cases and carefully chosen reasonable defaults.
[1] Simple to articulate; not necessarily simple to execute - e.g. Let's Encrypt works out of the box. Caddy will give you HTTPS if you ask (or rather, unless you go out of your way to avoid it).
It can be a great stop-gap for running an SSL proxy in front of a legacy application server stuck on some horrible old distro, for example - since the binary just needs a Linux kernel (and can ignore the distro's old SSL libraries).
It’s pretty easy, but I find the config file format rather confusing. In the classic Unix style, you’re learning yet another little language, so there’s a learning curve. Once you learn it, it’s fairly powerful and expressive, but the docs aren’t very clear.
I'd say "easy to set up and run" is arguably Caddy's entire point :) Single binary file, yes, running is that easy for simple stuff, and honestly it's not exactly complicated for more complex stuff. It got a lot of its popularity because it made HTTPS, including automatic cert provisioning, utterly trivial.
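To give a flavour of how short the config gets: a complete Caddyfile that proxies a local app and provisions certificates automatically can be just this (the domain and upstream port are placeholders, not from the thread):

```Caddyfile
# Caddy obtains and renews a certificate for example.com on its own,
# then reverse-proxies all traffic to the local app on port 8080.
example.com {
    reverse_proxy localhost:8080
}
```

That's the whole file; HTTP-to-HTTPS redirects are also set up by default.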
At my work I struggle with the opposite: all problems are being squeezed into "let's put it into static JSON on the CDN" - which ends up with a complex custom JSON-based language (schema) to support sharing information between apps, subsetting information (i.e. search), etc. - i.e. implementing an ad-hoc one-file database.
Ahh... and don't forget the complex CI/CD pipelines and Git setup that allow business users to manipulate the data in these files.
I found "just use Postgres for everything" a much saner default in the end.
That sounds like a PITA solution more than KISS. That said, I'd rather walk into that business and solve that problem than walk into a business that is fully cloud-serverless with hooks all over the place.
So yes, KISS is good until it isn't... because it wasn't KISS anymore. Moving to the Postgres solution sounds like it's the new KISS.
I don't know how this will ever not be a complete maintenance hell. It is even far worse than the monolithic "I push every bit that needs to be pushed for the whole enterprise" sync tool.
So it’s simple or complex because I got confused ;)
KISS is ALWAYS good. But simplicity doesn't mean easy or naïve solutions. Naïve solutions get very complex, since the initial input is low and they don't cover edge cases by design, making them a tangled mess.
Simple solutions are often very difficult to create and require a lot of input to stay simple, but they cover a lot of cases behind the curtain and are easy to describe.
Your "use Postgres" strategy is more KISS than doing complex mocking. Sure, the initial effort is higher, but using a well-known, powerful yet ergonomic single tool is about as simple as you can get on some levels. It's the equivalent of "just use Excel for it".
I agree with you, but I think the tagline is misplaced.
The crazy-habits proponent in your workplace is simply ahead of his time: there is a whole "SQLite for the edge" gospel now, and he or she may skip some of the wheel reinvention and the fun engineering problems of throwing SQLite files around, like in the good old Access days.
People should not cache something that changes very often.
This technique of producing an “API” from directories of json generated by batch jobs and creative use of symlinks can be surprisingly effective. I wrote a simple program to transform a list of RSS feeds into a directory tree that I’ve been using for years as my feed reader API to great effect.
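A rough sketch of that batch-job-plus-directory-tree idea (the feed names, layout, and item shape are made up for illustration, not the commenter's actual program):

```python
import json
from pathlib import Path

def build_api(root: Path, feeds: dict) -> None:
    """Render a static-file 'API' from already-scraped feed items:
    feeds/index.json lists the feed names, and feeds/<name>.json
    holds each feed's items. A batch job re-runs this on a schedule;
    any static file server or CDN pointed at `root` serves the result."""
    out = root / "feeds"
    out.mkdir(parents=True, exist_ok=True)
    # Top-level index so clients can discover available feeds.
    (out / "index.json").write_text(json.dumps(sorted(feeds)))
    # One JSON document per feed.
    for name, items in feeds.items():
        (out / f"{name}.json").write_text(json.dumps(items))
```

The client side is then just `GET /feeds/index.json` and `GET /feeds/<name>.json`, with all the caching a static server gives you for free.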
Yeah, I've always tried to generate RSS feeds ahead of time, when the content changes, rather than on demand. The ratio of requests to updates must be so enormous that it doesn't make sense to do anything else.
I really like endoflife! It's great for looking up dates with a consistent UI, and also codenames - I could never remember that Debian Bullseye was 11, or all the other silly codenames, so I look them up here. Thank you!
We also do something similar on Datatig, which is a tool I'm writing for the general situation of crowdsourcing data in Git repos. It makes it easier for people to submit data and then creates usable data and handy websites. The static website output has an API which is just static JSON files. https://pypi.org/project/DataTig/ or https://datatig.readthedocs.io/en/latest/
Rails, by default, will put a ".json" suffix on JSON routes. This is supremely useful if you have a Rails API app speaking to some overcomplicated front end that might need a quick and dirty mock for something. Just serve your /api/thing.json from somewhere. The only issue is with HTTP verbs other than GET, but usually at that point I need a lot more than this 15-second mock. Good for testing little things, though.
I built an API for an internet radio station archive. Initially I tried to build an app, but then I realised I could run a scheduled GitHub Action every hour that scrapes the archive, builds a JSON file, and deploys it to GitHub Pages for free: https://bd.maido.io/api.json
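A sketch of what that kind of workflow can look like (the scraper script name, output path, and schedule here are assumptions for illustration; the Pages actions are GitHub's official ones):

```yaml
name: build-archive-api
on:
  schedule:
    - cron: "0 * * * *"   # run hourly
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Hypothetical scraper that writes the archive as JSON.
      - run: python scrape.py > public/api.json
      - uses: actions/upload-pages-artifact@v3
        with:
          path: public
  deploy:
    needs: build
    runs-on: ubuntu-latest
    permissions:
      pages: write
      id-token: write
    environment:
      name: github-pages
    steps:
      - uses: actions/deploy-pages@v4
```

The result is a free, cache-friendly "API" endpoint with no server to maintain.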
Welcome to the world of npm and supply-chain hell
As in be a terrorist and produce bombs? These are all ticking time bombs.
Relevant: https://xkcd.com/1988/
Lego's dream has come true.
The fuller version of KISS is "keep it as simple as possible, but no simpler".
Makes it super simple to implement mocks of your API design before building the API.
…I guess that was kind of the point of REST in the first place but it’s easy to forget.
REST has a wonderful simplicity to it.
I've yet to hear about an equivalent caching solution for GraphQL APIs, for example.
It’s been running unattended for a decade…