Postgres is for relational data, OK.
CDC is meant to capture changes so they can be processed on their own (in isolation from all previous changes), not to recover a snapshot of the original table by reimplementing, via merge-on-read, logic that already lives inside Postgres.
Iceberg is columnar storage for large volumes of historical data for analytics; it's not meant for relational data, and certainly not for realtime.
It looks like they need a time-series-oriented DB, like Timescale, InfluxDB, etc.
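To make the CDC point concrete, here's a minimal sketch with made-up event shapes (not any particular connector's format):

```typescript
// Change events are designed to be actionable one at a time.
type ChangeEvent =
  | { op: "insert"; key: string; after: Record<string, unknown> }
  | { op: "update"; key: string; after: Record<string, unknown> }
  | { op: "delete"; key: string };

// Assumed downstream hook, purely for illustration.
declare function notifyDownstream(row: Record<string, unknown>): void;

// Intended use: react to each change in isolation.
function onChange(ev: ChangeEvent): void {
  if (ev.op === "update") notifyDownstream(ev.after);
}

// The anti-pattern: folding every event into an ever-growing map to
// reconstruct the current table state -- i.e., redoing Postgres's job
// on the consumer side.
const snapshot = new Map<string, Record<string, unknown>>();
function rebuildSnapshot(ev: ChangeEvent): void {
  if (ev.op === "delete") snapshot.delete(ev.key);
  else snapshot.set(ev.key, ev.after);
}
```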
This is not to say that this architecture isn't salvageable - if the only consumer of the Iceberg table copy is, e.g., a view that downstream consumers must use, then it's easier to change the Postgres schema, as only the view must be adjusted. My experience with copying tables directly to a data warehouse using CDC, though, suggests it's hard to prevent erosion of the architecture as high-urgency projects start taking direct dependencies to save time.
And capabilities [1] are the long-known, and sadly rarely implemented, solution.
Using the trifecta framing, we can't take away the untrusted user input. The system, then, should not have both the "private data" and "public communication" capabilities.
The thing is, if you want a secure system, the idea that the system can have those capabilities but still be restricted by some kind of smart intent filtering, where "only the reasonable requests get through", must be thrown out entirely.
This is a political problem. Because that kind of filtering, were it possible, would be convenient and desirable. Therefore, there will always be a market for it, and a market for those who, by corruption or ignorance, will say they can make it safe.
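To illustrate, here's a sketch (with hypothetical names) of what enforcing the restriction structurally, rather than by filtering, could look like - the forbidden combination simply cannot be constructed:

```typescript
// Given untrusted input, an agent may hold private data OR a public
// communication channel, never both. The type has no member for the
// unsafe combination, so no "intent filter" is needed.
type AgentCapabilities =
  | { untrustedInput: true; privateData: true; publicComms: false }
  | { untrustedInput: true; privateData: false; publicComms: true };

// This does not type-check:
// const bad: AgentCapabilities =
//   { untrustedInput: true, privateData: true, publicComms: true };
```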
There's a solution already in use at many companies, where the LLM translates the input into a standardized request that's allowed by the CSR script ("CSR script" here just meaning "a pre-written script of what is allowed through this interface"), and the rest is just following that script as a CSR would. This of course removes the utility of plugging an LLM directly into an MCP, but that's the tradeoff that must be made to have security.
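A rough sketch of that pattern (all names and request kinds invented for illustration): the LLM only classifies the message into one of a fixed set of requests, and deterministic code handles everything from there.

```typescript
// The "script": the complete set of requests this interface allows.
type AllowedRequest =
  | { kind: "check_order_status"; orderId: string }
  | { kind: "request_refund"; orderId: string; reason: string }
  | { kind: "escalate_to_human" };

// Assumed helper that asks the model to emit JSON; its output is
// untrusted and must be validated like any other user input.
declare function llmClassify(message: string): Promise<string>;

function validate(raw: string): AllowedRequest {
  try {
    const p = JSON.parse(raw);
    if (p.kind === "check_order_status" && typeof p.orderId === "string") {
      return { kind: "check_order_status", orderId: p.orderId };
    }
    if (
      p.kind === "request_refund" &&
      typeof p.orderId === "string" &&
      typeof p.reason === "string"
    ) {
      return { kind: "request_refund", orderId: p.orderId, reason: p.reason };
    }
  } catch {
    // malformed JSON falls through to the safe default
  }
  // Anything the script doesn't cover gets a safe default, not a guess.
  return { kind: "escalate_to_human" };
}

async function handle(message: string): Promise<AllowedRequest> {
  return validate(await llmClassify(message));
}
```

The model never touches the tools directly; the worst it can do is pick the wrong item off the script.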
From 1 to 10, falls are by far the biggest risk.
If you live in the US, firearms trump all of the above, but only in the US.
Most web APIs are not designed with this use-case in mind. They're designed to facilitate web apps that are much more specific in what they're trying to present to the user. This is both deliberate and valuable; app creators need to be able to control the presentation to achieve their apps' goals.
REST API design is for use-cases where the users should have control over how they interact with the resources provided by the API. Some examples that should be using REST API design (see the sketch after this list):
- Government portals for publicly accessible information, like legal codes, weather reports, or property records
- Government portals for filing forms and other interactions
- Open data initiatives like Wikipedia and OpenStreetMap
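As a sketch of what that looks like in practice (URL and fields invented), a resource-oriented records API returns a generic representation plus links, leaving the client in control of what to do next:

```typescript
// Hypothetical property-records resource.
async function fetchParcel(id: string) {
  const res = await fetch(`https://records.example.gov/parcels/${id}`);
  const parcel = await res.json();
  // Expected shape, roughly:
  // {
  //   "id": "1234",
  //   "assessedValue": 250000,
  //   "links": {
  //     "taxHistory": "/parcels/1234/tax-history",
  //     "permits": "/parcels/1234/permits"
  //   }
  // }
  return parcel;
}
```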
Considering these examples, it makes sense that policing of what "REST" means comes from the more academically minded, while the detractors of the definition are typically app developers trying to create a very specific user experience. The solution is easy: just don't call it REST unless it actually is.

It's normal for an application to be built from many independent modules that accept their dependencies as inputs via dependency inversion [2]. The modules are initialized at program start by code that composes everything together. Using the "god object" [1] pattern from the article is basically the same thing.
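For instance, a minimal sketch of that wiring (all names invented):

```typescript
// Modules depend on interfaces they receive as inputs...
interface Clock { now(): Date; }
interface Logger { info(msg: string): void; }

class ReportService {
  constructor(private clock: Clock, private log: Logger) {}
  run(): void {
    this.log.info(`report generated at ${this.clock.now().toISOString()}`);
  }
}

// ...and one composition root at program start wires everything together.
// Whether the dependencies arrive individually or bundled into one
// "context" object is mostly a packaging detail.
function main(): void {
  const service = new ReportService(
    { now: () => new Date() },
    { info: (msg) => console.log(msg) },
  );
  service.run();
}
main();
```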
[1] https://en.wikipedia.org/wiki/God_object
[2] https://en.wikipedia.org/wiki/Dependency_inversion_principle
I've encountered the same phenomenon, and I too cannot explain why it happens. Some of the highest-value types are the small special-purpose types like the article's "CreateSubscriptionRequest". They make it much easier to test and maintain these kinds of code paths, like API handlers and DAO/ORM methods.
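For instance (guessing at the fields - the article's actual definition may differ):

```typescript
// A small special-purpose type for exactly one code path.
interface CreateSubscriptionRequest {
  customerId: string;
  planId: string;
  trialDays?: number;
}

// The handler takes one named, self-documenting argument, so a test is
// just: build the object, call the function, assert on the result.
function createSubscription(req: CreateSubscriptionRequest): void {
  // validation and persistence would go here
}
```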
One of the things TypeScript makes easy is declaring a type just to describe some values you're working with, independent of where they come from. So there's no need to, e.g., implement a new interface when passing in arguments; if the input conforms to the type, the compiler accepts it. I suspect part of the reason for not wanting to introduce a new type in other languages like Java is the extra friction of having to wrap values in a new class that implements the interface. But even in TypeScript codebases I see reluctance to declare new types. They're completely free from the caller's perspective, and they help tremendously with preventing bugs and making refactoring easier. Why are so many engineers afraid to use them? Instead, the codebase is littered with functions that take six positional arguments of type string and number. It's a recipe for bugs.
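To spell out the contrast (illustrative names):

```typescript
// Declaring a type costs the caller nothing: any value with the right
// shape conforms -- no wrapper class, no "implements".
type Money = { amount: number; currency: string };

function format(m: Money): string {
  return `${m.amount.toFixed(2)} ${m.currency}`;
}

format({ amount: 19.99, currency: "USD" }); // plain literal, accepted

// The alternative being complained about: positional primitives, where
// swapping two arguments still compiles and silently does the wrong thing.
function transfer(from: string, to: string, amount: number,
                  currency: string, memo: string, batchId: number): void {
  /* ... */
}
```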
I would argue it needs editing, as it violates the HN guideline:
> use the original title, unless it is misleading or linkbait; don't editorialize.