bravesoul2 · 2 months ago
Oof, is this for people who reuse their DTO for business logic and/or share DTOs across different APIs?

It never occurred to me to ever (in any language) have a DTO with fields I wish (let alone require for security) not to unmarshal.

This seems doubly strange in Go the language of "Yes absolutely DO repeat yourself!"

Side rant:

JS (even with TypeScript) pisses me off, as this is unavoidable. Splats make it worse. But still, keeping a distinct DTO and business object, and not using splats, would help. (Maybe make ... a lint error.)

IceDane · 2 months ago
This is not unavoidable in TypeScript at all. It really depends a lot on how you have structured your application, but it's basically standard practice at this point to use e.g. zod or similar to parse data at the boundaries. You may have to be careful here (remember to use zod's .strict, for example), but it's absolutely not unavoidable.
bravesoul2 · 2 months ago
I should have been clear. Yes, it is avoidable, but only by adding more checking machinery. The bare language doesn't help.

In Go etc., if a struct doesn't have a field foo, then there will not be a field foo at runtime. In JS there might be. Unless you bring in libraries to help prevent it.

You are relying on someone remembering to use zod on every fetch

commandersaki · 2 months ago
Does the origin of "DTO" come from Java? I've never seen it used anywhere else.
paulddraper · 2 months ago
DTO comes from OOP for which Java is the poster child.

But you can find it many places.

https://codewithstyle.info/typescript-dto/

glenjamin · 2 months ago
It’s worth noting that if you use DisallowUnknownFields, it makes it much harder to handle forward/backward compatible API changes - which is a very common and usually desirable pattern
georgelyon · 2 months ago
While this is a common view, recently I’ve begun to wonder if it may be secretly an antipattern. I’ve run into a number of cases over the years where additional fields don’t break parsing, or even necessarily the main functionality of a program, but result in subtle incorrect behavior in edge cases. Things like values that are actually distinct being treated as equal because the fields that differ are ignored. More recently, I’ve seen LLMs get confused because they hallucinated tool input fields that were ignored during the invocation of a tool.

I’m a little curious to try and build an API where parsing must be exact, and changes always result in a new version of the API. I don’t actually think it would be too difficult, but perhaps some extra tooling around downgrading responses and deprecating old versions may need to be built.

yencabulator · 2 months ago
It's a convenience and a labor saver, so of course it's fundamentally at odds with security. It's all trade-offs.
physicles · 2 months ago
If you’re writing a client, I could see this being a problem.

If you’re writing a server, I believe the rule is that any once-valid input must stay valid forever, so you just never delete fields. The main benefit of DisallowUnknownFields is that it makes it easier for clients to know when they’ve sent something wrong or useless.

nine_k · 2 months ago
No, once-valid input can be rejected after a period of deprecation.

What actually makes sense is versioning your interfaces (and actually anything you serialize at all), with the version designator being easily accessible without parsing the entire message. (An easy way to have that is to version the endpoint URLs: /api/v1, /api/v2, etc).

For some time, you support two (or more) versions. Eventually you drop the old version if it's problematic. You never have to guess, and can always reject unknown fields.

anitil · 2 months ago
This was all very interesting, but that polyglot json/yaml/xml payload was a big surprise to me! I had no idea that Go's default XML parser would accept preceding and trailing garbage. I'd always thought of json as one of the simpler formats to parse, but I suppose the real world would beg to differ.

It's interesting that decisions made about seemingly-innocuous conditions like 'what if there are duplicate keys' have a long tail of consequences

shakna · 2 months ago
Tangent for breaking Python's JSON parser: this has worked for five years. The docs do not say that parsing an invalid piece will result in a RecursionError. They specify JSONDecodeError and UnicodeDecodeError. (There is a RecursionError reference for a key that is off by default; but if it's off, we can still raise this...)

    #!/bin/sh

    # Python will hit its recursion limit
    # If you supply just 4 less than the recursion limit
    # I assume this means there's a few objects on the call stack first
    # Probably: __main__, print, json.loads, and input.

    n="$(python3 -c 'import math; import sys; sys.stdout.write(str(math.floor(sys.getrecursionlimit() - 4)))')"

    echo "N: $n"

    # Obviously invalid, but unparseable without matching pair
    # JSON's grammar is... Not good at being partially parsed.
    left="$(yes [ | head -n "$n" | tr -d '\n')"

    # Rather than exploding with the expected decodeError
    # This will explode with a RecursionError
    # Which naturally thrashes the memory cache.
    echo "$left" | python3 -c 'import json; print(json.loads(input()))'

jwilk · 2 months ago
anitil · 2 months ago
This is why we can't have nice things
mdaniel · 2 months ago
> I'd always thought of json as one of the simpler formats to parse, but I suppose the real world would beg to differ

Parsing JSON Is a Minefield (2018) - https://news.ycombinator.com/item?id=40555431 - June, 2024 (56 comments)

et al https://hn.algolia.com/?query=parsing%20json%20is%20a%20mine...

asimops · 2 months ago
In the case of Attack scenario 2, I do not get why, in a secure design, you would ever forward the client-originating data to the auth service. This is more of a broken best practice than a footgun to me.

The logic should be "Parse, don't validate"[0] and after that you work on those parsed data.

[0]: https://hn.algolia.com/?q=https%3A%2F%2Flexi-lambda.github.i...

securesaml · 2 months ago
See: https://bsky.app/profile/filippo.abyssdomain.expert/post/3le... That was about a signature-wrapping attack in crypto, but it also applies here.
octo888 · 2 months ago
What is "IsAdmin" doing in the "create user" request DTO in the first place? The examples seem to indicate inappropriate re-use of data models.

Would it not be better to:

  type CreateUserRequest struct {
    Username string
    Password string
  }

  type UserView struct {
    Username string
    IsAdmin bool
  }
etc?

No need to have just one model that maps 1:1 to your DB row. This applies to all languages

liampulles · 2 months ago
Yeah, it's a bad pattern. And we can expect to see it in a sizable number of production codebases.
diggan · 2 months ago
ORMs love to put people and projects in a situation where it's much easier for the data types in your program to match exactly with the tables in your database, but reality often requires them to be decoupled instead. That the "create user" request contains "isAdmin" smells like someone mapped that exact request to a table, and just because the table has "isAdmin", the data type now also has it.
ljm · 2 months ago
Aside from security issues it presents, it’s just a really bad idea to tightly couple your data model to your API like that.

Any change to your DB schema is liable to become a breaking change on your API. If you need separate types for your requests and responses so be it.

ajross · 2 months ago
Exactly. This isn't a flaw in the runtime or the metaphor, the bug here is that the app exposes private data in a public API. That's a mistake that doesn't care about implementation choice. You can make the same blunder with a RESTful interface or a CGI script.

At most, you can argue that simple serialization libraries (Go's is indeed one of the best) make it more tempting to "just send the data" in such a design, so if you squint really (really) hard, you can call this a "footgun" I guess.

But the rest of the headline is 100% nonsense. This is not about "Go" or "parsers". At all.

yencabulator · 2 months ago
Except in the case of encoding/xml garbage, that's really a limitation of the Go parser. It's a very non-strict parser and doesn't actually validate that the input conforms to XML. Think of it more as a query language for a stream of XML fragments.
tln · 2 months ago
What? It's about parsers in Go's standard library.
matthew16550 · 2 months ago
It's a concept from Fielding's REST thesis that is important to the current meaning of REST:

Transfer Objects are not the Storage Objects.

WhyNotHugo · 2 months ago
> No need to just have just 1 model that maps 1:1 to your DB row. This applies to all languages

I agree with your stance, but creating lots of DTOs is more work and we developers are lazy. It's incredibly common to see a single "User" type used everywhere. This also makes all User-related functions re-usable in all contexts (again: it's the fast and lazy approach).

delusional · 2 months ago
> No need to just have just 1 model that maps 1:1 to your DB row. This applies to all languages

One reason is to avoid copying data constantly. I don't just mean this from an efficiency perspective, but also (and maybe more so) from a simplicity one. If you have a library for shoving data into a struct mechanistically, but you then take the data from that struct and shove it into an additional struct, what's the point of the library? You're writing the code to move the data anyway.

In my dayjob I see this tendency constantly to have a lot of different very narrow structs that somehow integrate into some library, and then a TON of supporting code to copy between those structs. Only to then do very little actually useful work with any of the data at the end. I generally think you'd be happier with fatter structs that integrate less with weird "struct-filling" libraries.

kgeist · 2 months ago
1. As shown in the article, exposing the internal model to APIs directly is a security footgun. I've seen sites get hacked because the developer serialized internal objects without any oversight just like in the article and accidentally exposed secrets.

2. Exposing internal models to APIs directly also makes it hard to refactor code because refactoring would change APIs, which would require updating the clients (especially problematic when the clients are owned by other teams). I've seen this firsthand too in a large legacy project, people were afraid to refactor the code because whenever they tried, it broke the clients downstream. So instead of refactoring, they just added various complex hacks to avoid touching the old core code (and of course, their models also mapped directly to the UI).

In the end, codebases like that, with no layer separation, become really hard to maintain and full of security problems.

All because they thought it was "simpler" to skip writing ~10 lines of extra boilerplate per model to map models to DTOs.

Lack of layer separation becomes a problem in the long term. When you're just starting out, it may seem like overengineering, but it isn't

masklinn · 2 months ago
> In my dayjob I see this tendency constantly to have a lot of different very narrow structs that somehow integrate into some library, and then a TON of supporting code to copy between those structs.

Maybe that's the problem to solve, rather than exposing the entire internal world to the outside? Because different views of the same entities are pretty critical; otherwise it's way too easy to start e.g. returning PII to public endpoints because some internal process needed it.

eadmund · 2 months ago
> In my dayjob I see this tendency constantly to have a lot of different very narrow structs that somehow integrate into some library, and then a TON of supporting code to copy between those structs. Only to then do very little actually useful work with any of the data at the end.

Don’t think of it as doing a little useful work at the end; think of it as doing all the useful work in the centre. Your core logic should be as close to a pure implementation without external libraries as possible (ideally zero, but that is often not easily achievable), but call out to external libraries and services to get its work done as appropriate. That does mean a fair amount of data copying, but IMHO it’s worth it. Testing copies is easy and localised, whereas understanding the implications of a JSON (or Protobuf, or Django, or whatever) object carried deep into one’s core logic and passed into other services and libraries is very very difficult.

There’s a common theme with the ORM trap here. The cost of a little bit of magic is often higher than the cost of a little bit of copying.

TeMPOraL · 2 months ago
> also (and maybe more so) from a simplicity one. If you have a library for shoving data into a struct mechanistically, but you then take the data from that struct and shove it into an additional struct, what's the point of the library? You're writing the code move the data anyway.

Super annoying if you need to do it by hand, and wastes compute and memory if you actually need to do copies of copies, but this is the mapping part of "object relational mapping", the M in ORM. Skipping it is a bad idea.

Your business/domain model should not be tied directly to your persistence model. It's a common mistake that's responsible for like half of the bad rep ORMs get. Data structures may look superficially similar, but they represent different concepts with different semantics and expectations. If you skip on that, you'll end up with tons of stupid mistakes like 'masklinn mentions, and more subtle bugs when the concepts being squashed together start pulling in opposite directions over time.

homebrewer · 2 months ago
I don't know about Go, but in the Java and .NET worlds this is trivially solvable with libraries like MapStruct. If you have a model with 20 fields and need to create a tiny slice of it (with, let's say, three fields), you need a few lines of boilerplate: create a record with those fields (1-3 LOC):

  record Profile(int id, String login, boolean isAdmin) {}
create a mapper for it:

  @Mapper
  interface UserMapper {
    // arrays are just one example, plain models and plenty
    // of other data structures are supported
    Profile[] usersToProfiles(User[] users);

    // other mappers...
  }
and then use it:

  class UserController {
    //
    @GET("/profiles")
    Profile[] getUserProfiles() {
      var users = userRepo.getUsers();
      return userMapper.usersToProfiles(users);
    }
  }
As long as field names match, everything will be handled for you. Adding another "view" of your users requires creating that "view" (as a record or as a plain class) and adding just one line to the mapper interface, even if that class contains all of User's fields but one. So there's no need to write and maintain 19+ lines of code copying data around.

It also handles nested/recursive entities, nulls, etc. It's also using codegen, not reflection, so performance is exactly the same as if you had written it by hand, and the code is easy to read.

https://mapstruct.org

Go developers usually "don't need these complications", so this is just another self-inflicted problem. Or maybe it's already solved; look around.

db48x · 2 months ago
Yes!


piinbinary · 2 months ago
Kudos to the author for making this very clear and easy to understand. More technical writing should be like this.

On another note, it's mind-blowing that a single string can parse as XML, JSON, and YAML.

Joker_vD · 2 months ago
> Maintain consistency across boundaries. When input is processed in multiple services, ensure consistent parsing behavior by always using the same parser or implement additional validation layers, such as the strictJSONParse function shown above.

Or make some "entry gate" service not only validate/authorize requests but also re-encode them into a certainly valid shape. In the example with AuthService/ProxyService from "Attack scenario 2", make the Auth Service return not a simple "yep/nope" response, but a properly re-serialized request instead (if it's allowed in). So if e.g. AuthService takes a request with two fields "UserAction" and "AdminAction" and allows the "UserAction", it would respond with a request object that has the "UserAction" field in it but not "AdminAction", because the service ignored that field and so did not copy it into the response.

tptacek · 2 months ago
These kinds of issues (parser differentials in particular) are why you shouldn't trust Go SAML implementations that use `encoding/xml`, which was never designed for that application to begin with; I just wrote my own for my SAML.

(I mean, don't use SAML to begin with, but.)

securesaml · 2 months ago
The issue is not with Go's parser, but with the processing layer using different input than the verifying layer [1]

We patched gosaml2 (and other Go SAML libraries) by ensuring only the authenticated bytes are processed (not the original XML document). You can see the patches here: https://github.com/russellhaering/goxmldsig/commit/e1c8a5b89... and https://github.com/russellhaering/gosaml2/commit/99574489327...

> I just wrote my own for my SAML.

Curious to see your implementation for SAML and XML Signatures.

[1]: https://bsky.app/profile/filippo.abyssdomain.expert/post/3le...