Posted by u/randytandy 3 years ago
GraphQL kinda sucks
GraphQL is great, but it's totally overhyped. This is probably more of a rant or a frustrated dev outburst.

but beginner to mid-level developers are led down the path of USE GRAPHQL, especially on YouTube... and this is just unfair and wrong.

The good:

- It makes describing the data you want easy

- It can save you bandwidth. Get what you ask for and no more

- It makes documentation for data consumers easy

- It can make subscriptions easier for you to use

- Can let you federate API calls

The bad:

- It is actually a pain to use. Depending on the backend you are using, you'll have to manage two or more type systems if there are no code-first generators in your language

- It doesn't support maps/tables/dictionaries. This is actually huge. I get that there might be some pattern where you don't want to allow this, but for the majority of situations working with JSON APIs you'll end up with a {[key: string]: T} somewhere

- No clear path for API versioning; you'll end up with MyQueryV1.01, MyQueryV1.02, MyQueryV1.03

Don't use GraphQL unless you're managing a solution/problem set that Facebook intended GraphQL for.

Invest your time in a simpler solution rather than running to GraphQL first.

thanks for reading my ted talk

please, any senior devs, drop your wise words so that any new devs can avoid tarpits

stickfigure · 3 years ago
The biggest problem with GraphQL is that you have to do a lot of non-obvious work to harden your system against DoS attacks or people who want to fly by and download your whole database. It's easy to construct a query which puts unreasonable load on your system.

The more fine-grained nature of boring REST calls makes it easier to control client impact on the system.

If you want to see the kind of work you actually need to put in to make a graphql API, look at Shopify. They have rate limits based on quantity of data returned. Cursors and pagination. The schema is a huge ugly mess with extra layers that never show up in the pretty examples of GraphQL on the internet.
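To give a flavour of the most basic guardrail, here's a minimal query-depth cap sketched in TypeScript against the graphql-js parser (names are illustrative; real hardening also needs cost analysis, list-size limits and pagination caps):

```
import { Kind, parse, DocumentNode, SelectionSetNode } from "graphql";

// Rough sketch of a hand-rolled depth cap. Fragment spreads are ignored here;
// a real check would have to resolve and count those too.
function selectionDepth(set: SelectionSetNode | undefined): number {
  if (!set) return 0;
  let max = 0;
  for (const sel of set.selections) {
    if (sel.kind === Kind.FIELD) {
      max = Math.max(max, 1 + selectionDepth(sel.selectionSet));
    } else if (sel.kind === Kind.INLINE_FRAGMENT) {
      max = Math.max(max, selectionDepth(sel.selectionSet));
    }
  }
  return max;
}

export function assertQueryDepth(query: string, maxDepth = 8): DocumentNode {
  const doc = parse(query);
  for (const def of doc.definitions) {
    if (def.kind === Kind.OPERATION_DEFINITION && selectionDepth(def.selectionSet) > maxDepth) {
      throw new Error(`Query exceeds max depth of ${maxDepth}`);
    }
  }
  return doc;
}
```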

Note that even if you use graphql for a private api to your web client, folks will reverse engineer it and use it for their own purposes. There are no private apis on the web.

I'm not a fan of graphql for anything that the public could see. It's somewhat akin to exposing a SQL interface; it opens up too many avenues of trouble. Keep public communication channels as dumb as possible.

softfalcon · 3 years ago
We avoid this by statically analyzing the queries that are used by the client, generating id’ed persisted queries for them, and only allowing those to be run if you’re un-authenticated/a regular user.

We also have user roles: if you're an admin, you can run raw queries of whatever you want, but basic users are locked to the persisted query options.

It’s pretty cool, definitely adds complexity to our builds/permissions, but it works. We rolled our own, but I think Apollo GraphQL supports this out of the box now.
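For anyone curious, the runtime side of that boils down to an allowlist keyed by query id/hash. A simplified sketch in TypeScript with made-up names; the map would be filled at build time from the statically analyzed client queries:

```
// Build step fills this with hash -> query text for every query found in the client bundle.
const PERSISTED_QUERIES = new Map<string, string>([
  // ["<sha256-of-query>", "query GetDogs { dogs { id name } }"],
]);

type Role = "admin" | "user" | "anonymous";

export function resolveQuery(
  body: { queryId?: string; query?: string },
  role: Role
): string {
  // Admins may run raw ad-hoc queries; everyone else is locked to the allowlist.
  if (role === "admin" && body.query) return body.query;

  const stored = body.queryId ? PERSISTED_QUERIES.get(body.queryId) : undefined;
  if (!stored) throw new Error("Unknown or disallowed query");
  return stored;
}
```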

Not saying people should use GraphQL for everything though. It’s kind of overkill for a lot of apps.

PragmaticPulp · 3 years ago
> We avoid this by statically analyzing the queries that are used by the client, generating id’ed persisted queries for them, and only allowing those to be run if you’re un-authenticated/a regular user.

I worked with a team that was going down a similar path. At some point it felt like they were reinventing REST on top of GraphQL with a strict set of predefined queries and result shapes.

They hadn't gone too deep into GraphQL tooling so we just switched to REST and implemented those predefined queries. Had a separate, developer-only GraphQL endpoint for prototyping things for a while, but the production stuff was easier to just build out as plain old REST.

ewittern · 3 years ago
We did research on solving the DoS issue using static analysis at IBM (cf. https://arxiv.org/pdf/2009.05632.pdf). Our findings were that static analysis makes it possible to determine (relatively strict) upper bounds on query complexity, which we assessed for two production GraphQL APIs (GitHub and Yelp). However, the static analysis requires some configuration to determine the (maximum) size of lists.

I was later involved in productising said research into an API gateway (called DataPower) offered by IBM. We implemented our GraphQL static analysis in a quite flexible and performant way (all of GraphQL's query validation and the static analysis are implemented in C++). The required configuration for the static analysis can be provided using GraphQL schema directives (cf. https://ibm.github.io/graphql-specs/cost-spec.html). Unfortunately, DataPower is quite inaccessible to the common developer.

I find that persisted queries are a very interesting approach to also solve this issue. They still grant developers the full flexibility to define queries during development, but then require providers to validate and vet queries only once before persisting them (instead of validating and vetting queries on every request). This has huge benefits for runtime performance, of course.

Deleted Comment

thr0wawayf00 · 3 years ago
It's also not that hard to implement attribute filtering with REST endpoints. People make a big deal about being able to control the shape of your API responses with GraphQL, but this is completely achievable with standard REST APIs as well.
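For illustration, attribute filtering over REST can be as small as honouring a ?fields= parameter. A sketch in TypeScript with a hypothetical pets resource (think GET /pets/42?fields=id,name):

```
// Hypothetical resource type for a pets API.
type Pet = { id: string; name: string; breed: string; ownerId: string };

function pick<T extends object>(obj: T, fields: string[]): Partial<T> {
  const out: Partial<T> = {};
  for (const f of fields) {
    // copy only the keys the client asked for
    if (f in obj) (out as any)[f] = (obj as any)[f];
  }
  return out;
}

// Framework-agnostic: pass in the raw ?fields= value from your router of choice.
export function petResponse(pet: Pet, fieldsParam?: string): Partial<Pet> {
  if (!fieldsParam) return pet;              // no filter: return the full object
  return pick(pet, fieldsParam.split(","));  // ?fields=id,name -> { id, name }
}
```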
tshaddox · 3 years ago
Sure, you could have a syntax for requesting (potentially recursive) nested resources in the query string of a REST API, and there are some systems that do something close [0], but if you do it’s probably a less friendly syntax than GraphQL and has all the same problems as GraphQL regarding performance and rate limiting.

[0]

A very simple convention for including nested resources in a JSON API: https://www.jsonapi.net/usage/reading/including-relationship...

A totally cool but kinda over the top library that exposes a REST API over your database with a very GraphQLesque query string syntax that supports recursively nested resource inclusion, filtering, etc.: https://postgrest.org/en/stable/api.html#nested-embedding

lfauve · 3 years ago
This approach builds on the persisted queries approach: https://docs.wundergraph.com/docs/features/graphql-to-json-r...

It gives you all the benefits of GraphQL during development, while in production you're only dealing with JSON-RPC.

pan69 · 3 years ago
> Note that even if you use graphql for a private api to your web client, folks will reverse engineer it and use it for their own purposes. There are no private apis on the web.

How is this a GraphQL specific problem?

agucova · 3 years ago
At least compared to classical REST, usually access is limited to whatever the programmer explicitly chose to add, instead of being open by default.
crabmusket · 3 years ago
It's not, but the OP is saying this to pre-empt the "just use GraphQL for private APIs" defence.
jensneuse · 3 years ago
We solved this by turning GraphQL into a compile time problem and replacing it with JSON-RPC as the transport, keeping GraphQL entirely hidden on the server. It's not just great for security and performance reasons, but also comes with a lot of other benefits, like you can inject claims into Operations. Here's some more info: https://docs.wundergraph.com/docs/features/graphql-to-json-r...
wallfacer120 · 3 years ago
How could someone reverse engineer anything other than your data model from gql queries?
SnowHill9902 · 3 years ago
> It's easy to construct a query which puts unreasonable load on your system.

Can you give an example?

lazyasciiart · 3 years ago
https://payatu.com/blog/manmeet/graphql-exploitation-part-4

    query dos {
      allDogs(onlyFree: false, limit: 1000000) {
        id
        veterinary {
          id
          dogs {
            id
            veterinary {
              id
              dogs {
                id
                veterinary {
                  id
                  dogs { .....

pdpi · 3 years ago
Details will depend on your schema, but the moral equivalent of “SELECT * FROM master_table” is a good start.
humbleMouse · 3 years ago
That's exactly what it is, a made-up SQL layer over SQL exposing a special port. Ridiculous.
mLuby · 3 years ago
Having worked in big tech and small startups, I think GraphQL is a brilliant way to solve an organizational problem that massive tech companies have.

It's that the team maintaining the API is different from the team that needs changes to the API. Due to the scale of the organization the latter doesn't have the access or know-how to easily add fields to that API themselves, so they have to wait for the maintainers to add the work to their roadmap and get back to it in a few quarters. Relevant Krazam: https://www.youtube.com/watch?v=y8OnoxKotPQ

At a small start-up, if the GET /foo/:fooId/bar/ endpoint is missing a field baz you need, you can usually just add it yourself and move on.

demarq · 3 years ago
That's the theory. In my experience at both large and small organisations, NONE of the theory makes it into practice.

Some reasons:

- Front end devs save time by... sharing queries. So component B ends up fetching records it has no use for because it's sharing GQL with component A.

- Backenders never optimise column selection. You may think you are really optimising by sending a GQL query for one column, but the backend will go ahead and collect ALL the columns and then "filter" down to the data that was asked for (see the sketch at the end of this comment).

- Backenders can also forget to handle denormalisation. If you query related many-to-many records but the GQL only asks for related ids, a lot of implementations will go ahead and do a full join instead of just returning results from the bridge table.

- Frontenders aren't even aware you can send multiple GraphQL requests simultaneously.

GraphQL is great, but any technology is limited by how well people can extract its value. I personally feel sometimes we'd be better off with REST, or at least making sure people receive the training to use GraphQL effectively.
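On the column-selection point above: the requested fields are actually available to every resolver, so the backend can select only those columns. A minimal sketch assuming graphql-js (fragments and nested selections are ignored for brevity):

```
import { FieldNode, GraphQLResolveInfo, Kind } from "graphql";

// Returns the top-level field names the client actually asked for.
function requestedColumns(info: GraphQLResolveInfo): string[] {
  const selections = info.fieldNodes[0]?.selectionSet?.selections ?? [];
  return selections
    .filter((sel): sel is FieldNode => sel.kind === Kind.FIELD)
    .map((sel) => sel.name.value);
}

// Hypothetical resolver wiring: hand the column list to whatever query builder you use.
//   users: (parent, args, ctx, info) =>
//     ctx.db.select(requestedColumns(info)).from("users")
```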

squidsoup · 3 years ago
> Front end devs save time by... sharing queries. So component B ends up fetching records it has no use for because it's sharing GQL with component A

An unfortunate problem that really only exists with Apollo. Facebook's GraphQL client, Relay, does not have this issue as it requires each component to explicitly declare its data dependencies.
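Roughly, Relay forces that colocation like this (a sketch assuming react-relay; type and field names are illustrative, and the real thing goes through the Relay compiler):

```
import { graphql, useFragment } from "react-relay";

// Component B declares exactly the fields it renders; it can't silently
// piggyback on whatever query component A happens to ship.
const dogCardFragment = graphql`
  fragment DogCard_dog on Dog {
    name
    breed
  }
`;

export function DogCard({ dogRef }: { dogRef: any }) {
  const dog = useFragment(dogCardFragment, dogRef);
  return `${dog.name} (${dog.breed})`;
}
```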

the_mitsuhiko · 3 years ago
In many ways you also need to be a massive tech company to not create a massive scalability problem. The first time someone ships a shitty query to a large user base on a mobile app, you are dealing with the consequences of a frontend engineer creating a bad query you cannot kill quickly any more.

Making scalable, well-performing queries work is nontrivial, particularly with the current ecosystem of GraphQL libraries. The main workaround provided for this appears to be directly mapping GraphQL to an ORM.

makeitdouble · 3 years ago
I used to see GraphQL (and, to an uglier extent, SOAP-like interfaces) as a complicated solution to the problem you describe.

But more and more, I think Backends For Frontends solve this issue in a much better way. And of course that idea isn't new; Yahoo for instance had that kind of architecture.

Frontend teams get to adjust by themselves a simple interface to their needs, and backend teams can provide more info through internal APIs with less restrictions than if it was directly exposed to the outside.

gedy · 3 years ago
I'm not following if you think GraphQL is a bad fit still, but we used GraphQL with the BFF pattern, and it was nice to use from Frontend to BFF. The backend services would use REST or whatever appropriate behind the BFF.
kbumsik · 3 years ago
You can use GraphQL for BFF btw.
no_wizard · 3 years ago
Do you have any reference material for the Yahoo architecture?
masklinn · 3 years ago
I think another thing with GraphQL is it reduces friction when trying to discover what your API should be.

So what you can do is some sort of generative graphql thingie when doing your initial iteration, with the client hitting whatever is convenient (in that situation you'd just expose the entire backend unprotected).

Once the needs have gelled, you strip it out and replace the GraphQL queries with bespoke API endpoints.

ryanbrunner · 3 years ago
In my experience, "let's do something non-scalable and obviously wrong while we're exploring the problem space and replace it with something better before shipping" reduces to "let's do something non-scalable and ship to production" 100% of the time.
coffee_beqn · 3 years ago
I just have never had this as a problem, having worked on many APIs at many companies. Usually we decide what we want to work on, and the frontend/full stack can read the documentation / chat with the backend engineers if it's not clear. At no point is "discoverability" an issue.
danabrams · 3 years ago
100% agree that it's about dependence on other teams. That said, I'd much rather we were communicating across a well-defined API boundary than a GraphQL API. You could, of course, very easily do this with an API layer in the middle.
YZF · 3 years ago
Nobody seems to get the idea of building software out of pieces with well defined APIs any more </rant>. I would say it's not possible to build large software without adhering to this principle but I seem to be proven wrong. You can build large poor quality software and just throw more people at it.

The other part about team dependence is very true, but it also shows a lack of knowledge/thinking/care by whoever formed the teams. It seemed for a while Amazon had things right, both in terms of boundaries of teams and in terms of forcing people to use APIs; not sure what they do these days.

jonhohle · 3 years ago
If that field isn’t populated aren’t you in the exact same spot?
vikR0001 · 3 years ago
Let's say you need to get a field back that is already in the database table, but that wasn't previously returned by the GraphQL endpoint. All you have to do on the front end is ask for it and GraphQL will populate it for you on the server.
NeedMoreCowbell · 3 years ago
+1 This is it. Great for internal APIs, not so much for public facing ones.
mabbo · 3 years ago
Whether it was the intention or not, I find GraphQL solved a fascinating problem: it let front end developers move faster by greatly decoupling their data needs from the backend developers.

Backend developers describe the data model, expose it via graphql. Front end developers, often ones who never met those backend developers, can see the data model and just use it. They can change what they're querying on the fly, get more or less as they see fit.

It lets everyone move faster.

But as a backend developer, I actually fucking hate it, myself.

breckenedge · 3 years ago
I get specialization, but are there any other good reasons to divide product teams between frontend and backend? I guess it also helps establish patterns and contracts, but I think those are only helpful above a critical mass that I haven’t reached in my career yet.
TheAceOfHearts · 3 years ago
It depends on the kind of application and scale. Consider a large and complex application like Discord or Figma. Past a certain point, it's hard for anyone to know how every single detail works.

You should probably be comfortable enough to work with both ends of the spectrum, but specialization allows you to do a much deeper dive into complex subjects.

A backend engineer probably has a much deeper understanding of every little nuance of their preferred database. A great backend engineer can make sure that you're getting near-optimal performance from every important query.

A frontend engineer probably knows about various UX techniques along with how to avoid unnecessary reflows and repaints. A great frontend engineer can implement a UI toolkit as well as advanced techniques such as windowing.

dasil003 · 3 years ago
The reason is to scale teams. It’s not the only way to do it—you can also have vertical teams—but it’s a common one because frontend and backend have different technical considerations. The downside is the product can lose cohesion as developers get tunnel vision. Of course all big teams suffer from a version of that problem depending how the lines are drawn.
endisneigh · 3 years ago
In a small organization there isn't generally any reason to divide the teams between front and back end. As you've alluded - once you have many clients you'll want to separate responsibilities in order to increase velocity.
adra · 3 years ago
Frontend teams are often feature partitioned at least in larger orgs. There will need to be some level of feature level backend knowledge to be had somewhere, but backend problems are often feature agnostic. Many companies conflate backend with platform, which have subtle differences, but end up working with similar results. Specialized backend teams work to develop a really good "platform" feature while frontend teams focus on developing customer/product oriented features.
withinboredom · 3 years ago
This is something I’ve wondered as well. Coming from the military and occasionally working with spec-ops, I would say having a few “full stack” teams would be the way to go. I am just a lowly dev though, so what do I know?

The separation of front/backend has always been mildly entertaining to me and I’ve worked on both teams. Btw, if you ever want to cause a political mess, just submit a PR to add a new API endpoint to the backend team that “doesn’t have the time” to work on it. Woah boy, they will get mighty pissed. As a backend engineer these days, it would be a blessing to get free work from another team… I don’t know why they were so pissed that one time.

pragmatic · 3 years ago
I think you are saying why not combine specialists on one team vs the “everyone is fullstack+devops” amateur hour dystopia that is becoming all too common?
tveita · 3 years ago
You can have N frontends for one backend. If you need a new iOS app you will probably hire a team of iOS developers, not have all of your product teams learn Swift.

If your API looks like this...

  /ios-app/v1/landing-page
  /android-app/v1/landing-page
  /android-app/v2/landing-page
  /windows-mobile/v1/landing-page (legacy)
  /web/v5/...
where each platform has its own subtly different UI structure, the ability for frontend teams to get basically an arbitrary JSON structure of their choosing starts looking worth the extra work on the backend.

You might not encounter this problem at any point, and that's fine - there are other ways to avoid it, like having a cross-platform codebase. I have "hand-rolled" similar solutions to the GraphQL field selection before, and I would use the GraphQL protocol if I were to do it again today.

TedDoesntTalk · 3 years ago
> good reasons to divide product teams between frontend and backend?

People specialize in different things. A great React developer may not be a great Java developer, and vice-versa

satyrnein · 3 years ago
From a management perspective, the fiction of the full stack developer that is equally skilled at everything is the easiest. You stick with that until you complicate your architecture (wisely or not) to the point where having specialists outweighs having to manage multiple queues of work and dependencies.
jayd16 · 3 years ago
To me it's a weird way to go about this decoupling. Another way is you can just keep your own view-model client side and abstract the backend data with that.

BFFs and GraphQL are a way to tightly couple to a backend system and then have that backend system loosely coupled.

I guess it's all six of one, half a dozen of the other, but I usually prefer to just handle things client side. You maybe get less data transfer optimization, but that's down the road from the fast development stage anyway.

NonNefarious · 3 years ago
I thought it was supposed to do this, but then discovered that it has no way to express joins.

Has this been addressed? I don't see how you can decouple the back-end data from front-end queries without that.

vikR0001 · 3 years ago
You can do something like this. This query can be created and run on the client:

```
const SAMPLE_JOIN_QUERY = gql`
  query ($main_data_id: String!) {
    mainData(main_data_id: $main_data_id) {
      id
      main_data_field_1
      main_data_field_2
      main_data_field_3
      related_data {
        related_data_field_1
        related_data_field_2
        related_data_field_3
      }
    }
  }
`;
```

grumple · 3 years ago
Curious as to why you hate it specifically. Because what you could be doing is exposing every table/field automatically based on permissions (for which you could set up a system where you don't even have to be involved).
toolz · 3 years ago
because either the boilerplate is massive or the libraries do so much for you that you have to specialize in understanding the library's magic - frontend plus backend code for GraphQL is almost always more than a traditional REST API, and I mean a lot more code, not just a bit more, and more code is strongly correlated with more problems. The exception is when you use heavy libraries that have magical APIs. Then you end up with teams who understand the API and have no idea how GraphQL actually works, which is probably an even worse problem.
mabbo · 3 years ago
Honestly it's mostly just lack of familiarity, which is getting better every day.
syastrov · 3 years ago
e.g. postgraphile
quickthrower2 · 3 years ago
It sounds like it is promoting a siloed, cogs-in-the-machine type of work ethic, where you are either front end or back end and no one is thinking end-to-end about the system.
satyrnein · 3 years ago
I think the causality runs the other way. Once the frontend had gotten so complex that it required a specialized team, solutions arose to reduce the back and forth necessary between frontend and backend teams.
adra · 3 years ago
Sr enough developers whose role expands beyond a single team, architects, product managers, product owners - the list goes on for the roles of people who are often tasked with thinking big picture about the health and "end-to-end" picture of any given software project. Your narrow cynicism makes me assume you don't work in a company that has a large dev org. People specialize, and if you want to be the learn-and-do-everything person, you'll find that you're doing less and learning more, which isn't a good fit for most orgs.
locutous · 3 years ago
That's generally true for sizable companies. Small companies can and do use full stack devs.

Segmentation makes some sense but the industry is lacking end to end thinking as you point out.

mattbillenstein · 3 years ago
Why do you hate it? What tooling are you using? I found it fairly painless in Python - Graphene/Flask.
matsemann · 3 years ago
If all you do is expose simple models, it's fine. But then it's not much different than an auto-generated REST API. But if you want to query deep, lists of children etc., you quickly get into queries that are very hard to write on the backend (n+1 issues quickly pop up, etc.). To solve those you need to write complicated loaders, which all should be very general in nature, and thus you can't rely on two fields backed by the same data sharing a query without doing something special. Which is much more hassle than just writing whatever join you want for a tailored endpoint.
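For reference, the standard answer to the n+1 part is a batched loader. A minimal sketch assuming the dataloader package and a hypothetical getUsersByIds data-access helper:

```
import DataLoader from "dataloader";

type User = { id: string; name: string };

// Hypothetical data-access helper: one round trip for a whole batch of ids.
declare function getUsersByIds(ids: string[]): Promise<User[]>;

const userLoader = new DataLoader<string, User | null>(async (ids) => {
  const rows = await getUsersByIds([...ids]);              // single query for the batch
  const byId = new Map(rows.map((u) => [u.id, u] as const));
  return ids.map((id) => byId.get(id) ?? null);            // results must match key order
});

// In a resolver, every post resolves its author through the loader, but the
// database only sees one batched lookup per tick:
//   author: (post) => userLoader.load(post.authorId)
```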
mabbo · 3 years ago
I joke mostly. Coming from a REST based background it was really foreign and convoluted to me.

Until I understood the point, the organizational decoupling. Then it clicked why it's great.

thdxr · 3 years ago
Be careful with anyone with a take that says some technology is 100% bad always. Given enough experience / skill you can make any technology fairly enjoyable so I've only ever seen mixed reactions at worst from people giving things a fair try.

GraphQL is a way to describe not only your API but also the entities and relationships in it. This enables certain useful things for client heavy applications, like cache normalization. If you look at clients like URQL they enable high quality features in your app that are otherwise extremely difficult.

You can also do this with JSONAPI but the GraphQL ecosystem is more developed.

Setting up GraphQL to minimize its rough edges is incredibly difficult. I've currently landed on a combination of Pothos + Genql + URQL to enable me to do everything in typescript instead of untyped strings.

It takes very high skill to use GraphQL well. Few teams get there because they don't have the upfront time to do all the research.

But if you pull it off it can be an incredibly productive system that is friendly to iteration and refactoring. I can send you some content we've produced on this if you're interested.

That said, if I'm not working on a client heavy app, I'd just use a less featureful RPC framework.

eduction · 3 years ago
> Be careful with anyone with a take that says some technology is 100% bad always.

This isn’t that though; the first sentence starts “GraphQL is great, but” and then the post lists first “the good” and then “the bad.” Even the provocative headline hedges with “kinda.”

I wish there were more of this sort of balanced discussion on HN. There is a tendency among devs, at least in public, toward trying to get others to use the tech they use and are excited about, which is understandable, but everything involves trade-offs and it would be nice to hear more of those up front (as opposed to one day "mongo is the bomb" and the next "actually mongo is terrible, go back to postgres for everything").

thdxr · 3 years ago
was referring to some of the replies
minusf · 3 years ago
> balanced discussion

while the discussion can be balanced, the real-world outcome is normally binary: use it or drop it.

i will not start investing huge amounts of time to learn graphql if it has very specific use cases for specific environments. so i naturally look for red flags and objectively negative experiences to see if those might be the roadblocks i would run into 2 months down the line.

imagine a PM coming in all happy "i know nothing about graphql besides the hype but i think it's a great fit for our next project". where will balanced discussion take you there?

lf-non · 3 years ago
Agree with everything here, but something that often gets missed is that you don't have to use all that GraphQL enables from day one.

It is perfectly fine to start with an early implementation that treats GraphQL as mostly an RPC, with only resolvers for Query & Mutation types. You still benefit from GraphQL's type-safety, batching and code-generation.
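A sketch of that starting point, with flat Query/Mutation resolvers and no nested graph yet (TypeScript; the schema string and resolver map plug into most servers, and all names here are illustrative):

```
export const typeDefs = /* GraphQL */ `
  type Order {
    id: ID!
    status: String!
  }
  type Query {
    order(id: ID!): Order
  }
  type Mutation {
    cancelOrder(id: ID!): Order
  }
`;

// Hypothetical data-layer helpers standing in for real implementations.
declare function fetchOrder(id: string): Promise<{ id: string; status: string }>;
declare function cancelOrderById(id: string): Promise<{ id: string; status: string }>;

// RPC-style: each resolver is just a function call, no object graph to traverse yet.
export const resolvers = {
  Query: {
    order: (_: unknown, args: { id: string }) => fetchOrder(args.id),
  },
  Mutation: {
    cancelOrder: (_: unknown, args: { id: string }) => cancelOrderById(args.id),
  },
};
```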

Once you have more familiarity with dataloaders, query complexity etc., update your output objects to have links to other output objects, building up the graph model.

The issue is that too many people get fascinated with GraphQL early, then build deep & complex topologies and expose them in an inefficient and potentially insecure way.

mccorrinall · 3 years ago
I love consuming graphql as a client. But writing resolvers and all that stuff on the backend? God, I hate it!
adra · 3 years ago
I don't know your language of choice, but in the JVM ecosystem, Netflix DGS makes it so damn simple to build new resolvers.
kubami · 3 years ago
It's very enjoyable with python. Especially given integrations with frameworks like django.
nawgz · 3 years ago
Hasura :)
PainfullyNormal · 3 years ago
> Be careful with anyone with a take that says some technology is 100% bad always.

On the flip side, you should also be careful of anyone who says some technology is 100% good always. It's far more common to see people talking up the advantages of some trendy new technology without ever mentioning the downsides. All technologies have tradeoffs.

chpmrc · 3 years ago
> Be careful with anyone with a take that says some technology is 100% bad always.

THIS 100%.

> It takes very high skill to use GraphQL well. But if you pull it off it can be an incredibly productive system that is friendly to integration and refactoring.

I could not agree more. It's like any other piece of tech: once you internalize the mental model and are able to translate those abstractions in your language of choice everything clicks. And then it's hard to imagine going back to something more "primitive" (i.e. what's conventionally called "REST").

After building "RESTful" APIs for years I can confidently say GraphQL (with a decent implementation) is a step up across almost every possible dimension (performance aside because of the additional parsing).

necovek · 3 years ago
> It's like any other piece of tech: once you internalize the mental model and are able to translate those abstractions in your language of choice everything clicks.

While this is true, I think the ultimate assessment of a technology is how easy it is for someone skilled at doing similar work to internalize the model and abstractions.

A tech stack can become really successful only if this is easy and relatively quick; otherwise, it will meet a lot of resistance. Partly because it's hard to master, but partly because so many people will be misusing it (which makes it an even bigger annoyance for someone who's trying to get to a proper mental model).

As such, I've come to appreciate only models that the majority of developers can easily get right on the first go: or rather, models which are hard to get wrong.

throwawaymaths · 3 years ago
Eh I think "rest for external API access, graphql for internal frontend use" is probably a good thing.
endisneigh · 3 years ago
> Be careful with anyone with a take that says some technology is 100% bad always. Given enough experience / skill you can make any technology fairly enjoyable so I've only ever seen mixed reactions at worst from people giving things a fair try.

Completely agreed. Without knowing the team experience, greenfield project or not or in general more information about the task at hand, how can anyone say GraphQL is good or not?

One thing I've noticed among some people who've failed to move up in their career is that they carry these extreme opinions due to a lack of proper understanding or a bad experience. Right tool for the job and all that.

meesterdude · 3 years ago
> Given enough experience / skill you can make any technology fairly enjoyable

Have you never used MongoDB?

deltasevennine · 3 years ago
>Be careful with anyone with a take that says some technology is 100% bad always. Given enough experience / skill you can make any technology fairly enjoyable so I've only ever seen mixed reactions at worst from people giving things a fair try.

So what does this mean?

You're saying that in the universe we live in there is absolutely nothing that is 100% bad always. Everything is good for something? There is no concept of bad or obsolete things because it's all good for something?

Let's restrict this to programming languages and frameworks and apis.

Are you saying that in the universe of ALL programming languages, ALL frameworks and all APIs, None of them are at all bad and they are all good for something?

I don't agree with this sentiment at all. In fact I think it's a sign of two possibilities:

1. that the person saying it is highly biased and unable to detect things that are genuinely bad.

2. The person saying it is just being temporarily illogical, there are clearly things in the programming world that are bad. He knows this but is so biased that he's incapable of processing this concept while promoting his favorite language/api/framework.

Scenario 2 is the most likely scenario here. Not saying GraphQL is bad. But to love GraphQL so much as to say nothing in the universe is actually bad?

Let's be real, I am not against GraphQL. However, you cannot actually say that people against GraphQL have invalid opinions because there is nothing in the programming universe that is definitively bad. This argument does not make any sense at all.

Kaze404 · 3 years ago
The point is that “bad” isn’t a useful descriptor of anything regarding their real-life applications, because it simply doesn’t mean anything. Without more context, saying something is bad is just saying you dislike it with intent to present it as objective rather than subjective.

For example:

1) “Haskell is bad because it's too theoretical”

2) “Haskell is bad in corporate environments, as its roots in mathematics and academia make it harder for people to get productive with it compared to something like Java or C#”

The difference is clear. In my opinion it's best if people who care about what they're saying avoid #1, and instead frame their criticism like in #2.

Note: those examples don’t reflect my actual opinion on Haskell, it’s just something I came up with while writing this.

Guid_NewGuid · 3 years ago
Hmm, I feel weird reading all this criticism of GQL. It reminds me of when our place switched to it and I was constantly slagging it off until my friend/colleague said "do you actually hate it or you just don't understand it?".

Default hostility to new concepts and frameworks can save a lot of time, energy and mistakes in software but sometimes for some use cases the new (variation of an old) solution can be superior.

We use it as the API we expose for our React and mobile clients and it's just, so good. I'd never want to consume it for a non-FE client but it's night and day versus stitching the results of multiple API calls together using some godawful chunk of mess like Redux.

We have a C# backend and TypeScript frontend. We write our backend resolvers of the form `public async Task<SomeResultType> GetSomeNamedField(TypedParams pq)` and it just works: Apollo generates type-safe client code and we define the schema in a single place. We still write backend code to implement each resolver method, exactly how we did in a normal API, but... that's just the same.

I wonder how bad other backend devx must be for all these people to hate it; it seems more like a language-specific implementation flaw than a genuine problem.

jddil · 3 years ago
You haven't actually described any of the unique parts of GraphQL though. Generating a typesafe API whose schema is defined in a single place is trivial with an OpenAPI spec and a client generator.

What I've noticed is people get their first taste of a type safe api and automatic client creation via GraphQL and don't understand that exists without it.

MisterSandman · 3 years ago
> What I've noticed is people get their first taste of a type safe api and automatic client creation via GraphQL and don't understand that exists without it.

I mean, yes, you can technically write any project in C if you're smart and dedicated enough, but data scientists still use Python because it's easier to work with for the things they need to do. The fact that GraphQL is inherently type-safe, while REST is inherently not, is why people like GraphQL. OpenAPI is hardly the standard in REST APIs.

jongjong · 3 years ago
Some other bad things:

- Makes caching more challenging since there are now more possible permutations of the data depending on what query the client uses. A hacker could just spam your server's memory with cache entries by crafting many variations of queries.

- Makes access control a lot more complicated, slower and error-prone since the query needs to be analyzed in order to determine which resources are involved in any specific query in order for the server to decide whether to allow or block access to a resource. It's not like in REST where the request tells you exactly and precisely what resource the client wants to access.

- Adds overhead on the server side. It requires additional resources to process a query rather than just fetching resources by ID or fetching simple lists of resources. A lot of work may need to happen behind the scenes to fulfill a query and GraphQL hides this from the developer; this can lead to inefficient queries being used. I have a similar complaint about database ORMs which generate complex queries behind the scenes; this makes it difficult to identify performance issues in the underlying queries (since these are often completely hidden from the developer). Hiding necessary complexity is not a good idea... Maybe worse than adding unnecessary complexity.

krschultz · 3 years ago
It shifts the complexity to the server side. The additional logic you are describing currently lives on the client, where it's harder to update and likely duplicative across platforms.
endisneigh · 3 years ago
> - Makes caching more challenging since there are now more possible permutations of the data depending on what query the client uses. A hacker could just spam your server's memory with cache entries by crafting many variations of queries.

You could use something like https://stellate.co/.

> - Makes access control a lot more complicated, slower and error-prone since the query needs to be analyzed in order to determine which resources are involved in any specific query in order for the server to decide whether to allow or block access to a resource.

Hasura and Postgraphile can do this - in the case of Postgraphile it obviously requires Postgres.

the_mitsuhiko · 3 years ago
If the solution for caching problems turns out to be a hosted API proxy then there are still not enough tools available. If you put some third party infrastructure in front of your API then your availability is exactly that company's availability.
ojkelly · 3 years ago
You can cache the objects in the query by ID. And then reuse them across a number of queries.

> It's not like in REST where the request tells you exactly and precisely what resource the client wants to access.

It can be. You can have a query that does `getObject(id: $id): Object`.

> Adds overhead on the server side.

Yep it pulls a heap of complexity around data fetching, synchronisation, and data modelling from the client to the server.

As it requires both the client and server to come to some agreement about a shared data model, it can appear as more work up front. But it enables a decoupling of the client and server such that the client can make requests for new use cases with the existing shared data model.

cletus · 3 years ago
My usual experience is that people use GraphQL wrong. GraphQL's primary use case is to homogenize access to a bunch of heterogeneous backend services. If you find yourself just taking your Postgres database and creating a 1:1 mapping between your tables and GraphQL, this probably isn't a good fit, as you're just adding another layer for no reason.

> No clear path for Api versioning

GraphQL came about as a way for mobile clients to call backend services. At Facebook, once a version (of the mobile app) was released, it was essentially out there forever. Some people would simply never upgrade until they absolutely had to.

So the point of GraphQL is that you want to get away from thinking about versioning your API and cleanly upgrading, because you probably can't. You can do versions, but you don't have clean divisions. You'll mark a given field as "since v2.1". And fields that you have added can basically never be removed. The best you can do is make them return null or an error.
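In schema terms that usually means deprecating rather than deleting; a small sketch (field names made up):

```
const typeDefs = /* GraphQL */ `
  type User {
    name: String!
    # the old field stays forever; clients shipped years ago may still read it
    fullName: String @deprecated(reason: "Since v2.1, use name instead.")
  }
`;
```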

So if you want to do versions it probably means you have control over the server and the client such that you can deploy them almost simultaneously. If so, GraphQL isn't really designed for your use case. If not, get out of the mindset that client and server versions can move in lockstep.

I will say I think GraphQL made one mistake, and that's baking string values into the API. You can't change a field name without breaking your API. Same with enum values.

Protocol buffers (including gRPC) instead went for numbering. The name is just a convenience for reading the IDL and data definitions and for code generation but it doesn't translate to the wire format at all.

This can have downsides too but they're fairly minimal. For example, if you accidentally renumber your enums by inserting a new value in the middle, the whole thing can break (side note: always explicitly number your enum values in protobuf! That should've been the only way to do it).

Another issue is that fragments seem like a good idea for code reuse but they can get really out of hand. It's so much easier just to add a field to an existing fragment. If that fragment is used in a bunch of places you may find yourself regenerating a ton of code. You will never be able to remove that field from the fragment once added.

Disclaimer: Ex-Facebooker.

nchase · 3 years ago
> GraphQL came about as a way for mobile clients to call backend services. At Facebook, once a version (of the mobile app) was released, it was essentially out there forever. Some people would simply never upgrade until they absolutely had to.
>
> The point of GraphQL is that you want to get away from thinking about versioning your API and cleanly upgrading because you probably can't. You can do versions but you don't have clean divisions. You'll mark a given field as "since v2.1". And fields that you have added can basically never be removed. The best you can do is make them return null or an error.

This nails it. For distributed apps where a client might never be updated, you can't really change your schema if you want to guarantee that your app works for everyone. You can build in a mechanism to prompt users to update to a later version, though, and that can be very effective.

hamandcheese · 3 years ago
My company has many backends federated together, but a monolithic, web-based frontend. It’s actually really easy to remove a field from graphql.

1. Remove all uses from the frontend.
2. Deploy.
3. A day later, delete it from the schema.

If deleting it causes relay compiler errors, go back to step 1.

(The less lazy way to do it would be to actually pull some stats about how long-lived frontend bundles are)

Not being able to delete fields only really applies if you have a huge number of clients that you can’t easily force to update on a whim. Plenty of folks just have web clients.

My company doesn’t have this, but I’d love to get per-field usage stats about our schema.

cletus · 3 years ago
That affirms my point: you have complete control over the front end.
aleclm · 3 years ago
We recently had to design an HTTP API, and we wanted to have as much automatic stuff as possible. I mean:

* Autogenerated documentation

* Autogenerated wrappers for scripting languages

* Autogenerated validator for requests and responses

For a REST API, you can get most of these things with swagger or stuff like that, but clearly it's an afterthought. If you have a schema, it's all much more natural and elegant.

But the most important thing you get with GraphQL is batching. For our use case (a decompilation pipeline) if you make 10 requests one after the other or 10 requests altogether it makes a huge difference in terms of performance.

If you need batching and design a REST API, for every nice endpoint you have you need to make a bulk version of the API. You're likely going to do that by POSTing JSON. Now, once you're at that point, you're reinventing the wheel with six sides.
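For comparison, this is what that batching looks like on the GraphQL side: one document, several aliased calls, one round trip (field and argument names are made up):

```
export const BATCHED_QUERY = /* GraphQL */ `
  query DecompileBatch {
    f1: decompileFunction(address: "0x401000") { name code }
    f2: decompileFunction(address: "0x401080") { name code }
    f3: decompileFunction(address: "0x4010c0") { name code }
  }
`;
// One POST, one response - no hand-rolled /bulk variant of the endpoint needed.
```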

If your backend is in C/C++, I suggest making a C API for Python and using ariadne:

    https://ariadnegraphql.org/
Don't do GraphQL in C/C++.

loosescrews · 3 years ago
> But the most important thing you get with GraphQL is batching.

Doesn't HTTP/2 make this mostly obsolete? One of its big features is request multiplexing.

Regarding the auto-generated code bit, are auto-generated GraphQL clients a thing? It seems like it would be doable, but I haven't found any (at least for the languages I'm using).

aleclm · 3 years ago
> Doesn't HTTP/2 make this mostly obsolete? One of its big features is request multiplexing.

HTTP/2 helps with the network layer, but your backend will still handle requests one-by-one. Depending on what you need to do, this might make a hell of a difference.

> Regarding the auto-generated code bit, are auto-generated GraphQL clients a thing? It seems like it would be doable, but I haven't found any (at least for the languages I'm using).

There's this:

    https://www.graphql-code-generator.com/
But yeah, on a second look I expected to find more. Anyway, having a standard way to do things instead of relying on one specific piece coupled to some language (swagger) is certainly better. On the other hand, REST APIs have a much longer history, so I guess it's normal for them to have more tools.

kortex · 3 years ago
I believe by batching they mean operating on collections of entities instead of single ones. So you may have POST /pets to create a single pet from a json object, but what if you want to add a hundred pets at once? Even with multiplexing, this is often way less efficient. Often the solution is to have POST /pets/bulk which takes a list of objects.