Readit News
mbell · 3 years ago
I've tried using OpenAPI a few times, it's been...lackluster... I probably won't use it again.

Here are my gripes:

1) For me one of the biggest selling points is client code gen (https://github.com/OpenAPITools/openapi-generator). Basically it sucks, or at least it sucks in enough languages to spoil it. The value prop here is define the API once, code gen the client for Ruby, Python and Scala (or insert your languages here). Often there are a half dozen clients for each language, often they are simply broken (the generated code just straight up doesn't compile). Of the ones that do work, you get random PRs accepted that impose a completely different ideological approach to how the client works. It really seems like any PR is accepted with no overarching guidance.

2) JSONSchema is too limited. We use it for a lot of things, but it just makes some things incredibly hard. This is compounded by the seemingly limitless number of versions and drafts of the spec. If your goal is interop, which it probably is if you are using JSON, you have to go out and research what the lowest-common-denominator draft of JSONSchema is across the various languages you want to use and limit yourself to that (probably draft 4, or draft 7).
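A concrete example of the draft drift: `exclusiveMinimum` changed from a boolean modifier in draft 4 to a standalone number in draft 6+, so the same constraint is spelled differently depending on which draft your tooling targets. A minimal sketch (hand-written schema dicts and toy checkers, not a real validator):

```python
# The same constraint ("value must be > 0") expressed under two drafts.
# Draft 4: exclusiveMinimum is a boolean that modifies "minimum".
schema_draft4 = {"type": "number", "minimum": 0, "exclusiveMinimum": True}

# Draft 6 and later: exclusiveMinimum is itself the numeric bound.
schema_draft6 = {"type": "number", "exclusiveMinimum": 0}

def accepts_draft4(schema, value):
    """Toy check for the draft-4 spelling only."""
    lo = schema.get("minimum")
    if lo is None:
        return True
    if schema.get("exclusiveMinimum") is True:
        return value > lo
    return value >= lo

def accepts_draft6(schema, value):
    """Toy check for the draft-6+ spelling only."""
    lo = schema.get("exclusiveMinimum")
    return True if lo is None else value > lo

# Both spellings mean the same thing, but a draft-4-only validator
# silently ignores the draft-6 form (unknown keywords are ignored):
assert accepts_draft4(schema_draft4, 1) and not accepts_draft4(schema_draft4, 0)
assert accepts_draft6(schema_draft6, 1) and not accepts_draft6(schema_draft6, 0)
assert accepts_draft4(schema_draft6, 0)  # no "minimum" key, so everything passes
```

This is why "which draft does each language's validator support" becomes homework you have to do up front.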

On the pros side:

It does make pretty docs - kinda wish it would just focus on this and in the process not be as strict, I think it would be a better project.

GOATS- · 3 years ago
I find it odd that you've struggled so much with generating API clients. I've generated C# and TypeScript (Angular's HttpClient and React Query) clients for my API and never had any issues with them. With that being said, I didn't use OpenAPI's Java-based code generators and rather used ones made by third-party developers such as NSwag[0] and openapi-codegen[1].

[0]: https://github.com/RicoSuter/NSwag

[1]: https://github.com/fabien0102/openapi-codegen

dcre · 3 years ago
You said it yourself — the “official” generator is awful and very hard to modify or extend (well, you didn’t say that, but I’m saying it) and while there are many alternatives, they’re not always easy to find. I had some success with swagger-typescript-api[1], but eventually got tired of it and wrote my own generator. Despite looking around quite a bit at what’s available, I never heard of openapi-codegen, which looks quite good.

I think it’s a pretty big problem for many devs that so many of the options are mediocre and they’re quite difficult to evaluate unless you have a lot of experience, and even then it takes a lot of time.

[1]: https://github.com/acacode/swagger-typescript-api

throwawaymaths · 3 years ago
NSwag leaves much to be desired. I deployed an OpenAPI server and the initial deployment partners using NSwag begged us to change the API to suit an issue that was reported and NSwag hadn't fixed in years. I respectfully told them to pound sand and deal with it manually, or better yet be a good citizen in the ecosystem they are using for free and contribute a patch. Last I checked they were still manually patching their codegen on each deploy. /Shrug

Nswag has important issues that are many years old still in their backlog.

1.6k issues, oldest unresolved 7 years old:

https://github.com/RicoSuter/NSwag/issues?q=is%3Aissue+is%3A...

SCUSKU · 3 years ago
I've been using redux toolkit's OpenAPI code generator for a side project and it's been pretty good. The documentation is a bit lacking and it could certainly use more work to make names more customizable. The generated code comes out looking very much machine generated. But I love that RTKQuery (redux toolkit query) has client side caching so that if I use a query param that was already used before, it will remember and just serve from the local cache.

But it's been nice being able to make a backend change, run the code generator, and then be able to use whatever API in react. I hope this type of stuff gets developed more!

[0] - https://github.com/reduxjs/redux-toolkit/tree/master/package...

johnny_reilly · 3 years ago
This. I've found you can get a long way with NSwag, and the barrier to entry is low. I've got a post that walks through how to get a TypeScript and a C# client generated.

https://johnnyreilly.com/generate-typescript-and-csharp-clie...

simplesager · 3 years ago
I'm working on a company https://speakeasyapi.dev/ with the goal of helping companies in this ecosystem get great production-quality client SDKs, Terraform providers, CLIs, and all the developer surfaces you may want supported for your API. We also manage the spec and publishing workflow for you, so all you have to do is build your API and we'll do the rest.

Feel free to email me at sagar@speakeasyapi.dev or join our slack (https://join.slack.com/t/speakeasy-dev/shared_invite/zt-1cwb...) . We're in open beta and working with a few great companies already and we'd be happy for you to try out the platform for free!

Hardwired8976 · 3 years ago
Created some accounts for advertising? Accounts created some hours ago and just praising you endlessly…..
n_f · 3 years ago
Definitely check out Speakeasy— we've been using them and the experience + team are fantastic
bebop · 3 years ago
We at Airbyte are happy users of the Speakeasy platform. The CLI generator is easy to get started with and generates nice API clients that are constantly getting better. Their API developer platform does a great job of managing new client builds and deploys to the package repositories as well. Super pleased with the experience so far.
spjain · 3 years ago
What's the thought process behind the product with API Keys? Why do you build those for your end user, and what's the goal of someone using that. Unless I'm misinterpreting.
rileybrook · 3 years ago
+1 to Speakeasy ~ Our end-users love to use the SDKs that are automatically generated through Speakeasy, based off our API.
coplowe · 3 years ago
I've been working with Speakeasy for a couple of months now to produce client libraries for our customers to use. They've finally made an OAS-based code generator that's great. In fact, it's getting even better, with useful functionality being released on an almost biweekly basis. I would strongly recommend Sagar and the Speakeasy team to anyone looking to support high-quality client libraries for your customers.
dandevs · 3 years ago
I'm one of the builders of an open source project (https://buildwithfern.com/docs) to improve API codegen. We built Fern as an alternative to OpenAPI, but of course we're fully compatible with it.

The generators are open source: https://github.com/fern-api/fern

We rewrote the code generators from scratch in the language that they generate code in (e.g., the python generator is written in python). We shied away from templating - it's easier but the generated code feels less human.

Want to talk client library codegen? Join the Fern Discord: https://discord.com/invite/JkkXumPzcG

handrews · 3 years ago
I think it's worth pointing out that a lot (most?) of Fern's complaints against OpenAPI are really complaints against JSON Schema. There have been talks before about allowing other schema systems in OpenAPI - I wouldn't be incredibly surprised to see such things come up for Moonwalk. JSON Schema is not a type definition system: https://modern-json-schema.com/json-schema-is-a-constraint-s...

It's also worth noting that most JSON Schema replacements I've seen that prioritize code generation are far less powerful in terms of runtime validation (I have not examined Fern's proposal in detail, so I do not know if this is true for them).

The ideal system, to me (speaking as the most prolific contributor to JSON Schema drafts-07 through 2020-12), would have clearly defined code generation and runtime validation features that did not get in each other's way. Keywords like "anyOf" and "not" are very useful for runtime validation but should be excluded from type definition / code generation semantics.

This would also help balance the needs of strongly typed languages vs dynamically typed languages. Most JSON Schema replacements-for-code-generation I've seen discard tons of functionality that is immensely useful for other JSON Schema use cases (again, I have not deeply examined Fern).

thesandlord · 3 years ago
Just wanted to chime in and say we are a big fan of Fern! It makes developing our APIs 10x easier as we get full end-to-end type safety and code completion, but the really great part is we get idiomatic client and server SDKs as well as OpenAPI output that we use to autogen documentation. Our small team of two engineers are able to ship multiple client facing SDKs because we are built on Fern!
spjain · 3 years ago
Speed of development on these guys is huge, and have enjoyed using their SDKs. Community is getting involved in building things like auto-retry with backoff and other pretty helpful features in SDKs. Big fan of these guys!

Dead Comment

seer · 3 years ago
I think that’s what most people are using it for, but having a single expressive (debatable) language to describe the contract an API offers has so much potential.

GraphQL promised us APIs that we can trust - since both the client and the server were implemented with the same schema, you would know for sure which requests the API would respond to and how; if it tried to do something outside of the schema, the server lib itself would throw a 500 error. This allowed you to generate lean, typesafe clients.

OpenAPI kinda allows you to do that but for any other http api - I’ve written some code to use the schema as a “source of truth” for the server code as well, proving at compile time that the code will do the correct requests and responses for all the endpoints, paths and methods. So if you are reading the schema, you know for sure that the api is going to return this, and any change has to start from modifying the api.

And in turn this allows a “contract first” dev where all parties agree on the api change first, and then go to implement their changes, using the schema as an actual contract.

Combine this with languages with expressive type systems, and it allows you a style of coding that's quite nice - “if it compiles it is guaranteed to be correct”. Now of course this does not catch all bugs, but kinda confines them to mostly business logic errors, and frees you from needing to write tons of manual unit tests for every request.

Oh as a bonus it can be used for runtime request validation as well, which allows you to have types generated for those as well, for the client _and_ the server! Makes changes in the api a lot more predictable.

Client / server code generation can also be implemented as just type generation with no actual code being created, sidestepping a lot of complaints about code generators.
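A minimal sketch of the runtime-validation idea above (hypothetical endpoint and a heavily simplified spec shape - real OpenAPI nests responses/content/schema, and laminar itself does far more):

```python
# Treat a fragment of the spec as the single source of truth and
# validate actual response bodies against it at runtime.
SPEC = {
    "paths": {
        "/speakers/{id}": {
            "get": {
                "response": {  # simplified; real specs nest responses/content/schema
                    "required": ["id", "name"],
                    "properties": {"id": "integer", "name": "string"},
                }
            }
        }
    }
}

PY_TYPES = {"integer": int, "string": str}

def validate_response(path, method, body):
    """Return None if the body conforms to the spec, else an error string."""
    schema = SPEC["paths"][path][method]["response"]
    for field in schema["required"]:
        if field not in body:
            return f"missing required field: {field}"
    for field, type_name in schema["properties"].items():
        if field in body and not isinstance(body[field], PY_TYPES[type_name]):
            return f"{field}: expected {type_name}"
    return None

assert validate_response("/speakers/{id}", "get", {"id": 1, "name": "Ada"}) is None
assert validate_response("/speakers/{id}", "get", {"id": "1", "name": "Ada"}) == "id: expected integer"
```

The same spec fragment can drive type generation on both sides, which is what makes API changes predictable.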

I did package it up as OS https://github.com/ovotech/laminar but no longer have access to maintain it as I no longer work there unfortunately.

vesinisa · 3 years ago
> "contract first” dev

Just wanted to say that this is very cool, and I find it hard to understand why this is not already the norm in 2023. I've done something quite similar in a proprietary project (I called it "spec-driven development" in reference to "test-driven development").

I would first start by writing the OpenAPI spec and response model JSON schema. I could then write the frontend code, for example, as the API it called on the server was now defined. Only as the last step I would actually integrate the API to real data - this was especially nice as the customer in this particular project was taking their time to deliver the integration points.

All the time during development the API conformity was being verified automatically. It saved me from writing a bunch of boilerplate tests at least.

rattray · 3 years ago
FWIW, there are a few companies cropping up now doing better codegen for client libraries. I'm starting one of them: https://stainlessapi.com

Unfortunately we don't yet have a "try now" button, and our codegen is still closed-source, but you can see some of the libraries we've generated for companies like Modern Treasury and sign up for the waitlist on our homepage.

Always happy to chat codegen over email etc.

saarons · 3 years ago
CTO of Modern Treasury here. We're very happy with the quality of the client libraries being generated by Stainless. It also integrates nicely into our workflow, we've got the releases almost fully automated whenever new API routes are added or changed.
satvikpendem · 3 years ago
I'm looking for a codegen tool for client SDKs for my product, would love to use your product. My email is in my profile, if you want to chat.
taeric · 3 years ago
Sad to see it isn't just me. I had very real vibes of "surely I'm holding this wrong" in my building an OpenAPI file. And you didn't even mention tools to help deploy, just to help make a client.

To add my difficulty, the document generation inside Sphinx was less than up to date. Such that I didn't even get the pretty docs.

moondowner · 3 years ago
The Java generator is pretty good, many big companies are using it for generating both client and server code, I'm especially happy with the Java Spring Boot generator, I've been using it for both reactive and 'standard' code generation.
jordiburgos · 3 years ago
I am using the 'spring' generator and it works fine. Just fine. Could be better if it used more Spring features.

It saves hours and hours of development time. And the ability to regenerate the whole application on spec changes is amazing.

BerislavLopac · 3 years ago
I'm sorry, but you have completely misunderstood the purpose of OpenAPI.

It is not a specification to define your business logic classes and objects -- either client or server side. Its goal is to define the interface of an API, and to provide a single source of truth that requests and responses can be validated against. It contains everything you need to know to make requests to an API; code generation is nice to have (and I use it myself, but mainly on the server side, for routing and validation), but not something required or expected from OpenAPI.

For what it's worth, my personal preferred workflow to build an API is as follows:

1. Build the OpenAPI spec first. A smaller spec could easily be done by hand, but I prefer using a design tool like Stoplight [0]; it has the best Web-based OpenAPI (and JSON Schema) editor I have encountered, and integrates with git nearly flawlessly.

2. Use an automated tool to generate the API code implementation. Again, a static generation tool such as datamodel-code-generator [1] (which generates Pydantic models) would suffice, but for Python I prefer the dynamic request routing and validation provided by pyapi-server [2].

3. Finally, I use automated testing tools such as schemathesis [3] to test the implementation against the specification.

[0] https://stoplight.io/

[1] https://koxudaxi.github.io/datamodel-code-generator/

[2] https://pyapi-server.readthedocs.io

[3] https://schemathesis.readthedocs.io
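As a toy illustration of step 3, spec-vs-implementation contract testing boils down to something like this (hypothetical spec and handler; schemathesis does this properly, with real HTTP calls and property-based input generation):

```python
import random

# Simplified spec: each operation lists fields its responses must contain.
SPEC = {"/items": {"get": {"response_required": ["id", "price"]}}}

def handler(path):
    """Stand-in for the deployed API."""
    if path == "/items":
        return {"id": random.randint(1, 100), "price": 9.99}
    raise KeyError(path)

def contract_errors(spec, handler, runs=20):
    """Call each operation repeatedly and collect spec violations."""
    errors = []
    for path, methods in spec.items():
        for method, rules in methods.items():
            for _ in range(runs):
                body = handler(path)
                for field in rules["response_required"]:
                    if field not in body:
                        errors.append((path, method, field))
    return errors

assert contract_errors(SPEC, handler) == []
```

If someone renames `price` in the handler without touching the spec, the contract check fails immediately instead of a client discovering it in production.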

madeofpalk · 3 years ago
To your first point - you may think it detracts from perceived value, but you can just write your own code generator for OpenAPI - it's not that hard, and you'll probably end up with a higher-quality client that fits your preferred patterns better.

This is still a win because you can still generate all your clients in sync with your API spec rather than doing all that manually.

kelnos · 3 years ago
> For me one of the biggest selling points is client code gen. Basically it sucks

I agree that the official codegen is not that great. One of my former colleagues started guardrail[0] to offer better client -- and server -- codegen in Scala for a few different http/rest frameworks. Later, I added support for Java and some Java frameworks. (I haven't worked on the project in over a year, but from what I understand, it's still moving forward.)

Obviously that's a fairly limited set of languages and frameworks compared to what the official generators offer, and there are some OpenAPI features that it doesn't support, but guardrail is a good alternative if you're a Java or Scala developer.

> JSONSchema is too limited

I've run into some of the problems you've described, which can be a big bummer. For new APIs I'd designed, I took the approach of designing the API in a way that I knew I could express in OpenAPI without too much trouble, using only the features I knew guardrail supported well (or features I knew I could add support for without too much trouble). It's not really the ideal way to design an API, but after years of that sort of work, I realized one of the worst parts of building APIs is the tedious and error-prone process of building server routes or a client for it, and I wanted to optimize away as much of that as possible.

Ultimately my view is that if you are writing API clients and servers by hand, you're doing it wrong. Even if you end up writing your own bespoke API definition format and your own code generators, that's still better than doing it manually. Obviously, if something like OpenAPI meets your needs, that's great. And even if you don't like the output of the existing code generators, you can still write your own; there are a bunch of parser libraries for the format that will make things a lot easier, and it really isn't that difficult to do, especially if you pare your feature support down to the specifics of what you need.

[0] https://guardrail.dev

Cthulhu_ · 3 years ago
I think OpenAPI needs to step up its game with its code generators, because that too has been an issue for me; I've seen a few issues pop up over time.

It's only useful for generating types; most generators' APIs are stubs at best, which means it's pretty much useless for evolving API specifications.

JSON has its limitations, in that its type system is different enough from other languages that back-end generated code often feels awkward.

I think that the foundation should take ownership of the generators and come up with a testing, validation & certification system. Have them write a standardized test suite that can validate a generated client, making sure there's a checklist of features (e.g. more advanced constructs like `oneOf` with a discriminator, enums, things like that).

And they should reduce the number of generators. Have one lead generator for types, then maybe a number of variants depending on what client the user wants to use. But those could be options / flags on the generator.

Of course, taking a step back, maybe OpenAPI and by extension REST/JSON is a flawed premise to begin with; comparing it with e.g. grpc or graphql, those two are fully integrated systems, where the spec and protocol and implementation are much more tightly bound. The lack of tight bounds (or strict standards for that matter) is an issue with REST/JSON/OpenAPI.

layoric · 3 years ago
I agree that codegen + docs is actually where most of the value is. The problem, I think, is in the design. Having an intermediate spec means all the downstream tooling for codegen + docs has to handle all the complexity. If any information is lost (because the spec is insufficient for a given use case), you end up with a worst-of-both-worlds situation: your code or docs gen tooling is less direct to use, and is now missing context.

Another way of handling this is getting the server you are interacting with to generate the code directly, based on its own internal knowledge of how the APIs are put together. This puts more onus on the library creators to support languages etc., but provides a much better experience and a better chance things will 'just work', as there are simply fewer moving parts.

ServiceStack is a .NET library that does this with 'Add ServiceStack Reference'[0], which enables direct generation of Request and Response DTOs for the specific server you are integrating with. IDE integration is straightforward, since pulling the generated code is just another web service call. Additional languages are integrated directly. It has trade-offs, but I'm yet to see a better dev experience.

[0] https://servicestack.net/add-servicestack-reference

(Disclaimer I work for ServiceStack).

nerdponx · 3 years ago
I find the code gen less valuable than having a machine-readable spec that I can test against.
sthuck · 3 years ago
It's always nice to read and know I am not an opinionated asshole, and other people share the misery. I admit I've been duped using OpenAPI. Generating the schema via FastAPI and Nest.js works pretty well. But like you, we have been sorely disappointed by the codegen.

Anyone care to suggest alternatives though, assuming we want to call from node to python? I actually believe that having api packages with types is one of the only things startups should take from the enterprise world. I thought about GRPC, I had good experience with it as a developer, but the previous company had a team of people dedicated just to help with the tooling around GRPC and Protobufs.

So I picked OpenAPI, figuring simple is better, and plaintext over http is simpler. and currently I do believe it's better than nothing, but not by much. I am actually in the process of trying to write my own codegen and seeing how far I can get with it.

Are protobufs with gRPC really the way to go nowadays? Or should a startup of 20 developers just give up, document the API in some shared knowledge base, and leave it at that?

easton · 3 years ago
NSwag does a wonderful job of generating TypeScript clients from OpenAPI specs. Definitely give it a shot before killing your current setup.

https://github.com/RicoSuter/NSwag (It sucks in any OpenAPI yml, not just ones from Swashbuckle/C#)

dsinghvi · 3 years ago
@sthuck I'm working on an alternative in this space called Fern. Like gRPC, you define your schema and use it to generate client libraries + server code. Fern integrates with FastAPI/Express/Spring and generates clients in Python, Typescript, and Java.

Check out this demo: https://www.loom.com/share/42de542022de4e55a1349383c7a465eb. Feel free to join our discord as well: https://discord.com/invite/JkkXumPzcG.

onetrickwolf · 3 years ago
Yeah I had this experience too. I figured at least there'd be static checking that would at least make sure we aren't going off spec but there isn't really. So you just have a spec that slowly becomes out of sync with the code until it's basically useless. Just seems like double work for almost no benefit.
LelouBil · 3 years ago
It depends on the tools you use. You can use testing proxies that validate all requests and responses against the spec, or you can do server code gen (with interfaces/subclasses, for example in Java) so you are forced to adhere to the spec.
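The server code gen approach can be sketched in Python terms: the generated artifact is an abstract base class, so forgetting to implement an operation from the spec fails at startup rather than at request time. (Hypothetical operation names; real generators emit interfaces in Java, protocols or ABCs in Python.)

```python
from abc import ABC, abstractmethod

class SpeakersApiBase(ABC):
    """This class would be (re)generated from the spec on every change."""

    @abstractmethod
    def list_speakers(self) -> list: ...

    @abstractmethod
    def create_speaker(self, name: str) -> dict: ...

class SpeakersApi(SpeakersApiBase):
    """Hand-written implementation, forced to match the contract."""

    def __init__(self):
        self._speakers = []

    def list_speakers(self) -> list:
        return list(self._speakers)

    def create_speaker(self, name: str) -> dict:
        speaker = {"id": len(self._speakers) + 1, "name": name}
        self._speakers.append(speaker)
        return speaker

# A missing operation fails at instantiation, not when a client calls it:
api = SpeakersApi()
assert api.create_speaker("Ada")["id"] == 1
```

Regenerating the base class after a spec change then surfaces every endpoint the implementation no longer satisfies.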
bob1029 · 3 years ago
I actually got the OpenAPI docgen magic to work 100% all the way into Azure API Management Service such that our branded portal's docs were being updated based upon code comments each time upon merge. It really is something to marvel at. It actually worked.

That said, I didn't like the amount of moving pieces, annotation soup in code, etc. I got rid of all of it. Instead of relying on a fancy developer web portal with automagically updating docs, I am maintaining demo integration projects in repositories our vendors will have access to. I feel like this will break a hell of a lot less over time and would be more flexible with regard to human factors. Troubleshooting OpenAPI tooling is not something I want myself or my team worrying about right now.

ciwchris · 3 years ago
Good to know. I'd like to learn about the process you had set up and the number of moving pieces it required. Have you written about this process? Can I read about it somewhere?
groestl · 3 years ago
1) hits home. The lack of a proper AST, "logicless" Mustache templates for code generation, lack of tests... Also, OpenAPI seems to allow features in the language before at least a handful of first-class code generators support them, just because one generator does. And it's not that the others refuse - they just produce broken code.
jontro · 3 years ago
As for point 1), I fully agree. I'm using it a lot currently due to a lack of alternatives, mainly with Java. Swagger codegen is the one I've had most success with, but both openapi-generator and swagger-codegen share the same problems.

For internal projects we use grpc which is a breeze to use in comparison.

Deleted Comment

Dead Comment

furyofantares · 3 years ago
I'm certainly not going to be the only one who, confused, read fairly far in before realizing I didn't see the P in OpenAPI.
jeron · 3 years ago
reminds me of when Snap-On tools had a nice bump in stock price prior to Snapchat IPO
stefankuehnel · 3 years ago
You are definitely not the only one. It happened to me too. xD
esafak · 3 years ago
OpenAI's plugins are based on OpenAPI.
Eduard · 3 years ago
The https://github.com/OAI/ Github group name is indeed deceptive.
samspenc · 3 years ago
I wonder if it was just a matter of them getting that GitHub handle way before OpenAI was a real thing.

Update: per Wikipedia, looks like OpenAPI was founded in 2010-11 so that would make sense https://en.wikipedia.org/wiki/OpenAPI_Specification

low_tech_punk · 3 years ago
__jonas · 3 years ago
you are not to be honest, and I work with Swagger APIs every day…
_ea1k · 3 years ago
I mean, chatgpt does a decent job of working with openapi docs.
nvrmnd · 3 years ago
yes, I can confirm this.
stickfigure · 3 years ago
I just wish they'd stop using map keys. Use arrays:

    paths:
      - name: "speakers"
        requests:
          - name: createSpeaker
            method: post
This structure would have allowed adding request name to the schema without breaking everything.

This really goes for anyone building REST/JSON APIs. Please avoid dynamic keys; whatever you think the "primary key" is today, it may be different tomorrow. Clients can easily hash an array of objects into a map if they need it.
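The "clients can hash an array into a map" point, sketched in Python (hypothetical data following the YAML above):

```python
# Wire format stays an array with an explicit "name" field; clients that
# want keyed lookup build whatever map they need, client-side.
paths = [
    {"name": "speakers", "requests": [{"name": "createSpeaker", "method": "post"}]},
    {"name": "sessions", "requests": [{"name": "listSessions", "method": "get"}]},
]

by_name = {p["name"]: p for p in paths}
assert by_name["speakers"]["requests"][0]["method"] == "post"

# If tomorrow's "primary key" is different, nothing on the wire changes:
by_first_request = {p["requests"][0]["name"]: p for p in paths}
assert by_first_request["listSessions"]["name"] == "sessions"
```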

echelon · 3 years ago
Why not accept map keys as nullable similar to what non-required fields in protobuf do? Wouldn't arrays open you up to duplication?

More broadly, I'm interested in sparse field updates vs. full payload updates and how each of these handle nullable / emptyable fields. I haven't seen any protocol or standard handle these well.

withinboredom · 3 years ago
Allow me to introduce you to RFC 6902[1]: a standard for updating a resource without sending the whole thing. There are libraries for almost every language as well.

[1]: https://datatracker.ietf.org/doc/html/rfc6902/
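For a feel of the format, here is a minimal sketch of applying an RFC 6902 patch to a plain dict (object members only; a real implementation also handles arrays, "move", "copy", "test", and JSON Pointer escaping per RFC 6901):

```python
import copy

def apply_patch(doc, patch):
    """Apply a list of {"op", "path", "value"} operations to a nested dict."""
    doc = copy.deepcopy(doc)  # RFC 6902 is defined on a whole-document basis
    for op in patch:
        *parents, key = op["path"].lstrip("/").split("/")
        target = doc
        for p in parents:
            target = target[p]
        if op["op"] in ("add", "replace"):
            target[key] = op["value"]
        elif op["op"] == "remove":
            del target[key]
        else:
            raise NotImplementedError(op["op"])
    return doc

doc = {"user": {"name": "Ada", "email": "ada@example.com"}, "active": True}
patch = [
    {"op": "replace", "path": "/user/name", "value": "Grace"},
    {"op": "remove", "path": "/user/email"},
]
result = apply_patch(doc, patch)
assert result == {"user": {"name": "Grace"}, "active": True}
assert doc["user"]["name"] == "Ada"  # original left untouched
```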

handrews · 3 years ago
One of the Moonwalk discussions is indeed about moving from objects to arrays for many structures: https://github.com/OAI/moonwalk/discussions/32

Also, I agree with the person who mentioned JSON Patch (RFC 6902), which I feel is an under-rated and underused technology. While less intuitive than JSON Merge Patch (RFC 7396), it is far more powerful. I have used both together, using JSON Merge Patch where possible to keep things more readable and intuitive, and using JSON Patch where JSON Merge Patch can't do what is needed. Although if most of your changes need JSON Patch, I find it's better to just stick with that.
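For contrast, JSON Merge Patch (RFC 7396) is simple enough to sketch in full: the patch mirrors the document, null deletes a member, and anything else replaces it - which is exactly why it cannot express "set this field to null" or edit inside arrays:

```python
def merge_patch(target, patch):
    """Apply an RFC 7396 merge patch; returns a new value, target unchanged."""
    if not isinstance(patch, dict):
        return patch  # non-object patch replaces the target wholesale
    result = dict(target) if isinstance(target, dict) else {}
    for key, value in patch.items():
        if value is None:
            result.pop(key, None)  # null means "delete this member"
        else:
            result[key] = merge_patch(result.get(key), value)
    return result

doc = {"title": "Hello", "author": {"name": "Ada", "email": "a@example.com"}}
patch = {"title": "Goodbye", "author": {"email": None}}
assert merge_patch(doc, patch) == {"title": "Goodbye", "author": {"name": "Ada"}}
```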

taeric · 3 years ago
Hard not to have major WSDL flashbacks in the OpenAPI project I have running right now. Between the lack of support for the latest OpenAPI when generating documentation in Sphinx, the custom attributes needed for AWS API Gateway deployment, and the general grossness of adding the OPTIONS requests for CORS access, nothing has worked as easily as I would have thought. Especially given so much of the hype.

Worse, too many of the client generation libraries all look to be abandoned. With no real indication for me to know which would have a good future.

jjice · 3 years ago
I consume WSDLs and produce OpenAPI very regularly, and you're very right that we've reinvented the wheel. On the bright side, REST APIs are much easier to work with ad hoc, and OpenAPI (not sure if this is really because of the format or because we just care more now) often has better associated docs.
talideon · 3 years ago
The single dumbest thing we on a team did on a project I used to work on was use the AWS API Gateway. I have my problems with OpenAPI itself, but API Gateway itself is a half-assed mess. If we'd been able to wait a little longer, the lambda serving API requests could've just gotten the HTTP requests directly without that useless heap getting in the way.
taeric · 3 years ago
This sounds surprising to me. Do you have a link that goes over the problems?
nerdponx · 3 years ago
It's a shame about the tooling. It seems like lack of corporate sponsorship may be the problem. I know that many companies use OpenAPI internally, but of course they are never willing to actually contribute to or fund the tools that they depend on.
taeric · 3 years ago
Just looking at all of the companies in this post that are chiming in to say they are helping the tooling, I think there is no lack of effort. I agree that having a big sponsor would help clarify direction.
bterlson · 3 years ago
Since there's a lot of show and tell going on, I'll jump in too I suppose.

I work on a tool in this space called TypeSpec (aka.ms/typespec) that aims to address some of the authoring concerns folks have with OpenAPI. We're a language that feels a lot like TypeScript, with support for high-level features you might be used to in a more typical PL, but compiles to high quality OpenAPI 3.0 you can feed into your existing code/docs generation pipeline. You can see this in action on our playground: https://cadlplayground.z22.web.core.windows.net. We also support protobuf and (once my PR gets merged) JSON Schema targets.

We're not yet to beta (obviously, no website even) but we're hoping to get there relatively soon. Happy to hear any thoughts, especially from folks using OpenAPI.

dcre · 3 years ago
I looked through the docs and now I have a question! It seems like part of the advantage here is being able to represent things in the spec that OpenAPI might have a hard time with. For example, a ResultsPage<T> template represents structure that's shared across a bunch of paginated endpoints regardless of the type of the items. In OpenAPI, each of those responses has to get its own type (ResultsPageA, ResultsPageB, etc.), and presumably in the generated OpenAPI spec, the fact of that shared structure must be lost, and therefore cannot be picked up by client generators that work on the OpenAPI spec.

How do you think about the relation to OpenAPI v3 (i.e., setting aside possible improvements in v4) — is the goal of TypeSpec to avoid doing things that are too far afield from what you can represent in OpenAPI, or is the OpenAPI thing more like a bridge to adoption so people can use their existing generators, but in the future you imagine people generating clients from TypeSpec directly?
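The ResultsPage<T> situation, stated in type-system terms (hypothetical types; this is the shared structure that gets monomorphized into ResultsPageA, ResultsPageB, ... when emitted as OpenAPI 3, losing the sharing):

```python
from dataclasses import dataclass, field
from typing import Generic, List, Optional, TypeVar

T = TypeVar("T")

@dataclass
class ResultsPage(Generic[T]):
    """One generic definition covering every paginated response."""
    items: List[T] = field(default_factory=list)
    next_cursor: Optional[str] = None

@dataclass
class Speaker:
    id: int
    name: str

# The generic relationship is explicit here; in generated OpenAPI it
# becomes a standalone ResultsPageSpeaker schema with no link back to T.
page: ResultsPage[Speaker] = ResultsPage(items=[Speaker(1, "Ada")])
assert page.items[0].name == "Ada" and page.next_cursor is None
```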

bterlson · 3 years ago
This is a great question. We think OpenAPI is great and TypeSpec is a great way to write OpenAPI, but OpenAPI has some challenges generating high-quality, language-idiomatic code for complex services. Generating code straight from TypeSpec can drive better codegen in some cases such as the one you mention, especially when combined with custom libraries and emitters. Incidentally, this is how we are creating many client libraries for Azure services.

That said, these are not opposing choices really, or a bridge to anything. OpenAPI works great for probably most http services, has a huge ecosystem, and enjoys wide support across the industry so I'd expect many folks to continue to leverage our OpenAPI emitter to take advantage of that.

In general we don't limit ourselves to things which can be trivially compiled to OpenAPI but try to be super general purpose. We support protobuf and intend to support more protocols going forward, and also have experimented with generating other things like JSON RPC, ORMs, db migrations, etc.

dcre · 3 years ago
This is great. After a couple of years fighting OpenAPI in various ways, I can definitely see the case for giving the spec its own proper language that’s not JSON. Seems inspired by GraphQL’s really nice specs (IMO the only nice thing about GraphQL).
bterlson · 3 years ago
Thank you! We take most of our inspiration from TypeScript and sometimes C#, but I agree that gql demonstrates the value of terse, highly readable specs quite nicely.
wg0 · 3 years ago
Much of the OpenAPI tooling has not even moved past OpenAPI 2.0. Some maintainers outright refuse to update to Swagger/OpenAPI 3.0 [0], and others have issues open since 2019 [1] with no resolution in sight (because these are individuals working out of passion, and the spec is complex to implement). And yet here we have OpenAPI spec 4.0.

All this is trying to do RPC over HTTP in a fashion that was deemed virtuous in some doctoral thesis.

I wish there were better alternatives for RPC that work everywhere including browsers.

EDIT: typos

[0]https://github.com/go-swagger/go-swagger/issues/1122#issueco...

[1]https://github.com/swaggo/swag/issues/386

handrews · 3 years ago
OAS 3.0 is pretty well-supported by now. One of the biggest lags was Swagger, and they now support 3.0 and are finally making progress on 3.1.

One factor in 3.1 support is that it came out more-or-less concurrently with JSON Schema draft 2020-12, and depends on it. 2020-12 support has recently become more common across more languages, and we're seeing 3.1 work pick up the pace a bit.

But (from years of experience working on these standards), there is _always_ a lag in adoption. You can't just sit and wait until everyone "catches up" because that won't really shorten the lag to the next major version (OAS 3.1, despite the numbering, had significant enough changes to lag like a major version).

So while I'd agree that it's slower than if there were a clear and well-funded owner of things (which is closer to the situation with AsyncAPI), it's not _unusually_ slow as these things go.

incrudible · 3 years ago
The doctoral thesis does not describe RPC at all. It is a more general description of a request/response pattern for navigation of something like a website, by an agent. I would wager that virtually nobody read the thing, and some of those that actually did drew bad conclusions, e.g. pushing for HATEOAS in their API.
nerdponx · 3 years ago
While I am more optimistic about its usefulness than you, I agree that it's a little ridiculous to announce v4 when v3 was out for over a year before it had decent support from tools like Postman and Swagger UI. There seems to be a disconnect between the people creating the standard and the people implementing it.
lolinder · 3 years ago
The big weak point I've always seen in OpenAPI is that every change in the server needs to be mirrored in the spec. This opens up a lot of surface area for mistakes.

What I really want is a way to generate clients from the server source. I realize that this would require a highly opinionated web server with strong typing on all endpoints, but that just sounds like extra value to me.

Are there any web frameworks that enable generating clients like this, whether through a generated OpenAPI spec or otherwise?

lyu07282 · 3 years ago
> Are there any web frameworks that enable generating clients like this, whether through a generated OpenAPI spec or otherwise?

I think perhaps you just never realized this has been common practice for a long time...

There are lots and lots of web service frameworks that do that: FastAPI in Python, Spring in Java, Play in Scala (iheartradio/play-swagger), rocket/okapi in Rust, and many more. You just need some introspection; it's not a difficult thing to do.
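To illustrate the introspection idea with a toy, stdlib-only sketch (the handler and mapping are made up, not any framework's real API): derive an OpenAPI-ish operation object from a typed handler's signature.

```python
# Toy sketch of spec-from-code introspection: read a handler's type
# hints and docstring, emit an OpenAPI-style operation object.
import inspect
from typing import get_type_hints

def get_user(user_id: int) -> dict:
    """Fetch a user by id."""
    return {"id": user_id}

# Map Python annotations to OpenAPI schema types.
PY_TO_OAS = {int: "integer", str: "string", float: "number", bool: "boolean"}

def describe(handler) -> dict:
    hints = get_type_hints(handler)
    hints.pop("return", None)  # only parameters matter here
    return {
        "operationId": handler.__name__,
        "summary": inspect.getdoc(handler),
        "parameters": [
            {"name": name, "schema": {"type": PY_TO_OAS.get(tp, "object")}}
            for name, tp in hints.items()
        ],
    }
```

Real frameworks layer routing, request bodies, and response schemas on top, but the core mechanism is this kind of reflection over typed endpoints.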

oddevan · 3 years ago
I'm trying to go in this direction with an API I'm building at the moment in PHP: https://github.com/smolblog/smolblog-core/tree/feature/api-b... It uses a combination of definition-in-code (also used to translate the endpoint classes to the outside framework), reflection, and PHP annotations to generate the OpenAPI spec which I'm loading into Swagger to do testing.
dcre · 3 years ago
Plenty of frameworks let you generate the spec from your server code. Nest.js is one off the top of my head. Generate the spec from your server code, version it in-repo, and write a test to run in CI that makes sure the spec is up to date with the code.
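That CI check can be a few lines. A sketch, assuming a hypothetical `build_openapi_spec()` that wraps whatever introspection hook your framework exposes (e.g. FastAPI's `app.openapi()` or Nest's `SwaggerModule.createDocument()`):

```python
import json

def build_openapi_spec() -> dict:
    # Stand-in for your framework's introspection hook; the spec
    # returned here is only illustrative.
    return {"openapi": "3.0.3", "paths": {"/users": {"get": {}}}}

def spec_is_up_to_date(committed_text: str) -> bool:
    """CI guard: fail the build when the freshly generated spec
    has drifted from the spec committed in the repo."""
    return json.loads(committed_text) == build_openapi_spec()
```

In CI you'd read the committed `openapi.json` and assert `spec_is_up_to_date(...)`, so forgetting to regenerate the spec fails the build instead of shipping a stale contract.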
ssijak · 3 years ago
Something like:

https://ts-rest.com/

or https://github.com/sukovanej/effect-http ?

there are several others in TS world.

tkfu · 3 years ago
We're using tapir (https://tapir.softwaremill.com/en/latest/), and are pretty happy with it.
arcanemachiner · 3 years ago
Django Rest Framework + drf-spectacular. It needs a couple tweaks here and there but it's great.

Apparently django-ninja (a different REST framework for Django) also generates an OpenAPI spec but I haven't tried it.

cachance · 3 years ago
I've also had success with drf + drf-spectacular and then using the output OpenAPI spec to generate a typescript-react client with a third-party code generator.

It mostly just works, like you said, and almost acts as a framework guardrail: if the inferred client types are comprehensive and unsurprising then the view tends to be concise; a wonky type indicates there may be something nonstandard in the view that could be fixed by cleaner framework-abiding code.

breul99 · 3 years ago
There are various tools that will do some of this, utoipa [1] for rust and play-swagger [2] for scala + play that I've used in the past and enjoyed. They generate a significant portion of your spec for you, then a client can be generated from the spec.

[1] https://github.com/juhaku/utoipa

[2] https://github.com/iheartradio/play-swagger

danappelxx · 3 years ago
I suspect OpenAPI advocates would argue you should start with the spec and use it to generate both the client and server. This is already a common pattern in other RPC definition languages such as gRPC. You _could_ write a server to match the gRPC spec, but why would you?
lolinder · 3 years ago
That works great if you're starting from scratch, but not so much on a brownfield project. The way the generators are written, even adding a single new endpoint can't be done after code has been written, because the generator emits a whole file of route stubs.
clintonb · 3 years ago
Not sure if I’m an advocate, but definitely a fan. I use frameworks (e.g., Django REST Framework, Nest.js) to build the server and generate my OpenAPI spec. I find it faster than writing YAML/JSON manually, and I end up with a half-working server at the end of it all.
doublerebel · 3 years ago
In addition to the many other solutions mentioned, Cloudflare can export an OpenAPI spec from the proxied traffic.

Optic/UseOptic does a similar traffic watch and spec export from local builds.

dandevs · 3 years ago
While I have many thoughts about OpenAPI's ease-of-use and readability, I'm going to focus my commentary on the proposal at hand.

1. "The primary goal of this proposal for a major new version of OpenAPI is to make it more approachable." -> This is a sound objective. I'm glad to see less-nested structures, which will improve readability. They'll make it easier to scroll the JSON/YAML and follow the logic.

2. "OpenAPI has become the de facto standard for API descriptions." -> With OpenAI's choice of OpenAPI as the standard for ChatGPT plugins, this is more true than ever. It's also great that responses can now be given names, which will make it easier for AIs (e.g., ChatGPT, Copilot) to call an API accurately.

3. There's no mention of improving the quality of codegen (e.g., client libs, server stubs). I'm surprised that Moonwalk is silent on this topic.

handrews · 3 years ago
If you dig through enough of the discussions, you will definitely see talk about codegen. There are more discussions happening in the background that haven't really percolated out to the Moonwalk repo because it's all pretty hand-wavy right now. When I advocated for OpenAPI 3.1 to adopt JSON Schema 2020-12, which brings the custom vocabulary support whose design I led for JSON Schema, my intention was for additional vocabularies to improve code generation semantics. This has not really happened, for a variety of reasons, including that vocabularies weren't quite done and I ended up unable to follow up on the whole thing for several years.

It's not entirely clear to me where things go from here, but I suspect Moonwalk will address it in some way. I'd like to focus on it (from the OpenAPI perspective more than the JSON Schema one, specifically), but I haven't found anyone who would sponsor that work (I guess the dollars are flowing more to the alternatives several folks have mentioned).