For my uses it's great that it has both test suite mode and individual invocation mode. I use it to execute a test suite of HTTP requests against a service in CI.
I'm not a super big fan of the configuration language: the blocks are not intuitive, and I found the documentation lacking on which assertions are supported.
Overall, the tool has been great and extremely valuable.
I started using interface testing when working on POCs, and I found it helps with LLM-assisted development. The tests directly exercise the HTTP methods, which lets the implementation stay fluid and evolve as the project evolves.
I also found the separation of testing very helpful; it further enforces the separation between interface and implementation. Before Hurl, the tests I wrote would be written in the test framework of the language the service is written in. The hurl-based tests really help to enforce the "client" perspective. There is no backdoor data access or anything, just strict separation between interface, tests and implementation :)
Maintainer here, thanks for the feedback. 6-7 years ago, when we started working on Hurl, we began with a JSON and then a YAML file format. We gradually convinced ourselves to write a new file format, and I completely understand that it might feel weird. We tried (maybe not succeeded!) to have something simple for the simple case...
I'm really interested in issues with the documentation: it can always be improved, and any issue is welcome!
Yeah, love Hurl, we started using it back in 2023-09.
We had a test suite using Runscope, and I hated that changes weren't version controlled. It took a little grunt work, but I converted them to Hurl (where were you, AI?) and got rid of Runscope.
Now we can see who made what change when and why. It's great.
So, many folks I know and I have taken to writing tests in the form of ".http" files that can be executed by IDE extensions in VS Code/IDEA.
Those basically go in the form
POST http://localhost:8080/api/foo
Content-Type: application/json

{ "some": "body" }
And then we have a 1-to-1 mapping of "expected.json" outputs for integration tests.
We use a bespoke bash script to run these .http files with cURL, then compare the outputs with jq, log success/failure to the console, and write "actual.json".
Can I use HURL in a similar way? Essentially an IDE-runnable example HTTP request that references a JSON file as the expected output?
And then run HURL over a directory of these files?
You can use hurl in this way. I have projects with a test directory of hurl files, one hurl file per test case. The cases can run one or more http requests. The hurl file can reference external files, capture values from responses for subsequent requests, validate status and outputs. Hurl has various test runner modes and will optionally output overall test results in various parsable formats if you have a larger reporting framework that you would like to hook into.
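To make that concrete, here is a rough sketch of what one such test case could look like (the endpoint, field names, and expected.json path are made up for illustration; the trailing file,...; line is, if I remember the syntax right, Hurl's way of asserting the response body against an external file):

# create a resource and capture its id for the follow-up request
POST http://localhost:8080/api/foo
Content-Type: application/json
{ "some": "body" }
HTTP 201
[Captures]
foo_id: jsonpath "$.id"

# fetch it back and compare the body against the external expected file
GET http://localhost:8080/api/foo/{{foo_id}}
HTTP 200
file,expected.json;

Running something like `hurl --test tests/*.hurl` then gives you a pass/fail per file, plus the optional report outputs mentioned above.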
Obviously, better IDE integration and support for gRPC and WebSocket would be very cool.
A favorite of mine is to be available through official `apt`: there has been some work, but it's kind of stuck. The Debian integration is the most difficult integration we have to deal with. It's not Debian's fault; there is a lot of documentation, but we've struggled a lot and failed to understand the process.
Hurl is awesome. A while back I ported a small web service from Python to Rust. Having rigorous tests of the public API is amazing; a language-independent integration test! I was able to swap it out with no changes to the public API or website.
Worth mentioning that using Hurl in Rust specifically gives you a nice bonus feature: integration with cargo test tooling. Since Hurl is written in Rust, you can hook into hurl-the-library and reuse your .hurl files directly in your test suite. Demo: https://github.com/perrygeo/axum-hurl-test
I must say, the sample section[1] does an excellent job of making a case for the tool, especially to people who are inclined to make a snap judgement about the usefulness of the tool within the first 5 minutes (I'm sometimes guilty of this).
I took a lot of inspiration from this project when designing my own HTTP testing tool[0]. We needed to be able to run hundreds of tests quickly, and in parallel. If that is something you need and you like Hurl, then you might like Nap also.
The main difference is that nap works off of YAML. You can get an idea of how it works by clicking on "The Basics"[0] on the left-hand side of the page.
https://github.com/mistweaverco/kulala.nvim is another REST-ish plugin for Neovim (it can do gRPC too). It is intended to be as compatible with the JetBrains one as possible.
(After I saw the IntelliJ one from a colleague, I went searching for something like it in Neovim. That's the best one I found. It's not perfect, but it works.)
Edit: The tool from OP looks very neat though. I will try it out. Might be a handy thing for a few prepared tests that I run frequently.
Yep, I've played with Hurl and find it nice, but recently I have been leaning into the .http stuff more. IntelliJ has it built in, there's the plugin you linked, and then for the CLI I've used httpYac. No "vendor lock-in", and it's really easy to share with copy & paste or source control.
I think the idea is nice, but I am struggling to see why I should use it. I write using Django, which has plenty of hooks for testing within the framework. Why switch to a tool which is blind to my backend and is going to create more work to keep in sync? At minimum, I lose the ability to easily drop into my debugger to inspect why a result went wrong.
There is probably something to be said for keeping a hard boundary between the backend and testing code, but this would require more effort to create and maintain. I would still need to run the native test suite, so reaching out to an external tool feels a little weird. Unless it was just to ensure an API was fully generic enough for people to run their own clients against it.
> Why switch to a tool which is blind to my backend and is going to create more work to keep in sync? At minimum, I lose the ability to easily drop into my debugger to inspect why a result went wrong.
I don't use hurl but I've used other tools to write language agnostic API tests (and I'm currently working on a new one) so here's what I like about these kinds of tests:
- they're blind to the implementation, and that's actually a pro in my opinion. It makes sure you don't rely on internals; you just look at the input and the output
- they can easily serve as documentation because they're language agnostic and relatively easy to share. They're great for sharing between teams in addition to or instead of an OpenAPI spec
- they actually test a contract, and can be reused in case of a migration. I've worked on a huge migration of a public API from Perl to Go, and we wanted to keep roughly the same contracts (since the API was public). So we wrote tests for the existing Perl API as a non-regression harness and could keep the exact same tests for the Go API, since they were independent of the language. Keeping the same tests gave us greater confidence than if we had had to rewrite them, and it was easy to add more during the double-run/A-B test period (see the sketch after this list)
- as a developer, writing these forces you to switch context and become a consumer of the API you just wrote. I've found it easier to write good-quality tests with this method
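As a purely hypothetical illustration (endpoint and fields are made up), a contract test of that kind expressed as a Hurl file can be as small as the following, and the same file runs unchanged against the old and the new implementation:

# contract: GET a user returns JSON with a stable id and a name field
GET http://localhost:8080/api/users/42
HTTP 200
[Asserts]
header "Content-Type" contains "application/json"
jsonpath "$.id" == 42
jsonpath "$.name" exists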
It's just an alternative to Postman and similar tools, so you don't have to start a whole damn Electron window just to test a few HTTP requests. It's somewhere between a curl script and Postman, so it hits the right spot for many.
We used Hurl to go from a Ktor web server to a Spring Boot rewrite (Java/Kotlin stack). It was a breeze to have a kind of specification test suite independent of the server stack, and it helped us a lot in the transition.
Another benefit is that we built a Docker image for production and wanted to have something light and not tied to the implementation for integration tests.
There is no obligation on you to use it, especially if you have better tooling for the tasks.
For my team's needs, I see the benefit of using a self-contained tool which doesn't require any extra modules to be installed or a venv-like environment to be activated (that's a real barrier when ensuring others can use it too). Not to mention it runs fast.
Testing headers is particularly nice, so you can test the configuration of web servers and LBs/CDNs.
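For example (the hostname and header values here are made up), a few header assertions in a Hurl file are enough to sanity-check caching and CDN behaviour:

# check that a static asset comes back cached and compressed
GET https://example.com/static/app.css
HTTP 200
[Asserts]
header "Cache-Control" contains "max-age"
header "Content-Encoding" == "gzip"
header "X-Cache" exists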
Loved Runscope; it served its purpose until something came along that offered the same plus version control.
Is your expected.json the actual response body, or is it an object containing body, status, header values, and time-taken, etc?
Where do you see hurl in the next 2 years?
[1]: https://github.com/Orange-OpenSource/hurl/issues/366
[1] https://github.com/Orange-OpenSource/hurl?tab=readme-ov-file...
[0] https://naprun.dev
[0] https://naprun.dev/the-basics/
https://marketplace.visualstudio.com/items?itemName=humao.re...
Which is a banger VS Code extension for all sorts of HTTP testing.
It is targeted more toward the Postman crowd, though. It may not be as lightweight.