https://github.com/wouterken/itsi/tree/main/examples
For each example, you can look at the included README and the Itsi.rb to get a glimpse of some of Itsi's capabilities.
This particular example crossed this painful divide 1 million times. I found it interesting that despite this disadvantage, the Crystal implementation still came out ahead of my identical, naively written Ruby implementation (warts and all). As the author of this post points out, for trivial operations that cross the interface at high frequency, finely tuned Ruby will easily take the lead!
That said, I still believe there are times when having the ability to write and interface with a performant, precompiled language (one that is somewhat familiar to the average Rubyist) in an ergonomic way that avoids the need to context-switch can be beneficial. Sure, performance is unlikely to match a finely tuned (but arguably harder to maintain) C or Rust extension, and ergonomics are unlikely to match an approach that sticks to pure Ruby, but it exposes a new middle ground which, at times, may just hit the right spot!
I'd imagine realistic examples of where this type of library could be useful might include:
- Providing an easy way to expose and use high-quality Crystal shards from within your Ruby program.
- Allowing you to easily write performant CPU or memory-intensive procedures for which reusable native libraries do not exist, and where the majority of the overall execution time can be spent within Crystal.
- As a way to glue several smaller Crystal shared objects together into a single application with Ruby, letting you avoid some of the long compile times you might typically see with a large monolithic binary.
I would definitely not suggest this library has any business:
- Blindly replacing swaths of Ruby methods, without any tangible performance metrics to back this decision.
- Replacing code that is already highly performant in pure Ruby (whether because it lends itself well to being JIT'd, is backed by an existing native library, etc.).
Funnily enough, if you take a look at the commit history of the project, you'll notice that last week I actually replaced the referenced example with one that better demonstrates a performance difference (even compared against YJIT) and crosses the FFI divide only once. This came as a result of having to introduce a Reactor to get the library to play nice in multi-threaded Ruby applications, which regrettably added even more overhead to the FFI interface and further hammers home the point that this library is not going to perform well in cases where you need to jump between Crystal and Ruby at high frequency.
Author of the gem here. Appreciate the attention!
I hacked this together last weekend to scratch an itch and have some fun. This got a lot more attention than I was expecting so early on.
I've had to scramble a bit to start hammering this into something that's less prototype, more legitimate project. I hope it's now close to a cohesive enough state that someone trying it out will have a relatively smooth experience.
Given the interest, I definitely plan to sink a bit more dedicated time into this project, to tick off a few key missing features and add some polish. It would be great to see it grow into something that can be useful to more than just me. Seems like there's definitely some shared desire for this type of tool.
It probably goes without saying, but you shouldn't convert large amounts of mission-critical code to depend on this (for now).
It's still early days and the API hasn't been "crystallized" yet...
Interestingly, the few users who did join tend never to bother with the more public social aspects of it and post exclusively in private. I think this reflects a bigger shift: the more genuine social media interactions we first enjoyed in old Facebook have largely migrated to private messaging apps, and the original experience we all remember fondly may simply not be repeatable in today's social media landscape.
That said, it should be very fast across the board.
FWIW, on my M1 Pro MacBook, if I run Itsi with a *single* worker and *single* thread (e.g. `itsi -w 1 -t 1`), I see these figures:
A hello world Rack app, e.g. `->(env) { [200, {}, ["hello world"]] }`, runs at approximately 100,000 requests per second using `wrk http://localhost:3000 -c 60`.
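For reference, that lambda is the entire app. Dropped into a `config.ru` (the conventional Rack entry point; the file name here is my assumption, it isn't part of the benchmark description), it looks like this:

```ruby
# config.ru: the "hello world" Rack app from the benchmark above.
# Every request returns a 200 status, an empty header hash, and a
# single-chunk body; there is no routing or middleware in between.
run ->(env) { [200, {}, ["hello world"]] }
```

Assuming Itsi picks up `config.ru` by default like other Rack servers, you'd start it with the same `itsi -w 1 -t 1` and point the `wrk` command above at it.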
A simple endpoint, e.g. `get "/" do |req| req.ok "Hello World" end`, runs at approximately 115,000 requests per second with the same `wrk http://localhost:3000 -c 60`.
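That snippet appears to use Itsi's own routing DSL rather than a Rack app. As a minimal sketch, an `Itsi.rb` containing just that endpoint would be:

```ruby
# Itsi.rb: a minimal sketch containing only the endpoint from the
# benchmark above. Worker and thread counts come from the CLI flags
# (`itsi -w 1 -t 1`), not from this file.
get "/" do |req|
  req.ok "Hello World"
end
```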
Static file serving of small files (no compression) approaches 150,000 requests per second.
In essence, it should easily be fast enough that HTTP server performance is very unlikely to be the bottleneck in anything but the most extreme workloads.
All these numbers increase further as you raise the worker count. (There are many performance-tuning knobs to twiddle.)