Now that the GIL is on its way out and adding a JIT is an inevitable next step, we're looking at everything written in Java being replaced with Python over the next 10-20 years, depending on how soon people retire in a given geographic region.
The generation of people who, between 1995 and 2010, rewrote everything from C++ and COBOL into Java is now in their late 40s and 50s, so it's safe to assume there will be plenty of work for Python people until the next generation begins to mature around 2035-2040.
Now, does it make sense today to rewrite something like a proxy in Python? A proxy is not a very complex type of software in itself.
If, starting today, you wanted to build a proxy for something like StackOverflow within a year, it's better left to lower-level languages like Go and Rust. These are replacements for C and C++ rather than Java, so they would likely be a better choice.
That said, my real message is, don't stick to writing such simple software for too long anyway.
If it's for educational purposes, to learn how all the various protocols work, or how to design server-side software, or to learn how to build an online community, that's a different story.
But Python is a high-level language that lets you easily accomplish things the lower-level languages just aren't well suited for, so once you've written your first proxy and it can handle a few hundred or thousand requests/s, pick a high-level goal and work towards that instead! :-)
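If you want a feel for how little code a first-cut proxy takes in Python, here is a minimal asyncio TCP relay. This is a sketch only (the names are made up, and there are no timeouts, size limits, or error handling, all of which a real proxy needs):

```python
import asyncio

async def pipe(reader, writer):
    # Copy bytes one direction until EOF, then close the write side.
    try:
        while data := await reader.read(4096):
            writer.write(data)
            await writer.drain()
    finally:
        writer.close()

async def handle(client_reader, client_writer, upstream_host, upstream_port):
    # One connection: open the upstream and relay in both directions.
    up_reader, up_writer = await asyncio.open_connection(upstream_host, upstream_port)
    await asyncio.gather(
        pipe(client_reader, up_writer),
        pipe(up_reader, client_writer),
    )

async def run_proxy(listen_port, upstream_host, upstream_port):
    server = await asyncio.start_server(
        lambda r, w: handle(r, w, upstream_host, upstream_port),
        "127.0.0.1", listen_port)
    async with server:
        await server.serve_forever()
```

That's the whole core; everything else in a production proxy (limits, logging, TLS, backpressure tuning) is bolted around those two `pipe` loops.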
P.S.: Yes, I am looking for a high-level project to participate in, or just to help with the knowledge I have.
I used the example rules included in the repository.
Results: https://pastebin.com/61Fyy2Pe (too long to paste here... sorry)
Request Time: The average request time is about the same in all tests, ranging from 0.006 to 0.007 seconds. Max request time does increase with more requests; it peaks at 0.136 seconds in the largest test of 100,000 requests, which shows that some requests take much longer.
Requests per Second: Throughput is highest in the smaller tests, around 143 RPS for 10 requests, and drops to about 122 RPS for 100,000 requests. A probable conclusion is that as the number of requests increases, a slight slowdown develops in the system.
Percentiles: The median, which usually sits at approximately 0.0035 seconds, means half of the requests complete in under that time. The far higher values of the 90th and 99th percentiles show that while most requests are fast, the remainder take considerably longer.
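For context on how such percentile figures come out of raw timings, here is a small illustration with synthetic numbers (not the pastebin data); it also shows why the mean can stay nearly flat while the 99th percentile grows:

```python
import statistics

# Synthetic long-tailed latencies: 990 fast requests, 10 slow ones.
timings = [0.003] * 990 + [0.05] * 10

mean = statistics.mean(timings)
median = statistics.median(timings)

# statistics.quantiles with n=100 yields the cut points p1..p99.
cuts = statistics.quantiles(timings, n=100)
p90, p99 = cuts[89], cuts[98]

print(f"mean={mean:.4f}s median={median:.4f}s p90={p90:.4f}s p99={p99:.4f}s")
```

With 990 fast requests and 10 slow ones, the mean and median barely move while p99 lands near the slow value: exactly the long-tail pattern described above.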
In general, it performs quite well under a reasonable load but struggles a bit as the number of requests increases.
I can test OKD/k8s on Thursday at the earliest.
I see some merit in moving the size limits etc. out of the application to reduce CPU waste there on overly large requests, but either way I'm still burning some CPU on it.
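To make the "validation in front" idea concrete, here is a sketch of the kind of early Content-Length check a proxy could do before forwarding anything; the limit and helper name are made up for illustration (nginx does the equivalent with `client_max_body_size`):

```python
# Assumed limit for illustration only, not taken from this thread.
MAX_BODY_BYTES = 1_000_000

def should_reject(headers: dict) -> bool:
    """Return True if the request should be refused before its body is read."""
    value = headers.get("content-length", "0")
    if not value.isdigit():
        return True  # malformed header: refuse rather than guess
    return int(value) > MAX_BODY_BYTES
```

For example, `should_reject({"content-length": "5000000"})` returns True, so the oversized request never reaches the application and the expensive body parsing never happens there.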
Is the use case for this mostly about sticking some validation in front of a system whose code you can't or don't want to modify for some reason, like in front of WordPress?