Tangential, but I practically owe my life to this guy. He wrote the flask mega tutorial, which I followed religiously to launch my first website. Then, right before launch, I hit a problem in the most critical part of my entire application: piping a fragged file in Flask. He answered my Stack Overflow question, I put his fix live, and the site went viral. Here's the link for posterity's sake: https://stackoverflow.com/a/34391304/4180276
When I was in college I discovered the flask mega tutorial and fell in love with programming. Switched from an economics degree to software engineering and now work in the industry.
Absolutely love seeing like a dozen people piling on to thank Mr Grinberg for his work, and indeed even the little things he does to help uplift others in the field. It’s a good reminder that a small helpful contribution, or a bit of teaching given at the right time, can be so valuable!
I also want to chime in and say how you changed my life. I did the same Flask megatutorial and that led me to leaving helpdesk and becoming a support engineer. Years later, and I'm now in big tech. Thanks Miguel!
Amazing to see all of the people thanking you! Great to see that gratitude is still alive and well. You seem to have touched a lot of lives through that mega tutorial! Wow!
Off-topic, but I absolutely loathe the new Flask logo. The old one[0] has this vintage, crafty feel. And the new one[1] looks like it was made by a starving high schooler experimenting with WordArt.
[0] - https://upload.wikimedia.org/wikipedia/commons/3/3c/Flask_lo...
[1] - https://flask.palletsprojects.com/en/stable/_images/flask-na...
I didn't know they had a new logo before reading your comment. It's been 2 years since I last searched for Flask, but yeah, the old logo was vintage, I preferred it, and the new one feels mid/sucks.
What the…? I guess I’ve been reaching for FastAPI instead of Flask these days, because I had no idea this happened. Didn’t all the Pallets projects have the old-timey logos? I wonder what happened.
For anyone else wondering whether to click to find out what "fragged file" means: no, it's not about Quake, and the linked page doesn't mention 'frag' at all. The question asks how to stream a file to the client in Flask, as opposed to reading it all into memory at once and then sending it on. I figured as much (also because of e.g. IP fragmentation), but this is the first time I've heard this alternative term for streaming.
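(For context, the usual Flask pattern for this, not necessarily verbatim what the linked answer does, is to return a generator wrapped in a Response so the file goes out in chunks instead of being loaded whole. A minimal sketch, with a hypothetical file path:)

    from flask import Flask, Response

    app = Flask(__name__)

    @app.route("/big-file")
    def big_file():
        def generate():
            # hypothetical path; read and yield fixed-size chunks so the
            # whole file never sits in memory at once
            with open("/tmp/big_file.bin", "rb") as f:
                while chunk := f.read(8192):
                    yield chunk
        return Response(generate(), mimetype="application/octet-stream")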
The accessibility of this material, and of the broader Python ecosystem, is truly incredible. After reflecting on this recently, I started finding ways to give back/donate/contribute.
Yet another appreciation story for Miguel’s mega tutorial. In 2017 I used it to create our wedding site and learn a bit of web dev (my background is in data science). To motivate myself to actually do it, I funded the then-ongoing refactoring of the tutorial. I am still very fond and proud of that first time I actually went and funded some open source effort; it gives you back more than you might expect.
Please don’t make benchmarks that sum up timings taken inside the loop. Just time the loop as a whole and divide by the iteration count. Reading the clock takes work of its own, and the jitter can mess with results.
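In sketch form, with a stand-in work() function for whatever is being measured:

    import time

    def work():
        return sum(range(100))  # stand-in for the code under test

    N = 100_000

    # Noisy: the clock is read twice per iteration, so timer overhead
    # and jitter get folded into the result
    noisy = 0.0
    for _ in range(N):
        t0 = time.perf_counter()
        work()
        noisy += time.perf_counter() - t0

    # Better: time the whole loop once and divide by the iteration count
    t0 = time.perf_counter()
    for _ in range(N):
        work()
    per_call = (time.perf_counter() - t0) / N
    print(f"{noisy / N:.3e}s vs {per_call:.3e}s per call")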
The real-world benchmark is measuring from invocation, both for cold launches and 'hot' ones (data cached from the last run).
Interestingly I might have only ever used the time (shell) builtin command. GNU's time measuring command prints a bunch of other performance stats as well.
The biggest thing PyPy adds is JIT compilation. This is precisely what the project to add JIT to CPython is working on these days. It's still early days for the project, but by 3.15 there's a good chance we'll see some really great speedups in some cases.
It's worth noting that PyPy devs are in the loop, and their insights so far have been invaluable.
> That said, I wonder if GIL-less Python will one day enable GIL-less C FFI? That would be a big win that Python needs.
I'm pretty sure that is what freethreading is today? That is why it can't be enabled by default AFAIK, as several C FFI libs haven't gone "GIL-less" yet.
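For what it's worth, you can probe this at runtime: CPython 3.13+ has sys._is_gil_enabled(), and on a free-threaded build it reflects whether an incompatible extension forced the GIL back on. A small sketch (guarded, since older builds lack the function):

    import sys

    # sys._is_gil_enabled() was added in CPython 3.13; on free-threaded
    # builds it returns False unless an incompatible C extension caused
    # the interpreter to re-enable the GIL at import time
    if hasattr(sys, "_is_gil_enabled"):
        print("GIL enabled:", sys._is_gil_enabled())
    else:
        print("this build predates free-threading")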
Can you clarify the concern? Coming from C, I've come to expect many dialects across many compiler implementations. It seems healthy and encourages experimentation. Is it not a sign of a healthy language ecosystem?
It's a culture thing. C culture is all about rolling your own bespoke solution, which encourages the formation of dialects. On the other hand, Python culture is all about "There should be one-- and preferably only one --obvious way to do it.": https://peps.python.org/pep-0020/#the-zen-of-python
Well, they added an experimental JIT, so that is one step closer to PyPy? Though I'd assume the trajectory is to build a new JIT rather than merge in PyPy; hopefully people learned a lot from PyPy along the way.
You write the FFI once and let hundreds or thousands of other developers use it. For one-off executables it rarely makes sense.
Mixing the use with other libraries provided by the Python ecosystem is another scenario. Do you really want to do HTTP in C, or do you prefer requests?
https://www.reddit.com/r/RedditDayOf/comments/7we430/donald_...
> [Donald Knuth] firmly believes that having an unchanged system that will produce the same output now and in the future is more important than introducing new features
This is such a breath of fresh air in a world where everything is considered obsolete after like 3 years. Our industry has a disease, an insatiable hunger for newness over completeness or correctness.
There's no reason we can't be writing code that lasts 100 years. Code is just math. Imagine having this attitude with math: "LOL loser you still use polynomials!? Weren't those invented like thousands of years ago? LOL dude get with the times, everyone uses Equately for their equations now. It was made by 3 interns at Facebook, so it's pretty much the new hotness." No, I don't think I will use "Equately", I think I'll stick to the tried-and-true idea that has been around for 3000 years.
Forget new versions of everything all the time. The people who can write code that doesn't need to change might be the only people who are really contributing to this industry.
> There's no reason we can't be writing code that lasts 100 years. Code is just math.
In theory, yes. In practice, no, because code is not just math, it's math written in a language with an implementation designed to target specific computing hardware, and computing hardware keeps changing. You could have the complete source code of software written 70 years ago, and at best you would need to write new code to emulate the hardware, and at worst you're SOL.
Software will only stop rotting when hardware stops changing, forever. Programs that refuse to update to take advantage of new hardware are killed by programs that do.
Are you by chance a Common Lisp developer? If not, you may like it (well, judging only by your praise of stability).
Completely sidestepping any debate about the language design, ease of use, quality of the standard library, size of community, etc... one of its strengths these days is that standard code basically remains functional "indefinitely", since the standard is effectively frozen. Of course, this requires implementation support, but there are lots of actively maintained and even newer options popping up.
And because extensibility is baked into the standard, the language (or its usage) can "evolve" through libraries in a backwards compatible way, at least a little more so than many other languages (e.g. syntax and object system extension; notable example: Coalton).
Of course there are caveats (like true, performant async programming) and it seems to be a fairly polarizing language in both directions; "best thing since sliced bread!" and "how massively overrated and annoying to use!". But it seems to fit your description decently at least among the software I use or know of.
Stability is for sure a very seductive trait. Also, I can totally understand the fatigue of chasing the next, almost-already-obsolete new thing.
>There's no reason we can't be writing code that lasts 100 years.
There are many reasons this is most likely not going to happen. Code, despite best efforts to achieve separation of concerns, is (in the best case) a highly contextual piece of work. Even for a simple program with no external libraries, there is a full compiler/interpreter ecosystem that forms a huge dependency. And the hardware platforms they abstract over are also a moving target. Change is the only constant, as we say.
>Imagine having this attitude with math: "LOL loser you still use polynomials!? Weren't those invented like thousands of years ago?
Well, that might surprise you, but no, they weren't. At least, they were not dealt with as they are taught and understood today in their most common contemporary presentation. When the Babylonians (c. 2000 BCE) solved quadratic equations, they didn't have anything near Descartes' algebraic notation connected to geometry, and there is a long series of evolutions in between, continuing to this day.
Mathematicians actually do make a lot of fancy innovative things all the time. Some fundamentals stay stable over millennia, yes. But some problems also stay unsolved for millennia until some outrageous move is made outside the standard.
To be fair, if math did have version numbers, we could abandon a lot of hideous notational cruft / symbol overloading, and use tau instead of pi. Math notation is arguably considerably worse than perl -- can you imagine if perl practically required a convention of single-letter variable names everywhere? What modern language designer would make it so placing two variable names right next to each other denotes multiplication? Sheer insanity.
Consider how vastly more accessible programming has become from 1950 until the present. Imagine if math had undergone a similar transition.
> There's no reason we can't be writing code that lasts 100 years. Code is just math. Imagine having this attitude with math: "LOL loser you still use polynomials!? Weren't those invented like thousands of years ago? LOL dude get with the times, everyone uses Equately for their equations now. It was made by 3 interns at Facebook, so it's pretty much the new hotness." No, I don't think I will use "Equately", I think I'll stick to the tried-and-true idea that has been around for 3000 years.
Not sure this is the best example. Mathematical notation evolved a lot in the last thousand years. We're not using Roman numerals anymore, and the invention of 0 or of the equals sign were incredible new features.
Math is continually updated, clarified and rewritten. 100 years ago was before the Bourbaki group.
> an insatiable hunger for newness over completeness or correctness.
I understand some of your frustration, but often the newness is in response to a need for completeness or correctness. "As we've explored how to use the system, we've found some parts were missing/bad and would be better with [new thing]". That's certainly what's happening with Python.
It's like the Incompleteness Theorem, but applied to software systems.
It takes a strong will to say "no, the system is Done, warts and missing pieces and all. Deal With It". Everyone who's had to deal with TeX at any serious level can point to the downsides of that.
If you look at old math treatises from important historical people you'll notice that they use very different notation from the one you're used to. Commonly concepts are also different, because those we use are derived over centuries from material produced without them and in a context where it was traditional to use other concepts to suss out conclusions.
But you have a point, and it's not just "our industry", it's society at large that has abandoned the old in favour of incessant forgetfulness and distaste for tradition and history. I'm by no means a nostalgic, but I still mourn the harsh disjoint between contemporary human discourse and historical. Some nerds still read Homer and Cicero and Goethe and Ovid and so on, but if you use a trope from any of those that would have been easily recognisable as such by Europeans for much of the last millennium, you can be quite sure that it won't generally be recognised today.
This also means that a lot of early and mid-modern literature is partially unavailable to contemporary people, because it was traditional to implicitly use much older motifs and riff on them when writing novels and making arguments, and unless you're aware of that older material you'll miss out on it. For e.g. Don Quixote most would need an annotated version which points out and makes explicit all the references and riffing, basically destroying the jokes by explaining them upfront.
Worth noting that few people use the TeX executable as specified by Knuth. Even putting aside the shift to pdf instead of dvi output, LaTeX requires an extended TeX executable with features not part of the Knuth specification from 1988.
Btw, while equations and polynomials are conceptually old, our contemporary notation is much younger, dating to the 16th century, and many aspects of mathematical notation are younger still.
While I think LaTeX is fantastic, I think there is plenty of low-hanging fruit to improve upon: the ergonomics of the language and its macros aren't great. If nothing else, there should be better investment in tooling and the ecosystem.
Mathematical notation has changed over the years. Is Diophantus' original system of polynomials that legible to modern mathematicians? (Even if you ignore the literally-being-written-in-ancient-Greek part.)
I agree somewhat with your sentiment and have some nostalgia for a time when software could be finished, but the comment you're replying to was making a joke that I think you may have missed.
Kinda related question, but is code really just math? Is it possible to express things like user input, timings, interrupts, error handling, etc. as math?
Except uh, nobody uses infinitesimals for derivatives anymore, they all use limits now. There's still some cruft left over from the infinitesimal era, like this dx and dy business, but that's just a backwards compatibility layer.
Anyhoo, remarks like this are why the real ones use Typst now. TeX and family are stagnant, difficult to use, difficult to integrate into modern workflows, and not written in Rust.
More than 300 comments here and still no convincing answer. Why does the community waste time trying to make CPython faster when there is PyPy, which is already much faster? I understand PyPy lacks libraries and feature parity with up-to-date CPython. But… can’t everyone refocus the efforts, just move to PyPy, add all the missing bits, and then continue with PyPy as the “official Python”? Are there any serious technical reasons not to do it?
> Are there any serious technical reasons not to do it?
Yes.
First is startup time. The REPL cycle being fast is a big advantage for development. From a business perspective, dev time is more expensive than compute time by orders of magnitude. Every time you make a change, you have to recompile the program. Meanwhile with regular Python, you can literally develop during execution.
Second is compatibility. Numpy and pytorch are ever evolving, and those are written as C extensions.
Third is LLMs. If you really want speed, Gemma 27B QAT running on a single 3090 can translate a Python codebase into C/C++ pretty easily. No need to have any additional execution layer. My friend at Amazon pretty much writes Java code this way - prototypes a bunch of stuff in Python, and then has an LLM write the Java code that's compatible with existing intra-Amazon Java templates.
I really hope I'll never need to touch code written by people who code in Python and throw it at a plausible randomiser to get Java or C.
If you for some reason do this, please keep the Python around so I can at least look at whatever the human was aiming at. It's probably also wrong, given that they picked this workflow, but there's a chance it has something useful.
REPL: I get it, possibly a valid point. Yet I'd guess the same issue applies to node.js, which seems much faster in many cases and still has a decent dev experience.
C compatibility / extension compatibility - nope. First, it is an issue of limited resources. Add more devs to the PyPy team and compatibility bugs get fixed. Second, aren’t people writing C extensions because Python is slow? Make Python fast - as PyPy does - and in some cases native code won’t be that crucial.
So I don’t see a real issue with pypy that could not be solved by simply moving all the dev efforts from CPython.
So are there political, personal or business issues?
You have answered your own question.
Seriously, though. PyPy is 2-3 versions behind CPython (3.11 vs 3.14) and it's not even 100% compatible with 3.11. Libraries such as psycopg and lxml are not fully supported. It's a hard sell.
But this is exactly my point. The resources PyPy has are much smaller. And still, for years they have managed to follow along, staying just 2-3 versions behind on features while keeping performance high.
So why not move all the resources from CPython to close the gap with features faster and replace CPython entirely?
Since this is not happening I expect there to be serious reasons, but I fail to see them. This is what I ask for.
> Are there any serious technical reasons not to do it?
Forget technical reasons, how would you ever do it? It feels like the equivalent of cultural reprogramming "You must stop using your preferred interpreter and halt all your efforts contrary to the one true interpreter". Nah, not going to happen in a free and open source language. Who would have the authority and control to make such a directive?
Yes, there may be technical reasons, but the reason it doesn't happen more than any other is that programming languages are languages spoken by people, and therefore they evolve organically at no one's direction. Even in languages like Python with a strong bent for cultural sameness and a BDFL type direction, they still couldn't control it. Often times, dialects happen for technical reasons, but it's hard to get rid of them on technical grounds.
I think for pure Python performance it is significantly faster, at least on all the benchmarks I have seen. That said, a lot of what people actually do in Python calls into libraries written in C++ or C, which I believe have similar performance (when they work) on PyPy.
I don't know how realistic a benchmark of only tight loops and integer operations is. Something with hashmaps and strings more realistically represents everyday CPU code in Python; most Python users offload numeric code to external calls.
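Something like this dict-and-string heavy loop, say (a hedged sketch, word-counting as the stand-in workload):

    import time
    from collections import defaultdict

    # a hashmap-and-string microbenchmark: closer to "everyday" Python
    # than tight integer loops
    words = ("the quick brown fox jumps over the lazy dog " * 50).split()

    t0 = time.perf_counter()
    counts = defaultdict(int)
    for _ in range(10_000):
        for w in words:
            counts[w.upper()] += 1  # string work plus dict updates
    print(f"{time.perf_counter() - t0:.2f}s")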
There is no "realistic" benchmark, all benchmarks are designed to measure in a specific way. I explain what my goals were in the article, in case you are curious and want to read it.
I agree with you: this is not an in-depth look, and it could have been much more rigorous.
But then I think in some ways it's a much more accurate depiction of my use case. I mainly write monte-carlo simulations or simple scientific calculations for a diverse set of problems every day. And I'm not going to write a fast algorithm or use an unfamiliar library for a one-off simulation, even if the sim is going to take 10 minutes to run (yes I use scipy and numpy, but often those aren't the bottlenecks). This is for the sake of simplicity as I might iterate over the assumptions a few times, and optimized algorithms or library impls are not as trivial to work on or modify on the go. My code often looks super ugly, and is as laughably unoptimized as the bubble sort or fib(40) examples (tail calls and nested for loops). And then if I really need the speed I will take my time to write some clean cpp with zmq or pybind or numba.
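(To make that concrete, a hypothetical example in the same deliberately plain style: a pure-Python monte-carlo estimate of pi, no numpy, no cleverness:)

    import random

    def estimate_pi(n: int) -> float:
        # deliberately unoptimized: loop over random points and count
        # how many land inside the unit quarter-circle
        hits = 0
        for _ in range(n):
            x, y = random.random(), random.random()
            if x * x + y * y <= 1.0:
                hits += 1
        return 4 * hits / n

    print(estimate_pi(1_000_000))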
It's still interesting though. If the most basic thing isn't notably faster, it makes it pretty likely the more complex things aren't either.
If your actual load is 1% Python and 99% offloaded, the effect of a faster Python might not matter a lot to you, but to measure Python you kinda have to look at Python.
Or have it run some super common use case like a FastAPI endpoint or a numpy calculation. Yes, they are not all python, but it's what most people use Python for.
FastAPI is a web framework, which by definition is (or should be!) an I/O bound process. My benchmark evaluates CPU, so it's a different thing. There are a ton of web framework benchmarks out there if you are interested in FastAPI and other frameworks.
And numpy is a) written in C, not Python, and b) is not part of Python, so it hasn't changed when 3.14 was released. The goal was to evaluate the Python 3.14 interpreter. Not to say that it wouldn't be interesting to evaluate the performance of other things as well, but that is not what I set out to do here.
For me the "criminal" thing is that Pypy exists on a shoestring and yet delivers the performance and multithreading that others gradually try to add to cpython.
Its problem is, IMO, compatibility. Long ago I wanted to run it on Yocto but something or other didn't work. I think this problem is gradually disappearing, but it could probably be solved far more rapidly with a bit of money and effort.
Thank you for the work you put in.
When I started my first job as a Data Scientist, it helped me deploy my first model to production. Since then, I’ve focused much more on engineering.
You’ve truly started an amazing journey for me.
Thank you. :)
But just now checking out the Mega Flask Tutorial, wow looks pretty awesome.
1. Original logo has country charm and soul.
2. Replaced with a modern soulless logo.
3. Customer outrage!
4. Company (or open source project) comes to its senses and returns to old logo.
https://media.nbcboston.com/2025/08/cracker-barrel-split.jpg
(n.b. The Cracker Barrel Rebellion is sometimes associated with MAGA. I am very far from that, but I have to respect when people of any political stripe get something right.)
The old logo is classic and bespoke. I could recall it from memory. It leaves an impression.
The new one looks like an unfunded 2005-era dorm room startup. XmlHttpRequests for sheep herders.
The old logo is much better.
Thinking about hand-rolled web services, I usually imagine either a stealth alcoholic's metal flask or a mad scientist's Erlenmeyer flask.
A once-whimsical corner of web development has lost its charm due to arbitrary trends.
Hope I'm able to do the same for someone one day :)
one day of vibe coding
Edit: corrected typo in "typo".
Seemingly.
https://docs.python.org/3/library/timeit.html
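E.g., timeit runs the statement in a loop and reports the total, so you divide by the repetition count yourself:

    import timeit

    # executes the statement 1,000,000 times and returns total elapsed
    # seconds; divide by `number` for an average per-call cost
    total = timeit.timeit("sum(range(100))", number=1_000_000)
    print(f"{total / 1_000_000:.3e}s per call")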
That said, I wonder if GIL-less Python will one day enable GIL-less C FFI? That would be a big win that Python needs.
What do you mean exactly? C FFI has always been able to release the GIL manually.
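(One Python-side illustration: ctypes.CDLL releases the GIL around each foreign call, while ctypes.PyDLL keeps it held. A sketch, assuming a Unix-like system:)

    import ctypes
    import ctypes.util

    # CDLL drops the GIL for the duration of each call, so other Python
    # threads can run while the C function executes; PyDLL would hold it
    libc = ctypes.CDLL(ctypes.util.find_library("c"))
    libc.usleep(100_000)  # sleeps in C with the GIL released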
PyPy's compatibility gaps with CPython seem very minor in comparison: https://pypy.org/compat.html
Python introduces another breaking change that also randomly affects performance, making it worse for large classes of users?
Why would the Python organisers want to do that?
The amount of time spent writing all the cffi stuff is about the same as it takes to write an executable in C and call it from Python.
The only time cffi is useful is if you want to have that code be dynamic, which is a very niche use case.
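i.e., something like this, where ./fast_tool stands in for a hypothetical compiled C program:

    import subprocess

    # instead of maintaining an FFI layer, ship a native binary and
    # shell out to it; the tool name and flags here are made up
    result = subprocess.run(
        ["./fast_tool", "--input", "data.bin"],
        capture_output=True, text=True, check=True,
    )
    print(result.stdout)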
Even C/C++ introduces breaking changes from time to time (after decades of deprecation though).
There’s no practical reason why Python should commit to 100+ years of code stability, as all that comes at a price.
Having said that, Python 2 -> 3 is a textbook example of how not to do these things.
The weather forecast is also “just math”, yet yesterday’s won’t be terribly useful next April.
Most of my Python from that era also works (Python 3.1).
The problem is not really the language syntax, but how libraries change a lot.
I dunno man, there's an equal amount of bullshit that still exists only because that's how it was before we were born.
> Code is just math.
What?? No. If it was, there'd never be any bugs.
It isn't.
Not only that, it is a lot easier to hack on. I might be biased, but the whole implementation idea of PyPy seems a lot more sane.
However, the JIT does make things much faster.