I don't even mind (although it bothers me that Python doesn't have pipes à la R, or any sort of nice lambda syntax), but I wonder how much work has been put into taking a Python-like dialect and hammering it over and over into being fast.
NumPy gives you a whole other set of verbs to memorize just to wrap around C; then JAX et al. take that subset and make it faster; then PyPy and Pyston rewrite big parts of the runtime, with Numba to compile individual functions if you really want for loops. I know it's an oversimplification and it's all pretty amazing work, but it's exhausting to me how many companies and people have put work into taking a slow language and trying to make subsets of it fast.
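For what it's worth, the Numba piece of that stack really is just a decorator around a function. A minimal sketch (the function here is my own illustration; `numba.njit` itself is the real entry point):

```python
import numpy as np
from numba import njit

@njit  # JIT-compiles this function to machine code on first call
def running_total(x):
    # A plain for loop: slow in CPython, native speed once compiled.
    total = 0.0
    for i in range(x.shape[0]):
        total += x[i]
    return total

x = np.random.rand(1_000_000)
print(running_total(x))  # first call pays the compilation cost, later calls don't
```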
Yeah. Python is a great language for glue. Glue does not need to be fast; it's hopefully small in scope, and it has (hopefully) one or few jobs. Python can be a great frontend to faster C code, but sooner or later it becomes the bottleneck where it matters. These pseudo-Python compilers/interpreters are... cool, I guess, and might even solve that problem in some cases. The issue is that half of them target the syntax of Python rather than integrating with the runtime (really, CPython). The deeper problem, I think, is that we have huge applications implemented top to bottom in Python where the bottleneck is... everywhere, and there is no good solution that isn't somehow writing a faster Python implementation (PyPy will get you far) or rewriting it all in a different language.
> Here, we present Codon, a domain-extensible compiler and DSL framework for high-performance DSLs with Python's syntax and semantics. Codon builds on previous work on ahead-of-time type checking and compilation of Python programs and leverages a novel intermediate representation to easily incorporate domain-specific optimizations and analyses.
It depends on the nature of your compute. If it is dominated by IO, or if you are really just calling native libraries (as `numpy` does, or anything handled by `arrow`), there is no reason to switch away from Python. If you are writing custom algorithms, I think https://julialang.org/ is a great option.
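To make the native-library point concrete, a toy comparison: the pure-Python loop below is interpreter-bound, while the NumPy version spends nearly all of its time in compiled C, so there's little to gain from leaving Python (timings are machine-dependent, obviously):

```python
import time
import numpy as np

N = 10_000_000

# Pure Python: every iteration round-trips through the interpreter.
start = time.perf_counter()
total = 0.0
for i in range(N):
    total += i * i
print(f"pure Python: {time.perf_counter() - start:.2f}s")

# NumPy: the same reduction runs inside the library's C code.
start = time.perf_counter()
x = np.arange(N, dtype=np.float64)
total = float(np.dot(x, x))
print(f"numpy:       {time.perf_counter() - start:.2f}s")
```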
Docs: https://docs.exaloop.io/codon/
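For a flavor of what the docs describe, a sketch of the workflow (ordinary Python syntax, type-checked and compiled ahead of time; exact flags may vary by Codon version):

```python
# fib.py -- plain Python syntax, compiled ahead of time by Codon
def fib(n: int) -> int:
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print(fib(40))
```

Per the docs, you can run it with `codon run -release fib.py` or build a native executable with `codon build -release -exe fib.py`.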
https://sr.ht/~tpapastylianou/chain-ops-python/
It's clean, works well, and it's debuggable... having a special operator might have been nice syntactic sugar, but it isn't really necessary.
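For anyone who hasn't clicked through: the idea is function chaining without new syntax. A hand-rolled version of the same pattern might look like this (an illustrative sketch, not the chain-ops-python API):

```python
from functools import reduce

def pipe(value, *funcs):
    """Thread `value` through `funcs` left to right, like R's |> operator."""
    return reduce(lambda acc, f: f(acc), funcs, value)

# Equivalent to: sorted(set(w.lower() for w in words))
words = ["Banana", "apple", "Apple", "cherry"]
print(pipe(
    words,
    lambda ws: (w.lower() for w in ws),
    set,
    sorted,
))  # ['apple', 'banana', 'cherry']
```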
Wasn't Python back then, probably isn't now. Just something that looks similar [0].
A lot less hassle to switch between different levels of order logic.
It just depends on what you're using Python (or any other language) for, and how.
[0] : https://en.wikipedia.org/wiki/Unicon_(programming_language)
I just want a language that would:
- be pretty readable,
- not require compilation,
- have convenient data structures and a math library,
- and be performant out of the box.
Anything performant will require compilation. Interpreters are inherently slower. But some languages do the compilation implicitly for you and that's usually close enough.
No explicit compilation often means no ahead-of-time error reporting, though, which is a really useful feature to have for run-once programs.
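A toy illustration: in plain CPython, the typo below isn't reported until the failing branch actually executes, which for a run-once job can be hours in:

```python
def finish(success: bool) -> None:
    if success:
        print("done")
    else:
        # NameError is raised only if this branch runs; nothing
        # checks ahead of time that `pirnt` exists.
        pirnt("failed")

finish(True)   # runs fine
finish(False)  # NameError: name 'pirnt' is not defined
```

A compiled language (or even running `mypy` over the file) would flag this before the program ever starts.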
> low-level languages like C or C++
I stopped reading. I'm willing to bet the "high" in "high-performance" is relative to something very low.