The better error messages alone make it worthwhile to upgrade. The trend started in 3.10, and it already made a difference for me, my coworkers and students.
But remember: while it's great to play with it fresh out of the oven, and you might want to test your projects/libs with it, we should wait a bit before migrating production.
Indeed, every first release of a new major version of Python has important bugs that get ironed out in a later patch. Also, some libs on PyPI may simply not be compatible with it yet, breaking your pip install.
I usually wait until the 3rd patch myself, after many years of paying the price of greedy upgrades.
Once everything gets wheels/bumped, it'll be a lot easier. The last few major versions have been fairly straightforward to upgrade once everything's in place, and the nice thing is this should hopefully fix any remaining packages that aren't built for the arm64 Macs.
About this new asyncio.TaskGroup thing, I found this from Guido on the related GH issue*
> After some conversations with Yury, and encouraged by the SC's approval of PEP-654, I am proposing to add a new class, asyncio.TaskGroup, which introduces structured concurrency similar to nurseries in Trio.
I have never used them, but I've been told that Trio's nurseries make it much easier to handle exceptions in asyncio tasks. Can someone more knowledgeable tell whether this will help? Looking at the docs**, this only seems to be a helper for when you want to await several tasks at once, so I am not sure it changes much for exception handling.
As an empirical point, I moved from asyncio to Trio and it was transformative. This will bring asyncio almost up to parity, but it's a pity that it's still possible to create tasks that don't belong to a task group - in Trio, the only way to start a task is to run it in a specified nursery. (But of course that's understandable for backwards compatibility.)
> this only seems to be a helper when you want to await several tasks at once
Sort of. It's a helper for when you want to run multiple tasks at once, not necessarily awaiting them. And you're definitely running multiple tasks at once; otherwise you wouldn't be using asyncio in the first place.
Task groups do require you to wait for the tasks - after all, you have to start the task in a task group, and then implicitly await the tasks in it (by falling off the end of the task group context block). But you can always have an outer task group representing tasks that you intend to run indefinitely in the background. In that way, task groups force you to think about when a task would cancel other tasks, representing the overall structure of your program.
> This highlighting will occur for every frame in the traceback. For instance, if a similar error is part of a complex function call chain, the traceback would display the code associated to the current instruction in every frame:
Traceback (most recent call last):
  File "test.py", line 14, in <module>
    lel3(x)
    ^^^^^^^
  File "test.py", line 12, in lel3
    return lel2(x) / 23
           ^^^^^^^
  File "test.py", line 9, in lel2
    return 25 + lel(x) + lel(x)
                ^^^^^^
  File "test.py", line 6, in lel
    return 1 + foo(a,b,c=x['z']['x']['y']['z']['y'], d=e)
               ~~~~~~~~~~~~~~~~^^^^^
TypeError: 'NoneType' object is not subscriptable
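To see the fine-grained markers for yourself on any Python 3.11 install, something along these lines reproduces the last frame above:

```python
# Under Python 3.11, the uncaught version of this produces ~~~/^^^ markers
# pointing at the exact subscript that failed, not just the line.
data = {"z": {"x": {"y": None}}}
try:
    data["z"]["x"]["y"]["z"]  # ["y"] evaluates to None, so ["z"] raises
except TypeError as e:
    print(e)  # 'NoneType' object is not subscriptable
```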
I sometimes sacrifice readability just because I hate creating variables. But then if it affects debugging times, my boss would be furious. As such, I use a full debugger anyway so I can trace quickly.
In addition to ways of reducing nogil's overhead, he added a lot of unrelated speed improvements, so that Python without the GIL would still be faster, not slower, in single-threaded mode. They seem to have merged those performance patches first, which means that if they add his GIL-removal patches in, say, Python 3.12, it will still be substantially slower than 3.11, although faster than 3.10. I hope that doesn't stop them from removing the GIL (at least by default).
It's not a language feature, but I wanted to point out a new aspect of how Python is released: releases are now signed with Sigstore[1], producing a certificate and signature for each source distribution/build in the release table[2].
This is intended to supplement the current PGP signatures, giving Python distributors (and the packagers themselves) a similar degree of authenticity/identity without needing to perform PGP keyring maintenance.
For anyone browsing on Android and confused, the sigstore website has a major design issue hiding the menu button on some devices. You need to scroll the page to the right: https://github.com/sigstore/sigstore-website/issues/132
‘di said it, but to emphasize: with sigstore, there is no key management whatsoever. The keys in question are ephemeral and never leave your machine; the entire idea of the project is to bind an identity (like an email or GitHub username) to short-lived signing certificates.
Exciting release. All useful additions. Love the variadic generics (embedding array layout into a value's type to avoid confusion) - a surprisingly common issue in data science code.
But... am I the only one who struggles to parse the exception groups?
Would it not have been better to left- or right-align the exception group id? Centering them just clobbers them with the actual error output and makes them a bit hard to parse.
That output looks super complicated, but if you get an error like that then I think you're in a super complicated situation to start with: you've started a hierarchy of tasks, of which 6 raised exceptions (only counting leaf-node exceptions) at 4 different levels of the hierarchy. I could believe that left aligning the exception group index could've made it a little simpler though.
If you notice that the numbers forming a list sit right below each other in the same column, it kind of makes sense - suddenly it seems a lot more ordered. It could be done differently, though; left alignment would seem clearer.
I have dropped flake8 everywhere due to how hostile it has become to the rest of the Python ecosystem. They pull a lot of nonsense like this, such as refusing to fix their dependency pinning without giving any logical reasoning.
Besides… Between Black / Tan for cosmetic issues and Mypy / Pylance / Pyright for logical issues, flake8 has never once caught a concrete problem in my codebase and has solely been a source of things to disable or work around.
I find Pylint to be great, catches a lot, integrates well enough into pyproject, and the new standalone vscode extension is solid. If only I didn't have to restart the Pylint server every time I update a signature...
I think Python 3.11 has effectively killed off both PyPy and Pyston. Now that the CPython team has finally shown both willingness and ability to deal with performance problems, few people are going to fool around with some esoteric version of Python for an increasingly questionable performance-gains/headache ratio. Especially given how painful it already is to package and deploy normal Python code, and how hostile Guido has always been to alternative implementations. I don't think being maybe 2x faster right now is anywhere near good enough to justify the additional risks and hassle, and it looks like the performance gap might shrink further with 3.12.
Pyston may be considered esoteric, but PyPy is pretty well-established already and is still a good deal faster. It could be that CPython starts to eat into its user base as it accumulates more performance gains, but PyPy is definitely not done yet.
Incidentally, I was working on a 15-year-old Python 2 project for a client last week that used Psyco (a predecessor of PyPy). The cool thing about Psyco was that you could just import the library and it would make many operations much faster.
If PyPy had a similar mode where you could load it as a library it would have a much easier time gaining traction.
Are there any plans to change Python the language to make it faster? AFAIK most of the slowness comes from dynamic overhead, like object attributes may change or disappear in the middle of a loop and so on.
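A toy illustration of that dynamism (made-up class, just to show why the interpreter can't assume much about attribute access):

```python
class Point:
    def __init__(self) -> None:
        self.x = 1

p = Point()
seen = []
for i in range(3):
    seen.append(p.x)
    if i == 0:
        del p.x    # the attribute disappears mid-loop...
        p.x = 99   # ...and comes back with a different value
print(seen)  # [1, 99, 99]
```

Because every `p.x` lookup could legally hit a different attribute (or none at all), a naive interpreter must re-check on each iteration; 3.11's adaptive specialization speeds up the common case where nothing actually changes.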
Our sloppy container spec bit us today though. We had
FROM python:3-slim
with a bunch of pip requirements following. Some of those were not 3.11-ready, e.g. scipy==1.8.0, and our build broke. Our answer was to stop being sloppy and pin until everything catches up, e.g.
FROM python:3.10.8-slim
and we're good. Hope someone sees this that needs reminding.
Good question and thank you for raising that possibility. I am ignorant here. Of course I'd prefer patches applied asap but...
We are almost daily discovering upstream changes like this one that break something N components removed, so our knee-jerk response is usually to pin aggressively when found and periodically upgrade deps for a whole component.
What are the chances I have some dep somewhere that says python<=3.10.8 and is working today, but will break when that 3.10-slim spec lets 3.10.8 turn into 3.10.9? That's what happened today for scipy, except on the second version component rather than the third, because we had started with 3-slim.
A related note applies to requirements files. Something like this bit me the other day:
Libraryname >=3.1
After a few years, the package was updated substantially and has lots of breaking changes in the recent branch. The fix was to pin ==3.1 until we work out the next step.
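In requirements.txt terms (library name made up), the change was simply:

```
# Before: floats up to whatever the latest release is
somelib>=3.1
# After: pinned until the breaking changes are dealt with
somelib==3.1
```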
I was bitten once a few years ago and have always pinned everything since then.
And of course last year I was bitten by a ~ in a package.json that I hadn't got around to pinning in a code base I'd inherited.
> I usually wait until the 3rd patch myself, after many years of paying the price of greedy upgrades.
We wouldn't get there if everyone does that though.
* https://github.com/python/cpython/issues/90908
** https://docs.python.org/3.11/library/asyncio-task.html#task-...
I managed to make a very very simple OTP-like framework with Trio: https://linkdd.github.io/triotp/
Nice!
> PEP 657 – Include Fine Grained Error Locations in Tracebacks
Hmm, what’s this?
Yessss. I love writing chained expressions, but debugging them is like visiting a special kind of hell.
My understanding is that it's based on the most recent attempt to remove the GIL by Sam Gross
https://github.com/colesbury/nogil
The geometric mean of the 3.8 to 3.11b benchmarks was a 45% speedup.
[1]: https://www.sigstore.dev/
[2]: https://www.python.org/downloads/release/python-3110/
Maybe it'd look better in the terminal, but to me it feels like the table formatting makes it HARDER to understand.
Perhaps now flake8 will finally add support for pyproject.toml as a config file...
See https://github.com/PyCQA/flake8/issues/234#issuecomment-1206...
https://www.phoronix.com/review/python311-pyston-pypy
Isn’t mypyc effectively an alternative (AOT-compiled) Python implementation? Guido doesn’t seem too hostile to it.
> Simple "JIT" compiler for small regions. Compile small regions of specialized code, using a relatively simple, fast compiler.
https://github.com/markshannon/faster-cpython/blob/master/pl...
Being dynamic makes it harder to be fast, but JS/V8 is as dynamic as Python, and much faster.
Any reason not to use python:3.10-slim? That seems to keep up to date on patch releases.