BiteCode_dev · 3 years ago
The better error messages alone make it worthwhile to upgrade. The trend started in 3.10, and it already made a difference for me, my coworkers and students.

But remember: while it's great to play with it fresh out of the oven, and you might want to test your projects/libs with it, we should wait a bit before migrating production.

Indeed, every first release of a new major version of Python eventually reveals important bugs that get ironed out in a later patch. Also, some libs on PyPI may simply not be compatible with it yet, breaking your pip install.

I usually wait until the 3rd patch myself, after many years of paying the price of greedy upgrades.

4m1rk · 3 years ago
> I usually wait until the 3rd patch myself, after many years of paying the price of greedy upgrades.

We wouldn't get there if everyone did that, though.

fluidcruft · 3 years ago
Why not? Production isn't where bugs should be found. Plenty of time to test things before production.
thejosh · 3 years ago
Once everything gets wheels/bumped, it'll be a lot easier. The last few major versions have been fairly straightforward to upgrade once they're all in place, and the nice thing is this should hopefully fix any remaining packages that aren't built for the arm64 Macs.
snvzz · 3 years ago
Once 3.12 is out, perhaps it'll be time to move to 3.11.x.
nicoco · 3 years ago
About this new asyncio.TaskGroup thing, I found this from Guido on the related GH issue*

> After some conversations with Yury, and encouraged by the SC's approval of PEP-654, I am proposing to add a new class, asyncio.TaskGroup, which introduces structured concurrency similar to nurseries in Trio.

I have never used it, but I have been told that Trio's nurseries make it much easier to handle exceptions in asyncio tasks. Can someone more knowledgeable tell me if this will help? Looking at the docs**, this only seems to be a helper for when you want to await several tasks at once, so I am not sure it changes much for exception handling.

* https://github.com/python/cpython/issues/90908

** https://docs.python.org/3.11/library/asyncio-task.html#task-...

quietbritishjim · 3 years ago
As an empirical point, I moved from asyncio to Trio and it was transformative. This will help bring asyncio almost up to parity, but it's a pity that it's still possible to make tasks that don't belong to a task group - in Trio, the only way to start a task is to run it in a specified nursery. (But of course understandable for backwards compatibility.)

> this only seems to be a helper when you want to await several tasks at once

Sort of. It's a helper for when you want to run multiple tasks at once, not necessarily awaiting them. And you're definitely running multiple tasks at once - otherwise you wouldn't be using asyncio in the first place.

Task groups do require you to wait for the tasks - after all, you have to start the task in a task group, and then implicitly await the tasks in it (by falling off the end of the task group context block). But you can always have an outer task group representing tasks that you intend to run indefinitely in the background. In that way, task groups force you to think about when a task would cancel other tasks, representing the overall structure of your program.

linkdd · 3 years ago
Trio is truly great, and they implemented MultiErrors (ExceptionGroup) long before it was considered to be a PEP.

I managed to make a very very simple OTP-like framework with Trio: https://linkdd.github.io/triotp/

d0mine · 3 years ago
The context here is that TaskGroup + except* enable "structured concurrency" https://vorpus.org/blog/notes-on-structured-concurrency-or-g...
pavon · 3 years ago
Nice, Python 3.11: Python for TaskGroups
teddyh · 3 years ago
> Python 3.11 is up to 10-60% faster than Python 3.10.

Nice!

> PEP 657 – Include Fine Grained Error Locations in Tracebacks

Hmm, what’s this?

    Traceback (most recent call last):
      File "test.py", line 2, in <module>
        x['a']['b']['c']['d'] = 1
        ~~~~~~~~~~~^^^^^
    TypeError: 'NoneType' object is not subscriptable
Yessss
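The traceback above is easy to reproduce; a sketch (`x` here is a stand-in dict): before 3.11 you only knew the line, not which of the four subscripts hit the `None`.

```python
x = {"a": {"b": None}}
try:
    # x["a"]["b"] is None, so the third subscript fails; the ~~~/^^^
    # anchors in 3.11 point at exactly that subexpression.
    x["a"]["b"]["c"]["d"] = 1
except TypeError as e:
    msg = str(e)

print(msg)  # 'NoneType' object is not subscriptable
```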

wodenokoto · 3 years ago
Does that work for chained pandas expressions too?

I love writing chained expressions but debugging them is like visiting a special kind of hell.

iruoy · 3 years ago
> This highlighting will occur for every frame in the traceback. For instance, if a similar error is part of a complex function call chain, the traceback would display the code associated to the current instruction in every frame:

  Traceback (most recent call last):
    File "test.py", line 14, in <module>
      lel3(x)
      ^^^^^^^
    File "test.py", line 12, in lel3
      return lel2(x) / 23
             ^^^^^^^
    File "test.py", line 9, in lel2
      return 25 + lel(x) + lel(x)
                  ^^^^^^
    File "test.py", line 6, in lel
      return 1 + foo(a,b,c=x['z']['x']['y']['z']['y'], d=e)
                           ~~~~~~~~~~~~~~~~^^^^^
  TypeError: 'NoneType' object is not subscriptable

hnews_account_1 · 3 years ago
All my data devs please stand up!

I sometimes sacrifice readability just because I hate creating variables. But then if it affects debugging times, my boss would be furious. As such, I use a full debugger anyway so I can trace quickly.

emehex · 3 years ago
If you like writing chain-able pandas, you should check out: https://github.com/maxhumber/redframes
rat87 · 3 years ago
I'm worried about the speedup

My understanding is that it's based on the most recent attempt to remove the GIL by Sam Gross

https://github.com/colesbury/nogil

In addition to finding ways to reduce nogil's overhead, he added a lot of unrelated speed improvements so that Python without the GIL would still be faster, not slower, in single-threaded mode. They seem to have merged those performance patches first; that means if they add his GIL-removal patches in, say, Python 3.12, it will still be substantially slower than 3.11, although faster than 3.10. I hope that doesn't stop them from removing the GIL (at least by default).

kzrdude · 3 years ago
I hope Mark Shannon can work constructively with Sam and that they can make nogil happen
pydry · 3 years ago
As an encore they could bring pytest style assert errors into core.
jononor · 3 years ago
Yes, this would save me so much time. And make asserts so much more valuable, including checking of pre- and post-conditions
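To illustrate the pain point (a minimal sketch; `check` is a made-up helper): a failing bare `assert` drops the operands entirely, whereas pytest rewrites the assert statement at collection time to recover and display both sides.

```python
def check():
    try:
        assert [1, 2, 3] == [1, 2, 4]
    except AssertionError as e:
        return str(e)  # empty: the compared values are gone

# Core Python tells you only that *an* assertion failed;
# pytest would report something like "At index 2 diff: 3 != 4".
print(repr(check()))  # ''
```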
pknerd · 3 years ago
I wonder how much faster it is than other versions > 3.7?
IanOzsvald · 3 years ago
Here's a benchmark for 3.8-3.11b: https://www.phoronix.com/review/python-311-benchmarks/4

The geometric mean of the 3.8 to 3.11b benchmarks was a 45% speedup.


woodruffw · 3 years ago
It's not a language feature, but I wanted to point out a new aspect of how Python is released: releases are now signed with Sigstore[1], producing a certificate and signature for each source distribution/build in the release table[2].

This is intended to supplement the current PGP signatures, giving Python distributors (and the packagers themselves) a similar degree of authenticity/identity without needing to perform PGP keyring maintenance.

[1]: https://www.sigstore.dev/

[2]: https://www.python.org/downloads/release/python-3110/

remram · 3 years ago
For anyone browsing on Android and confused, the sigstore website has a major design issue hiding the menu button on some devices. You need to scroll the page to the right: https://github.com/sigstore/sigstore-website/issues/132
LtWorf · 3 years ago
So, if I understand, the solution to key management is to hand your keys to someone else?
woodruffw · 3 years ago
‘di said it, but to emphasize: with sigstore, there is no key management whatsoever. The keys in question are ephemeral and never leave your machine; the entire idea of the project is to bind an identity (like an email or GitHub username) to short-lived signing certificates.
di · 3 years ago
Quite the opposite: keys are generated once per signing event, and the private key never leaves memory.
prennert · 3 years ago
Exciting release. All useful additions. I love the variadic generics (embedding the array layout into its type to avoid confusion) - a surprisingly common issue in data science code.

But... am I the only one who struggles to parse the exception groups?

  *ValueError: ExceptionGroup('eg', [ValueError(1), ExceptionGroup('nested', [ValueError(6)])])
  *OSError: ExceptionGroup('eg', [OSError(3), ExceptionGroup('nested', [OSError(4)])])
  | ExceptionGroup:  (2 sub-exceptions)
  +-+---------------- 1 ----------------
    | Exception Group Traceback (most recent call last):
    |   File "<stdin>", line 15, in <module>
    |   File "<stdin>", line 2, in <module>
    | ExceptionGroup: eg (2 sub-exceptions)
    +-+---------------- 1 ----------------
      | ValueError: 1
      +---------------- 2 ----------------
      | ExceptionGroup: nested (1 sub-exception)
      +-+---------------- 1 ----------------
        | ValueError: 6
        +------------------------------------
    +---------------- 2 ----------------
    | Exception Group Traceback (most recent call last):
    |   File "<stdin>", line 2, in <module>
    | ExceptionGroup: eg (3 sub-exceptions)
    +-+---------------- 1 ----------------
      | TypeError: 2
      +---------------- 2 ----------------
      | OSError: 3
      +---------------- 3 ----------------
      | ExceptionGroup: nested (2 sub-exceptions)
      +-+---------------- 1 ----------------
        | OSError: 4
        +---------------- 2 ----------------
        | TypeError: 5
        +------------------------------------
Would it not have been better to left or right align the exception group id? Centering them just clobbers them with the actual error output and makes it a bit hard to parse.

quietbritishjim · 3 years ago
That output looks super complicated, but if you get an error like that then I think you're in a super complicated situation to start with: you've started a hierarchy of tasks, of which 6 raised exceptions (only counting leaf-node exceptions) at 4 different levels of the hierarchy. I could believe that left aligning the exception group index could've made it a little simpler though.
joenot443 · 3 years ago
I've written Python for a long time now but I still had a very difficult time grokking the Exception groups format.

Maybe it'd look better in the terminal, but to me it feels like the table formatting makes it HARDER to understand.

sva_ · 3 years ago
If you're noticing that the numbers that form a list are right below each other in the same column, it kind of makes sense. Suddenly it seems a lot more ordered. Could be done differently though. Left alignment seems clearer:

  *ValueError: ExceptionGroup('eg', [ValueError(1), ExceptionGroup('nested', [ValueError(6)])])
  *OSError: ExceptionGroup('eg', [OSError(3), ExceptionGroup('nested', [OSError(4)])])
  | ExceptionGroup:  (2 sub-exceptions)
  +-+- 1 -------------------------------
    | Exception Group Traceback (most recent call last):
    |   File "<stdin>", line 15, in <module>
    |   File "<stdin>", line 2, in <module>
    | ExceptionGroup: eg (2 sub-exceptions)
    +-+- 1 -------------------------------
      | ValueError: 1
      +- 2 -------------------------------
      | ExceptionGroup: nested (1 sub-exception)
      +-+- 1 -------------------------------
        | ValueError: 6
        +------------------------------------
    +- 2 -------------------------------
    | Exception Group Traceback (most recent call last):
    |   File "<stdin>", line 2, in <module>
    | ExceptionGroup: eg (3 sub-exceptions)
    +-+- 1 -------------------------------
      | TypeError: 2
      +- 2 -------------------------------
      | OSError: 3
      +- 3 -------------------------------
      | ExceptionGroup: nested (2 sub-exceptions)
      +-+- 1 -------------------------------
        | OSError: 4
        +- 2 -------------------------------
        | TypeError: 5
        +------------------------------------

BerislavLopac · 3 years ago
> PEP 680 – tomllib: Support for Parsing TOML in the Standard Library

Perhaps now flake8 will finally add the support for pyproject.toml as a config file...

scrollaway · 3 years ago
I have dropped flake8 everywhere due to how hostile it has become to the rest of the python ecosystem. They pull a lot of nonsense like this, such as a refusal to fix their dependency pinning with no logical reasoning.

Besides… Between Black / Tan for cosmetic issues and Mypy / Pylance / Pyright for logical issues, flake8 has never once caught a concrete problem in my codebase and has solely been a source of things to disable or work around.

claytonjy · 3 years ago
I find Pylint to be great, catches a lot, integrates well enough into pyproject, and the new standalone vscode extension is solid. If only I didn't have to restart the Pylint server every time I update a signature...
benji-york · 3 years ago
That probably won't happen any time soon.

See https://github.com/PyCQA/flake8/issues/234#issuecomment-1206...

iruoy · 3 years ago
Pypy and Pyston are still quite a bit faster than CPython, but this is a huge improvement in one release.

https://www.phoronix.com/review/python311-pyston-pypy

patrec · 3 years ago
I think python 3.11 has effectively killed off both Pypy and Pyston. Now that the CPython team has finally shown both willingness and ability to deal with performance problems, few people are going to fool around with some esoteric version of python for an increasingly questionable performance-gains/headache ratio. Especially given how painful it already is to package and deploy normal python code and how hostile Guido always has been to alternative implementations. I don't think being maybe 2x faster right now is anywhere good enough to justify the additional risks and hassle, and it looks like the performance gap might shrink further with 3.12.
smcl · 3 years ago
Pyston may be considered esoteric, but PyPy is pretty well-established already and is still a good deal faster. It could be that CPython starts to eat into its user base as it accumulates more performance gains, but PyPy is definitely not done yet.
__mharrison__ · 3 years ago
Incidentally, I was working on a 15-year-old Python 2 project for a client last week that used Psyco (the predecessor of PyPy). The cool thing about Psyco was that you could just import the library and it would make many operations much faster.

If PyPy had a similar mode where you could load it as a library it would have a much easier time gaining traction.


dragonwriter · 3 years ago
> how hostile Guido always has been to alternative implementations.

Isn’t mypyc effectively an alternative (AOT-compiled) Python implementation? Guido doesn’t seem too hostile to it.

pjmlp · 3 years ago
Someday it will get a proper JIT I guess.
nigma1337 · 3 years ago
Planned for 3.12

> Simple "JIT" compiler for small regions. Compile small regions of specialized code, using a relatively simple, fast compiler.

https://github.com/markshannon/faster-cpython/blob/master/pl...

donkeyboy · 3 years ago
They already have jit through numba.
miohtama · 3 years ago
Are there any plans to change Python the language to make it faster? AFAIK most of the slowness comes from dynamic overhead, like object attributes may change or disappear in the middle of a loop and so on.
cdavid · 3 years ago
I doubt it; that would break almost any non-trivial piece of Python. And the ecosystem is a large reason for Python's current success.

Being dynamic makes it harder to be fast, but JS/V8 is as dynamic as Python, and much faster.
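A small sketch of the dynamism in question (the `Point` class is made up): attributes can legally change mid-loop, so the interpreter cannot assume a stable object layout the way a static language compiler can.

```python
class Point:
    def __init__(self):
        self.x = 1

p = Point()
total = 0
for i in range(3):
    total += p.x      # each access must re-resolve the attribute...
    if i == 1:
        p.x = 10      # ...because it may change underneath the loop

print(total)  # 1 + 1 + 10 = 12
```

CPython 3.11's adaptive specializing interpreter (PEP 659) attacks exactly this: it caches the common case and falls back when the assumption breaks, without changing the language's semantics.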

imglorp · 3 years ago
Happy to see the new features.

Our sloppy container spec bit us today though. We had

    FROM python:3-slim
with a bunch of pip requirements following. Some of those were not 3.11-ready, e.g. scipy==1.8.0, and our build broke. Our answer was to stop being sloppy and pin until everything catches up, e.g.

    FROM python:3.10.8-slim
and we're good. Hope someone sees this that needs reminding.

Sohcahtoa82 · 3 years ago
> FROM python:3.10.8-slim

Any reason to not use python:3.10-slim? That seems to keep up-to-date on patch releases.

imglorp · 3 years ago
Good question and thank you for raising that possibility. I am ignorant here. Of course I'd prefer patches applied asap but...

We are almost daily discovering upstream changes like this one that breaks something N components removed so our kneejerk response is usually to pin aggressively when found and periodically upgrade deps for a whole component.

What are the chances I have some dep somewhere that says python<=3.10.8 and works today, but breaks when that 3.10-slim spec lets 3.10.8 turn into 3.10.9? That's what happened today for scipy, but on the 2nd int rather than the 3rd one, because we had started with 3-slim.

rlayton2 · 3 years ago
Thanks for that.

A related note is for any requirements files. Something like this bit me the other day.

Libraryname >=3.1

After a few years, the package was updated substantially, with lots of breaking changes in the recent branch. The fix was to pin ==3.1 until we work out the next step.
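For anyone comparing the options, a sketch of the common requirements.txt pinning strategies, loosest to strictest (the package name is a placeholder):

```text
# requirements.txt
libraryname>=3.1   # any future release; breaks the day 4.0 lands
libraryname~=3.1   # compatible release: >=3.1, <4.0
libraryname==3.1   # exact pin; safest, but upgrades are manual
```

The `~=` "compatible release" operator is often the middle ground: it takes patch/minor updates while refusing the major bump that carried the breaking changes described above.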

jamesfinlayson · 3 years ago
I was bitten once a few years ago and have always pinned everything since then. And of course last year I was bitten by a ~ in a package.json that I hadn't got around to pinning in a code base I'd inherited.