{{'foo': 'bar'}: 1, {3:4, 5:6}: 7}
...and there is no reasonable built-in way to get around this!
You may ask: "Why on earth would you ever want a dictionary with dictionaries for its keys?"
More generally, sometimes you have an array, and for whatever reason, it is convenient to use its members as keys. Sometimes, the array in question happens to be an array of dicts. Bang, suddenly it's impossible to use said array's elements as keys! I'm not sure what infuriates me more: that impossibility, or the Python community's collective attitude of "that never happens or is needed, therefore no frozendict for you."
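For reference, the failure is immediate: the literal at the top of this thread blows up at construction time, because a dict is mutable and therefore unhashable.

```python
# Using a dict as a dict key fails as soon as the outer literal
# is constructed, before any lookup is even attempted.
try:
    {{'foo': 'bar'}: 1, {3: 4, 5: 6}: 7}
except TypeError as e:
    print(e)  # unhashable type: 'dict'
```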
There is nothing at all bizarre or unexpected about this. Mutable objects should not be expected to be valid keys for a hash-based mapping — because the entire point of that data structure is to look things up by a hash value that doesn't change, but mutating an object in general changes what its hash should be.
Besides which, looking things up in such a dictionary is awkward.
> More generally, sometimes you have an array, and for whatever reason, it is convenient to use its members as keys.
We call them lists, unless you're talking about e.g. Numpy arrays with a `dtype` of `object` or something. I can't think of ever being in the situation you describe, but if the point is that your keys are drawn from the list contents, you could just use the list index as a key. Or just store key-value tuples. It would help if you could point at an actual project where you encountered the problem.
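To make both suggestions concrete, here's a sketch of each workaround. The `freeze` helper is my own illustration, not a standard function; it assumes the inner dicts' keys and values are themselves hashable.

```python
records = [{'foo': 'bar'}, {3: 4, 5: 6}]

# Workaround 1: key by position in the list.
by_index = {i: d for i, d in enumerate(records)}

# Workaround 2: freeze each dict into a hashable, order-independent
# form. frozenset avoids having to sort items of mixed types.
def freeze(d):
    return frozenset(d.items())

by_content = {freeze(d): 7 for d in records}
print(by_content[freeze({5: 6, 3: 4})])  # 7 -- key order doesn't matter
```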
> At first this seems like a pretty big limitation. But it’s actually the point: You can use PythoC as a code generation system for C programs that run independently, rather than C modules imported into Python.
I have a feeling this model isn't going to be very popular, and they'd be much better off with a way to reuse the compilation result.
> One possibility is that it could integrate more closely with Python at runtime. For instance, a @cached decorator could compile modules once, ahead of time, and then re-use the compiled modules when they’re called from within Python, instead of being recompiled at each run.
Yeah, pretty much what I was thinking. The @compile-decorated function is usable from Python, so the necessary binding logic is already implemented; so surely the decorator could just check a cache to see if the compiled equivalent is already available.
It seems like a powerful and flexible idea overall, though. The toy examples are probably not doing a great job of showcasing what's possible this way.
Putting aside matters of real-world politics and "culture war", my main objection to the PSF is their funding allocation. Almost all core developers are volunteers (aside from a couple of "developer-in-residence" positions; we're talking about on the order of a hundred volunteers here), and presumably not all of them have high-paying day jobs at major tech companies (although certainly a lot of the more easily recognized names do). By contrast, many more administrative staff are paid: not obscenely much, but certainly more than zero.
And then the lion's share of the rest of the budget goes to operating PyCon US. I guess (reviewing the financial reports) this is profitable for them, given that their "operating service revenue" must be pretty close to 100% from that. But it's hard for me to imagine why people would pay those prices to see talks that will later be available on YouTube (and which could almost universally do with a lot of editing) unless they're really there to meet people in person. Meanwhile other meetups worldwide get a pittance in support. I know that there are grants awarded specifically to enable the less privileged to attend PyCon, but the whole thing still strikes me as rather elitist.
But on the other hand, I do sympathize with the Python project (not so much the Foundation) being severely underfunded for what it is. When I see the Wikipedia banners I'm disgusted because I know the Wikimedia Foundation is already, by non-profit "do good things on the Internet" foundation standards, absolutely drowning in cash. But the PSF's existing dues and contributions wouldn't even cover the costs of the "Packaging Work Group / Infrastructure / Other" category in the 2024 breakdown. And that is without considering Fastly's generous in-kind donation of bandwidth (which AIUI is in the exabyte/year range now). The PSF really have not "been paid massively", notwithstanding estimates of the value of that in-kind donation. You can see the numbers for yourself (https://www.python.org/psf/annual-report/2024/).
Type the dict as a Mapping when you want immutability:

    x: Mapping[int, int] = {1: 1}
    x[1] = 2  # Unsupported target for indexed assignment ("Mapping[int, int]")

The only problem I've seen with this is:

    y = {}
    y[x] = 0  # Mypy thinks this is fine. Mapping is hashable, after all!
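And at runtime the annotation changes nothing, of course: the object is still a plain dict, so the assignment mypy accepts will raise.

```python
from typing import Mapping

x: Mapping[int, int] = {1: 1}
y = {}
try:
    y[x] = 0          # passes the type checker, fails at runtime
except TypeError as e:
    print(e)          # unhashable type: 'dict'
```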
The issue here is less that dict isn't hashable than that Mapping is, though.

> It is important to know that NumPy, like Python itself and most other well known scientific Python projects, does not use semantic versioning. Instead, backwards incompatible API changes require deprecation warnings for at least two releases.
- if it’s hard to name, that’s a good sign that you haven’t clearly delineated use case or set of responsibilities for the thing
- best case for a name is that it’s weird and whimsical on first encounter. Then when somebody tells you the meaning/backstory for the name it reveals some deeper meaning/history that makes it really memorable and cements it in your mind
- the single best tech naming thing I’ve encountered (I didn’t come up with it) was the A/B testing team at Spotify naming themselves “ABBA”
As long as you're naming products and features, rather than variables.
The problem was that the assumption that keys would be sorted was frequently true, but never guaranteed. An alternative solution would have been to randomize them more aggressively, but that would probably have broken a lot of old code. Sorting the keys makes no difference if you don't expect them to be sorted, but it does make it a greater surprise if you switch languages.
And the reason we have ordered dict keys now is because it's trivial with the new compact structure (the actual hash table contains indices to an auxiliary array, which can just be appended to with every insertion). It has nothing to do with any randomization of the hashing process.
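Since CPython 3.6 as an implementation detail, and guaranteed by the language from 3.7 on, that compact layout is what makes iteration order equal insertion order, independent of key values or hashes:

```python
d = {}
d[3] = 'c'
d[1] = 'a'
d[2] = 'b'
# Iteration walks the auxiliary entries array in append order,
# not the hash table slots, so insertion order is preserved.
print(list(d))  # [3, 1, 2]
```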