My favorite function here is more_itertools.one. Especially in something like a unit test, where ValueErrors from unexpected conditions are desirable, we can use it to turn code like
results = list(get_some_stuff(...))
assert len(results) == 1
result = results[0]
into
result = one(get_some_stuff(...))
I guess you could also use tuple-unpacking:
result, = get_some_stuff(...)
But the syntax is awkward for unpacking a single item. Doesn't that trailing comma just look implausible? (Also, I've worked with type-checkers that complain when a tuple-unpacking could potentially fail, while one has a clear type signature: Iterable[T] -> T.)
That’s not the same, though. Your unpacking allows any non-empty iterable, while the OP’s one allows only an iterable with exactly one item; otherwise it throws an exception.
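For concreteness, here's a rough stdlib-only sketch of the strictness one provides (simplified; the real more_itertools.one has friendlier error messages and a few extra options):

```python
def one(iterable):
    """Simplified sketch of more_itertools.one: return the sole item,
    raising ValueError if the iterable yields zero or 2+ items."""
    it = iter(iterable)
    try:
        value = next(it)
    except StopIteration:
        raise ValueError("too few items in iterable (expected 1)")
    try:
        next(it)
    except StopIteration:
        return value
    raise ValueError("expected exactly 1 item, but got more")

assert one([42]) == 42

# Starred unpacking accepts any non-empty iterable:
first, *rest = [1, 2, 3]        # fine: first == 1, rest == [2, 3]

# one() rejects the same input, which is exactly what a unit test wants:
try:
    one([1, 2, 3])
except ValueError:
    pass
```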
The proposal seemed very close to getting shipped alongside https://github.com/tc39/proposal-iterator-helpers while basically accepting many of the constraints of current async iteration (one at a time consumption). But the folks really accepted that concurrency needs had evolved, decided to hold back & keep iterating & churning for better.
I feel like a lot of the easily visible mood on the web (against the web) is that there's too much, that stuff just gets piled in. But I see a lot of caring & deliberation & trying to get shit right & good. Sometimes that too can be maddening, but ultimately with the web there aren't really re-dos, & the deliberation is good.
Disclaimer: this code was written several years ago with few downstream users, not all of these are super high performing, and they have not been super extensively tested.
Your nice work on the JS itertools port has a todo for a "better tee". This was my fault because the old "rough equivalent" code in the Python docs was too obscure and didn't provide a good emulation.
Here is an update that should be much easier to convert to JS:
def tee(iterable, n=2):
    iterator = iter(iterable)
    shared_link = [None, None]
    return tuple(_tee(iterator, shared_link) for _ in range(n))

def _tee(iterator, link):
    try:
        while True:
            if link[1] is None:
                link[0] = next(iterator)
                link[1] = [None, None]
            value, link = link
            yield value
    except StopIteration:
        return
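To sanity-check the recipe, here's the same pair of functions with a quick exercise of the key property (each copy iterates independently over a single-pass source); the definitions are copied verbatim from above:

```python
def tee(iterable, n=2):
    iterator = iter(iterable)
    shared_link = [None, None]
    return tuple(_tee(iterator, shared_link) for _ in range(n))

def _tee(iterator, link):
    try:
        while True:
            if link[1] is None:
                # First consumer to reach this link pulls the next value
                # and extends the shared singly linked list.
                link[0] = next(iterator)
                link[1] = [None, None]
            value, link = link
            yield value
    except StopIteration:
        return

a, b = tee(iter(range(4)))
assert next(a) == 0 and next(a) == 1   # a runs ahead
assert list(b) == [0, 1, 2, 3]         # b still sees every value
assert list(a) == [2, 3]               # a resumes where it left off
```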
> But the folks really accepted that concurrency needs had evolved, decided to hold back & keep iterating & churning for better
I'm not sure if it was this proposal or another one in a similar space, but I've recently heard about several async improvements that were woefully under-spec'd, and would likely have caused much more harm than good due to all the edge cases that were missed.
This library is my Python productivity secret weapon. So many things I've needed to implement in the past are now just chaining functions from itertools, functools, and this.
Nice! These can make code a ton simpler. Also no Python dependencies, which is a requirement for me to adopt it. Would love to see this brought into the standard lib at some point.
It’s possible, but it tends not to be common for a multitude of reasons. The biggest issue is that library updates become synced to Python version updates, which doesn’t provide a lot of flexibility. A package would have to be exceptionally stable to be a reasonable candidate.
That’s not actually true. While dataclasses took most of its inspiration from attrs, there are many features of attrs that were deliberately not implemented in dataclasses, just so it could “fit” in the stdlib.
Or maybe you mean the backport of dataclasses to 3.6 that is available on PyPI? That actually came after dataclasses was added to 3.7.
Yes, I think it might have been a slight design mistake to make the variadic version the default. I've only very rarely used it, whereas I use chain.from_iterable a lot.
Maybe in some cases, but the performance characteristics are way different. The functions in `more_itertools` return lazy generators, whereas NumPy's `ndarray.flatten` materializes the result in a new ndarray.
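The difference matters as soon as the input is large or infinite. A lazy flatten (here the stdlib chain.from_iterable, as a stand-in for the more_itertools version) can pull from an endless stream of iterables, which no materializing version could:

```python
from itertools import chain, count, islice

# An infinite stream of small iterables: [0, 0], [1, 1], [2, 2], ...
lazy = chain.from_iterable([n, n] for n in count())

# Laziness means we can still take just the first few flattened values:
assert list(islice(lazy, 6)) == [0, 0, 1, 1, 2, 2]
```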
https://pypi.org/project/boltons/
result, *_ = iterable()
For an idea of the process followed, look up PEP417 (Python Enhancement Proposal).
Source: I wrote dataclasses.
from itertools import chain
flatten = chain.from_iterable
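For example, that two-line alias flattens exactly one level and stays lazy:

```python
from itertools import chain

flatten = chain.from_iterable

assert list(flatten([[1, 2], [3], [4, 5]])) == [1, 2, 3, 4, 5]

# Only one level deep: nested lists survive intact.
assert list(flatten([[1, [2]], [3]])) == [1, [2], 3]
```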
Ref: pytudes - https://github.com/norvig/pytudes/blob/main/ipynb/Advent-202...