from unittest.mock import Mock

# DockerRegistryClient and get_repos_w_tags_drc are assumed to come
# from the article's example code.
def test_empty_drc():
    drc = Mock(
        spec_set=DockerRegistryClient,
        get_repos=lambda: [],
    )
    assert {} == get_repos_w_tags_drc(drc)
Maybe it's just a poor example to make the point. I personally think it's the wrong point to make. I would argue: don't mock anything _at all_ – unless you absolutely have to. And if you have to mock, by all means mock code you don't own, as far _down_ the stack as possible. And only mock your own code if it significantly reduces the amount of test code you have to write and maintain.

I would not write the test from the article in the way presented. I would capture the actual HTTP responses and replay those in my tests. It is a completely different approach.
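To make that concrete, here is a rough sketch of the record/replay idea using vcrpy (one of several libraries for this); the registry URL and the get_repos helper are made up for illustration:

import requests
import vcr


def get_repos(base_url):
    # Hypothetical helper that talks to a real Docker registry.
    return requests.get(f"{base_url}/v2/_catalog").json()["repositories"]


# Records the real HTTP traffic into the cassette on the first run,
# then replays it on every subsequent run without touching the network.
@vcr.use_cassette("tests/cassettes/catalog.yaml")
def test_get_repos_replays_recorded_responses():
    repos = get_repos("https://registry.example.com")
    assert isinstance(repos, list)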
The question of "when to mock" is very interesting and dear to my heart, but it's not the question this article is trying to answer.
If you care about being alerted when your dependencies break, writing only the kind of tests described in the article is risky. You’ve removed those dependencies from your test suite. If a minor library update changes `.json()` to `.parse(format="json")`, and you assumed they followed semver but they didn’t: you’ll find out after deployment.
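Here is a hypothetical sketch of that failure mode (the client and function names are invented): a bare Mock happily answers `.json()` forever, so the test stays green even after the real library renames the method.

from unittest.mock import Mock


def repo_names(client):
    # Still calls .json() -- the method the (hypothetical) library just renamed.
    return client.get("/v2/_catalog").json()["repositories"]


def test_repo_names_stays_green_despite_broken_dependency():
    client = Mock()
    client.get.return_value.json.return_value = {"repositories": ["alpine"]}
    # Passes in CI, breaks in production.
    assert repo_names(client) == ["alpine"]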
Ah, but you use static typing? Great! That’ll catch some API changes. But if you discover an API changed without warning (because you thought nobody would ever do that), you’re on your own again. I suggest using a nice HTTP recording/replay library for your tests so you can adapt easily (without making live HTTP calls in your tests, which would be way too flaky, even if feasible).
I stopped worrying long ago about what is or isn’t “real” unit testing. I test as much of the software stack as I can. If a test covers too many abstraction layers at once, I split it into lower- and higher-level cases. These days, I prefer fewer “poorly” factored tests that cover many real layers of the code over countless razor-thin unit tests that only check whether a loop was implemented correctly, while risking that the whole system doesn’t work together. Because by the time you get to write your system/integration/whatever tests, you’re already exhausted from writing and refactoring all those near-pointless micro-tests.
You make it sound as if the article argues for test isolation, which it emphatically doesn't. In fact, it even links out to the Mock Hell talk.
Every mock makes the test suite less meaningful, and the question the article is trying to answer is how to minimize the damage mocks do to your software if you actually need them.
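One way to contain that damage (and what `spec_set` in the snippet at the top is getting at) is to build the mock from the real class, so that a renamed or removed method blows up in the test run instead of in production. A sketch using create_autospec, which additionally checks call signatures; the client class here is a made-up stand-in:

from unittest.mock import create_autospec


class RegistryClient:
    # Stand-in for the real third-party client.
    def get_repos(self):
        ...


def test_specced_mock_rejects_stale_api():
    client = create_autospec(RegistryClient, instance=True, spec_set=True)
    client.get_repos.return_value = []

    # If the real class ever renames or removes get_repos, the attribute
    # access above raises AttributeError right here in the test run
    # instead of passing silently like a bare Mock() would.
    assert client.get_repos() == []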
I just pasted the YouTube link into AI Studio and gave it this prompt if you want to replicate:
reformat this talk as an article. remove ums/ahs, but do not summarize, the context should be substantively the same. include content from the slides as well if possible.
I tried pulling out the YouTube transcript, but it was very uncomfortable to read with asides and jokes and "ums" that are all native artifacts of speaking in front of a crowd but that only represent noise when converted to long written form.
---
FWIW, I'm the speaker and let me be honest with you: I'm super unmotivated to write nowadays.
In the past, my usual MO was writing a bunch of blog posts and submitting the ones that resonated to CfPs (e.g. <https://hynek.me/articles/python-subclassing-redux/> → <https://hynek.me/talks/subclassing/>).
However, nowadays, thanks to the recent-ish changes at Twitter and Google, my only chance to have my stuff read by a nontrivial number of people is hitting the HN front page, which is a lottery. It's so bad I even got into YouTubing to get a roll at the algorithm wheel.
It takes (me) a lot of work to crystallize and compress my thoughts like this. Giving it as a talk at a big conference at least opens the door to interesting IRL interactions, which are important (to me), because I'm an introvert.
I can't stress enough how we're currently eating the seed corn by killing the public web.
Can you elaborate?
import attr


@attr.s
class C:
    x = attr.ib()
as its main API (with `attr.attrs` and `attr.attrib` as serious-business aliases so you didn't have to use it). That API was always polarizing: some loved it, some hated it.
I will point out, though, that it predates type hints and was an effective way to declare classes with little "syntax noise", which made it easy to write but also easy to read, because you used the import name as part of the API.
Here is more context: https://www.attrs.org/en/stable/names.html
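For comparison, and assuming a reasonably recent attrs release (the `attrs` namespace import is newer than the `attr` one), the modern type-annotated spelling of the same class looks like this; see the linked page for the full naming story:

import attrs


@attrs.define
class C:
    x: int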
I REGRET NOTHING
Genuine question: we are drowning in options here!
I personally love PDM, and PDM is in the process of adopting uv’s lower-level functionality to install/resolve packages, but I can see how having a single binary for bootstrapping a whole dev environment is really nice.
In the end, uv’s biggest upside is that it has several people working 8h/day on it, and one would be surprised how much can be achieved in that amount of time.
uv is meant to supplant Rye eventually (it mostly already has: see also this post by the creator of Rye: <https://lucumr.pocoo.org/2024/8/21/harvest-season/>). But you can’t put a virtualenv into a Kubernetes, so Docker containers are still interesting if that’s something you want to do.