Save that as script.py and you can use "uv run script.py" to run it with the specified dependencies, magically installed into a temporary virtual environment without you having to think about them at all.
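It's an implementation of Python PEP 723: https://peps.python.org/pep-0723/. The article's script isn't reproduced in this thread, so here's a minimal sketch of the shape being described (the dependency is illustrative):

# /// script
# requires-python = ">=3.12"
# dependencies = [
#     "requests",
# ]
# ///
import requests

print(requests.get("https://example.com").status_code)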
Claude 4 actually knows about this trick, which means you can ask it to write you a Python script "with inline script dependencies" and it will do the right thing, e.g. https://claude.ai/share/1217b467-d273-40d0-9699-f6a38113f045 - the prompt there was:
Write a Python script with inline script
dependencies that uses httpx and click to
download a large file and show a progress bar
Prior to Claude 4 I had a custom Claude project that included special instructions on how to do this, but that's not necessary any more: https://simonwillison.net/2024/Dec/19/one-shot-python-tools/
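For reference, a script of the shape that prompt asks for might look like this (a sketch, not Claude's actual output; names and flags are illustrative):

# /// script
# requires-python = ">=3.12"
# dependencies = ["httpx", "click"]
# ///
import httpx
import click

@click.command()
@click.argument("url")
@click.argument("output", type=click.Path())
def download(url, output):
    """Stream URL to OUTPUT, showing a progress bar."""
    with httpx.stream("GET", url, follow_redirects=True) as response:
        response.raise_for_status()
        # Content-Length may be absent; the bar length then falls back to 0
        total = int(response.headers.get("Content-Length", 0))
        with click.progressbar(length=total, label="Downloading") as bar:
            with open(output, "wb") as f:
                for chunk in response.iter_bytes():
                    f.write(chunk)
                    bar.update(len(chunk))

if __name__ == "__main__":
    download()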
I really love this and I've been using it a lot. The one thing I'm unsure about is the best way to get my LSP working with inline dependencies.
Usually, when I use uv along with a pyproject.toml, I'll activate the venv before starting neovim, and then my LSP (basedpyright) is aware of the dependencies and it all just works. But with inline dependencies, I'm not sure what the best way to do this is.
I usually end up just manually creating a venv with the dependencies so I can edit inside of it, but then I run the script using the shebang/inline dependencies when I'm done developing it.
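For what it's worth, newer uv releases can do this without a hand-built venv (hedged: these subcommands are relatively recent additions, so check your version):

uv sync --script script.py        # materialise the script's cached environment from its inline metadata
uv python find --script script.py # print the path of that environment's interpreter

Pointing basedpyright (or any LSP) at the interpreter the second command prints gets the inline dependencies resolving like a normal project.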
I plan to support it in PAPER as well. A big part of the original rationale for PEP 723 (and 722) was to support people who want to distribute a single Python file to colleagues etc. without worrying about Python packaging (or making a "project"), and allow them to run it with the appropriate dependencies and their own local Python. So while I am trying to make a tool for users (that incidentally covers a portion of developer use cases) rather than developers, this definitely fits.
My question is: if you already have to put the "import" there, why not have a PEP to specify the version in that same import statement, and let `uv` scan that dependency? It's something I never understood.
My own take:
PEP 723 had a deliberate goal of not making extra work for core devs, and not requiring tools to parse Python code. Note that if you put inline comments on the import lines, that wouldn't affect core devs, but would still complicate matters for tools. Python's import statement is just another statement that occurs at runtime and can appear anywhere in the code.
Besides that, trying to associate version numbers with the imported module is a complete non-starter. Version numbers belong to the distributions[1], which may define zero or more top-level packages. The distribution name may be completely independent from the names of what is imported, and need not even be a valid identifier (there is a standard normalization process, but that still allows your distribution's name to start with a digit, for example).
[1]: You say "packages", but I'm trying to avoid the unfortunate overloading of the term. PyPA recommends (https://packaging.python.org/en/latest/discussions/distribut...) "distribution package" for what you download from PyPI, and "import package" for what you import in the code; I think this is a bit unwieldy.
See also https://peps.python.org/pep-0723/#why-not-infer-the-requirem...
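A concrete illustration of the name mismatch, using pillow (a real case where the distribution name and the import name differ):

# /// script
# dependencies = [
#     "pillow>=10",  # distribution name on PyPI; version constraints attach here
# ]
# ///
from PIL import Image  # the import name is PIL, not pillow

print(Image.new("RGB", (1, 1)).size)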
Hahahahaha. Oh. I'm rolling on the floor. Hahahahaha.
How do you never learn? No, honestly, how do you never learn this simple thing: it will break! I will bet my pension that it will break, and perhaps not for you, but hundreds of developers will have to debug and figure out where the dependencies went, why they weren't installed correctly, or why something was missing, and so on.
There will never be a situation where you don't have to think about something as important as dependencies at all.
> Save that as script.py and you can use "uv run script.py" to run it with the specified dependencies,
Be aware that uv will create a full copy of that environment for each script by default. Depending on your number of scripts, this could become wasteful really fast. There is a flag "--link-mode symlink" which will link the dependencies from the cache. I'm not sure why this isn't the default, or which disadvantages this has, but so far it's working fine for me, and saved me several gigabytes of storage.
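For what it's worth, the same knob can also be set per-invocation via an environment variable (flag and variable spellings as per uv's docs; the script name is illustrative):

UV_LINK_MODE=symlink uv run script.py

or baked into the script itself:

#!/usr/bin/env -S uv run --script --link-mode symlink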
This is cool, but honestly I wish it were built-in language syntax, not a magic comment; magic comments are kind of ugly. Maybe some day…
(I realise there are some architectural issues with making it built-in syntax: magic comments are easier for external tools to parse, whereas the Python core has very limited knowledge of packaging and dependencies… still, one of these days…)
It IS built-in language syntax. It's defined in the PEP, that's built-in. It's syntax:
"Any Python script may have top-level comment blocks that MUST start with the line # /// TYPE where TYPE determines how to process the content. That is: a single #, followed by a single space, followed by three forward slashes, followed by a single space, followed by the type of metadata. Block MUST end with the line # ///. That is: a single #, followed by a single space, followed by three forward slashes. The TYPE MUST only consist of ASCII letters, numbers and hyphens."
I'm not a Python dev, but had to write a script the other day and got all caught up in the virtual env stuff. Why can't `uv` just infer the dependencies from the `import ...` line? Why declare the dependencies twice?
Python import names are not necessarily unique, nor the same as the package name on PyPI/pip. Something like PyYaml is imported as yaml, and other packages could in principle supply a slightly different yaml to import.
One gotcha I caught myself in with this technique is using it in a script that would remediate a situation where my home has lost internet and needed the router to be power cycled. When the internet is out, `uv` cannot download the dependencies specified in the script, and the script would fail. Thankfully I noticed this problem after writing it but before needing it to actually work, and refactored my setup to pre-install the needed dependencies. But don't make the same mistake I almost made! Don't use this for code that may need to run airgapped! Even with uv caching you may still get a cache miss.
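One hedged mitigation, assuming a uv recent enough to sync script environments (the script name is illustrative): pre-resolve while you still have connectivity, then force uv off the network at run time with its --offline flag:

uv sync --script fix_router.py   # while online: resolve and cache the dependencies
uv run --offline fix_router.py   # later: use only the local cache, never the network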
Here's a fun one (says he, in a panic): Will we get to a point where (through fault or, gasp, design) a given AI will generatively code a missing dependency, on the fly - perhaps as a "last ditch" effort?
(I can imagine languages having official LLMs which would more or less "compress/know" enough of the language to be an import of last resort, of sorts, by virtue of which an approximation to the missing code would be provided.)
I'm pretty sure we've already reached a point where people generatively code possibly-malicious packages to publish under names of non-existent packages commonly hallucinated by LLMs prompted by other devs.
This is my absolute favourite uv feature and the reason I switched to uv.
I have a bunch of scripts in my git-hooks that have dependencies I don't want in my main venv.
#!/usr/bin/env -S uv run --script --python 3.13
This single feature meant that I could use those dependencies without hand-building a venv for the hooks; the only setup instruction for the devs is "brew install uv".
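As a sketch, a whole hook in that style might look like this (the hook body is illustrative, not the parent commenter's actual script):

#!/usr/bin/env -S uv run --script --python 3.13
# /// script
# requires-python = ">=3.13"
# dependencies = ["rich"]
# ///
# .git/hooks/pre-commit: runs in its own throwaway environment, not the project venv
import subprocess
import sys

from rich.console import Console

console = Console()
result = subprocess.run(
    ["git", "diff", "--cached", "--name-only"],
    capture_output=True, text=True, check=True,
)
staged = [f for f in result.stdout.splitlines() if f.endswith(".py")]
console.print(f"[bold]{len(staged)}[/bold] staged Python file(s)")
sys.exit(0)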
If I may interject, does anyone have any idea why the `-S` flag is required? Testing this out on my machine with a BSD env, `/usr/bin/env -S uv run --python 3.11 python` and `/usr/bin/env uv run --python 3.11 python` do the same thing (launch a Python interactive shell). Man page for env doesn't clarify a lot to me, as the end result still seems to be the same("Split apart the given string into multiple strings, and process each of the resulting strings as separate arguments to the env utility...")
$ cat test.sh
#!/usr/bin/env bash -c "echo hello"
$ ./test.sh
/usr/bin/env: ‘bash -c "echo hello"’: No such file or directory
/usr/bin/env: use -[v]S to pass options in shebang lines
$ ./test.sh # with -S
hello
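The short version: on Linux the kernel passes everything after the interpreter path in a shebang as a single argument, so env goes looking for a program literally named `bash -c "echo hello"` (the error above says as much). -S tells env to split that string into separate arguments itself. Some BSD-derived systems apparently split shebang arguments before env ever sees them, which would explain both forms behaving the same there. The working shebang for the demo:

#!/usr/bin/env -S bash -c "echo hello"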
Completely agree. uv forestalled what was going to be a major project to move lots of Python to golang. There will still be a lot migrated, but smaller script-like things are no longer in scope.
I still write some small scripts in golang when the startup time is important. Python still takes its time to boot up, and it's not the best tool for the job if it's going to be called like a shell utility for thousands of files, for example.
Note that this only works for single-file scripts.
If you have a project with modules, and you'd like a module to declare its dependencies, this won't work. uv will only get those dependencies declared in the invoked file.
For a multi-file project, you must have a `pyproject.toml`, see https://docs.astral.sh/uv/guides/projects/#managing-dependen...
In both cases, the script/project writer can use `uv add <dependency>`; it's just that in the single-file case they must also pass `--script`.
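Both workflows in command form (the dependency and the pin are illustrative):

uv add --script script.py "httpx>=0.27"   # single file: edits the # /// script block in place
uv add "httpx>=0.27"                      # in a project: edits pyproject.toml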
I love this feature of uv but getting linters/language servers to pick up the venv when editing the files is a bit of a pain. I currently have a script 'uv-edit' which I am using to run Neovim with the correct environment:
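The script itself isn't shown in the comment, but here is a sketch of one way to build such a wrapper, assuming the `uv sync --script` and `uv python find --script` subcommands from newer uv releases:

#!/bin/sh
# uv-edit (sketch): open the editor on a PEP 723 script with its environment active
set -e
script="$1"
uv sync --script "$script"                                  # materialise the script's environment
bin_dir="$(dirname "$(uv python find --script "$script")")" # .../<venv>/bin
VIRTUAL_ENV="${bin_dir%/bin}" PATH="$bin_dir:$PATH" "${EDITOR:-nvim}" "$script"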
Love this feature of UV. Here's a one-liner to launch jupyter notebook without even "installing" it:
uv run --with jupyter jupyter notebook
Everything is put into a temporary virtual environment that's cleaned up afterwards. Best thing is that if you run it from a project it will pick up those dependencies as well.
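The --with flag can be repeated, so you can pull extra libraries into the same throwaway session (a hedged variant of the one-liner above):

uv run --with jupyter --with pandas --with matplotlib jupyter notebook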
I wish there was a straightforward way to let VS Code pick up the venv that uv transparently creates.
Out of the box, the Python extension redlines all the third-party imports.
As a workaround, I have to plunge into the guts of uv's Cache directory to tell VS Code the cached venv path manually, and cross fingers that it won't recreate that venv too often.
Could you tell me how you do this, concretely? I started using uv on a very old-school team and they don't like uv simply because it's new; things like this are what they base that on.
It's super cumbersome so I can't really recommend it for work.
I invoke the "Select Interpreter" action. A file selector opens, then I go to the user cache directory (e.g. ~/.cache on Linux, something like %LOCALAPPDATA%\Cache on Windows). It has a `uv` subdirectory, then I drill further down until I find the directory where uv keeps its venvs. Find the venv that corresponds to your script, then go to its `bin` subdirectory and select the Python executable.
The upside is that you only have to do this once per script.
The downside is that you have to do this once per script.
I've been working on a VSCode extension to automatically detect and activate the right interpreter: https://marketplace.visualstudio.com/items?itemName=nsarrazi...
On Guix you can get a similar effect with `guix shell` [0], using a shebang like
#!/usr/bin/env -S guix shell python python-requests python-pandas -- python3
in scripts for including per-script dependencies. This is language agnostic as long as the interpreter and its dependencies are available as guix packages. I think there may be a similar approach for utilizing nix shells that way as well.
[0]: https://guix.gnu.org/manual/en/html_node/Invoking-guix-shell...
Other tools like pip-tools have support on the roadmap (https://github.com/jazzband/pip-tools/issues/2027).
This gave me the questionable idea of doing the same sort of thing for Go: https://github.com/imjasonh/gos
(Not necessarily endorsing, I was just curious to see how it would go, and it worked out okay!)
https://gist.github.com/JetSetIlly/97846331a8666e950fc33af9c...
> Be aware that uv will create a full copy of that environment for each script by default.
The hard-link strategy still saves disk space, although not as much (see other replies and discussion).
Possible reasons not to symlink that I can think of:
* stack traces in uncaught exceptions might be wonky (I haven't tried it)
* it presumably adds a tiny hit to imports from Python, but I find it hard to imagine anyone caring
* with the hard-linking strategy, you can purge the cache (perhaps accidentally?) without affecting existing environments
Something like `pip install -r <(head myscript.py)`. (Not exactly, but you get the idea).
A one-liner with marimo that also respects (and records) inline script metadata using uv:
uvx marimo edit --sandbox my_notebook.py
https://docs.astral.sh/uv/guides/integration/marimo/