frou_dh · 2 years ago
To me the Astral folks have a lot of credibility because both their ruff linter and formatter have been fantastic. Elevates this kind of announcement from yet another Python packaging thingy to something worth paying attention to.

I like the idea of that single-file Python script with inline dependency info construct, but it's probably going to be a bummer in terms of editor experience. I doubt the typical LSP servers will be aware of dependencies specified in that way and so won't offer autocompletion etc for those libraries.

zanie · 2 years ago
Good thing we've been investing in our LSP server lately[1]! We're still a ways out from integrating uv into the LSP (and, more generally, providing auto-completions) but it's definitely on our minds.

The script dependency metadata _is_ standardized[2], so other LSP servers could support a good experience here (at least in theory).

[1] The Ruff Language Server: https://astral.sh/blog/ruff-v0.4.5

[2] Inline script metadata: https://peps.python.org/pep-0723/

drawnwren · 2 years ago
Do you intend to get to feature parity with pyright for ruff-lsp? Despite using it daily, I wasn't aware you were aiming to make it a full-fledged LSP.
oblvious-earth · 2 years ago
> I doubt the typical LSP servers will be aware of dependencies specified in that way and so won't offer autocompletion etc for those libraries.

Given this is a recently accepted standard (PEP 723), why would language servers not start to support features based on it?

frou_dh · 2 years ago
Well it's not that they can't, but it's definitely work because it's a departure from the traditional model.

Consider an editor feature, e.g. goto-definition. When working in a normal Python environment (global or virtual) the code of your dependencies actually exists on the filesystem. With one of these scripts with inline dependency information, that dependency code perhaps doesn't exist on the filesystem at all, and possibly won't ever until after the script has been run in its special way (e.g. with a `pipx run` shebang?).

0cf8612b2e1e · 2 years ago
I love the idea of Rye/uv/PyBi for managing the interpreter, but I get queasy that they are not official Python builds. Probably no issue, but it only takes one subtle bug to ruin my day.

Plus the potential for supply-chain attacks. I know official Python releases are as good as I can expect from a free open-source project, while the third-party Python distribution is probably being built in Nebraska.

zanie · 2 years ago
We'd love it if there _were_ official portable Python binaries, but there just aren't. We're not just distributing someone else's builds though, we're actively getting involved in the project, e.g., we did the last five releases.

We've invested quite a bit of effort into finding system Python interpreters though and support for bringing your own Python versions isn't going anywhere.

westurner · 2 years ago
I'm from Nebraska. Unfortunately if your Python is compiled in a datacenter in Iowa, it's more likely that it was powered with wind energy. Claim: Iowa has better Clean Energy PPAs for datacenters than Nebraska (mostly due to rational wind energy subsidies).

Anyways: software supply-chain security means signing Python and package builds, and then signing the containers too.

Conda-forge's builds are probably faster than the official CPython builds. conda-forge/python-feedstock/recipe/meta.yaml: https://github.com/conda-forge/python-feedstock/blob/main/re...

Conda-forge also has OpenBLAS, BLIS, Accelerate, Netlib, and Intel MKL; conda-forge docs > switching BLAS implementation: https://conda-forge.org/docs/maintainer/knowledge_base/#swit...

From "Building optimized packages for conda-forge and PyPI" at EuroSciPy 2024: https://pretalx.com/euroscipy-2024/talk/JXB79J/ :

> Since some time, conda-forge defines multiple "cpu-levels". These are defined for sse, avx2, avx512 or ARM Neon. On the client-side the maximum CPU level is detected and the best available package is then installed. This opens the doors for highly optimized packages on conda-forge that support the latest CPU features.

> We will show how to use this in practice with `rattler-build`

> For GPUs, conda-forge has supported different CUDA levels for a long time, and we'll look at how that is used as well.

> Lastly, we also take a look at PyPI. There are ongoing discussions on how to improve support for wheels with CUDA support. We are going to discuss how the (pre-)PEP works and synergy possibilities of rattler-build and cibuildwheel

Linux distros build and sign Python and python3-* packages with GPG keys or similar, and the package manager then optionally checks the per-repo keys for each downloaded package. Packages should include a manifest of files to be installed, with per-file checksums. Package manifests, and/or the package containing the manifest, should be signed so that tools like debsums and `rpm --verify` can detect changes to disk-resident executables, scripts, data assets, and configuration files.
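The per-file checksum idea that debsums / `rpm --verify` implement can be sketched in a few lines (a toy illustration, not any distro's actual manifest format):

```python
import hashlib
import tempfile
from pathlib import Path

def build_manifest(root: Path) -> dict[str, str]:
    """Map each file's path (relative to root) to its SHA-256 hex digest."""
    return {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(root.rglob("*"))
        if p.is_file()
    }

def verify(root: Path, manifest: dict[str, str]) -> list[str]:
    """Return the relative paths whose on-disk contents no longer match."""
    current = build_manifest(root)
    return [path for path, digest in manifest.items() if current.get(path) != digest]

# Demo against a throwaway tree: record a manifest, tamper, detect the change.
with tempfile.TemporaryDirectory() as d:
    root = Path(d)
    (root / "bin").mkdir()
    (root / "bin" / "tool").write_text("#!/bin/sh\necho ok\n")
    manifest = build_manifest(root)
    (root / "bin" / "tool").write_text("#!/bin/sh\necho pwned\n")  # tamper
    changed = verify(root, manifest)

print(changed)  # ['bin/tool']
```

The real tools add the crucial extra step: the manifest itself is covered by the package signature, so an attacker can't just rewrite the checksums along with the files.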

virtualenvs can be mounted as a volume at build time with -v with some container image builders, or copied into a container image with the ADD or COPY instructions in a Containerfile. What is added to the virtualenv should have a signature and a version.
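As a sketch of the COPY route (image name, paths, and the `myapp` module are illustrative, and copying a venv between hosts assumes matching platform and interpreter paths):

```dockerfile
# Illustrative Containerfile: bake a prebuilt, pinned virtualenv into the image
FROM python:3.12-slim
COPY .venv /app/.venv
# Put the venv's executables first on PATH so `python` resolves into it
ENV PATH="/app/.venv/bin:$PATH"
CMD ["python", "-m", "myapp"]
```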

ostree native containers are bootable host images that can also be built and signed with a SLSA provenance attestation; https://coreos.github.io/rpm-ostree/container/ :

  rpm-ostree rebase ostree-image-signed:registry:<oci image>
  rpm-ostree rebase ostree-image-signed:docker://<oci image>
> Fetch a container image and verify that the container image is signed according to the policy set in /etc/containers/policy.json (see containers-policy.json(5)).

So, when you sign a container full of packages, you should check the package signatures; and verify that all package dependencies are identified by the SBOM tool you plan to use to keep dependencies upgraded when there are security upgrades.

e.g. Dependabot - if working - will regularly run and send a pull request when it detects that version strings in e.g. a requirements.txt or environment.yml file are out of date and need to be changed because of security vulnerabilities reported in ossf/osv-schema format.
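The core check such a bot performs is simple, at least conceptually. A toy sketch against a simplified OSV-style advisory (real advisories carry richer event lists, and real tools use `packaging.version` rather than naive tuple comparison):

```python
def vtuple(version: str) -> tuple[int, ...]:
    """Naive numeric version key; real tools use packaging.version."""
    return tuple(int(part) for part in version.split("."))

def is_affected(version: str, introduced: str, fixed: str) -> bool:
    """OSV ranges are half-open: introduced <= version < fixed."""
    return vtuple(introduced) <= vtuple(version) < vtuple(fixed)

# Hypothetical pinned requirements and a simplified advisory record.
requirements = "requests==2.30.0\nrich==13.7.0\n"
pins = dict(line.split("==") for line in requirements.splitlines() if "==" in line)
advisory = {"package": "requests", "introduced": "0", "fixed": "2.31.0"}

needs_bump = is_affected(pins[advisory["package"]],
                         advisory["introduced"], advisory["fixed"])
print(needs_bump)  # True: a bot would open a PR bumping the pin
```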

Is there already a way to, as a developer, sign Python packages built with cibuildwheel with Twine and TUF or sigstore to be https://SLSA.dev/ compliant?

dang · 2 years ago
We changed the URL from https://github.com/astral-sh/uv/releases/tag/0.3.0 to the project page. Interested readers probably should look at both.
claytonjy · 2 years ago
Is there any reason to still use Rye now? Looks like this release adds all the things I would have missed from Rye, but I don't think I use all of Rye's features.
charliermarsh · 2 years ago
kzrdude · 2 years ago
I think uv hits everything that rye does, and with a solid implementation.

With love, rye is all vision, philosophy and duct tape. uv is built by a full-time team.

thenipper · 2 years ago
Aren't they the same team?
jdnier · 2 years ago
See "Rye and Uv: August Is Harvest Season for Python Packaging"[1]

[1] https://lucumr.pocoo.org/2024/8/21/harvest-season/

Lorak_ · 2 years ago
Does it support building native extensions and Cython modules, or is setuptools still the only reasonable way to do this?
Mehdi2277 · 2 years ago
uv is an installer, not a build backend. It's similar to pip: if you install a library with uv, it will call a build backend like setuptools as needed. It is not a replacement for setuptools.
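Concretely, the build backend is whatever the project declares in its pyproject.toml `[build-system]` table (PEP 517/518); installers like pip and uv read this and invoke that backend, e.g.:

```toml
[build-system]
requires = ["setuptools>=64"]
build-backend = "setuptools.build_meta"
```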
visarga · 2 years ago
Does it support NumPy and PyTorch?
eaq · 2 years ago
The astral team is definitely doing great work, and it's wonderful that these tools are permissively licensed, but what happens if astral doesn't work out as a business?
taraskuzyk · 2 years ago
Does anyone know if Astral has plans to build an LSP like pyright? There are many projects that try to replicate pyright's functionality, pylyzer comes to mind, but don't have sufficient coverage (e.g. missing Generic support). Having a team like Astral's behind creating a fast and good LSP for Python would be great.
pantsforbirds · 2 years ago
Ruff (the linter/formatter from Astral) has its own LSP right now: https://github.com/astral-sh/ruff-lsp. Although Ruff itself now ships with a language server already integrated, so I have no idea what the plan is long term.
pietz · 2 years ago
What does the Astral team recommend for setting up their tools on a new machine? Install uv with curl, manage python versions and projects from there and install ruff with uv?
milliams · 2 years ago
Not on the Astral team, but to the first step, I'd get uv from your distro package manager (e.g. https://build.opensuse.org/package/show/openSUSE:Factory/uv) and then the rest as you say ("manage python versions and projects from there and install ruff with uv").

If you have some other tool manager on your system (e.g. mise) then you can likely install uv through that.

simonw · 2 years ago
Yeah, I think so. Their documentation includes a note on how to bootstrap from a fresh Docker image here: https://docs.astral.sh/uv/guides/integration/docker/