Being new to python, the astral stuff is such a relief.
I don’t think experienced Python folks realize how much the flexible tooling slows people down and creates friction for adopters. Setting up my project I tried 3 environment managers, 2 type checkers, 3 formatters/linters, 3 packagers/dependency/project managers.
I know this is kinda the n+1 issue where astral is just adding another. But it feels like a complete and well-designed stack, not a box of parts encouraging me to build my own. Reminds me a bit of Go tooling.
I am a convert. I’ll happily jump on their type checker and packager when ready.
>Setting up my project I tried 3 environment managers, 2 type checkers, 3 formatters/linters, 3 packagers/dependency/project managers.
I've been using Python for right about 20 years now. I've been through various prior iterations of the "standard" packaging tools. But I've never found myself wanting a type checker, formatter or linter; and aside from a brief period with Poetry (in which I used barely any of its functionality), I've had no interest in dependency management tools, either. What I have found myself wanting is fixes for the problems in Pip.
Different people work on different kinds of projects, and approach the language differently, and have radically different needs as a result.
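> I've never found myself wanting a type checker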
Do you still write type hints, but not check them? Or just use dynamic typing as in Python 2?
Actually, I guess the first case is really type-hints-as-comments, so isn't substantially different from the second.
> I've been using Python for right about 20 years now
Either way, I've found it quite common among long-time Python users to stick with Python's traditional dynamic typing, whereas newcomers to the language tend to really dislike it in favour of strict static typing. (One exception is GvR himself, who seems to be very pro-type-checking.)
As the second group continue to add increasingly complex typing features to the language, I wonder if Python is becoming less desirable for those in the first group.
>Do you still write type hints, but not check them? Or just use dynamic typing
I occasionally write them as a form of documentation - when it would be more concise than explaining in prose.
There is nothing specifically 2.x-style about making use of dynamic typing - it's a core property of the language. While 3.0 introduced the annotation feature, it wasn't even specifically intended for typing (https://peps.python.org/pep-3107/) and standard library support wasn't added until 3.5 (https://docs.python.org/3/library/typing.html).
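(For illustration, not from the thread: a minimal sketch of annotations as pure documentation. Python stores them in __annotations__ but never checks them at runtime; the string annotation is legal per PEP 3107.)

    # Hypothetical example: the annotations carry no runtime meaning.
    def scale(values: "iterable of numbers", factor: float) -> list:
        """Multiply each value by factor."""
        return [v * factor for v in values]

    print(scale.__annotations__)
    # {'values': 'iterable of numbers', 'factor': <class 'float'>, 'return': <class 'list'>}
    print(scale((1, 2, 3), 2))  # [2, 4, 6] - nothing enforces the hints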
>As the second group continue to add increasingly complex typing features to the language, I wonder if Python is becoming less desirable for those in the first group.
Some new changes have been missteps, in my view. There's still nobody forcing my hand when it comes to typing, really; I just shake my head when I see people trying to make it do things it really wasn't designed for. On the other hand, there are definitely things I'd love to see improved about the language.
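What new changes do you think have been missteps?
Mmh - that's difficult. I should dedicate a blog article to that topic. Probably can't get to it until next month, though.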
Working on a team in a larger python application, type checkers and linters are such a time saver. It's so nice not to think about how I'd like to format my code any longer. We've made some tweaks to our ruff rules to suit the team's opinion, and now even when I don't _love_ the way it formats a particular part of code, I just move on and get something productive done instead.
And type checking is so great for both preventing bugs (which it does all the time) and self-documenting code. Can't recommend them enough.
It takes out so much frustration too. No more nitpicking about style/... in merge requests, the pre-commit hook will fix it and CI will catch it if you don't.
The more of this you can automate, the more you get to spend on "real work".
In my organisation, some co-workers used to write `def func(*args, **kwargs)` all the time.
That made it so tiring to work out what you should actually pass as arguments. Type checking is mandatory for a well-organized large project.
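Yes yes yes.
I have my own formatting preferences, but my desire to not have to think about them or go to meetings debating them is much, much greater.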
I’ve been using it for around the same time, and never cared much about formatters. Linting is useful, but I wouldn’t call it essential. But type checking is non-negotiable on any project I lead. It’s not perfect by any means, but it beats having to crawl my way through a call stack trying to figure out what the hell a function is expected to take and why it’s getting a None instead.
I have yet to find a single drawback of adopting mypy (or similars) that isn’t completely eclipsed by the benefits.
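(A minimal sketch, mine rather than the commenter's, of the None case described above; the function name is invented.)

    from typing import Optional

    def find_user(user_id: int) -> Optional[str]:
        # Returns None when the user is missing.
        return None if user_id < 0 else "alice"

    name = find_user(-1)
    # Crashes at runtime with AttributeError; mypy flags it statically,
    # roughly: 'Item "None" of "Optional[str]" has no attribute "upper"'.
    print(name.upper())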
> I've been using Python for right about 20 years now.
That's actually the biggest issue I've seen when, a while back, I joined a Python shop. Everyone there had done mostly Python their whole careers: they fell in love with the language during their studies, then applied for jobs using it. They were stuck in the old ways. Perhaps Python felt nimble and nice 20 years ago compared to other things at the time, but with little progress (from its users) in adopting modern tooling, it felt arcane coming to this shop. And it wasn't the job's fault: it was a state-of-the-art Python stack, just a decade behind what other languages have.
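>I've never found myself wanting a type checker, formatter or linter;
I guess that makes one of us.
https://en.wikipedia.org/wiki/The_Fox_and_the_Grapes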
No. On the occasions where I've tried to use an IDE for other programming languages, I've felt like it got in my way more than it helped; and Python is, if anything, designed more for IDE-less development (due to its overall elegance) and against straightforward IDE guidance (due to its dynamicity).
The point he is making is that a lot of the stuff you are told you need, you actually don't. Especially if you are working by yourself or in a very small team.
Getting stuff working is much more important. I'd rather people concentrate on things like CI, unit tests and deployments.
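I've seen plenty of projects where people had that attitude, except the thing they saw as time-wasting was the CI and unit tests.
Those projects weren't even dumpster fires.
You can, genuinely, do without all of this stuff — they're just helpful tools, not silver bullets or the only way to do things, but helpful.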
I like unit tests, and I would happily adopt a CI system if my project included non-Python code and had to build multiple wheels. But formatting code properly in my editor is second nature by now; the functions I write are typically so short that it'd be hard to do anything a linter would object to; and type-checking doesn't just introduce busy-work, it works against part of the reason I'm using Python in the first place. I actively don't want to tell people not to call my code with a perfectly compatible type just because I hadn't considered it as within the range of possible compatible types.
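(For what it's worth, typing.Protocol is the usual compromise for exactly this concern: it types the shape without enumerating the compatible types up front. A minimal sketch, not the commenter's code.)

    from typing import Protocol

    class Readable(Protocol):
        def read(self) -> str: ...

    def consume(source: Readable) -> str:
        # Structural typing: any object with a compatible read() matches,
        # including types the author never anticipated.
        return source.read()

    class FakeFile:
        def read(self) -> str:
            return "hello"

    print(consume(FakeFile()))  # fine; no inheritance or registration needed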
Never said it was a silver bullet. I said I would rather people concentrate on more important things than configuring a linter. Half the time this stuff gives you weird errors that don't make a lot of sense (especially with JavaScript/TypeScript), and sometimes you are making a compiler warning go away even though there is literally nothing wrong with the code.
I do use eslint/prettier btw, but other colleagues can never seem to get this working, so I've given up on it and just fix the linter issues whenever I come across them.
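> Never said it was a silver bullet.
Between this and your other comment*, you seem to be simultaneously expecting me to be more and also less literal in how I read your comments.
* https://news.ycombinator.com/item?id=42786325
I expect you to be able to use your brain to distinguish obvious hyperbole (it was absolutely obvious) from a matter-of-fact statement.
I am starting to suspect that this playing dumb is entirely disingenuous.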
You've never met people who take what you now call obvious hyperbole absolutely seriously and literally?
I guess you're going to continue to get surprised by how often you get criticised in this specific way, by many, many other people.
Hint: when someone doesn't understand you, you were in fact not obvious, no matter what you think — communication is a multiplayer game, not a single-player endeavour.
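I’ve been writing Python for 25 years and I love love love ruff and uv and friends.
Experienced Python folks have Stockholm syndrome about its tooling, and even rejoice at all the "options" for things like dependency managers.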
> I know this is kinda the n+1 issue where astral is just adding another.
My bugbear with that XKCD is that even if it's true, it's also how progress is made. By this point I'm so annoyed whenever anyone links it that I kinda just wish Randall would take it down.
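> it's also how progress is made
Introducing an n+1th solution/standard may be the necessary cost of making progress, but it is possible to incur that cost without gaining anything.
(At the extreme, trolling is a fun activity with a net negative effect at a larger scale.)
Yeah, and that comic is about standards anyway, not tools, so it's doubly annoying when it's linked out of context.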
I used to use any standard editor, but these days I use PyCharm or Visual Studio (not Code); otherwise, I just use Sublime Text or any adjacent reasonable editor.
Elementary OS has the nicest editor that other distros haven't snatched up yet, and I'm not sure why. It's fast, and it fits the bill for me the way Kate and Sublime Text do.
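I started with Python dependency management just executing pip install and thinking wow, that's cool. No envs, no linters or formatters.
The point is, if you are learning to code, you can skip nearly all the ecosystem tools. The need for them will arise when the time comes.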
Yeah, but a global environment is a really bad idea. It’s going to trip up those beginners as soon as a second project requires a different version of PyTorch.
I primarily work on TypeScript projects, plus a little bit of Go. I never enjoyed working with Python until I found uv. Glad to see folks rally around Astral's tools.
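`uv` really is amazing.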
People just need a mentor. Could you imagine getting into plumbing and just trying to work out what kind of tools you need by looking at the pipes? Of course not. You'd learn from someone else.
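I’ve been coding for 25 years. I don’t need a mentor. I want nice tools when I try yet another language.
This is why I tried ruff: managing the different settings and stuff, even as an experienced Python person, was getting tedious. Now I have one tool.
I moved to poetry for things a while ago, until I found out about uv recently; it’s so fast I thought it hadn't even done what I asked a couple of times.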
I'm a little sad that Ruff took off as a re-implementation of a whole bunch of work that was already done, rather than as a project to improve the work that was already done.
It was nice to be able to write little extra linters or flake8 plugins in the language I was linting. Plus decades of effort had gone into making those plugins pretty great, finding the right separation of linting codes so that the right things could be linted/ignored per codebase.
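(For context, that extensibility looked roughly like this. A from-memory sketch of a minimal flake8 AST plugin; check the flake8 plugin docs for the exact registration details.)

    import ast

    class NoPrintChecker:
        # Registered via a "flake8.extension" entry point keyed by the code prefix.
        name = "no-print"
        version = "0.1"

        def __init__(self, tree: ast.Module):
            self.tree = tree

        def run(self):
            # Yield (line, column, message, checker type) for each violation.
            for node in ast.walk(self.tree):
                if (isinstance(node, ast.Call)
                        and isinstance(node.func, ast.Name)
                        and node.func.id == "print"):
                    yield node.lineno, node.col_offset, "XPR100 print() call found", type(self)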
I understand why they did it, and they have done it remarkably well, but "rewrite it from scratch" is almost never the best answer and part of me wonders if this result could have been even better achieved another way.
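I'm also sad, but from a different perspective.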
If something can be rewritten from scratch fairly quickly and ends up being much faster, it makes you wonder what we (the general “we”) might have done “wrong” with the “decades of effort” before.
To be clear: there was nothing wrong with the old toolchains (hence the quotes). And of course, Ruff/uv can do what they do because they build on all that earlier knowledge. My point is just that sometimes it's easier to start over than to fix the old, and that fact itself feels kind of sad to me.
> If something can be rewritten from scratch fairly quickly and ends up being much faster, it makes you wonder what we (the general “we”) might have done “wrong” with the “decades of effort” before.
I think the impact of Rust (and Go) shouldn't be underestimated here. Prior to these languages, if you wanted a fast runtime you were either using C or C++, with all their associated footguns, tricky build systems, and poor support for exposing and consuming libraries. Or you were using something like Java or C#, which meant managing another runtime just for tools - which is especially bad for something like uv, which manages runtimes. Imagine if you were using a Python tool to manage your Java versions and a Java tool to manage your Python versions; it would be a mess!
With both Go and Rust you have fast languages without memory-safety issues that compile to a single, easily installable binary and have a broad ecosystem of libraries available. That means half the time you don't even need to write any difficult code and can just glue some modules together, and the other half of the time you only need to write the code you actually want to write, not all the support code for general-purpose stuff like reading config files, file watching, etc.
> there was nothing wrong with the old toolchains (hence the quotes). And of course, Ruff/uv can do what they do because they build on all that earlier knowledge
I don't think that's true. I think the old toolchains really were bad, partly because the Python community was uniquely resistant to building on what worked in other ecosystems, and uv in particular largely succeeds because it's the first tool in this space to finally just ignore the nonsense that had somehow become accepted wisdom in Python-land and apply a little bit of outside experience.
> I think the old toolchains really were bad, partly because the Python community was uniquely resistant to building on what worked in other ecosystems
I mostly attribute it to backwards compatibility concerns (which I in turn attribute to the traumatic 3.x migration). PyPI continued to accept egg uploads until August 2023 (https://packaging.python.org/en/latest/discussions/package-f...). The easy_install program, along with the direct command-line use of `setup.py` (as opposed to having Setuptools invoke it behind the scenes), have been deprecated since October 2021 (https://setuptools.pypa.io/en/latest/history.html#v58-3-0); but not only are they still supported, Setuptools can't even remove support for `setup.py test` (as they tried to in 72.0 and immediately reverted - https://github.com/pypa/setuptools/issues/4519) without causing major disruption to the ecosystem. One of the packages affected was Requests, which: a) is one of the most downloaded packages on PyPI; b) is pure Python (aside from dependencies) with nothing particularly complicated about its metadata (I see no reason it couldn't be done in pyproject.toml and/or setup.cfg); c) wasn't even reliant on that interface to run tests (they use Tox).
(For that matter: Requests has been maintained by the PSF since mid-2019 (https://www.reddit.com/r/Python/comments/cgtp87/seems_like_r...) and the problem could have easily been avoided once the deprecation was announced, but nobody did anything about it. The project still defines most of its metadata in `setup.py`; `pyproject.toml` exists but is only used for pytest and isort config, while `setup.cfg` is used for flake8 config and some requirements metadata that's completely redundant with `setup.py`.)
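A lot of it also just has to do with lack of attention and focus. https://peps.python.org/pep-0427/ (defining the wheel format) was proposed in September 2012, and accepted in February 2013. But Setuptools itself wasn't available as a wheel until November 2013 (https://setuptools.pypa.io/en/latest/history.html#id1646), and in 2017 there were still reports of people having outdated versions of Pip and not being able to install Setuptools from that wheel (https://setuptools.pypa.io/en/latest/history.html#v34-0-0). Setuptools relied on a separate package to actually make wheels until July of last year (https://setuptools.pypa.io/en/latest/history.html#v70-1-0) - an effort which was initially proposed in June 2018 (https://github.com/pypa/setuptools/issues/1386). It also took years to notice that Pip and Setuptools had separate implementations for the code that understands the "tags" in wheel filenames and factor it out into `packaging` (https://github.com/pypa/packaging/pull/156).
> Requests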
I hate to say it, but the development of Requests itself has stagnated and really needs some attention.
The most notable fiasco recently was the introduction of significant changes to TLS/SSL in version 2.32.0 [1][2], which caused widespread breaking issues and even led to a security vulnerability.
Attempts to address these problems in versions .2 and .3 introduced new major issues which still exist in the current version [3].
A patch to resolve the new issue was provided by one of the core members of the Requests project as early as June 2024 [4], but for some reason, nothing has been done about it, despite repeated pushes from the community.
If you check the commit history, updates have been sparse lately, even though there are still many unresolved issues.
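[1] https://github.com/psf/requests/issues/6655
[2] https://github.com/psf/requests/pull/6667
[3] https://github.com/psf/requests/issues/6730
[4] https://github.com/psf/requests/pull/6731
I've mostly ditched requests in favour of httpx these days. https://www.python-httpx.org
> My point is just that sometimes it's easier to start over than to fix the old, and that fact itself feels kind of sad to me.
Sad, but also liberating. There are surely more projects out there that would benefit from breaking with conventional wisdom to start over.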
I am quite grateful that they rewrote so many tools that I had been using. Upgrades were super painful with them split into a dozen separate packages, I routinely ran into incompatibilities and had to pin to specific versions to deal with transitive dependencies.
Given the state of python packaging and tooling, I'd say that consolidating the tooling is a big win in and of itself, and with the tremendous speed wins on top...
Whenever Ruff comes up, I reflexively go and check https://github.com/astral-sh/ruff/issues/970, since Pylint is the one tool whose performance I find to be egregiously bad. Looks like Ruff is still very far from being able to replace it.
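What about ruff + pyright? From a skim it seems like a lot of the missing features would be covered by a type checker?
(Pyright isn't written in Rust, but adequate performance is still a design goal.)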
Rewrite from scratch is exactly what the Python ecosystem needs more of.
The strong push in the Python community to always contribute to existing projects rather than starting new ones is what caused the previous subpar state of tooling. I think Astral is also picking a good middle ground, in that they still aim to build as much as possible on existing Python standards (and help contribute to new ones), so it's still building on what has been done in the past.
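> this result could have been even better achieved another way.
Don't you have "decades of effort" to resolve this wonder?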
There was probably a lot of engineering that went into those designs and that took time.
Having a clean process to rewrite from is what made it so fast. They knew exactly what outcome they wanted and they had access to the internal implementations on how to do it.
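All that effort was not wasted at all.
Your argument seems ideological. There is no chance they could have improved Flake8 to be as good as Ruff is.
Folks said the same thing to me about grep 8.5 years ago when I released ripgrep.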
FWIW, I think the reason I'm conflicted is probably a similar reason to why you made a separate new thing. Overcoming the inertia and ways of doing things, or proposing widespread changes, often doesn't go down well with existing maintainers – for very valid reasons. I probably wouldn't want to go into the grep project and suggest rearchitecting it, and I can see why the Ruff developers didn't want to go into projects like Flake8 and Pylint to do the same.
But that doesn't stop me from feeling that there were things lost in this process and wishing for a better outcome.
A huge advantage of going your own way is that you don't need to address an audience you think are just wrong.
When this happens periodically (rather than every other week) you also get the original GNU advantage, which is that you can revisit an old tool knowing what you know now - for example, today "everybody" has a DVCS, and so ripgrep checks for the .gitignore file.
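Wait... Did you misunderstand this comment? Or are you saying grep has caught up with ripgrep now?
I'm agreeing with them. Some folks told me I should have improved grep instead of building my own thing.
And you think you've done it?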
Yeah, it's somewhat ideological: I think open source software is better for society when built as a community project than when built and controlled by a VC-funded company.
I don't think you could get flake8 to be as fast as Ruff, but I think you could improve it to be within an order of magnitude, and honestly that's plenty good enough. There is a lot of low-hanging fruit, particularly around caching. I'd also push back on there being no chance of it being "as good as" Ruff, because "good" is not just speed. Ruff is _not complete_: if you want all the features and linters, you still need Flake8, or you have to sacrifice some. It's also not publicly extensible, and not written in Python, both of which are fair choices but I think deviate from the ideal scenario here.
Blame Guido. Until recently, when he was bought out by Microsoft, he had been the primary blocker of higher Python performance. There were a bunch of attempts at adding a JIT to Python, but Guido was more interested in splitting hairs over syntax than any real heavy lifting. Python could have been as performant as LuaJIT or V8, but their dictator ruined it. Python needs more Mike Palls.
I'm very impressed by the recent developer experience improvements in the python ecosystem. Between ruff, uv, and https://github.com/gauge-sh/tach we'll be able to keep our django monolith going for a long time.
Any opinions about the current state of the art type checker?
I'm very happy with pyright. Most bug reports are fixed within a week, and new PEPs/features are added very rapidly, usually before the PEP is accepted (under an experimental flag). Enough that I ended up dropping pylint and consider pyright sufficient for lint purposes as well. The most valuable lints for my work require good multi-file/semantic analysis, and pylint had various false positives.
The main tradeoff is that this only works if your codebase/core dependencies are typed. For a while that was not true, and we used pylint + pyright. Eventually most of our code was typed, and we added type stubs for our main untyped dependencies.
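(For anyone who hasn't written one: a stub is just a .pyi file with signatures and no bodies. A sketch for a hypothetical untyped dependency; the module and names are invented, and pyright can be pointed at a stubs directory via its stubPath setting.)

    # legacymath/__init__.pyi - stub for a hypothetical untyped dependency
    def weighted_mean(values: list[float], weights: list[float]) -> float: ...

    class Accumulator:
        total: float
        def add(self, value: float) -> None: ...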
edit: Also, on pylint - it did work well, mostly. tensorflow was the main library that created false positives. The other thing I found awkward was that pylint occasionally produced non-deterministic lints on my codebase.
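Everyone is already recommending pyright, but I'll suggest checking the "based" community fork: https://github.com/detachhead/basedpyright
Besides re-adding features that Microsoft makes exclusive to Pylance, it tweaks a number of features that IMO make pyright work better out of the box:
https://docs.basedpyright.com/latest/benefits-over-pyright/p...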
Thanks for linking this. I wasn't surprised Microsoft made their AI auto completion proprietary (they did similar for C# in VSCode). But it grated me that semantic highlighting wasn't open source.
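What did they do for C#?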
They added some closed source features to the C# developer extension in VSCode. So anybody using a non-Microsoft fork of VSCode can't use those features.
If you are referring to the debugger, they did not "add" it - it was like that from the start. The C# extension is MIT, but the 'vsdbg' it ships with isn't, because it's the debugger Visual Studio uses, made into a standalone cross-platform component.
You can use an extension fork which swaps it with NetCoreDbg maintained by Samsung instead. This is what VSCodium uses.
Also note that both of these essentially consume the debugger API exposed by the runtime itself, you can easily make one yourself in a weekend but no one bothered because there isn't much need for another one.
Personally the thing that annoys me isn't so much the open vs closed source of (parts of) these extensions, but the blocking of using these extensions on VSCode forks.
Extensions are not blocked. It's the redistribution restriction the 'vsdbg' component specifically comes with. But you can easily use the fork if it's an issue :)
I have always used MyPy, but I have noticed some large open source projects adopt Pyright of late. The folks behind ruff and uv are currently working on a type checker as well, but I haven't heard when they plan on releasing it.
First, thanks for mentioning tach; I've wished this tool existed for a long time, and I'm happy to give it a try in the coming days!
For a typechecker, I also vouch for Pyright, which is what we use for all our Django backends at work.
Just be aware that you will have a hard time typechecking the parts of your code that rely heavily on Django's magic (magic strings, auto-generated properties/methods, etc...).
In these cases, it's sometimes better to avoid those features entirely, or accept that some parts of your code will not be typechecked.
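(A typical example of that magic, with invented model names: related_name accessors are generated at runtime, so a plain type checker can't see them without something like django-stubs.)

    from django.db import models

    class Author(models.Model):
        name = models.CharField(max_length=100)

    class Book(models.Model):
        title = models.CharField(max_length=200)
        # related_name creates Author.books at runtime only.
        author = models.ForeignKey(Author, on_delete=models.CASCADE,
                                   related_name="books")

    def titles(author: Author) -> list[str]:
        # Runs fine, but without the Django plugin/stubs the checker
        # reports .books as an unknown attribute.
        return [book.title for book in author.books.all()]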
Python development tools produced by Astral (https://github.com/astral-sh), primarily uv and ruff. GP is a complaint about how often people submit links to these projects. The resulting discussion tends not to produce any new insight and the most common points IMX are "it's fast because it's written in Rust" (which is fairly specious logic - most problems with existing native Python tools are due to algorithmic problems such as poorly designed caching) or "it avoids bootstrapping problems because it's not written in Python" (it's completely possible to avoid those problems for everything except actually provisioning the Python interpreter, which some people apparently do see as a killer feature).
How do you read my comment and take away that I still don't know what Astral is? You acted like everyone had heard of it. My point is that not even all the users of uv have.
I first attempted to use ruff for a small project ~2 years ago, and at the time felt that it wasn't quite good enough to replace the black+isort+whatever-linter combo we were using at work.
I've used it a few times since then and now I'm a big proponent of using only ruff. I think most of its value comes from:
1. Being fast (or at least fast enough that it's not annoying).
2. Replacing the linting/formatting combo of multiple tools, reducing the cognitive load for the developer.
I've been using an amalgamation of pyenv, pip-tools, black, isort, etc. for projects and just gave uv and ruff a try. Wow, it really is fast! Skeptical of anything VC-backed but I'll be keeping my eye on it.
I’m used to running command line tools on fast machines, but the first time I ran ruff on my codebase I was blown away. My codebase isn’t massive but most (non-rust) tools just take a while to run. It might be less than a second of startup overhead, but it’s noticeable. Ruff is so fast you wonder if it even did anything. It reminded me how fast computers actually are.
How speedy is the Rust-language tooling itself these days? I remember wishing for an 'optimize nothing' or even 'just interpret' mode. Compile times noticeably contributing to the feedback loop are a serious killjoy.
Compile times are still a bit much, but there are ways around it:
- The insanely long times are only for the initial build. Further builds are incremental and more tolerable.
- Compilation errors can be checked using 'cargo check'. It avoids the code generation step that's part of a build. I find myself doing it way more often than builds. So it's a time saver depending on how frequently you use it.
- You can extend the incremental build mentioned above using sccache [1]. At the minimum, it allows you to share the build cache between all your local projects. So it saves time if your projects or other builds share a lot of common libraries (that's very common in Rust, though). But sccache can go further by using online build caches (for example, using S3) that can be shared between hosts. Finally, sccache also supports distributed builds if you have a few machines sitting idle. (This is like distcc with extra features).
There have been significant (if not earth shattering) improvements in the compiler itself. But for me at least, the bigger change has been from better hardware. I now have a processor (Apple M1 Pro) that's 10x (multi core) / 2x (single core) faster than the one I had when I first started programming using Rust (intel dual core processor in a 2015 MBP) and that seems to have translated almost perfectly into faster compile times.
There is no "coupling" inherent to native Python tools. They can generally be installed in a separate virtual environment from the one they operate on.
For example, with Pip you simply pass the `--python` option to have it install for a different environment - you can create your venvs `--without-pip`, and share a copy of Pip across all of them. If you use Pipx, you can expose and use Pipx's vendored Pip for this, as I described in a recent blog post (https://zahlman.github.io/posts/2025/01/07/python-packaging-...).
Twine and Build don't care about your project's environment at all - Twine uploads a specified file, and Build by default creates a new environment for the build (and even without build isolation, it only requires your build environment to contain the build backend, and the build environment doesn't have to be the one you use for the rest of development). Setuptools, similarly, either gets installed by the build frontend into an isolated build environment, or can reside in a separate dedicated build environment. It doesn't actually operate on your project's environment - it only operates on your project's source tree.
Sometimes it takes a lot of words to debunk a misconception. What you initially said didn't have anything to do with setup effort, but also there is quite little setup effort actually described here.
In that case I have absolutely no idea what point you're trying to make. What coupling are you talking about, and why is it a problem? What do you mean by "carting around a bunch of Python stuff", and why and how is it "useful" to avoid that?
Not the person you’re replying to, but if I’m writing Python 3.13 and my linter requires Python <= 3.11, now I have to install 2 Pythons. It’s nice to not have to consider that case at all.
As a developer, with a real use case for these tools, I plan to support multiple Python versions anyway, and have them all installed for testing purposes. There only needs to be one actual installation of each; everything else is done through venvs, which are trivial to spin up.
If I want to ship this to colleagues (some of whom probably won't know what a venv is but do have to write Python every now and again), I only have to worry about a single binary. Getting consistent Python environments on people's machines, particularly on Windows, is expensive (salary).
>won't know what a venv is but do have to write python every now and again
In practical terms, you have to understand what a venv is in order to be a Python developer, for the same reason you have to understand environment variables to do anything serious in the shell. Learning the fundamentals is a one-time cost.
Even twine decoupled itself from my python toolchain some time ago [1], through some dependency. Cannot install it unless you are on a system trusting trust in rust™.
Prettier can do HTML, CSS and JS, but I don't think there's any one tool that can do it all. (It would be interesting if there were something like a tree-sitter for auto-formatting.)
I recently learned that tree-sitter does not have a goal of accurately representing the whole source input, and thus as it currently stands could not be used for that purpose
Now, whether a tree-sitter-esque ecosystem that is designed for source modeling could catch on like t-s did is a more interesting question
Django template snippets get mixed inside CSS, JavaScript and HTML, so regular CSS/JavaScript/HTML formatters get confused by them, and it's not obvious what combination to use. I know of djLint but it's pretty slow.
I'm not that much of a fan of Ruff, because to me it doesn't make any sense to add a Rust dependency to "Python", and it blows my mind that people are so keen to accept the ugly formatting inherited from Black just because "it gives consistency to the team, so I don't have to think, and we don't have to discuss code style"...
But all of that personal opinion set aside, what triggers my initial statement is that so many people are so excited to rush to use Ruff because... "It is so fast"... when I'm quite sure most of them would never notice any real speed difference with their modest codebase.
In almost all the codebases I have seen, proprietary and OSS, pylint or any other linter was quasi-instant. There are probably a few huge codebases that would see a speed benefit of a few seconds from using Ruff, but their owners would probably do better to have a look at their code to understand how they reached such a monster!
> quite sure most of them would never notice any real speed difference with their modest codebase
On the contrary, within only a year or so of coding by a small team, this ecosystem takes single digit seconds or less, while the traditional tools are taking minutes and more.
Particularly in analytics or scientific computing we've seen minutes versus under a second.
Having the tools not require additional/parallel Python dependency management is a plus.
Note that we watched these tools for a long time, and jumped in when rye converged with uv. It was promising to see fewer ways to do it.
I don't know what kinds of codebases you've worked with, but I can tell you that pylint is so far from instant, it became the longest running CI job in multiple reasonably sized codebases I've worked with. Think tens of minutes. Other linters were not much better, until ruff came along. But that's far from the only advantage that ruff brings.
There are other issues with what you said, but the biggest one is: you have some strongly worded criticism for a project that has set a new bar for usability and consistency in Python code quality tooling. These tools are developed by humans like you and distributed to you for free with no obligation to use them. No matter how I look at your comment, I don't see how it's helping.
Pre-commit hooks are great in small, focused codebases with small, homogeneous teams. In large monorepos with lots of different teams committing, it's impossible to guarantee any kind of consistency in which pre-commit hooks get run, so you need CI to actually enforce the consistency or you'll spend all your time chasing (accidental) violations.
Well, for one, I use Jujutsu, where commits happen every time you run jj status and traditional notions of pre-commit hooks don't really apply. But also, I think (as a matter of principle) nothing should get in the way of performing commits or amends.
Depends on what you're trying to achieve. Jobs like lint checks should ideally be pre-push checks so that the long process doesn't get in the way of commits. But very fast and small checks like warning about trailing whitespaces or ensuring a newline at the end of the file can be done during every commit (even if it was in jujutsu). I would rather not wait till the end to find out. And of course, there are ways to temporarily or permanently disable one or more checks when you absolutely need it.
My editor takes care of trailing whitespace and newline termination. I don't think Jujutsu commits should fail on this or fix it every time jj status is run — seems too magical.
Because commits should be small and fast, and always work, like a save. If you’re running a multi-second process during commit, it’s going to get ripped out.
My thinking is that the linter only has to operate on the code that was actually checked in. And just how many things are you checking about it, anyway?
I have issues with some of Black’s formatting decisions but I’ve also suffered from inconsistent formatting and there is no question in my mind that consistent formatting that I find ugly sometimes is 1000% better than the alternative. And after so many years of dealing with it it’s so refreshing to just “give up” and let the formatter win.
Same here. I appreciate that Black annoyed everyone on our team about the same amount but in different ways. From the instant we added it, stupid style arguments completely disappeared.
Know what I care about more than Black making my own code look less pretty? It making my coworker’s code look less horrid. (And funnily, I’m certain everyone on the team thinks exactly that same thing.)
> I appreciate that Black annoyed everyone on our team about the same amount but in different ways.
If it did affect people equally, it would be great. Unfortunately, spaces for indentation is an accessibility issue and the Black maintainers are hostile to making this configurable. Normally I am in agreement about minimising configurability, but this isn’t a matter of taste, it is making the Python ecosystem more difficult for some disabled people to participate in.
Thank you. I've heard this argument before and it seems to be a very strong one.
I do have a question, though -- can accessibility tools infer tabs based on visual layout or some other mechanism? It seems like one option is for accessibility tools to transform code into tabs internally, which would make them instantly compatible with the vast majority of existing code that uses spaces. What are the practical barriers to making that happen? And is there a good article that discusses all this in a clear, even-handed fashion?
> But all of that personal opinion set aside, what triggers my initial statement is that so many people are so excited to rush to use Ruff because... "It is so fast"... when I'm quite sure most of them would never notice any real speed difference with their modest codebase.
I work on a fairly modest code base (22k lines of Python according to sloccount), and I'm seeing a significant decrease in runtime between `pylint src/ tests/` with default parameters and `ruff check src/ tests/` with the majority of checks enabled. More specifically, I'm seeing a decrease from 18 seconds to less than a tenth of a second, with a hot FS and having deleted the ruff cache between runs.
I like that it's fast, and it is noticeably faster for even moderately sized codebases.
But the main thing I like about ruff is that it is a single tool with consistent configuration, as opposed to a combination of several different tools for formatting and linting that each have their own special way of configuring them and marking exceptions to lint rules.
When I write argparse or click, I use one parameter per line, like:
    parser.add_argument(
        "--stdout",
        choices = ("true", "false"),
        default = "true",
        help = "log data to stdout"
    )
Black turns it into:
    parser.add_argument(
        "--stdout", choices=("true", "false"), default="true", help="log data to stdout"
    )
I find the one-line-per-term easier to understand, and even though it fits on a single line, I would rather have all add_argument()/@click.option() calls follow the same layout, to make it easier to discern the structure across dozens of similar calls.
I also like to have spaces around the "=" in my keyword=arguments, except for very short and simple calls.
PEP 8 says "Don’t use spaces around the = sign when used to indicate a keyword argument" so black is following that recommendation.
However, everywhere else in the PEP (assignment like 'x = 5', and annotations like 'class Spam: foo: int = 4' or 'def spam(foo: int = 4)'), there are spaces on either side of the equals sign.
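(Concretely, the PEP 8 rules side by side; restated from the PEP, not from the thread.)

    x = 5                    # assignment: spaces around "="

    def spam(foo: int = 4):  # annotated parameter with a default: spaces
        pass

    spam(foo=4)              # bare keyword argument: no spaces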
The point they're trying to make, I think, is: Black/Ruff, or any other formatter, necessarily needs to operate on universal rules. In context, sometimes these rules don't make sense. I'd still love some kind of stateful linter and formatter that suggests changes I can then accept or ignore (and won't be told about again).
Formatters _are_ a compromise. They make your coworkers' code nicer, and your code worse.
The removal of spaces around keyword arguments is per the PEP 8 style guide. There is no point arguing with the style guide. It is there to end discussions. I also do not agree with everything in the style guide. But I keep that to myself. Because a single universal (but flawed) style guide >> competing style guides >> complete anarchy.
That I think it's wrong and ugly is an entirely different point.
PEP 8 specifically says it is not universal:
> Many projects have their own coding style guidelines. In the event of any conflicts, such project-specific guides take precedence for that project. ...
> However, know when to be inconsistent – sometimes style guide recommendations just aren’t applicable. When in doubt, use your best judgment. ...
> Some other good reasons to ignore a particular guideline:
> When applying the guideline would make the code less readable, even for someone who is used to reading code that follows this PEP.
I think always omitting spaces there makes it less readable, even for someone who is used to reading PEP 8.
That makes me more compliant to PEP 8 than black. ;)
Being new to python, the astral stuff is such a relief.
I don’t think experienced python folks realize how much the flexible tooling slows people down, and creates friction for adopters. Setting up my project I tried 3 environment managers, 2 type checkers, 3 formatters/linters, 3 packagers/dependancy/project managers.
I know this is kinda the n+1 issue where astral is just adding another. But it feels like a complete and well designed stack, not a box of parts encouraging me to build my own. Reminds me a bit of go tooling.
I am a convert. I’ll happily jump on their type checker and packager when ready.
>Setting up my project I tried 3 environment managers, 2 type checkers, 3 formatters/linters, 3 packagers/dependancy/project managers.
I've been using Python for right about 20 years now. I've been through various prior iterations of the "standard" packaging tools. But I've never found myself wanting a type checker, formatter or linter; and aside from a brief period with Poetry (in which I used barely any of its functionality), I've had no interest in dependency management tools, either. What I have found myself wanting is fixes for the problems in Pip.
Different people work on different kinds of projects, and approach the language differently, and have radically different needs as a result.
> I've never found myself wanting a type checker
Do you still write type hints, but not check them? Or just use dynamic typing as in Python 2?
Actually, I guess the first case is really type-hints-as-comments, so isn't substantially different from the second.
> I've been using Python for right about 20 years now
Either way, I've found it quite common among long-time Python users to stick with Python's traditional dynamic typing, whereas newcomers to the language tend to really dislike it in favour of strict static typing. (One exception is GvR himself, who seems to be very pro-type-checking.)
As the second group continue to add increasingly complex typing features to the language, I wonder if Python is becoming less desirable for those in the first group.
>Do you still write type hints, but not check them? Or just use dynamic typing
I occasionally write them as a form of documentation - when it would be more concise than explaining in prose.
There is nothing specifically 2.x-style about making use of dynamic typing - it's a core property of the language. While 3.0 introduced the annotation feature, it wasn't even specifically intended for typing (https://peps.python.org/pep-3107/) and standard library support wasn't added until 3.5 (https://docs.python.org/3/library/typing.html).
>As the second group continue to add increasingly complex typing features to the language, I wonder if Python is becoming less desirable for those in the first group.
Some new changes have been missteps, in my view. There's still nobody forcing my hand when it comes to typing, really; I just shake my head when I see people trying to make it do things it really wasn't designed for. On the other hand, there are definitely things I'd love to see improved about the language.
What new changes do you think have been missteps?
Mmh - that's difficult. I should dedicate a blog article to that topic. Probably can't get to it until next month, though.
Working on a team in a larger python application, type checkers and linters are such a time saver. It's so nice not to think about how I'd like to format my code any longer. We've made some tweaks to our ruff rules to suit the team's opinion, and now even when I don't _love_ the way it formats a particular part of code, I just move on and get something productive done instead.
And type checking is so great for both preventing bugs (which it does all the time) and self-documenting code. Can't recommend them enough.
It takes out so much frustration too. No more nitpicking about style/... in merge requests, the pre-commit hook will fix it and CI will catch it if you don't.
The more of this you can automate, the more you get to spend on "real work".
In my organisation, some co-workers used to write def func(*args,**kwargs) all the time. That was so tiring to look for what you should put as argument. Type checking is mandatory for well organized large project.*
Yes yes yes.
I have my own formatting preferences, but my desire to not have to think about them or go to meetings debating them is much much greater.
I’ve been using it for around the same time, and never cared much about formatters. Linting is useful but wouldn’t call it essential. But type checking is a non negotiable on any project I lead. It’s not perfect by any means, but beats having to crawl my way through a call stack trying to figure out what the hell a function is expected to take and why it’s getting a None instead.
I have yet to find a single drawback of adopting mypy (or similars) that isn’t completely eclipsed by the benefits.
> I've been using Python for right about 20 years now.
That's actually the biggest issue I've seen when a bit back joined a python-shop. Everyone there had mostly done python their whole careers. Fell in love with language during studies, then applied for jobs using that language. Stuck in the old ways, perhaps python felt nimble and nice 20 years ago compared to other things at the time, but with little progress (from its users) in using modern tooling it felt so arcane coming to this shop. And it wasn't the job's fault, it was a state of the art python stack, just a decade behind what other languages have.
>I've never found myself wanting a type checker, formatter or linter;
I guess that makes one of us.
https://en.wikipedia.org/wiki/The_Fox_and_the_Grapes
No. On the occasions where I've tried to use an IDE for other programming languages, I've felt like it got in my way more than it helped; and Python is if anything designed more for IDE-less development (due to its overall elegance), and opposed to straightforward IDE guidance (due to its dynamicity).
The point he is making is that a lot of stuff that you are told you need. You actually don't. Especially if you are working by yourself or in a very small team.
Getting stuff working is much more important. I'd rather people concentrate on stuff like CI, Unit Tests and Deployments.
I've seen plenty of projects where people had that attitude except the thing they saw as time-wasting was the CI and Unit Tests.
Those projects weren't even dumpster fires.
You can, genuinely, do without all of this stuff — but they're just helpful tools, not silver bullets or the only way to do things, but helpful.
I like unit tests, and I would happily adopt a CI system if my project included non-Python code and had to build multiple wheels. But formatting code properly in my editor is second nature by now; the functions I write are typically so short that it'd be hard to do anything a linter would object to; and type-checking doesn't just introduce busy-work, it works against part of the reason I'm using Python in the first place. I actively don't want to tell people not to call my code with a perfectly compatible type just because I hadn't considered it as within the range of possible compatible types.
Never said it was a silver bullet. I said I would rather people concentrate on more important things than configuring a linter. Half the time this stuff gives you weird errors that don't make a lot of sense (especially with JavaScript/TypeScript), sometimes you are literally making the compiler warning go away because there is literally nothing wrong with the code.
I do use eslint/prettier btw, but other collegues can never seem to get this working so I've just given up on it and then fix the linter issues whenever I come across them.
> Never said it was a silver bullet.
Between this and your other comment*, you seem to be simultaneously expecting me to be more and also less literal in how I read your comments.
* https://news.ycombinator.com/item?id=42786325
I expect you to be able to use your brain to distinguish obvious hyperbole (it was absolutely obvious) with a matter of fact statement.
I am starting to suspect that this playing dumb is entirely disingenuous.
You've never met people who take what you now call obvious hyperbole absolutely seriously and literally?
I guess you're going to continue to get surprised by how often you get criticised in this specific way, by many, many other people.
Hint: when someone doesn't understand you, you were in fact not obvious, no matter what you think — communication is a multiplayer game, not a single-player endeavour.
[dead]
I’ve been writing Python for 25 years and I love love love ruff and uv and friends.
Experienced Python folks have Stockholm Syndrome about its tooling, even rejoice at all the "options" for things like dependency managers.
> I know this is kinda the n+1 issue where astral is just adding another.
My bugbear with that XKCD is that even if it's true, it's also how progress is made. By this point I'm so annoyed whenever anyone links it that I kinda just wish Randall would take it down.
> it's also how progress is made
Introducing n+1 solution/ standard may be the necessary cost of making progress, but it is possible to incur that cost without gaining anything
(at the extreme, trolling is fun activity with net negative on bigger scale)
Yeah, and that comic is about standards anyways, not tools, so it's doubly annoying when it's linked out of context
I used to use any standard editor but these days I use PyCharm or Visual Studio (not Code) otherwise, I just use Sublime Text or any adjacent reasonable editor.
Elementary OS has the nicest editor that other Distros have not snatched up yet not sure why. Its fast, and fits the bill where KATE and Sublime Text are for me.
I started with python dependency management just executing pip install and thinking wow that's cool. No envs, no linters or formatters.
The point is, if you are learning to code, you can skip nearly all the ecosystem tools. Their need will arise when the time comes.
Yeah but a global environment is a really bad idea. It’s going to trip up those beginners when second project that requires a different version of pytorch.
I primarily work on typescript projects, little bit of go. Never enjoyed working with python until I found uv. Glad to see folks rally around astral's tools.
`uv` really is amazing.
People just need a mentor. Could you imagine getting into plumbing and just trying to work out what kind of tools you need by looking at the pipes? Of course not. You'd learn from someone else.
I’ve been coding for 25 years. I don’t need a mentor. I want nice tools when I try yet another language.
This is why I tried ruff, managing the different settings and stuff and an experienced Python person was getting tedious. Now I have one tool.
I moved to poetry for things a bit ago until I found out about uv recently, it’s so fast I thought it didn’t even do what it asked a couple times.
I'm a little sad that Ruff took off as a re-implementation of a whole bunch of work that was already done, rather than as a project to improve the work that was already done.
It was nice to be able to write little extra linters or flake8 plugins in the language I was linting. Plus decades of effort had gone into making those plugins pretty great, finding the right separation of linting codes so that the right things could be linted/ignored per codebase.
I understand why they did it, and they have done it remarkably well, but "rewrite it from scratch" is almost never the best answer and part of me wonders if this result could have been even better achieved another way.
I'm also sad but from a different perspective.
If something can be rewritten from scratch fairly quickly and ends up being much faster, it makes you wonder what we (the general “we”) might have done “wrong” with the “decades of effort” before.
To be clear: there was nothing wrong with the old toolchains (hence the quotes). And of course, Ruff/uv can do what they do because they build on all that earlier knowledge. My point is just that sometimes it's easier to start over than to fix the old, and that fact itself feels kind of sad to me.
> If something can be rewritten from scratch fairly quickly and ends up being much faster, it makes you wonder what we (the general “we”) might have done “wrong” with the “decades of effort” before.
I think the impact of Rust (and Go) shouldn't be underestimated here. Prior to these languages if you wanted fast runtime you were either using C or C++ with all their associated footguns, tricky build systems, and poor support for exposing and consuming libraries. Or you were using something like Java or C# which meant managing another runtime just for tools, which is especially bad for something like UV which manages runtimes - imagine if you were using a python tool to manage your Java versions and a Java tool to manage you Python versions, it would be a mess!
With both Go and Rust you have fast languages without memory safety issues that compile to a single, easily-installable binary and have a broad ecosystem of libraries available that mean that half the time you don't even need to write any difficult code and can just glue some modules together, and the other half of the time you only need to write the code you actually want to write and not all the support code for general pupose stuff like reading config files, file watching, etc.
> there was nothing wrong with the old toolchains (hence the quotes). And of course, Ruff/uv can do what they do because they build on all that earlier knowledge
I don't think that's true. I think the old toolchains really were bad, partly because the Python community was uniquely resistant to building on what worked in other ecosystems, and uv in particular largely succeeds because it's the first tool in this space to finally just ignore the nonsense that had somehow become accepted wisdom in Python-land and apply a little bit of outside experience.
> I think the old toolchains really were bad, partly because the Python community was uniquely resistant to building on what worked in other ecosystems
I mostly attribute it to backwards compatibility concerns (which I in turn attribute to the traumatic 3.x migration). PyPI continued to accept egg uploads until August 2023 (https://packaging.python.org/en/latest/discussions/package-f...). The easy_install program, along the direct command-line use of `setup.py` (as opposed to having Setuptools invoke it behind the scenes), have been deprecated since October 2021 (https://setuptools.pypa.io/en/latest/history.html#v58-3-0); but not only are they still supported, Setuptools can't even remove support for `setup.py test` (as they tried to in 72.0 and immediately reverted - https://github.com/pypa/setuptools/issues/4519) without causing major disruption to the ecosystem. One of the packages affected was Requests, which: a) is one of the most downloaded packages on PyPI; b) is pure Python (aside from dependencies) with nothing particular complicated about its metadata (I see no reason it couldn't be done in pyproject.toml and/or setup.cfg); c) wasn't even reliant on that interface to run tests (they use Tox).
(For that matter: Requests has been maintained by the PSF since mid-2019 (https://www.reddit.com/r/Python/comments/cgtp87/seems_like_r...) and the problem could have easily been avoided once the deprecation was announced, but nobody did anything about it. The project still defines most of its metadata in `setup.py`; `pyproject.toml` exists but is only used for pytest and isort config, while `setup.cfg` is used for flake8 config and some requirements metadata that's completely redundant with `setup.py`.)
A lot of it also just has to do with lack of attention and focus. https://peps.python.org/pep-0427/ (defining the wheel format) was proposed in September 2012, and accepted in February 2013. But Setuptools itself wasn't available as a wheel until November 2013 (https://setuptools.pypa.io/en/latest/history.html#id1646), and in 2017 there were still reports of people having outdated versions of Pip and not being able to install Setuptools from that wheel (https://setuptools.pypa.io/en/latest/history.html#v34-0-0). Setuptools relied on a separate package to actually make wheels until July of last year (https://setuptools.pypa.io/en/latest/history.html#v70-1-0) - an effort which was initially proposed in June 2018 (https://github.com/pypa/setuptools/issues/1386). It also took years to notice that Pip and Setuptools had separate implementations for the code that understands the "tags" in wheel filenames and factor it out into `packaging` (https://github.com/pypa/packaging/pull/156).
> Requests
I hate to say it, but the development of Requests itself has stagnated and really needs some attention.
The most notable fiasco recently was the introduction of significant changes to TLS/SSL in version 2.32.0 [1][2], which caused widespread breaking issues and even led to a security vulnerability.
Attempts to address these problems in versions .2 and .3 introduced new major issues which still exists in current version [3].
A patch to resolve the new issue was provided by one of the core members of the Requests project as early as June 2024 [4], but for some reason, nothing has been done about it, despite repeated pushes from the community.
If you check the commit history, updates have been sparse lately, even though there are still many unresolved issues.
[1] https://github.com/psf/requests/issues/6655
[2] https://github.com/psf/requests/pull/6667
[3] https://github.com/psf/requests/issues/6730
[4] https://github.com/psf/requests/pull/6731
I've mostly ditched requests in favour of httpx these days. https://www.python-httpx.org
> My point is just that sometimes it's easier to start over than to fix the old, and that fact itself feels kind of sad to me.
Sad, but also liberating. There are surely more projects out there that would benefit from breaking with conventional wisdom to start over.
I am quite grateful that they rewrote so many tools that I had been using. Upgrades were super painful with them split into a dozen separate packages, I routinely ran into incompatibilities and had to pin to specific versions to deal with transitive dependencies.
Given the state of python packaging and tooling, I'd say that consolidating the tooling is a big win in and of itself, and with the tremendous speed wins on top...
Whenever Ruff comes up, I reflexively go and check https://github.com/astral-sh/ruff/issues/970, since Pylint is the one tool whose performance I find to be egregiously bad. Looks like Ruff is still very far from being able to replace it.
What about ruff + pyright? From a skim it seems like a lot of the missing features would be covered by a type checker?
(Pyright isn't written in Rust, but adequate performance is still a design goal)
Rewrite from scratch is exactly what the Python ecosystem needs more of.
The strong push to always contribute to the existing projects rather than starting new ones in the Python community is what caused the previous subpar state of tooling. I think Astral is also picking a good middle-ground where they still aim to build as much as possible on existing Python standards (and help contributing to new ones), so it's still building on what has been done in the past.
> this result could have been even better achieved another way.
Don't you have "decades of effort" to resolve this wonder?
There was probably a lot of engineering that went into those designs and that took time.
Having a clean process to rewrite from is what made it so fast. They knew exactly what outcome they wanted and they had access to the internal implementations on how to do it.
All that effort was not wasted at all.
Your argument seems ideological. There is no chance they could have improved Flake8 to be as good as Ruff is.
Folks said the same thing to me about grep 8.5 years ago when I released ripgrep.
FWIW, I think the reason I'm conflicted is probably a similar reason to why you made a separate new thing. Overcoming the inertia and ways of doing things, or proposing widespread changes, often doesn't go down well with existing maintainers – for very valid reasons. I probably wouldn't want to go into the grep project and suggest rearchitecting it, and I can see why the Ruff developers didn't want to go into projects like Flake8 and Pylint to do the same.
But that doesn't stop me from feeling that there were things lost in this process and wishing for a better outcome.
A huge advantage of going your own way is that you don't need to address an audience you think are just wrong.
When this happens periodically (rather than every other week), you also get the original GNU advantage: you can revisit an old tool knowing what you know now. For example, today "everybody" has a DVCS, and so ripgrep checks for the .gitignore file.
Wait... Did you misunderstand this comment? Or are you saying grep caught up with ripgrep now?
I'm agreeing with them. Some folks told me I should have improved grep instead of building my own thing.
And you think you've done it?
Yeah, it's somewhat ideological; I think open source software is better for society when built as a community project than when built and controlled by a VC-funded company.
I don't think you could get flake8 to be as fast as Ruff, but I think you could improve it to be within an order of magnitude, and honestly that's plenty good enough. There is a lot of low-hanging fruit, particularly around caching. I'd push back on there being no chance of being "as good as" Ruff, because "good" is not just speed. Ruff is _not complete_: if you want all the features and linters, you still need Flake8, or you have to sacrifice some. It's also not publicly extensible, and not written in Python, both of which are fair choices but I think deviate from the ideal scenario here.
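For example, the crudest possible file-level cache (a hypothetical sketch, not flake8's actual design) already avoids re-linting unchanged files:

    import hashlib
    import json
    from pathlib import Path

    CACHE_FILE = Path(".lint-cache.json")

    def lint_with_cache(paths, lint_file):
        """Run lint_file(path) -> list[str], skipping files whose contents are unchanged."""
        cache = json.loads(CACHE_FILE.read_text()) if CACHE_FILE.exists() else {}
        results = {}
        for path in paths:
            digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
            entry = cache.get(path)
            if entry and entry["hash"] == digest:
                results[path] = entry["messages"]   # hit: reuse the previous result
            else:
                results[path] = lint_file(path)     # miss: do the real work
                cache[path] = {"hash": digest, "messages": results[path]}
        CACHE_FILE.write_text(json.dumps(cache))
        return results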
Blame Guido. Until recently, when he was bought out by Microsoft, he had been the primary blocker of higher Python performance. There were a bunch of attempts at adding a JIT to Python, but Guido was more interested in splitting hairs over syntax than any real heavy lifting. Python could have been as performant as LuaJIT or V8, but their dictator ruined it. Python needs more Mike Palls.
I'm very impressed by the recent developer experience improvements in the python ecosystem. Between ruff, uv, and https://github.com/gauge-sh/tach we'll be able to keep our django monolith going for a long time.
Any opinions about the current state-of-the-art type checker?
I'm very happy with pyright. Most bug reports are fixed within a week, and new PEPs/features are added very rapidly, usually before the PEP is accepted (under an experimental flag). Enough so that I ended up dropping pylint and now consider pyright sufficient for lint purposes as well. The most valuable lints for my work require good multi-file/semantic analysis, and pylint had various false positives there.
Main tradeoff is this only works if your codebase/core dependencies are typed. For a while that was not true, and we used pylint + pyright. Eventually most of our code was typed, and we added type stubs for our main untyped dependencies.
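(A stub is just a .pyi file of signatures with no bodies; a made-up example for a hypothetical untyped package, placed where pyright looks by default, the typings/ directory:)

    # typings/legacyhttp/__init__.pyi -- "legacyhttp" is a made-up package
    class Client:
        def __init__(self, base_url: str, timeout: float = 10.0) -> None: ...
        def get(self, path: str) -> bytes: ...

    def parse_retry_after(header: str) -> float | None: ...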
edit: Also, on pylint: it did mostly work well. tensorflow was the main library that created false positives. Another thing I found awkward was that pylint occasionally produced non-deterministic lints on my codebase.
Everyone is already recommending pyright, but I'll suggest checking the "based" community fork: https://github.com/detachhead/basedpyright
Besides re-adding features that Microsoft makes exclusive to pylance, it tweaks a number of features that IMO make pyright work better out of the box:
https://docs.basedpyright.com/latest/benefits-over-pyright/p...
Thanks for linking this. I wasn't surprised Microsoft made their AI auto-completion proprietary (they did similar for C# in VSCode). But it grated on me that semantic highlighting wasn't open source.
What did they do for C#?
They added some closed source features to the C# developer extension in VSCode. So anybody using a non-Microsoft fork of VSCode can't use those features.
If you are referring to the debugger, they did not "add" it - it was like that from the start. The C# extension is MIT, but the 'vsdbg' it ships with isn't, because it's the debugger Visual Studio uses, made into a standalone cross-platform component.
You can use an extension fork which swaps it with NetCoreDbg maintained by Samsung instead. This is what VSCodium uses.
Also note that both of these essentially consume the debugger API exposed by the runtime itself; you could make one yourself in a weekend, but no one has bothered because there isn't much need for another one.
Personally the thing that annoys me isn't so much the open vs closed source of (parts of) these extensions, but the blocking of using these extensions on VSCode forks.
Extensions are not blocked. It's the redistribution restriction the 'vsdbg' component specifically comes with. But you can easily use the fork if it's an issue :)
I have always used MyPy, but I have noticed some large open source projects adopt Pyright as of late. The folks behind ruff and uv are currently working on a type checker as well, but I haven't heard when they plan on releasing it.
has Astral confirmed they’re doing a type checker? i’m not surprised, everyone has been asking for it, but i haven’t seen them say they’re doing it
Yes it's confirmed, you can listen to Charlie talk about here: https://youtu.be/byynvdS_7ac?si=JWeeD3uwXflWl5jo&t=1980
Code Name is Red Knot: https://github.com/astral-sh/ruff/discussions/12143
MyPy is nice, but it has missed some things that pyright caught, which were legitimate bugs that could have arisen.
First, thanks for mentioning tach. I wished for this tool for a long time, and I'm happy to give it a try in the following days!
For a typechecker, I also vouch for Pyright, which is what we use for all our Django backends at work.
Just be aware that you will have a hard time typechecking the parts of your code that rely heavily on Django's magic (magic strings, auto-generated properties/methods, etc.).
In these cases, it's sometimes better to avoid those features entirely, or to accept that some parts of your code will not be typechecked.
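A sketch of the kind of thing I mean (hypothetical models; without a plugin like django-stubs, the checker can't see the attributes Django generates at runtime):

    from django.db import models

    class Author(models.Model):
        name = models.CharField(max_length=100)

    class Book(models.Model):
        author = models.ForeignKey(
            Author, on_delete=models.CASCADE, related_name="books"
        )

    author = Author.objects.first()
    if author is not None:
        author.books.all()  # reverse accessor created at runtime from related_name
    Book.objects.filter(author__name="Ada")  # "magic string" lookup, not validated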
I haven't dug into tach yet, but I'm very optimistic on this one.
Pyright is pretty good. It's easy to set up, and has first-class VSCode support, if that's what your team uses.
At this point I think even people trapped for two years on the International Space Station have heard about the Astral toolchain.
Any more info about "Astral" the org?
I've used ruff/uv for sure, but I never paid attention to Astral, who is behind them.
VC-funded organization. Not sure what their business model is yet.
Astral toolchain?
Python development tools produced by Astral (https://github.com/astral-sh), primarily uv and ruff. The GP comment is a complaint about how often people submit links to these projects. The resulting discussion tends not to produce any new insight, and the most common points IMX are "it's fast because it's written in Rust" (which is fairly specious logic - most problems with existing native Python tools are due to algorithmic problems such as poorly designed caching) or "it avoids bootstrapping problems because it's not written in Python" (it's completely possible to avoid those problems for everything except actually provisioning the Python interpreter, which some people apparently do see as a killer feature).
Maybe. I use uv, and Rust has been my primary language for several years. I have never heard of astral though.
Astral is the organization making uv. It's right there in the GitHub URL.
How do you read my comment and take away that I still don't know what Astral is? You acted like everyone had heard of it. My point is that not even all the users of uv have.
> How do you read my comment and take away that I still don't know what Astral is?
The comment where you literally say "I have never heard of astral"? Gee, I wonder.
>How do you read my comment and take away that I still don't know what Astral is?
From the comment I read, quoted directly:
>I have never heard of astral though.
If you meant "I only learned about Astral thanks to this post", then I pointed out how you might have found out by yourself before.
I first attempted to use ruff for a small project ~2 years ago, and at the time felt that it wasn't quite good enough to replace the black+isort+whatever-linter combo we were using at work.
I've used it a few times since then and now I'm a big proponent of using only ruff. I think most of its value comes from:
1. Being fast (or at least fast enough that it's not annoying).
2. Replacing the linting/formatting combo of multiple tools, reducing the cognitive load for the developer.
Anyway, big fan.
I've been using an amalgamation of pyenv, pip-tools, black, isort, etc. for projects and just gave uv and ruff a try. Wow, it really is fast! Skeptical of anything VC-backed but I'll be keeping my eye on it.
It really is as fast as it claims. I sometimes intentionally add something it will complain about just to make sure it’s still working.
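(An unused import is an easy tripwire for that; e.g. drop this anywhere:)

    import os  # deliberately unused: ruff reports F401, "`os` imported but unused"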
I do the same thing, and I keep doing it regardless of how many times I see it working properly.
I’m used to running command line tools on fast machines, but the first time I ran ruff on my codebase I was blown away. My codebase isn’t massive but most (non-rust) tools just take a while to run. It might be less than a second of startup overhead, but it’s noticeable. Ruff is so fast you wonder if it even did anything. It reminded me how fast computers actually are.
I don't care one bit about py-land, but it's always nice to see a Rust project swoop in and save the day.
Language processing like compiling or linting is just one of the many areas where Rust can really play to its strengths.
How speedy is the Rust-language tooling itself these days? I remember wishing for an 'optimize nothing' or even 'just interpret' mode. Compile times noticeably contributing to the feedback loop are a serious killjoy.
Compile times are still a bit much, but there are ways around it:
- The insanely long times are only for the initial build. Further builds are incremental and more tolerable.
- Compilation errors can be checked using 'cargo check'. It avoids the code generation step that's part of a build. I find myself doing it way more often than builds. So it's a time saver depending on how frequently you use it.
- You can extend the incremental build mentioned above using sccache [1]. At the minimum, it allows you to share the build cache between all your local projects. So it saves time if your projects or other builds share a lot of common libraries (that's very common in Rust, though). But sccache can go further by using online build caches (for example, using S3) that can be shared between hosts. Finally, sccache also supports distributed builds if you have a few machines sitting idle. (This is like distcc with extra features).
[1]: https://github.com/mozilla/sccache
There have been significant (if not earth shattering) improvements in the compiler itself. But for me at least, the bigger change has been from better hardware. I now have a processor (Apple M1 Pro) that's 10x (multi core) / 2x (single core) faster than the one I had when I first started programming using Rust (intel dual core processor in a 2015 MBP) and that seems to have translated almost perfectly into faster compile times.
A small boon to it not being in python — it's now decoupled from your python toolchain.
There is no "coupling" inherent to native Python tools. They can generally be installed in a separate virtual environment from the one they operate on.
For example, with Pip you simply pass the `--python` option to have it install for a different environment - you can create your venvs `--without-pip`, and share a copy of Pip across all of them. If you use Pipx, you can expose and use Pipx's vendored Pip for this, as I described in a recent blog post (https://zahlman.github.io/posts/2025/01/07/python-packaging-...).
Twine and Build don't care about your project's environment at all - Twine uploads a specified file, and Build by default creates a new environment for the build (and even without build isolation, it only requires your build environment to contain the build backend, and the build environment doesn't have to be the one you use for the rest of development). Setuptools, similarly, either gets installed by the build frontend into an isolated build environment, or can reside in a separate dedicated build environment. It doesn't actually operate on your project's environment - it only operates on your project's source tree.
That's a lot of words compared to "Download binary from server. Run it"
Sometimes it takes a lot of words to debunk a misconception. What you initially said didn't have anything to do with setup effort, but also there is quite little setup effort actually described here.
There is nothing to debunk, my original comment was correct. It's useful to have tooling that doesn't rely on carting around a bunch of python stuff.
In that case I have absolutely no idea what point you're trying to make. What coupling are you talking about, and why is it a problem? What do you mean by "carting around a bunch of Python stuff", and why and how is it "useful" to avoid that?
Not the person you're replying to, but if I write Python 3.13 and my linter requires Python <= 3.11, now I have to install 2 Pythons. It's nice to not have to consider that case at all.
As a developer, with a real use case for these tools, I plan to support multiple Python versions anyway, and have them all installed for testing purposes. There only needs to be one actual installation of each; everything else is done through venvs, which are trivial to spin up.
If I want to ship this to colleagues (some of whom probably won't know what a venv is but do have to write Python every now and again), I only have to worry about a single binary. Getting consistent Python environments on people's machines, particularly on Windows, is expensive (salary).
>If I want to ship this to colleagues
then they don't need your dev tools.
>won't know what a venv is but do have to write python every now and again
In practical terms, you have to understand what a venv is in order to be a Python developer, for the same reason you have to understand environment variables to do anything serious in the shell. Learning the fundamentals is a one-time cost.
Even twine decoupled itself from my python toolchain some time ago [1], through some dependency. Cannot install it unless you are on a system trusting trust in rust™.
[1] https://github.com/pypa/twine/issues/1015
Some previous discussions:
https://news.ycombinator.com/item?id=34788020
https://news.ycombinator.com/item?id=37908760
For Django projects, what's fast that will cover Python, HTML, CSS and JavaScript files? Ruff only does Python?
Prettier can do HTML, CSS and JS, but I don't think there's any one tool that can do it all. (It would be interesting if there were something like a tree-sitter for auto-formatting.)
I recently learned that tree-sitter does not have a goal of accurately representing the whole source input, and thus as it currently stands could not be used for that purpose
Now, whether a tree-sitter-esque ecosystem that is designed for source modeling could catch on like t-s did is a more interesting question
For Django projects, I would say DjLint [1]. It doesn't cover all you asked, but makes the job of writing the templates (HTML) much more pleasant.
[1] https://djlint.com/
If you already have tooling to cover each separately, why would it be better for it to be a single tool?
Django template snippets get mixed inside CSS, JavaScript and HTML. So regular CSS/JavaScript/HTML formatters get confused by those, and it's not obvious what combination to use. I know of djLint but it's pretty slow.
Does the formatting match black? (Unlikely, but one can dream of a standard being established.)
https://docs.astral.sh/ruff/formatter/black/
Now we just need this but for mypy
I would be surprised if this does not happen, given astral's ambitiousness.
So much marketing Kool-Aid/bullshit.
I'm not a fan of Ruff, because to me it doesn't make any sense to add a Rust dependency to "python", and it blows my mind that people are so keen to accept the ugly formatting inherited from Black just because "it gives consistency to the team, so I don't have to think, and we don't have to discuss code style"...
But all of that personal opinion set aside, what triggers my initial statement is that so many people are so excited to rush to Ruff because... "It is so fast"... when I'm quite sure most of them would never notice any real speed difference with their modest codebase.
In almost all the codebases I have seen, proprietary and OSS, pylint or any other linter was quasi-instant. There are probably a few huge codebases that would see a speed benefit of a few seconds with Ruff, but their owners would do better to have a look at their code to understand how they reached such a monster!
> quite sure most of them would never notice any real speed difference with their modest codebase
On the contrary: within only a year or so of coding by a small team, this ecosystem takes single-digit seconds or less, while the traditional tools take minutes and more.
Particularly in analytics or scientific computing we've seen minutes versus under a second.
Having the tools not require additional/parallel Python dependency management is a plus.
Note that we watched these tools for a long time, and jumped in when rye converged with uv. It was promising to see fewer ways to do it.
// More on that here: https://news.ycombinator.com/item?id=41309072
I don't know what kinds of codebases you've worked with, but I can tell you that pylint is so far from instant, it became the longest running CI job in multiple reasonably sized codebases I've worked with. Think tens of minutes. Other linters were not much better, until ruff came along. But that's far from the only advantage that ruff brings.
There are other issues with what you said, but the biggest one is: you have some strongly worded criticism for a project that has set a new bar for usability and consistency in Python code quality tooling. These tools are developed by humans like you and distributed to you for free with no obligation to use them. No matter how I look at your comment, I don't see how it's helping.
Can you tell us a little bit more about your codebase? I'm curious, because for it to take tens of minutes, something must be crazy over there.
I'm confused: why are you linting code in CI, rather than as a precommit hook?
Pre-commit hooks are great in small, focused codebases with small, homogeneous teams. In large monorepos with lots of different teams committing, it's impossible to guarantee any kind of consistency in which pre-commit hooks get run, so you need CI to actually enforce the consistency or you'll spend all your time chasing (accidental) violations.
... am I the only one who figures that linting would logically be a very low priority in those circumstances?
Apparently so. Mind explaining your reasoning?
Because devs can disable precommit hooks much more easily than they can work around CI.
I see precommit hooks as where you avoid the low-hanging fruit, like “is this code actually parsable?”
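(That check is cheap enough to run on every commit; a minimal sketch of such a hook's core, assuming the staged file paths are passed as arguments:)

    import ast
    import sys

    # Fail the commit if any staged Python file doesn't parse.
    for path in sys.argv[1:]:
        try:
            ast.parse(open(path, encoding="utf-8").read(), filename=path)
        except SyntaxError as exc:
            sys.exit(f"{path}: {exc}")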
Well, for one, I use Jujutsu, where commits happen every time you run jj status and traditional notions of pre-commit hooks don't really apply. But also, I think (as a matter of principle) nothing should get in the way of performing commits or amends.
Depends on what you're trying to achieve. Jobs like lint checks should ideally be pre-push checks so that the long process doesn't get in the way of commits. But very fast and small checks like warning about trailing whitespaces or ensuring a newline at the end of the file can be done during every commit (even if it was in jujutsu). I would rather not wait till the end to find out. And of course, there are ways to temporarily or permanently disable one or more checks when you absolutely need it.
My editor takes care of trailing whitespace and newline termination. I don't think Jujutsu commits should fail on this or fix it every time jj status is run — seems too magical.
Because commits should be small and fast, and always work, like a save. If you’re running a multi second process during commit it’s going to get ripped out.
Ahah! I worked on a project which used dotnet format in a commit hook. That was a frustrating experience, trying to rebase code.
(Unlike most formatters which are instant, dotnet format takes at least half a minute, because it performs a build just to format your code.)
My thinking is that the linter only has to operate on the code that was actually checked in. And just how many things are you checking about it, anyway?
I have issues with some of Black’s formatting decisions but I’ve also suffered from inconsistent formatting and there is no question in my mind that consistent formatting that I find ugly sometimes is 1000% better than the alternative. And after so many years of dealing with it it’s so refreshing to just “give up” and let the formatter win.
Same here. I appreciate that Black annoyed everyone on our team about the same amount but in different ways. From the instant we added it, stupid style arguments completely disappeared.
Know what I care about more than Black making my own code look less pretty? It making my coworker’s code look less horrid. (And funnily, I’m certain everyone on the team thinks exactly that same thing.)
> I appreciate that Black annoyed everyone on our team about the same amount but in different ways.
If it did affect people equally, it would be great. Unfortunately, spaces for indentation is an accessibility issue and the Black maintainers are hostile to making this configurable. Normally I am in agreement about minimising configurability, but this isn’t a matter of taste, it is making the Python ecosystem more difficult for some disabled people to participate in.
https://github.com/psf/black/issues/2798
Fortunately, Ruff makes this configurable, so you don’t have to choose between autoformatting and accessibility any more.
Thank you. I've heard this argument before and it seems to be a very strong one.
I do have a question, though -- can accessibility tools infer tabs based on visual layout or some other mechanism? It seems like one option is for accessibility tools to transform code into tabs internally, which would make them instantly compatible with the vast majority of existing code that uses spaces. What are the practical barriers to making that happen? And is there a good article that discusses all this in a clear, even-handed fashion?
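To sketch the easy half of that transformation (assuming a uniform four-space indent; mixed indentation and continuation-line alignment are where it gets hard):

    def spaces_to_tabs(line: str, indent: int = 4) -> str:
        """Convert each run of `indent` leading spaces into one tab."""
        stripped = line.lstrip(" ")
        leading = len(line) - len(stripped)
        tabs, remainder = divmod(leading, indent)
        return "\t" * tabs + " " * remainder + stripped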
Personally, I think excellence is a virtue in tooling in and of itself. The Python world just hasn't experienced this kind of excellence before.
> But all of that personal opinion set aside, what triggers my initial statement is that so many people are so excited to rush to Ruff because... "It is so fast"... when I'm quite sure most of them would never notice any real speed difference with their modest codebase.
I work on a fairly modest code base (22k lines of Python according to sloccount), and I'm seeing a significant decrease in runtime between `pylint src/ tests/` with default parameters and `ruff check src/ tests/` with the majority of checks enabled. More specifically, I'm seeing a decrease from 18 seconds to less than a tenth of a second with a hot FS and having deleted the ruff cache between runs.
I like that it's fast, and it is noticeably faster for even moderately sized codebases.
But the main thing I like about ruff is that it is a single tool with consistent configuration, as opposed to a combination of several different tools for formatting and linting that each have their own special way of configuring them and marking exceptions to lint rules.
Doesn’t Black mostly stick to PEP-8 style? What rules do you consider ugly?
When I write argparse or click, I use one parameter per line, like:
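(Something like this; the names are made up for illustration:)

    parser.add_argument(
        "--limit",
        type = int,
        default = 10,
        help = "maximum number of results"
    )

    # Black reformats that to:
    parser.add_argument("--limit", type=int, default=10, help="maximum number of results")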
Black turns it into a single line, as above. I find the one-line-per-term easier to understand, and even though it fits on a single line, I would rather have all add_argument()/@click.option() calls follow the same layout, to make it easier to discern the structure across dozens of similar calls. I also like to have spaces around the "=" in my keyword=arguments, except for very short and simple calls.
PEP 8 says "Don’t use spaces around the = sign when used to indicate a keyword argument" so black is following that recommendation.
However, everywhere else in the PEP (assignment like 'x = 5', and annotations like 'class Spam: foo: int = 4' or 'def spam(foo: int = 4)'), there are spaces on either side of the equals sign.
That irritates me every time I have to use black.
add a comma after the last argument to make black explicitly use multi-line formatting
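(This is Black's "magic trailing comma": if the last element is followed by a comma, Black keeps the call expanded instead of collapsing it. Same made-up example as above:)

    parser.add_argument(
        "--limit",
        type = int,
        default = 10,
        help = "maximum number of results",
    )

    # Black keeps it one-per-line, and normalizes the "=" spacing:
    parser.add_argument(
        "--limit",
        type=int,
        default=10,
        help="maximum number of results",
    )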
It also removed the superfluous spaces in the keyword arg assignments.

The point they're trying to make, I think, is: Black/Ruff's formatter, or any other formatter, necessarily needs to operate on universal rules. In context, sometimes these rules don't make sense. I still would love some kind of stateful linter and formatter, one that suggests changes that I can then accept or ignore (and won't be told about again).
Formatters _are_ a compromise. They make your coworkers' code nicer, and your code worse.
The point I was trying to make is to give examples of rules I thought were ugly, to give a concrete response to paulgb's request for such a rule.
One, as I learned, could be resolved by a simple use of a terminal ",".
The other is how it removes spaces from around "=" for keyword arguments, but not for other uses of "=".
I can't provide much more as I rarely use black. As a single developer, I don't have to worry much about that sort of compromise. ;)
The removal of spaces around keyword arguments is per the PEP8 style guide. There is no point arguing with the style guide. It is there to end discussions. I also do not agree with everything in the style guide. But I keep that to myself. Because a single universal (but flawed) style guide >> competing style guides >> complete anarchy.
Yes, I even pointed out how it's in PEP 8.
That I think it's wrong and ugly is an entirely different point.
PEP 8 specifically says it is not universal:
> Many projects have their own coding style guidelines. In the event of any conflicts, such project-specific guides take precedence for that project. ...
> However, know when to be inconsistent – sometimes style guide recommendations just aren’t applicable. When in doubt, use your best judgment. ...
> Some other good reasons to ignore a particular guideline:
> When applying the guideline would make the code less readable, even for someone who is used to reading code that follows this PEP.
I think always omitting spaces there makes it less readable, even for someone who is used to reading PEP 8.
That makes me more compliant with PEP 8 than black. ;)
Sweet! Thanks! I did not know that, and I've no problem with a terminal comma there.
My point stands - I do not think those spaces are superfluous. Consider the following:
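For example (a made-up function, just to show the shape):

    n = 10

    def frobnicate(i=n):   # PEP 8 and black: no spaces around "=" here...
        j = i + n          # ...even though "=" gets spaces everywhere else
        return j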
Why is it "i=n" instead of "i = n" when every other use of "=" has spaces?For this one case of a short function with simple names, okay, I don't always use spaces.
But otherwise I think the lack of spaces makes the code harder to read, and thus "uglier".
you can discuss the spaces and lack thereof here: https://stackoverflow.com/questions/8853063/pep-8-why-no-spa...
The rest of us just follow the PEP8 style guide and move on
The question was 'What rules do you consider ugly?'.
I think that rule is ugly. I explained why.
If you don't like the thread, move on.