86 points by BiteCode_dev 12 days ago | 79 comments
> 2. Centralized venv storage — keep .venvs out of your project dirs
I do not like this. Virtual environments have always been associated with projects and colocated with them. Moving .venv to centralized storage recreates the conda philosophy, which is very different from the pip/uv approach.
In any case, I am using pixi now and like it a lot.
I like it. Enjoyed having it with Conda, was sorry when it was lost with uv. Been a pain to search my projects and have irrelevant results that I then have to filter. Or to remember to filter in the first place. The venvs may be associated with the projects, but they're just extraneous clutter unless there's actually something to be done directly on them, which is very rare.
But what happens when you accidentally delete the central store, or it gets corrupted? With colocated venvs you lose one project's dependencies. With centralized storage you potentially nuke everything at once. Has anyone actually stress-tested that failure mode?
One problem I have on my work machine is that it does a blind backup of project directories. A useless .venv structure with thousands of files completely trashes the backup process. Having at least the flexibility to push the .venv to a cache location is useful. There was (is?) a uv issue about a similar use case (e.g. having a Dropbox/OneDrive-monitored folder).
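For the backup case specifically there is at least a partial escape hatch: if I remember right, recent uv versions honor a UV_PROJECT_ENVIRONMENT variable (worth double-checking in the docs for your version), so you can push the venv out of the backed-up tree yourself:

    # sketch: keep the project venv outside the directory the backup tool scans
    export UV_PROJECT_ENVIRONMENT="$HOME/.cache/venvs/myproject"
    uv sync        # creates/updates the venv at that path instead of ./.venv
    uv run pytest  # picks up the same environment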
That's my biggest problem with uv; I liked the way pipenv did it much better. I want to be able to use find and recursive grep without worrying that libraries are in my project directory.
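In the meantime I just make the tools skip it (GNU grep/find shown):

    # recursive grep that ignores colocated venvs
    grep -rn --exclude-dir='.venv' 'some_symbol' .

    # find that prunes the venv subtree
    find . -path './.venv' -prune -o -type f -name '*.py' -print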
Pip doesn’t have any philosophy here. It doesn’t manage your virtualenv at all, and definitely doesn’t suggest installing dependencies into a working directory.
Putting the venv in the project repository is a mess; it mixes a bunch of third party code and artifacts into the current workspace. It also makes cleaning disk space a pain, since virtualenvs end up littered all over the place. And every time you “git clean” you have to bootstrap all over again.
Perhaps a flag to control this might be a good fit, but honestly, I always found uv’s workflow here super annoying.
Disagree: better to have space allocated in each project where the venvs can easily be deleted at once, rather than half-hidden in your home folder somewhere with random names and forgotten about.
If for some rare reason you wanted to delete all venvs, a find command is easy enough to write.
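Something like this, assuming your projects live under one root and use the standard ./.venv layout:

    # delete every colocated venv under ~/code in one pass
    find ~/code -type d -name '.venv' -prune -exec rm -rf {} +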
pixi is a general multi-language, multi-platform package manager. I am using it now on my new MacBook as a Homebrew _replacement_. Yes, it goes beyond Python and allows you to install git, jj, fzf, cmake, compilers, pandoc, and many more.
For Python, pixi uses conda-forge and PyPI as package repos; it relies on the rattler library for conda packages and reuses uv's resolver for PyPI dependencies. pixi is as fast as uv (it reuses uv's fast code paths) but goes beyond Python wheels.
For details, see [0] or google it :-)
Virtual environments have always been associated with projects in your use case, I guess.
In my use case, they almost never are. Most people in my industry have 1-2 venvs that they use across all their projects, and uv forcing the venv into each project directory made things quite inconvenient and caused unnecessary duplication of the same sets of libraries.
I dislike conda not because of the centralized venvs, but because it's bloated, poorly engineered, slow and inconvenient to use.
At the end of the day, this gives us choice. People can use uv or they can use fyn and have both use cases covered.
> and uv forcing the venv into each project directory made things quite inconvenient and caused unnecessary duplication of the same sets of libraries.
Actually, uv intelligently uses hardlinks or reflinks to avoid file duplication. On the surface, venvs in different projects look like duplicates, but in reality they reference the same files in uv's cache.
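If you want to see it for yourself (paths illustrative; this check only works where uv hardlinks -- on filesystems like APFS or btrfs it reflinks instead, so inode numbers differ even though blocks are shared):

    uv cache dir   # where the shared copies live
    # identical inode numbers across two projects = one copy on disk
    ls -i project-a/.venv/lib/python3.12/site-packages/requests/__init__.py
    ls -i project-b/.venv/lib/python3.12/site-packages/requests/__init__.py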
BTW, pixi does the same. And `pixi global` allows you to create global environments in a central location if you prefer this workflow.
EDIT: I forgot to mention the elephant in the room. With agentic AI coding you do want all your dependencies under your project root. AI agents run in sandboxes, and I do not want to give them extra permissions to poke around my entire storage. I start an agent in the project root and all my code and the .venv are there. This gives the agent a sense of locality: it only needs to poke around under the project root and nowhere else.
This is actually the feature that initially drew me towards uv. I never have to worry about where venvs live while suffering literally zero downsides. It's blazing fast, uses minimal storage, and version conflicts are virtually impossible.
Do you only work on projects individually? Without project-specific environments I don’t know how you could share code with someone else without frequent breakages.
We do something similar — one shared "data science" venv across six or seven notebooks that all need numpy, pandas, the usual suspects. Recreating that per project would be pointless. It's really a data/research workflow vs. software engineering workflow split.
Given the telemetry, how did uv ever get approved/adopted by the open source community to begin with, or did it creep in? Why isn't it currently burning in a fire?
The field that guesses if something is running in a CI environment is particularly useful, because it helps package authors tell if their package is genuinely popular or if it's just being installed in CI thousands of times a day by one heavy user who doesn't cache their requirements.
Honestly, stripping this data and then implying that it was collected by Astral/OpenAI in a creepy way is a bad look for this new fork. They should at least clarify in their documentation what the "telemetry" does so as not to make people think Astral were acting in a negative way.
Personally I think stripping the telemetry damages the Python community's ability to understand the demographics of package consumption while not having any meaningful impact on end-user privacy at all.
Then give me your version of why it's not reasonable for the Python packaging community (who are the recipients of this data, it doesn't go to Astral) to want to collect aggregate numbers against those platform details.
I don't think it is too bad; the telemetry it sends is quite rudimentary. However, it would have been a good move from astral-sh to be open and explicit about it, and to allow turning it off.
> These things include your OS, py version, CPU architecture, Linux distro, whether you're in CI. All baked into the User-Agent header via something called "linehaul". We ripped that out. Now it just sends fyn/0.10.13. That's it.
I imagine it's just that the User-Agent is something that we've grown accustomed to passing information in. I am fairly biased since I'd always opt-in even to popcon. I think it's useful to have such usage information.
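For reference, the kind of thing a linehaul-style header carries looks roughly like this (field names and values illustrative, not uv's exact wire format), next to what the fork says it now sends:

    User-Agent: uv/0.5.0 {"ci": false, "cpu": "x86_64", "distro": {"name": "Ubuntu", "version": "24.04"}, "implementation": {"name": "CPython", "version": "3.12.3"}}
    User-Agent: fyn/0.10.13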
This is so useful, I'm shocked they even make a big thing out of it. And now I'm questioning whether this is even their real intention, or just a diversion?
They're saying "we removed telemetry" in the hope of getting an emotional response from people who are privacy-focused, to get quick stars/attention.
Ran it on a medium-sized project with around 40 deps. Resolution felt identical to uv, no breakage. Honestly the telemetry thing is more about principle than real risk -- it's just nice having a fork where the User-Agent isn't broadcasting your entire environment to Astral's servers.
On the contrary, OSS is precisely where this kind of spying on your users is least useful, since there's already a culture of them telling you, sometimes with code, what they need.
If that's the argument, that's exactly the problem. They are telling you X, but what people tell you isn't their honest preference, or the ones telling you are a loud minority.
If you ask people what coffee they want, they will all tell you low-sugar, very bitter black coffee. Then you see what they buy, and they keep buying sugary and creamy coffee that contains almost no caffeine.
Telemetry isn't spying. At least when done properly. How do you figure out rare OOM crashes without some telemetry data? What if the reporter doesn't know how to figure out their OS and installed software that's required for debugging?
I'm NOT saying telemetry should capture everything and sell that data to info brokers. I'm saying that, done properly, it gives you valuable feedback. And you should be transparent about it.
I will always have a knee-jerk response to opt-out or mandatory telemetry, or to any other outbound connection I did not ask for being initiated automatically. In a corporate world I would have to block this, and depending on what the telemetry connects to, that could impact other outbound connections, leading to contention within the org.
One of the better ways to do this would be opt-in: set an environment variable to enable any combination of extra debugging, telemetry, stats, etc. Perhaps even different endpoints, configured via environment variables.
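Something in this shape, say (variable names hypothetical, just to sketch the idea):

    # everything off unless the user explicitly asks for it
    export FYN_TELEMETRY=1                                             # hypothetical: opt in to platform details in the User-Agent
    export FYN_TELEMETRY_ENDPOINT='https://stats.example.org/ingest'   # hypothetical: report somewhere you control
    export FYN_DEBUG='resolver,cache'                                  # hypothetical: extra debugging, independent of telemetry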
Are you saying that when you tell uv to install a package you aren't asking it to make outbound connections to download the package from PyPI? The telemetry in question is just setting an appropriate User-Agent header with only slightly more data than what browsers traditionally put there. It does not make extra network requests purely for the sake of reporting information.
If I understand the description of this "telemetry" in fyn's MANIFESTO.md correctly, it does not make outbound connections you did not ask for. It sets the User-Agent HTTP header to something that identifies your OS, CPU, Python version, and whether you are running in CI when communicating with the package registry. It does not send any of that to Astral, nor is any of it highly personal.
Sure, it should not be there by default, especially the OS and CPU, imho. But it's not really what I'd call "invasive telemetry".
As someone shipping native Node addons, registry telemetry (OS, arch, platform) is one of the few ways I know which build targets to actually prioritize. Without it I'd be guessing whether anyone's even using linux-arm64-musl. I get the instinct to strip it, but for package maintainers it's genuinely useful data.
I suspect that my normal workflows might just have evolved to route around the pain that package management can be in python (or any other ecosystem really).
In what situations is uv most useful? Is it once you install machine learning packages and it pulls in more native stuff, i.e. is it more popular in some circles? Is there a killer feature that I'm missing?
If you have hundreds of different Python projects on your machine (as I do) the speed and developer experience improvements of uv make a big difference.
I love being able to cd into any folder and run "uv run pytest" without even having to think about virtual environments or package versions.
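For anyone who hasn't tried it, the whole ceremony is:

    cd some-project/
    uv run pytest
    # uv resolves against the lockfile, creates or refreshes .venv if needed,
    # then runs pytest inside it -- no manual activate step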
Same as running sudo pip install on everything circa 2012. We all did it, knew it was wrong, and most of us got away with it. The ones who didn't had memorable post-mortems.
I guess that could be useful. I don't have many standalone python scripts, and those that I do have are very basic. It would be really nice if that header could include sandboxing!
So much this! I've been bugging Astral about addressing the sandboxing challenge for a while; I wonder if that might get more priority now that they're at OpenAI?
There was no "telemetry" in uv to begin with. They're just aiming for an emotional response. Read about the "telemetry" they removed and you'll find it funny.
The telemetry point seems overstated. Sending platform metadata to a package registry is fairly standard behavior — pip does it, cargo does it. Calling it "stripped telemetry" implies something more nefarious than what the evidence actually shows. As far as I can tell, this is mostly rebranding.
And the first two commits are "new fork" and "fork", where "new fork" is a nice (+28204 -39206) commit and "fork" is a cheeky (+23971 -23921) commit.
I think I'm good. And I would question the judgement of anyone jumping on this fork.