by smallmancontrov 4 days ago

I'm so glad git won the dvcs war. There was a solid decade where mercurial kept promoting itself as "faster than git*†‡" and every time I tried it wound up being dog slow (always) or broken (some of the time). Git is fugly but it's fast, reliable, and fugly, and I can work with that.

alwillis 3 days ago | [-7 more]

> I'm so glad git won the dvcs war. There was a solid decade where mercurial kept promoting itself as "faster than git".

It wasn't the Mercurial team saying it was faster than Git; that was Facebook after contributing a bunch of patches after testing Mercurial on their very large mono-repo in 2014 [1]:

> For our repository, enabling Watchman integration has made Mercurial’s status command more than 5x faster than Git’s status command. Other commands that look for changed files, like diff, update, and commit, also became faster.

In fact they liked Mercurial so much they essentially cloned it to create their own dvcs, Sapling [2]. (An aside: Facebook did all of this because it was taking too long getting new engineers up to speed with Git. Shocker.)

Today, most of the core of Mercurial has been rewritten in Rust; when Facebook did their testing, Mercurial was nearly 100% Python. That's where the "Mercurial is slow" thing came from; launching a large Python 2.x app took a while back in the day.

I was messing with an old Mercurial repo recently… it was like a breath of fresh air. If I can push to GitHub using Mercurial… sign me up.

[1]: https://engineering.fb.com/2014/01/07/core-infra/scaling-mer...

[2]: https://sapling-scm.com/

Gabrys1 3 days ago | [-5 more]

You can push to GitHub using Sapling. I wish the open-source Sapling were given more love, as the experience for non-Facebookers is subpar. No bash completion out of the box, no distro packages, no good help pages, random issues interacting with a Git repo...

Ericson2314 3 days ago | [-1 more]

Sapling and JJ can sort it out; the outside world will only care about one of them.

alwillis 3 days ago | [-0 more]

> Sapling and JJ can sort it out; the outside world will only care about one of them.

I was immediately intrigued when I learned that JJ has revsets [1], just like Mercurial.
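
A quick, minimal illustration of the kind of query both tools can answer (the author name is made up; syntax is from the respective docs, so double-check before relying on it):

    # Mercurial: local-only (draft) commits by a given author
    hg log -r 'draft() and author("alice")'

    # jj: your own commits that haven't become immutable yet
    jj log -r 'mine() & mutable()'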

[1]: https://docs.jj-vcs.dev/latest/revsets/

withinboredom 3 days ago | [-2 more]

Sounds like what my teachers used to say: “a personal problem”. Literally nobody outside FB knows what they’re missing and until they fix that, literally nobody cares.

itsdesmond 3 days ago | [-1 more]

> Sounds like what my teachers used to say: “a personal problem”.

They don’t sound like very good teachers.

withinboredom 3 days ago | [-0 more]

Judging by the amount of adults wandering around thinking their personal problems are everyone else’s problem… they were pretty good teachers.

smallmancontrov 3 days ago | [-0 more]

No, the "hg is fast" marketing claim that retreated to "hg is Big-O fast and you are dumb for caring about constant terms and factors even if they clearly dominate your use case" predates 2014 and the Facebook patches. These talking points were old in 2010. Mercurial was always dog slow and always gaslighting about it.

I'm glad BigCo made tools to serve their needs, but their needs aren't my needs or most people's needs.

> Mercurial has been rewritten in Rust

I'm glad they saw the light eventually! Ditto for the rest of the Rust Tooling Renaissance.

steveklabnik 4 days ago | [-20 more]

What is kind of funny here is that you're right locally. At the same time, the larger tech companies (Meta and Google, specifically) ended up building off of hg and not git because (at the time, especially) git could not scale up to their use cases. So while the git CLI was super fast, and the hg CLI was slow, "performance" means more than just CLI speed.

I was never a fan of hg either, but now I can use jj, and get some of those benefits without actually using it directly.

landr0id 4 days ago | [-11 more]

>At the same time, the larger tech companies (Meta and Google, specifically) ended up building off of hg and not git because (at the time, especially) git cannot scale up to their use cases.

Fun story: I don't really know what Microsoft's server-side infra looked like when they migrated the OS repo to git (which, contrary to the name, contains more than just stuff related to the Windows OS), but after a few years they started to hit some object scaling limitations where the easiest solution was to just freeze the "os" repo and roll everyone over to "os2".

MASNeo 3 days ago | [-0 more]

“roll everyone over to os2”

The IBM crowd may feel vindicated at last.

miki123211 3 days ago | [-0 more]

So 30 odd years later, MS went from working on OS/2 to working on OS2?

I guess what's old is new again.

w0m 3 days ago | [-3 more]

didn't msft write an ~entire new file system specifically to scale git to the windows code base?

I have fuzzy memories on reading about it.

landr0id 3 days ago | [-1 more]

They wrote something that allowed them to virtualize Git -- can't remember the name of that. But it basically hydrated files on-demand when accessed in the filesystem.

The problem was, I think, something to do with the number of git objects it was scaling to causing crazy server load. I don't remember the technical details, but it definitely involved the scale of git objects.

kritr 2 days ago | [-0 more]

Unfortunately even with these improvements, working in the repo was quite slow.

Changing branches took an eternity, and people resorted to a more workspaces-style solution.

If you’re planning on starting a big tech company, I wouldn’t recommend the approach.

jamesfinlayson 3 days ago | [-0 more]

I thought Microsoft made a number of improvements to git to allow it to work with all of their internal repos.

kqr 3 days ago | [-4 more]

I have heard that the Google monorepo is called google3 but I don't know why. Maybe those things are common...

mike_hearn 3 days ago | [-0 more]

Probably a lot of Googlers don't know. It's ancient history; it was called google3 even in 2006 when I first joined.

google1 = code written by Larry, Sergey and employee number 1 (Craig). A hacky pile of Python scripts, dumped fairly quickly.

google2 = the first properly engineered C++ codebase. Protobufs etc were in google2. But the build system was some jungle of custom Makefiles, or something like that. I never saw it directly.

google3 = the same code as google2 but with a new custom build system that used Python scripts to generate Makefiles. I suppose it required a new repository so they could port everything over in parallel with code being worked on in google2. P4 was apparently not that great at branches and google3 didn't use them. Later the same syntax for the build files was kept but turned into a new language called Starlark, and the Makefile generator went away in favor of Blaze, which directly interpreted them.

At least, that's the story I vaguely recall.

ongy 3 days ago | [-0 more]

It's the third attempt at building the monorepo.

But not a third monorepo built on the same technology to get around some scaling limit.

roca 3 days ago | [-1 more]

It's not that.

vasco 3 days ago | [-0 more]

Thanks for explaining!

dijit 3 days ago | [-3 more]

Small nit: Google's monorepo is based on Perforce.

I think what happened is Google bought a license for source code and customised it.

steveklabnik 3 days ago | [-1 more]

Yes, the server is based on Perforce, called Piper, but the CLI is based on mercurial. So locally you're doing hg and then when you create a CL, it translates it into what p4 needs.

surajrmal 3 days ago | [-0 more]

Depends on what frontend tool you use. You can use either. These days you can also use jj. I'm not sure the backend resembles Perforce any longer.

unmole 3 days ago | [-0 more]

> Google bought a license for source code and customised it.

That makes sense because vanilla Perforce is unbearably slow and impossible to scale.

Last I checked, it was bought by Private Equity firms and actual product development had more or less stopped.

smallmancontrov 4 days ago | [-3 more]

Right, and I'm glad there are projects serving The Cathedral, but I live in The Bazaar so I'm glad The Bazaar won.

The efforts to sell priest robes to fruit vendors were a little silly, but I'm glad they didn't catch on because if they had caught on they no longer would have been silly.

dwattttt 3 days ago | [-2 more]
hypeatei 3 days ago | [-1 more]
bonzini 3 days ago | [-0 more]

It's not a coincidence; it was named that way as a reference to facilitating distributed development.

littlecranky67 3 days ago | [-2 more]

I might be the outlier, but am I the only one who doesn't care much about the speed of git? I've been using git since 2011 as my main VCS for personal and professional work as a freelance contractor. Whenever I "wait" for git, it is either limited by bandwidth (git clone) or by the number of commit hooks that I implemented for linting, verification, etc. The percentage of time actually spent in git's internal execution must be a tiny fraction of my day-to-day usage. What IS affecting me (and the teams I work in) is usability and UX. If people screw up stuff (no matter whether in git or mercurial), we spend far more time fixing that; I don't think the implementation speed matters here.

The only case I can imagine is when doing a full checkout of a big repo, but even there, there is --depth which is quite practical.
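
For example (the URL is just a placeholder; --filter is the newer partial-clone option, if I remember right):

    # shallow clone: only the most recent commit's history
    git clone --depth 1 https://example.com/big-repo.git

    # partial clone: full history, but file contents fetched on demand
    git clone --filter=blob:none https://example.com/big-repo.git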

windward 3 days ago | [-0 more]

Isn't it kind of like how you don't care much about the oxygen content of the air around you, but you'd miss it if it was gone? I've done development with Mercurial; simple processes were irritatingly slow, particularly if you strayed from the better-supported, opinionated path.

ak217 3 days ago | [-0 more]

I spent a long time educating teams of developers about git's usability quirks. I don't do that as much anymore - partly because the quirks have been worked out, partly because the developers have better guardrails and resources to learn from.

This whole time (the past 15 years) git has been getting faster without most of us noticing, because big companies have been investing in speeding it up. The reason you don't notice or care is that they work on a very different scale. Thousands of users, thousands of PRs per day, millions of CI/CD jobs all hitting the repo.

Now the cycle is repeating as these numbers shoot through the roof because of agentic coding.

eqvinox 4 days ago | [-2 more]

I remember using darcs, but the repos I was using it with were so small that performance really didn't matter…

riffraff 3 days ago | [-0 more]

I remember darcs fondly but even with tiny repos (maybe 5-6 people working on it) we hit the "exponential merge" issues.

It worked just fine 99% of the time and then 1% it became completely unusable.

dented42 3 days ago | [-0 more]

I definitely miss Darcs. I still use it very occasionally, but only with very small repos.

raincole 4 days ago | [-0 more]

This matches my experience 100%. I was about to write a similar comment before I saw yours.

forrestthewoods 4 days ago | [-94 more]

Mercurial has a strictly superior API. The issue is solely that OG Mercurial was written in Python.

Git is super mid. It’s a shame that Git and GitHub are so dominant that VCS tooling has stagnated. It could be so so so much better!

windward 3 days ago | [-12 more]

Mercurial can't rebase without an extension, or force push. Are you using a definition of strictly superior that means it has fewer features?

qsera 3 days ago | [-9 more]

Mercurial's model is different enough from Git's that the things you list don't make sense there.

Rebase does not make sense in Mercurial because it has the concept of fixed branches. A commit is permanently linked to the branch on which it was made. So you are supposed to use merges.

Same with force-pushing.
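
Rough sketch of what that looks like in practice (branch name made up):

    hg branch feature-x              # start a named branch
    hg commit -m "add the thing"     # the branch name is recorded in the commit itself
    hg log -r 'branch("feature-x")'  # still answerable later, even after merges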

windward 3 days ago | [-5 more]

I know. It's an opinion about how to develop that a lot of people hold - a declining proportion, mind you, like Mercurial's declining market share - and it's one that they're able to represent in Git's model, with Git's features. They're even able to do it without exposing me to it. But the same isn't true in reverse. Strictly superior?

Believe me, I tried to have an open mind about it. Then one day I was getting ready to go on a work trip with a half-finished feature on my work laptop, and realised there was simply no in-model way for backing that wip up to the repo. If I lost my laptop, I lost the progress. mercurial-scm fails at SCM.

qsera 3 days ago | [-2 more]

>in-model way for backing that wip up to the repo.

That is because you have this notion of a "clean history" (which, IIUC, prevented you from making this permanent wip commit), which in reality does not have a lot of use. For most projects, "useful history" or "real history" is better than a "clean" history.

That is what mercurial caters to.

windward 8 hours ago | [-1 more]

>For most project, "useful history" or "real history" is better than a "clean" history.

This is your opinion, so I'm compelled to point out it's not the consensus opinion.

qsera 2 hours ago | [-0 more]

Everything that I say is my opinion. I don't parrot consensus.

ezst 3 days ago | [-1 more]

> one that they're able to represent in Git's model, with Git's features. They're even able to do it without exposing me to it. But the same isn't true in reverse. Strictly superior?

not sure what you mean to say, but for thoroughness' sake, no: git and mercurial concepts are not interchangeable, with git having mostly an inferior model.

To give examples: git has no concept of branching (in the way every VCS but Git uses the term). A branch in git is merely a tag on the tip of a series, meant to signify that all its ancestors belong to the same lineage. This comes with the implication that the lineage information is totally lost when two branches merge (you can't tell which side of the merge corresponded to which lineage). The ugly and generalised workaround is to abuse the commit message (e.g. "merge feat-ABC into main") to store an essential piece of repository history that the VCS cannot record.

Another example is phasing: mercurial records at the commit level whether it was exchanged with others or not. That draws a clean line between the history that's always safe to rewrite and the history that is subject to conflicting merges if the person you shared those commits with also happened to rewrite them on their end.
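
A minimal sketch, using the standard commands as far as I remember them:

    hg phase -r .        # shows whether the working-copy parent is public, draft or secret
    hg log -r 'draft()'  # everything still in the draft phase, i.e. safe to rewrite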

> Then one day I was getting ready to go on a work trip with a half-finished feature on my work laptop, and realised there was simply no in-model way for backing that wip up to the repo. If I lost my laptop, I lost the progress. mercurial-scm fails at SCM.

Sorry to be blunt, but that's a skill issue: hg is no different than every other VCS in that regard. If you want your WIP changes to leave your laptop, you've got to push them somewhere, just like you would in git.

windward 8 hours ago | [-0 more]

>If you want your WIP changes to leave your laptop, you've got to push them somewhere, just like you would in git.

Permanently, to a single branch in un-buildable form. So useful.

ezst 3 days ago | [-2 more]

I'd like to fill up some inaccuracies in your response:

- rebasing in Mercurial simply means chopping a subtree off of the history and re-attaching it to a different parent commit. In that sense, rebasing is a very useful and common history-rewriting operation. In fact, it's even simpler and more powerful/versatile than in git, because mercurial couldn't care less whether the subtree you are rebasing belongs to a branch or not: it's just a DAG. It gets transplanted from A to B. A may or may not be your checked-out commit, or the tip of a branch; it doesn't matter.

- that mercurial requires a configuration toggle before rebasing can be used (i.e. that the user needs to enable the extension explicitly) is a way to encourage interested users to learn their tool, and grow its capabilities together with their knowledge. It's opinionated, it may be too much hand-holding for some, but there is an elegant simplicity in keeping the help pages and autocomplete commands just as complex as the user can take it.
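
Concretely, something like this (revision numbers are placeholders):

    # ~/.hgrc -- the extension ships with Mercurial, it just needs switching on
    [extensions]
    rebase =

    # transplant the subtree rooted at revision 42 onto revision 57
    hg rebase -s 42 -d 57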

qsera 2 days ago | [-1 more]

> rebasing in Mercurial simply means chopping...

Sure, but since commits have a branch attribute attached to them, "rebasing" does not appear to be "first class". It is something that has to be bolted on with an extension.

> because mercurial couldn't care less if the sub-tree you are rebasing belongs to a branch or not

IIUC Git also does not care much about the rebase target being a "branch".

I agree that Mercurial provides more value out of the box than git because it preserves branch info in commits.

I can live with Git because Git is "enough" if used carefully and after coming to terms with the non-intuitive UI.

ezst 21 hours ago | [-0 more]

> Sure, but since commits have a branch attribute attached to them, "rebasing" does not appear to be "first class".

Again, that's orthogonal: you may or may not use "named branches" (the kind that persists at the commit level); rebasing works either way, consistently and predictably.

> It is something that has to be bolted on with an extension.

The extension ships in core, UX is why it's not enabled by default.

> IIUC Git also does not care much about the rebase target being a "branch".

Indeed, it's just that things likely get weird (for no good reason) when you don't (detached head, "unreachable" commits)

> I can live with Git because Git is "enough" if used carefully and after coming to terms with the non-intutive UI.

That's our sad state of affairs. JJ helps a bit, though.

saagarjha 3 days ago | [-1 more]

When I ask for this people like to explain that these are bad features nobody should want.

qsera 3 days ago | [-0 more]
worldsayshi 4 days ago | [-3 more]

Maybe forgejo has a shot?

ptx 3 days ago | [-0 more]
PeterStuer 3 days ago | [-1 more]

Unfortunately, out-of-the-box LLM agents only focus on GitHub support, creating friction.

worldsayshi 3 days ago | [-0 more]

So pi.dev + forgejo?

awesome_dude 4 days ago | [-37 more]

Whatever your opinion on one tool or another might be - it does seem weird that the "market" has been captured by what you are saying is a lesser product.

IOW, what do you know that nobody else does?

jorams 3 days ago | [-12 more]

So far you've only gotten responses to "how can a worse product win?", and they are valid, but honestly the problem here is that Mercurial is not a better product in at least one very important way: branches.

You can visit any resource about git and branches will have a prominent role. Git is very good at branches. Mercurial fans will counter by explaining one of the several different branching options it has available and how it is better than the one git has. They may very well be right. It also doesn't matter, because the fact that there's a discussion about what branching method to use really just means Mercurial doesn't solve branches.

For close to 20 years the Mercurial website contained a guide that explained only how to have "branches" by having multiple copies of the repository on your system. It looks like the website has now been updated: it doesn't have any explanation about branches at all that I can find. Instead it links to several different external resources that don't focus on branches either. One of them mentions "topic", introduced in 2015. Maybe that's the answer to Git's branching model. I don't care enough to look into it. By 2015 Git had long since won.

Mercurial is a cool toolbox of stuff. Some of them are almost certainly better than git. It's not a better product.

LordDragonfang 3 days ago | [-5 more]

This is so strange, because, at a low level, a branch isn't even a "thing" in git. There is no branch object type in git; it's literally just a pointer to a commit, functionally no different from a lightweight tag except for the commands that interact with it.
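
You can see it directly (assuming a branch called main, and that the ref hasn't been packed yet):

    cat .git/refs/heads/main   # just a file containing a commit hash
    git rev-parse main         # same answer, and works for packed refs too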

fc417fc802 3 days ago | [-3 more]

Meanwhile mercurial has bookmarks. TBF I'm not sure when it got those but they've been around forever at this point. The purpose is served.

I think there are (or perhaps were) some product issues regarding the specifics of various workflows. But at least some of that is simply the inertia of entrenched workflows and where there are actual downsides the (IMO substantial) advantages need to be properly weighed against them.

Personally I think it just comes down to the status quo. Git is popular because it's popular, not because it's noticeably superior.

ezst 3 days ago | [-2 more]

> I think there are (or perhaps were) some product issues regarding the specifics of various workflows.

I love jumping in discussions about git branching, because that's a very objective and practical area where git made the playing field worse. Fewer and fewer people feel it, because people old enough to have used branch-powered VCSes have long forgotten about them, and those who didn't forget are under-represented compared to the newcomers who have never experienced anything else since git became a monopoly.

Anyhow, let's pick django as a project that was using a VCS with branches before moving to git/github, and have a look at the repo history: https://github.com/django/django/commits/stable/6.0.x

Yes, every commit is prefixed with the branch name, because, unlike mercurial, git is incapable of storing this in its commit metadata. That's ridiculous, that's obscene, but that's the easiest way to do it with git.
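
For comparison, Mercurial can answer the question from commit metadata, while git has to fall back on message conventions (the branch name is the hypothetical one from above):

    # Mercurial: the branch is part of the commit
    hg log -r tip --template '{branch}\n'

    # git: the closest you get is grepping merge-commit subjects
    git log --merges --oneline --grep 'feat-ABC'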

jstimpfle a day ago | [-0 more]

Just because there is one project apparently using this in a way that indicates someone could perceive something as a weakness... It doesn't mean it's a real weakness (nor that it's serious).

You can just not move branches. But once you can do it, you will like it. And you are going to use

   git branch --contains COMMIT
which will tell you ALL the branches a commit is part of.

Git's model is clean and simple, and makes a whole lot of sense. IMHO.

jstimpfle a day ago | [-0 more]

> Less and less people feel it, because people old-enough to have used branch-powered VCSes have long forgotten about them, and those who didn't forget are under-represented in comparison to the newcomers who never have experienced anything else since git became a monopoly.

I'm old enough to have used SVN (and some CVS) and let me tell you branching was no fun, so much that we didn't really do it.

Tarq0n 3 days ago | [-0 more]

That's the definition of a tree though. Everything has a parent, no cycles allowed.

qsera 3 days ago | [-1 more]

To me, mercurial's branching is closer to the development process and preserves more information, because it records the original branch a commit was made on.

Git does not have such concept. That is a trade off and that trade off works great for projects managed like Linux kernel. But for smaller projects where there is a limited number of people working, the information preserved by mercurial could be very valuable.

It also had some really interesting ideas like changeset evolution, which enabled history rewriting after a branch had been published. I don't know its current status and how well it turned out...

awesome_dude 2 days ago | [-0 more]

Just FTR - git /can/ store that information, but it requires human input.

If you rebase the feature branch onto the main branch, THEN follow it up with a merge commit that records the branch name, you store the branches (that have been made part of main) and can see where they are in your log.

Mercurial's notes can become cumbersome if there are a large number in the repository, but, obviously, humans can sort that out if it gets out of hand.
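
A minimal sketch of that workflow (branch name is illustrative):

    git checkout feature-x
    git rebase main                 # replay the feature on top of main
    git checkout main
    git merge --no-ff feature-x     # force a merge commit that records the branch name
    git log --oneline --graph --first-parent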

xmcqdpt2 3 days ago | [-3 more]

It's interesting that branches, a marquee feature of git, became less important at the same time as git ate all the other VCSes. Outside of OS projects, almost all development is trunk-based with continuous releases.

Maybe branching was an important reason to adopt git but now we'd probably be ok with a vcs that doesn't even support them.

awesome_dude 2 days ago | [-0 more]

Trunk-based development is still a hotly debated topic. I personally prefer branches at this point in time; trunk-based development has caused me more trouble than its claimed worth in the past, BUT that could be a me limitation rather than a limitation of the style.

krick 3 days ago | [-1 more]

Not sure if that's true. I mean, I do agree with the core of it, but how do you even do PRs and resolve conflicts if there are no branches and a developer cannot efficiently update his code against the latest (remote) version of the master branch?

awesome_dude 2 days ago | [-0 more]

Trunk-based development has every developer in the company committing straight to main - no PRs, supposedly no merge conflicts (but the reality is that main moves fast, and if two people are working in the same files, there will be merge conflicts).

A middle ground is small PRs where people are constantly rebasing onto the tip of main to keep conflicts to a minimum.

forrestthewoods 4 days ago | [-5 more]

Worse products win all the time. Inertia is almost impossible to overcome. VHS vs Betamax is a classic. The iPod wasn’t the best mp3 player, but being a better mp3 player wasn’t enough to claw away market share.

Google and Meta don’t use Git and GitHub. Sapling and Phabricator are much, much better (when supported by a massive internal team).

aaronbrethorst 4 days ago | [-4 more]

What was the better mp3 player than the iPod?

mi_lk 3 days ago | [-0 more]

unironically Zune is goated in its own way

CrimsonRain 3 days ago | [-0 more]

anything from Cowon. Always has been

corndoge 4 days ago | [-0 more]

sansa clip+

codethief 3 days ago | [-0 more]

Anything from iriver.

guelo 4 days ago | [-9 more]

Network effects and marketing can easily prevent better tools from winning.

awesome_dude 4 days ago | [-8 more]

I mean, in the fickle world that is TECH, I am struggling to believe that that's what's happened.

I personally went from .latest.latest.latest.use.this (naming versions as latest) to TortoiseSVN (which I struggled with) to Git (where I was also one of those "walk around with a few memorised commands" people who don't actually know how to use it) to reading the fine manual (well, 2.5 chapters of it) to being an evangelist.

I've tried Mercurial, and, frankly, it was just as black magic as Git was to me.

That's network effects.

But my counter is - I've not found Mercurial to be any better, not at all.

I have made multiple attempts to use it, but it's just not doing what I want.

And that's why I'm asking, is it any better, or not.

WolfeReader 4 days ago | [-1 more]

Mercurial has a more consistent CLI, a really good default GUI (TortoiseHg), and the ability to remember what branch a commit was made on. It's a much easier tool to teach to new developers.

awesome_dude 4 days ago | [-0 more]

Hmm, that feels a bit subjective - I'm not going to say X is easier than Y when I've just finished saying that I found both tools to have a lot of black magic happening.

But what I will point out, for better or worse, is that people are now looking at LLMs as Git masters, which effectively makes the LLM the UI, and that will remove any assumed advantage of whichever tool has the "superior" UX.

I do wish to make absolutely clear that I personally am not yet ready to completely delegate VCS work to LLMs - as I have pointed out, I have what I like to think of as an advanced understanding of the tools, which affords me the luxury of not having an LLM shoot me in the foot; that is solely reserved as my own doing :)

arw0n 3 days ago | [-4 more]

Network effects are significantly strengthened by the user buy-in these tools require. Version control is hard, and every tool demands its users spend a non-trivial amount of time learning it. I would guess the time to move from black magic to understanding most of git is ~100h for most people.

The thing is, to understand which one is actually better, you would have to give the same amount of investment in the second tool, which is not something most people are willing to do if the first tool is "good enough". That's how Python became the default programming language; people don't miss features they do not understand.

Izkata 3 days ago | [-3 more]

A little over a decade ago, with only svn experience, I tried both mercurial and git. There was something about how mercurial handled branches that I found extremely confusing (don't remember what), while git clicked immediately - even without reading the manual.

So at least for me, git was clearly better.

ptx 3 days ago | [-2 more]

Mercurial later added bookmarks which work like Git branches. These make more sense to me as well.
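
e.g. (bookmark name made up):

    hg bookmark feature-x       # create and activate a bookmark
    hg commit -m "more work"    # the active bookmark advances with the commit, like a git branch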

qsera 3 days ago | [-1 more]

Did bookmarks move as you made commits, like a branch pointer in git does?

ptx 3 days ago | [-0 more]
3 days ago | [-0 more]
[deleted]
kasey_junk 3 days ago | [-1 more]

GitHub had a business model where public repos were free. BitBucket didn’t.

That’s it. That’s why git won: you could put up open source libs for free with one and not the other.

Which is extra funny as the centralized service was the most important part of decentralized version control.

seniorThrowaway 3 days ago | [-0 more]

>the centralized service was the most important part of decentralized version control.

I've often thought this about github

dugmartin 3 days ago | [-0 more]
esafak 4 days ago | [-1 more]

That worse is better, and some people don't know better or care.

dwattttt 4 days ago | [-0 more]

"better" in that sentence is very specific. Worse is also worse, and if you're one of the people for whom the "better" side of a solution doesn't apply, you're left with a mess that people celebrate.

jrochkind1 4 days ago | [-2 more]

Welcome to VHS and Betamax. The superior product does not always win the market.

Per_Bothner 4 days ago | [-1 more]

Not always, but in this case the superior product (i.e. VHS) won. At initial release, Beta could only record an hour of content, while VHS could record 2 hours. Huge difference in functionality. The quality difference was there, but pretty modest.

jrochkind1 3 days ago | [-0 more]

I suppose one lesson could be that there are different dimensions of superiority, different products may be superior in different ways.

Of course, products also can win market dominance for reasons external to the product's quality itself (marketing, monopoly lock-in, other network effects, consumer preferences on something other than product quality itself, etc).

outworlder 4 days ago | [-38 more]

> The issue is solely that OG Mercurial was written in Python.

Are we back to "programming language X is slow" assertions? I thought those had died long ago.

Better algorithms win over 'better' programming languages every single time. Git is really simple and efficient. You could reimplement it in Python and I doubt it would see any significant slowness. Heck, git was originally implemented as a handful of low level binaries stitched together with shell scripts.

jmalicki 4 days ago | [-8 more]

Every time I've rewritten something from Python into Java, Scala, or Rust it has gotten around 30x faster. Plus, now I can multithread too for even more speedups.

Python is absurdly slow - every method call is a string dict lookup (slots are way underused), everything is all dicts all the time, the bytecode doesn't specialize at all to observed types; it is a uniquely, horribly slow language.

I love it, but python is almost uniquely a slow language.

Algorithms matter, but if you have good algorithms, or you're already linear time and just have a ton of data, I've seen 500x speedups from rewriting a single-threaded Python program as a multithreaded Rust program, where the algorithms were not improved at all.

It's the difference between a program running overnight vs. in 30 seconds. And if there are problems, the iteration speed from that is huge.

eru 4 days ago | [-5 more]

> [...], it is a uniquely horrible slow language.

To be fair, Python as implemented today is horribly slow. You could leave the language the same but apply all the tricks and heroic efforts they used to make JavaScript fast. The language would be the same, but the implementations would be faster.

Of course, in practice the available implementations are very much part of the language and its ecosystems; especially for a language like Python which is so defined by its dominant implementation of CPython.

jmalicki 3 days ago | [-2 more]

Fair! I guess I didn't mean language as such, but as used.

But a lot of the monkey-patching and general dynamism of Python also means a lot of those sorts of things have to be re-checked often for correctness, so it does take a ton of optimizations off the table. (Of course, those are rare corner cases, so compilers like PyPy have been able to optimize for the "happy case" and keep a slow fall-back path - but PyPy had a ton of incompatibility issues and now seems to be dying.)

dtech 3 days ago | [-1 more]

JavaScript has a lot of the same theoretical dynamism, yet V8 and JavaScriptCore were able to make it fast.

eru 3 days ago | [-0 more]

Yes, with heroic effort. It's really a triumph of compiler / vm engineers over language designers.

mike_hearn 3 days ago | [-0 more]

Python has a JIT compiling version in GraalPy. If you have pure Python it works well. The problem is, a lot of Python code is just callouts to C++ ML libs these days and the Python/C interop boundary just assumes you're using CPython and requires other runtimes to emulate it.

xmcqdpt2 3 days ago | [-0 more]

You don't even need to go all V8, you could just build something like LuaJIT and get most of the way there. LuaJIT is like 10k LOCs and V8 is 3M LOC.

The real reason is that it is a deliberate choice by the CPython project to prefer extensibility and maintainability to performance. The result is that python is a much more hackable language, with much better C interop than V8 or JVM.

byroot 4 days ago | [-1 more]
jmalicki 3 days ago | [-0 more]

I think that's a new thing from Python 3.12+ or so, after I stopped using Python as much.

It didn't use to.

EDIT: python 3.11+: https://peps.python.org/pep-0659/

kuschku 4 days ago | [-1 more]

I've rewritten a python tool in go, 1:1. And that turned something that was so slow that it was basically a toy, into something so fast that it became not just usable, but an essential asset.

Later on I also changed some of the algorithms to faster ones, but their impact was much lower than the language change.

bonesss 3 days ago | [-0 more]

I don’t know if people think this way anymore, but Python gained traction to some degree as a prototyping language. Verify the logic and structures, then implement the costly or performance-sensitive bits in a more expensive-to-produce, more performant language.

Which is only to say: that rewrite away from python story can also work to show python doing its job. Risk reduction, scaffolding, MVP validation.

Diggsey 4 days ago | [-3 more]

> git was originally implemented as a handful of low level binaries stitched together with shell scripts.

A bunch of low level binaries stitched together with shell scripts is a lot faster than python, so not really sure what the point of this comparison is.

Python is an extremely versatile language, but if what you're doing is computing hashes and diffs, and generally doing entirely CPU-bound work, then it's objectively the wrong tool, unless you can delegate that to a fast, native kernel, in which case you're not actually using Python anymore.

eru 3 days ago | [-2 more]

Well, you can, and people do, use Python to stitch together low-level C code. In that sense, you could take the early git approach, but use Python instead of shell as the glue.

saghm 3 days ago | [-1 more]

Their point was that by offloading the bottlenecks to C, you've essentially conceded that Python isn't fast enough for them, which was the original point made above

eru 3 days ago | [-0 more]

Fair point!

eru 4 days ago | [-0 more]

> Better algorithms win over 'better' programming languages every single time.

That's often true, but not "every single time".

ragall 4 days ago | [-0 more]

> I thought those had died long ago.

No, it's always been true. It's just that at some point people got bored and tired of pointing it out.

20k 4 days ago | [-0 more]

Python is by far the slowest programming language, an order of magnitude slower than other languages.

One of the reasons mercurial lost the dvcs battle was its performance - even the mercurial folks admitted that was at least in part because of Python.

bmitc 4 days ago | [-0 more]

You barely have to try to have Python be noticeably slow. It's the only language I have ever used where I was even aware that a programming language could be slow.

ezst 3 days ago | [-0 more]

> Are we back to "programming language X is slow" assertions? I thought those had died long ago.

Yes, we are? The slow paths of mercurial have been rewritten in C (and more recently in Rust), which improved the perf story substantially without taking away from the wild modularity and extensibility hg has always had.

saghm 3 days ago | [-0 more]

> You could reimplement it in Python and I doubt it would see any significant slowness

I doubt it wouldn't be significantly slower. I can't disprove it's possible to do this but it's totally possible for you to prove your claim, so I'd argue that the ball is in your court.

surajrmal 3 days ago | [-3 more]

You must belong to the club of folks who use hashmaps to store 100 objects. It's amazing how much we've brainwashed folks to focus on algorithms and lose sight of how to actually properly optimize code. Being aware of how your code interacts with the cache is incredibly important. There are many cases where a slower algorithm does the work faster purely because it's more hardware-friendly.

The reason that some more modern tools, like jj, really blow git out of the water in terms of performance is that they make good choices, such as doing a lot of transformations entirely in memory rather than via the filesystem. It's also because they're written in a language that can execute efficiently. Luckily, it's clear that modern tools like jj are heavily inspired by mercurial, so we're not doomed to the UX and performance git binds us to.

inejge 3 days ago | [-2 more]

> You must belong to the club of folks who use hashmaps to store 100 objects.

Apparently I belong to the same club -- when I'm writing AWK scripts. (Arrays are hashmaps in a trenchcoat there.) Using hashmaps is not necessarily the indictment you apparently think it is, if the access pattern fits the problem and other constraints are not in play.

> It's amazing how much we've brainwashed folks to focus on algorithms and lose sight of how to actually properly optimize code. Being aware of how your code interacts with cache is incredibly important.

By the time you start worrying about cache locality you have left general algorithmic concerns far behind. Yes, it's important to recognize the problem, but for most programs, most of the time, that kind of problem simply doesn't appear.

It also doesn't pay to be dogmatic about rules, which is probably the core of your complaint, although unstated. You need to know them, and then you need to know when to break them.

jstimpfle 2 days ago | [-0 more]

Most code most people work on isn't about algorithms at all. The most straightforward algorithm will do. Maybe put some clever data structure somewhere in the core. But for the vast majority of code, there isn't any clear algorithmic improvement, and even if there was, it wouldn't make a difference for the typically small workloads that most pieces of code are processing.

I'll take it back a little bit, because there _is_ in fact a lot of algorithmically inefficient code out there, which slows down everything a lot. But after getting the most obvious algorithmic problems out of the way -- even a log-n algorithm isn't much of an improvement over a linear scan if n < 1000. It's much more important to get that 100+x speedup by implementing the algorithm in a straightforward and cache-friendly way.

surajrmal 3 days ago | [-0 more]

My core complaint is that folks repeat best practices without understanding them. It's simple to provide API semantics that look like a map without resorting to a hashmap. I fear Python-style development has warped people's perceptions for the sake of simplifying the lives of developers. And all users end up suffering as a result.

forrestthewoods 4 days ago | [-10 more]

They died because everyone knows that Python is in fact very, very slow. And that’s just totally fine for a vast number of glue operations.

It’s amusing you call Git fast. It’s so notoriously problematic for large repos that virtually every BigTech company has made a custom rewrite at some point or another!

jstimpfle 4 days ago | [-9 more]

Now that is interesting too, because git is very fast for everything I have ever done. It may not scale to Google monorepo size; it would be the wrong tool for that. But if you are talking Linux kernel source scale, it absolutely is fast enough even for that.

For everything I've ever done, git was practically instant (except network IO of course). It's one of the fastest and most reliable tools I know. If it isn't fast for you, chances are you are on a slow Windows filesystem additionally impeded by a virus scanner.

forrestthewoods 4 days ago | [-8 more]

The fact that Git has an extremely strong preference for storing full and complete history on every machine is a major annoyance! “Except for network IO” is not a valid excuse imho. Cloning the Linux kernel should take only a few seconds. It does not. This is slow and bad.

The mere fact that Git is unable to handle large binary files makes it an unusable tool for literally every project I have ever worked on in my entire career.

jstimpfle 3 days ago | [-3 more]

    git clone --bare --depth=1 https://github.com/torvalds/linux

Takes 21 seconds on my work laptop, indeed a corporate Windows laptop with antivirus installed. Majority of that time is simply network I/O. The cloned repository is 276 MB large.

Actually checking the kernel out takes 90 seconds. This amounts to creating 99195 individual files, totaling 2 GB of data. Expect this to be ~10 times faster on a Linux file system.

So what's your problem?

forrestthewoods a day ago | [-2 more]

--depth=1 is a hack and breaks assorted things. It’s irritating. No I can’t tell you what random rakes I’ve stepped on in the past because of this. Yes they still exist.

If you’d like to argue that version control should be centralized, shallow, and sparse by default then I agree.

jstimpfle 4 hours ago | [-0 more]

> If you’d like to argue that version control should be centralized, shallow, and sparse by default then I agree.

I get your sentiment, but I know how working with e.g. SVN feels. Just doing "svn log" was a pain when I had to do it. The "distributed" aspect of DVCS doesn't prevent you from keeping central what you need central. E.g. you can have GitHub or your own hosting server that your team exchanges changes through.

The main point of distributed is speed and self-sufficiency which is a huge plus. E.g. occasional network outages and general lack of bandwidth are still a thing in 2026 (and remain so to some extent for the foreseeable future).

Now, could git improve and allow some things to be staged/tiered/transparently cached better? Probably, and that's where some things like LFS come in. I don't have a large amount of experience in this field though, because what I work with is adequately served by the out-of-the-box git experience.

jstimpfle a day ago | [-0 more]

Then just do git pull --unshallow whenever you see fit. I normally don't do --depth 1 because cloning repositories is rarely my bottleneck. Just saying that when you need a relatively fast clone time, you can have it.

spockz 3 days ago | [-2 more]

Git-lfs has existed for a while now. Does that fix your issue? Or do you mean that it doesn’t support binary diffs?

forrestthewoods 3 days ago | [-1 more]

Git LFS is a gross hack that results in pain and suffering. Effectively all games use Perforce because Git and GitLFS suck too much. It’s a necessary evil.

spockz 2 days ago | [-0 more]

We use git-lfs quite contentedly, but we don’t require diffs on binaries. What pain and suffering are you alluding to specifically?

pabs3 3 days ago | [-0 more]

Git handles large text files and large directories fairly poorly too.

jstimpfle 4 days ago | [-1 more]

[flagged]

forrestthewoods 4 days ago | [-0 more]

[flagged]

bmitc 4 days ago | [-2 more]

Git is not remotely fast for large projects.

Cthulhu_ 3 days ago | [-1 more]

Define "large"; I've never ran into serious performance issues during the ~15 years I've used Git, which either means the projects I've worked in aren't actually large large, or Git is fast enough for most use cases.

ezst 3 days ago | [-0 more]

not OP, and indeed git is fast enough in many cases, but git not cutting it at Google and Facebook scale, combined with the versatility of mercurial (monkeypatching and its extension system), was the reason why they both invested heavily in mercurial instead of git.

Among the tricks being used was remotefilelog, which is a way to "hydrate" content locally on demand, and which was mimicked in git many years later with Microsoft's git-vfs. Same goes for binary/large file support, which git eventually got as git-lfs.

It's funny to think that a big reason for git to be "fast" today is by playing catch-up with mercurial, which carries this "forever stigma" of being slow.

Leynos 4 days ago | [-0 more]

I just used it because I preferred the UX.