Mercurial is great at many things, but surprisingly, it seems quite awkward to set up a shared repository on your network for multiple users. It doesn't come as standard with a simple, self-contained server, and they recommend against using a shared directory on a network drive directly, presumably because of the wicked data loss bugs they've had (or may still have, I haven't checked this recently).
Compare the Mercurial guidance on publishing repositories[1], which comes in at about 15 screens on my system, with the equivalents for say Git[2] or Bzr[3] that fit in a couple of screens, and you can see how striking the difference is.
The instructions you linked for git would basically work as stated for Mercurial, but without all the mucking about with making a bare repository.
Note that on the Mercurial wiki page you linked, it does mention ssh setup, although perhaps not as prominently as you'd like.
Also, hg /does/ come with a simple, self-contained server: hg serve. It doesn't support authentication, because there are plenty of HTTP servers that can do that for you better and with fewer bugs than we'd inevitably have.
> The instructions you linked for git would basically work as stated for Mercurial, but without all the mucking about with making a bare repository.
Except that if you're running Windows rather than Linux, they don't. I have literally just tested it to make sure I'm not imagining things.
The difference is that with Git you probably also have Git Bash installed on Windows, which while somewhat clunky does at least provide a fairly standardised mechanism for setting up keys etc.
To my knowledge, there is no equivalent for Hg, and attempting to use hg with an ssh:// path to the repository seems to depend on what other software you have installed (Tortoise*, for example).
If you know better, my team and I would love to learn something. This has been bugging us for years and across multiple projects, and none of us has ever found a simple, effective way of doing it.
For the record, the mention of SSH setup on the page I linked to is literally just that: a mention, with no further details at all, and as noted above the obvious change of specifying an ssh:// path to a repository instead of a local one doesn't work by default on Windows. There's also a second entry for "Shared SSH", but that goes to a separate page describing half a dozen components that mostly aren't included with hg out of the box and again seem to lack much documentation in some cases.
[Edit: Yes, there is also hg serve, but as you point out it lacks even basic security checks, and even the main Mercurial web site doesn't recommend relying on hg serve for more than temporary purposes.]
The configurations I've seen are trying to use some sort of Linux-based server or NAS, and a variety of clients including some on Windows.
FWIW, I've just been told elsewhere in the thread that what we've been trying using SSH should have worked out of the box as long as Hg and TortoiseHg were both installed (and, I assume though we didn't state this, as long as Hg is installed and properly reachable on the server side).
So, while I can confirm that the simple SSH access doesn't work reliably here right now, it is starting to sound like we've hit some unfortunate case that isn't necessarily Hg's fault. If so, I apologise if my comments were unfairly harsh, though I would still suggest that Hg would be more user-friendly if it could handle SSH connections itself without relying on additional software that must be installed separately.
1. It does come with a simple self-contained server.
2. Why would you run anything off a network drive? Has no one learned from Visual SourceSafe and the lessons of the definitely not glorious history of NFS file locking?
Now stop complaining, because I'm stuck with winzip and windiff as a VCS for the current thing I'm working on (an old and obsolete NT4 C++ behemoth that won't go away).
Yes. I use it daily, and so does my team, some of whom have also used it in other places as part of other teams. None of us knows a simple, effective, reliable way to set up a centralised server without going via one of the web server routes.
> Literally all you have to read is:
No, it isn't. If you're going to be patronising, please at least have the courtesy to read your own links before posting. There is exactly one place in that document that refers to using direct SSH access, and as I've noted elsewhere in this thread, it doesn't work out of the box if you're using a Windows client rather than say Linux, a fact I have just personally confirmed before posting here.
Install VisualSVN and use TortoiseSVN if you want a centralised server running on Windows. Seriously. Nothing works properly on Windows like this - you're going to end up hacking together UNIX things. Let someone else do it for you.
Thanks for the link. We'll check it out and see if there's anything in there we weren't already trying. The note about preregistering each server using plink before trying to connect to it using the hg client isn't one I remember seeing previously; maybe that explains the mystery delays/authorisation problems we've observed.
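For anyone else hitting this: the preregistration step is just connecting once so plink can cache the server's SSH host key; until it's cached, a non-interactive hg-over-plink connection can stall on a host-key prompt you never see. A sketch, with placeholder user and host names:

```shell
# One-off per server: connect once with plink so it prompts you to
# cache the server's SSH host key (answer "y" at the prompt).
# "devuser" and "hg.example.local" are placeholders.
plink -ssh devuser@hg.example.local exit

# After the key is cached, hg's ssh transport via (Tortoise)Plink
# should no longer hang waiting for an invisible host-key prompt.
```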
I don't think your proposal to use SVN instead is very helpful, though. We run a very heterogeneous network, working on a lot of different projects. "Just change your entire development platform" isn't exactly a constructive suggestion. We also routinely use Git on other projects, and we've never had any trouble setting up shared repositories in those cases. While it does rely on installing some UNIXy tools, and that is indeed more hassle than it needs to be, everything pretty much works once you've done that. Moving everyone onto real Linux workstations is a non-starter, because there are way too many professional software packages we use on Windows without anything in the same class available on Linux.
Agree entirely. My point was really directed at the situation we're all in regarding cross-platform dev tools. We settled on SVN as it's the only thing that's fairly easy to get running cross-platform, as we have both Windows and Linux machines online. We also need a centralised repo, due to the nature of our work, with strong authorisation against LDAP (our AD).
Now we're actually looking at git and TFS as a forward-looking solution.
Bugs, bugs, bugs galore. Not joking but I'm sure they have no tests. We regularly fall over trivial shit that should work but doesn't.
Page state problems everywhere. You eventually learn not to use the browser back button.
Upgrades are hell due to the tinkering you have to do with Java settings and the container to get it performing properly. we have to stick it behind an apache mod_proxy setup because it falls over when shifting SSL. In fact their documentation says they won't support it of you use SSL (seriously!!).
It needs an 8 core Xeon with 32Gb of RAM and 15k SAS disks to get reasonable performance out of for 100 users barely doing anything (WTF).
Set it up to use InnoDB as the schema type in MySQL and it doesn't even add FK constraints reliably. Some are added, some are not. This results in random key violation failures that you have to go and manually fix or the ORM in it falls over and takes the entire JIRA instance out.
Plugins that you rely on because the basic feature set is rubbish suddenly start costing lots of money when you upgrade. There is no notice of this. Basically pay up fuckers (at least $93/plugin/year) or lights out.
We have just over 105 users but that's over the so have to fish out for the 500 user version which costs twice as much as the 100 user version. And its not cheap. Basic JIRA one off installation with greenhopper/crucible/fisheye costs us $20000 up front and a bit less every year in maintenance for which we receive broken crap.
Crucible is so slow it takes nearly a week to index our repository which has to be done regularly because it craps itself reliably and corrupts the indexes. It doesn't even run as a service on windows reliably relying on some pile of crap documentation on Confluence that doesn't work.
Clean upgrades are a week-long project on average.
You have to reindex it regularly because minor process changes cause faults and anomalies everywhere. Reindexing (until recently) blocked the server entirely for up to an hour.
The crucible web interface is so slow it doesn't actually work properly. People have to wait up to a minute for a page hit on a good day. It has a giant lock inside it somewhere apparently that they can't get around.
You can't trust their OnDemand service either - they have admitted massive customer data loss from their previous platform. Google for reference.
The whole thing is a house of cards that I wouldn't go near.
To any Atlassian employees who will probably read this and start the marketing spiel: don't give me the "we're aware and are improving speech" because I've been promised that for 3 years and it hasn't happened. It's just got worse.
Also, to those who do the "it works for me": it worked for us long enough to get through the evaluation but it doesn't scale as promised, doesn't work at all well and is not fit for purpose.
To those who say "you've set it up wrong": we've had Atlassian on the case and they can't make it any faster.
Atlassian have a reality distortion field like Apple do as well I've experienced. They have great marketing but that is all.
To be honest, I'd directly compare the space they're in with JetBrains (we use Team City as well). Nothing we've had from JetBrains is like this - it's orders of magnitude better in every way. It just works. We haven't tried YouTrack from them to be honest but I'd start with them if you're going to evaluate a product in a similar space. Either that or trac which I've had precisely zero problems with 50 users on a SQLite database!
With TortoiseHG, ssh does work well out of the box on Windows. I only know the webserver routes, but I don't see the problem with that. We use nginx on a Windows server and host our repositories there, it's not too hard to set up. With no knowledge of the process, it might take an hour or two to set up.
I'm not familiar with setting up a git server, but i doubt it is any easier than mercurial.
With TortoiseHG, ssh does work well out of the box on Windows.
FWIW, we've had problems with (correct) passwords not being accepted via TortoisePlink when trying that, but we already had other Tortoise* software for different VCSes installed, so I can't rule out some sort of unfortunate conflict.
If SSH really does work for most people once TortoiseHg is installed and we've just been unlucky, then I partially withdraw my criticism, given that in practice I expect a lot of Windows users of Hg do install TortoiseHg as well anyway.
I only have TortoiseHG installed, but I use a standalone pageant (not the tortoise one) and TortoiseHG doesn't seem to have any problems using the keys stored in there.
Even if you just use the command line client, I think it's worth installing tortoise for the ssh support.
> but surprisingly, it is by far the most awkward version control system I have ever used if you want to set up a shared repository on your network for multiple users.
I take it you have never tried darcs; it's the ultimate user experience nightmare.
>Compare the Mercurial guidance on publishing repositories..
I don't understand what you are getting at. If you have ssh access to the machine, then you can push or pull from a repo in that machine. It is that simple.
In that page, it details ALL the possible ways you can publish your repos. And goes into much detail regarding the hgweb set up. Setting user permissions etc. That is why it is big. The other two pages does not goes into much details. The git page only describes the ssh method.
If you have ssh access to the machine, then you can push or pull from a repo in that machine. It is that simple.
Unfortunately, as I've been discussing with others elsewhere in this thread, it's not really that simple at all in some cases. I'm typing this on a Windows machine that has Hg installed, and I routinely SSH from this machine into various Linux servers that also have Hg installed, yet for reasons we've yet to determine, trying to access any sort of ssh:// repo using the hg command line client fails.
Have you set the configuration option that is used by mercurial to find the ssh program to use for ssh operations?
also,
HG requires the path to the repo as one relative to the home folder of the user. ie, if you are logging in as silhouette and your repo is located in /home/silhouette/projects/myproject then the command to push to this repo would be
As far as we can see, those things are all set up properly, but we aren't getting that far anyway. Something is failing at the authentication/authorisation stage when setting up the SSH connection, hence our current preference for the web-based alternatives.
> presumably because of the wicked data loss bugs they've had (or may still have, I haven't checked this recently).
I am not aware of any data loss bugs in that area, or those are related to buggy network filesystems (in which case other VCS would also be affected). The usual issue around shared on disk repo is with permissions (and since a long time mercurial tries to be smart when creating new directory/files in order to propagate the perms regardless of umask).
Cloning a repo hosted on a Linux-based NAS/server to another location on the same NAS/server from a Windows machine tends to set up the clone using hard links rather than a true copy by default.
That in itself is not a problem, but unfortunately the Windows hg client doesn't seem to detect this situation reliably (or at least didn't last time I checked, which is a few months ago now). That means if you then commit changes, you can be unintentionally affecting the common linked files rather than separating them on demand first. The same was true for TortoiseHg the last time I checked, this again being a few months ago.
This is a particularly wicked bug because it means even if you've cloned a repo elsewhere on your network drive with the intent of keeping an independent backup of everything, both versions will be corrupted, and the first you're likely to notice is if you run an hg verify and find the index data for your original repo (which as far you know is untouched) suddenly has errors in it.
Incidentally, if you are using this sort of scenario for whatever reason, there is an option you can set at cloning time to force a full copy to be made.
I've also heard of problems with file locking being unreliable if you're using NFS to access the server, but the only cases I've seen were set up a different way so I can't offer anything more than a general warning in that case.
If you're using NFS, CIFS or smbfs to host or synchronize repo data, You're Gonna Have A Bad Time (YGHABT?) regardless of which VCS you're using - it's only a matter of time.
Mercurial comes with a built-in "hg serve" command that you can use to serve repositories through http. It creates a web server that you can access through any web browser. Unless you need authentication or you have a lot of users you don't need to setup any external web server.
Otherwise setting up Apache + mercurial on windows is not very hard. If you need help please drop me a line.
Thanks for the offer of help, but we're OK with the web server side of things. I was just suggesting that one possible reason for Git's popularity compared to Hg's is that it isn't a walk in the park to set up a common repo for a team with Hg. If you've got someone who's familiar with setting up a web server anyway, you'll be OK, but with Git you don't need to do anything like that at all.
I don't sure I understand what is the problem with setting up a basic mercurial server. If you have TortoiseHg you just open your repository, click on "Repository / Start Web Server" and you are done. If you have bare mercurial just cd to your repository and execute "hg serve".
Perhaps you have some other requirement (e.g. authentication) that I did not take into account?
I suspect hg serve is fine for temporary use, but it's not really designed as stable, long-term solution. As you say, it lacks authentication, which isn't ideal (or allowed at all) in some circumstances. Also, it needs to be started manually, so it needs some sort of supervisor process/start-up script to be set up.
Obviously this isn't some horrific burden, but it's still more demanding than the basic server set-up for some other DVCSes. The original question was about the reasons for the relative popularity of different systems, and if we're talking about people who are making decisions about a DVCS for the first time, they're not experts already and this stuff probably does make a difference.
Compare the Mercurial guidance on publishing repositories[1], which comes in at about 15 screens on my system, with the equivalents for say Git[2] or Bzr[3] that fit in a couple of screens, and you can see how striking the difference is.
[1] http://mercurial.selenic.com/wiki/PublishingRepositories [2] http://git-scm.com/book/en/Git-on-the-Server-Setting-Up-the-... [3] http://doc.bazaar.canonical.com/bzr-0.11/server.htm
[Edit: Rephrase to avoid sounding unintentionally trollish. I'm trying to answer the question, not start a flame war.]