On Monday, March 11, 2013 at 6:51 PM, Rob Lanphier wrote:
On Sun, Mar 10, 2013 at 3:32 PM, Kevin Israel
<pleasestand@live.com> wrote:
On 03/10/2013 06:03 PM, Bartosz Dziewoński wrote:
A shallow clone certainly shouldn't be as
large as a normal one.
Something's borked.
--depth 0 is what's broken. --depth 1 works fine.
$ git clone --depth 1
[...]
Receiving objects: 100% (2815/2815), 17.87 MiB | 1.16 MiB/s, done.
Yup, I'm seeing more or less the same thing. Importantly:
$ du -sh .git
19M .git
I was able to do the clone in 50 seconds over HTTPS. Most of that
time was spent in data transfer (which would be the same for a
snapshot).
Ori, have you tried this with --depth 1?
Rob
Rob, thanks for checking. I tried it yesterday and again just now, and in both
cases it took around 15 minutes:
vagrant@precise32:~$ time git clone --depth 1 https://gerrit.wikimedia.org/r/p/mediawiki/core.git
Cloning into 'core'...
remote: Counting objects: 46297, done
remote: Finding sources: 100% (46297/46297)
remote: Getting sizes: 100% (25843/25843)
remote: Compressing objects: 76% (19864/25833)
remote: Total 46297 (delta 33063), reused 26399 (delta 20010)
Receiving objects: 100% (46297/46297), 102.66 MiB | 194 KiB/s, done.
Resolving deltas: 100% (37898/37898), done.
real 15m14.500s
user 0m27.562s
sys 0m13.421s
The output of 'git config --list' is blank; this is vanilla git. 'Compressing
objects' took the longest.
For comparison:
vagrant@precise32:~$ time wget -q https://github.com/wikimedia/mediawiki-core/archive/master.zip && unzip -x -q master.zip
real 1m15.592s
user 0m0.184s
sys 0m3.480s
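For anyone wanting to rule out the network when comparing these numbers, a shallow clone can be reproduced entirely locally. This is a minimal sketch (the repo name and commit messages are made up for illustration); note that --depth is silently ignored for plain local-path clones, so the file:// transport is required:

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"

# Build a small throwaway repository with three commits.
git init -q origin-repo
cd origin-repo
git config user.email test@example.com
git config user.name test
for i in 1 2 3; do
    echo "$i" > file.txt
    git add file.txt
    git commit -qm "commit $i"
done
cd ..

# Clone it shallowly; file:// forces the normal fetch protocol,
# which is what honors --depth.
git clone -q --depth 1 "file://$tmp/origin-repo" shallow-clone
cd shallow-clone

git rev-list --count HEAD   # only the tip commit was fetched
test -f .git/shallow && echo "shallow clone confirmed"
```

With this in hand, the remaining difference between Rob's 50-second clone and the 15-minute one above has to come from the server side (pack generation) or the link, not from git's shallow-clone machinery itself.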
--
Ori Livneh