For commits with lots of files, Gerrit's diff interface is too broken
to be useful. It does not provide a compact overview of the change,
which is essential for effective review.
Luckily, there are alternatives, specifically local git clients and
gitweb. However, these don't work when git's change model is broken by
the use of git commit --amend.
For amended commits with a small number of files, the change is still
reviewable using the "patch history" table in the diff views. But with
a large number of files, it becomes difficult to find the files that
have actually changed, and to produce a compact combined diff.
So if there are no objections, I'm going to change [[Git/Workflow]] to
restrict the recommended applications of "git commit --amend", and to
recommend plain "git commit" as an alternative. A plain commit seems
to work just fine. It gives you a separate commit to analyse with
Gerrit, gitweb and client-side tools, and it provides a link to the
original change in the "dependencies" section of the change page.
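For review fixes the workflow would then look roughly like this (just a
sketch; the commit message is a placeholder, and "git review" can stand
in for the push if you use it):

    # instead of amending the previous commit, make a follow-up commit
    git add <changed files>
    git commit -m "Follow-up: address review comments"
    # push it for review as usual; Gerrit will show the earlier change
    # as a dependency of the new one
    git push origin HEAD:refs/for/master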
-- Tim Starling
Hey, I know it’s been a while since we last talked. I’m currently working on jump-starting the ArchiveLinks project over on the Wikimedia Foundation side. I was wondering what the status is on your side.
I understand that you've been waiting on some fixes and a feed from us. I plan on creating that feed by February 9th and will let you know when it is up.
I'm tracking my progress at http://www.mediawiki.org/wiki/User:Kevin_Brown/ArchiveLinks/status so you can keep informed.
Thanks,
Kevin Brown
Wikimedia GSoC Student 2011
I have been collecting a list of issues at
http://www.mediawiki.org/wiki/Git/Conversion#Open_issues_after_migration
I'd like to start a discussion about how to solve these issues and in what time frame.
== Code review process ==
I would very much like to have the full patch diffs back in emails so
that I can quickly scan for any i18n issues. Also, Gerrit is just slow
enough that I must either waste time waiting for it to load the
interface, or do a context switch by moving on to the next commit. Not
to mention the need to open the diff for each file in a separate tab.
I can somewhat work with this right now by skipping commits unlikely
to contain anything relevant (though you never know, as not even the
file names are mentioned), but once the number of commits picks up
speed this is not going to work.
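One partial workaround might be to fetch a change locally and read the
diff in a git client instead; roughly something like this (the
repository URL and change ref below are only examples):

    # fetch patchset 1 of change 3505 and show its full diff locally
    git fetch https://gerrit.wikimedia.org/r/p/mediawiki/core \
        refs/changes/05/3505/1
    git show FETCH_HEAD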
== Emails ==
Related to the above, I want to scan all (submitted?) changes for such
issues. Currently there is no easy way to subscribe to the changes of
all repositories.
In theory I could do the same inside Gerrit, if it provided an easily
navigable list that records what I have already looked at.
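The closest workaround I can think of would be to poll Gerrit's SSH
query interface and filter the results myself; a rough sketch, assuming
an SSH account on the Gerrit server (USERNAME is a placeholder):

    # list recently merged changes across all projects, with file names
    ssh -p 29418 USERNAME@gerrit.wikimedia.org gerrit query \
        --format=JSON --current-patch-set --files status:merged limit:50

That would at least include the file names, which the notification
emails currently do not, but it still doesn't record what I have
already looked at.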
== Unicode issue in Gerrit ==
This must be fixed (it causes data loss). See
https://gerrit.wikimedia.org/r/#change,3505 for an example.
== Local changes ==
How should we handle permanent local changes? There have already been
some suggestions:
* use git stash (not fun to do for every push)
* use git review --no-rebase (no idea whether this is a good idea)
* commit them to a local development branch (but then how do you rebase
the changes you want to submit onto master before pushing? a rough
sketch of that is below)
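A rough sketch of that last approach (the branch names are made up):

    # permanent local hacks live on their own branch
    git checkout -b local-hacks origin/master
    # ... commit the local configuration changes there ...

    # work meant for review is done on a branch based on master,
    # not on top of the hacks
    git checkout -b some-fix origin/master
    # ... commit the fix ...
    git push origin HEAD:refs/for/master    # or: git review

    # afterwards, rebase the hacks onto the updated master as needed
    git checkout local-hacks
    git rebase origin/master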
== How to FIXME ==
We need a way to identify *merged* commits that need fixing. Those
commits should be our collective burden to fix. It must not rely on
the reporter of the issue fixing the issue him/herself or being
persistent enough to get someone to fix it.
It was suggested that I use Bugzilla, but it's a bit tedious to use
and, as-is, doesn't have the high visibility that FIXMEs used to have.
-Niklas
--
Niklas Laxström
Hi,
We have dozens of repositories now. It's probably good that they are
separate, but many of them have similarities in configuration that
should be synchronized.
One of these is .gitignore. The content of this file is probably very
similar in most repos. Has anybody thought about synchronizing it
somehow?
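For example, one could keep a canonical .gitignore in a single place
and copy it into every checkout with a small script; a naive sketch
(the paths are made up):

    # copy a shared .gitignore into each extension checkout
    for repo in ~/src/extensions/*/; do
        cp ~/src/shared-gitignore "${repo}.gitignore"   # $repo ends in a slash
    done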
--
Amir Elisha Aharoni · אָמִיר אֱלִישָׁע אַהֲרוֹנִי
http://aharoni.wordpress.com
“We're living in pieces,
I want to live in peace.” – T. Moore
Thank you very much for your feedback, Jeblad.
I will immediately look into how this can best be implemented by
extending the MediaWiki API.
Please also let me know what you think about my other ideas so that I
can shape my proposal well.
The mentor for the ideas I am interested in is Oren Bochman, but I
couldn't track him down on IRC.
I would love to interact with him or any other mentor and discuss my ideas
in detail.
I am reachable at:
Email : karthikprasad008(a)gmail.com
SkypeID : prasadkarthik
Facebook: facebook.com/prasadkarthik
Google+ : gplus.to/karthikprasad
twitter : twitter.com/_karthikprasad
> Date: Sat, 31 Mar 2012 12:05:00 +0200
> From: John Erling Blad <jeblad(a)gmail.com>
> To: Wikimedia developers <wikitech-l(a)lists.wikimedia.org>
> Subject: Re: [Wikitech-l] GSOC 2012 - Text Processing and Data Mining
>
> Your point (a) "Implementing a wikiSumarizer widget which will give the
> summary of the page being read by the user" could be extremely useful for
> a hover/help-bubble feature, where bubbles with small explanations are
> created within external articles. Such functionality would imply creating
> an extension to the MediaWiki API.
>
> Jeblad
Hello,
I am Karthik from India, currently pursuing a third-year Bachelor's in
Computer Science and Engineering at PESIT, Bangalore.
I am interested in some of the projects proposed for Google SoC 2012
and would love to work on them and contribute them to the open-source
world.
I am very interested in Text Processing and Data Mining, and I have
taken a course in Natural Language Processing. I am currently working
on a project, "Automatic Essay Grader": a system that automatically
grades English essays using spelling, grammar and structure, coherence,
frequent phrases and vocabulary as weighted parameters. It is realized
by implementing a self-designed algorithm that studies the ‘relation
graph’ of the words of the essay.
I have also worked on "Sentiment Analysis on the Web": extracting
reviews about a gadget from tech-review forums, analysing the sentiment
of those reviews to predict the opinion associated with that gadget,
and then generating an appropriate rating on a scale of 10.
The following projects mentioned on the MediaWiki ideas page caught my
eye:
1) Wikipedia Corpus Tools
2) Lucene Lemma Analyzers based on Morphology Extraction from Wikipedia Text
3) Lucene Automatic Query Expansion from Wikipedia Text
4) Translation spellchecking
Apart from the above projects, I also had the following ideas, which I
feel would be of great help if implemented:
a) A wikiSumarizer widget which will give a summary of the page being
read by the user.
b) An automatic coherence analyser which would make it easy to find out
whether the article on a given page stays on one topic.
c) A details aggregator for a page.
I would be grateful if you could kindly let me know about the specific
requirements of the projects and about your thoughts on my ideas so that I
can suitably write a proposal.
Eagerly waiting for your response.
Thanking you.
Best Regards,
Karthik.
I put up an RFC earlier for native handling of entrypoints and 404
handling inside of MediaWiki:
https://www.mediawiki.org/wiki/Requests_for_comment/Entrypoint_Routing_and_…
I've begun the initial code in a branch:
https://github.com/dantman/mediawiki-core/compare/master...2012%2F404-routi…
I incorporated Extension:Special404's 404 special page for now and
updated the PathRouter and WebRequest code I implemented a while ago to
differentiate between requests that are 404s and requests that should
go to the main page.
So it's already handling 404s.
Of course there are more things I plan to clean up, improve, and add to
this handling.
But right now I encourage people to try pulling this change in and trying
it out.
Set your 404 error handler to index.php, or rewrite any nonexistent
file to index.php (remember you NEVER want a ?title=$1 in the
/index.php for any rewrite or error handler).
I would love to see if anyone manages to break it and make it serve a 404
page when it should actually be sending you to the main page.
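A quick way to check from the command line (the hostname and paths are
just examples):

    # a bogus file path should now get a real 404 status from MediaWiki
    curl -sI http://wiki.example.org/bogus/missing-file.png | head -n 1
    # a request with no title should still take you to the main page
    curl -sI http://wiki.example.org/ | head -n 1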
--
~Daniel Friesen (Dantman, Nadir-Seen-Fire) [http://daniel.friesen.name]
This article was making its way around the testosphere this morning to much
approval. Although I question some of the numbers cited, I like the
overall description a lot, and since I have been having so many discussions
about QA on various projects, I thought it would be of interest to
wikitech: It's a short read, "The Confusion Around QA"
http://www.headspring.com/2012/03/the-confusion-around-qa-why-doesnt-the-in….
(Ian Baker in particular will, I think, recognize some of the points
here that we covered in our own discussions.)
Hi everyone,
There have been some comments that the phrasing for a -1 vote in
Gerrit ("I'd prefer that you didn't submit this") is kind of personal
and that we can do better.
I did some testing and this is totally configurable :) It won't change
for old comments that were already submitted, but we can pick
some nicer wording going forward.
I really don't have any good suggestions for this, so I'm opening
this up to the list for a bit of good old-fashioned bikeshedding.
Thanks!
-Chad
Hi,
I made a little localization fix to the jQuery UI datepicker, which is
used by UploadWizard. I submitted it upstream through GitHub and it was
merged there.
Krinkle says that jQuery is supposed to be modified only upstream, and
that is a Good Thing. What is our policy for actually merging upstream
jQuery changes back into MediaWiki code?
--
Amir Elisha Aharoni · אָמִיר אֱלִישָׁע אַהֲרוֹנִי
http://aharoni.wordpress.com
“We're living in pieces,
I want to live in peace.” – T. Moore