Hi all.
"Recent changes" shows bytes added/removed in green/red. But "View history" only shows revision length in bytes, and "User contributions" shows no byte counts at all.
I think it would be nice for both "View history"[1] and "User contributions" to show bytes added/removed. This would make it easier to distinguish small contributions from big ones: multiple-sentence additions from small typo fixes.
What do you think?
All the best, -Jason
[1] You can already get bytes added/removed for history revisions using a gadget. Just add the following line to your vector.js:

importScript('fr:MediaWiki:Gadget-HistoryNumDiff.js');
On 07/28/2010 04:57 AM, Jason Spiro wrote:
I think it would be nice for both "View history"[1] and "User contributions" to show bytes added/removed. This would make it easier to distinguish small contributions from big ones: multiple-sentence additions from small typo fixes.
I'm not sure we should even show byte counts by default. It must be very confusing for newbies (especially if they don't know what a byte is), and it clutters up the UI. Perhaps make it optional and disabled by default? It's mostly targeted at experienced users anyway. If we made it optional, I don't think your proposal would be a problem (and as a Wikipedian I'd love to have that feature!).
-- Tobias (User:Church of emacs)
On Wed, Jul 28, 2010 at 2:37 AM, church.of.emacs.ml church.of.emacs.ml@googlemail.com wrote:
On 07/28/2010 04:57 AM, Jason Spiro wrote:
I think it would be nice for both "View history"[1] and "User contributions" to show bytes added/removed. This would make it easier to distinguish small contributions from big ones: multiple-sentence additions from small typo fixes.
I'm not sure we should even show byte counts by default. It must be very confusing for newbies (especially if they don't know what a byte is), and it clutters up the UI. Perhaps make it optional and disabled by default? It's mostly targeted at experienced users anyway. If we made it optional, I don't think your proposal would be a problem (and as a Wikipedian I'd love to have that feature!).
-- Tobias (User:Church of emacs)
Newbies know what characters are, and the byte counts are really just character counts. If someone wants to see page history, then they would probably also benefit from knowing which edits are text additions and which are text removals, no?
Has anyone ever done usability studies of newbies -- new Internet users, experienced Internet users who are non-editors, or new editors? Have the study conductors watched how they play with the history tools?
Maybe you and I should each ask our moms to try the history tools and see how they react to seeing the history screens and the byte counts on those screens.
By the way, why does page history say "12,345 bytes" and not "12,345 characters"?
On Mon, Aug 2, 2010 at 4:59 PM, Jason A. Spiro jasonspiro4@gmail.com wrote:
Has anyone ever done usability studies of newbies -- new Internet users, experienced Internet users who are non-editors, or new editors?
Yep, that's what the Usability Initiative does.
Have the study conductors watched how they play with the history tools?
That I don't know. I don't know if descriptions of the Usability Initiative's studies are all public, or what. Maybe one of them could fill us in. My personal guess is that the best usability for newbies would be to hide as many things as possible to make it less intimidating.
By the way, why does page history say "12,345 bytes" and not "12,345 characters"?
Because it's 12,345 bytes, not 12,345 characters. :)
On Mon, Aug 2, 2010 at 5:18 PM, Aryeh Gregor Simetrical+wikilist@gmail.com wrote:
On Mon, Aug 2, 2010 at 4:59 PM, Jason A. Spiro jasonspiro4@gmail.com wrote:
Has anyone ever done usability studies of newbies -- new Internet users, experienced Internet users who are non-editors, or new editors?
Yep, that's what the Usability Initiative does.
Ah, I just took a look at their website now: http://usability.wikimedia.org/wiki/Main_Page
Have the study conductors watched how they play with the history tools?
That I don't know. I don't know if descriptions of the Usability Initiative's studies are all public, or what. Maybe one of them could fill us in. My personal guess is that the best usability for newbies would be to hide as many things as possible to make it less intimidating.
By the way, why does page history say "12,345 bytes" and not "12,345 characters"?
Because it's 12,345 bytes, not 12,345 characters. :)
Does the difference really matter so much that we must really use the more-obscure and more-technical term "bytes"?
On Mon, Aug 2, 2010 at 5:28 PM, Jason A. Spiro jasonspiro4@gmail.com wrote:
Does the difference really matter so much that we must really use the more-obscure and more-technical term "bytes"?
In English, maybe not. In a lot of languages, they'll differ by a somewhat unpredictable factor that can be as high as three. The sane thing would be to just make the counts be in characters rather than bytes to begin with, of course -- it's hardly difficult. I imagine Chinese people are puzzled when RC reports +3 and there was only one character added.
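A minimal sketch of the gap described above, assuming PHP with the mbstring extension and a UTF-8 source file (not from the original thread):

<?php
// One CJK character occupies three bytes in UTF-8, so a one-character edit
// shows up as "+3" when the counter is bytes rather than characters.
$added = "维";                                 // a single Chinese character
var_dump( strlen( $added ) );                  // int(3) -- bytes, what is counted today
var_dump( mb_strlen( $added, 'UTF-8' ) );      // int(1) -- characters, what the reader sees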
On Mon, 02-08-2010, at 17:36 -0400, Aryeh Gregor wrote:
On Mon, Aug 2, 2010 at 5:28 PM, Jason A. Spiro jasonspiro4@gmail.com wrote:
Does the difference really matter so much that we must really use the more-obscure and more-technical term "bytes"?
In English, maybe not. In a lot of languages, they'll differ by a somewhat unpredictable factor that can be as high as three. The sane thing would be to just make the counts be in characters rather than bytes to begin with, of course -- it's hardly difficult. I imagine Chinese people are puzzled when RC reports +3 and there was only one character added.
I would love it if the indicator was in characters instead of bytes. That's more meaningful for almost every project. Readers are looking at text after all, not at raw strings.
Ariel
On 08/03/2010 01:48 AM, Ariel T. Glenn wrote:
I would love it if the indicator was in characters instead of bytes. That's more meaningful for almost every project. Readers are looking at text after all, not at raw strings.
Ariel
That would require the introduction of another field in the revision table, since byte count is not convertible to character count in UTF-8.
--vvv
On Mon, Aug 2, 2010 at 6:45 PM, Victor Vasiliev vasilvv@gmail.com wrote:
That would require the introduction of another field in the revision table, since byte count is not convertible to character count in UTF-8.
No, we'd just have to repurpose rev_len to mean "characters" instead of "bytes", and update all the old rows. We don't actually need the byte count for anything, do we?
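A rough sketch of the backfill this would imply, assuming direct database access and a hypothetical loadRevisionText() helper; a real job would go through MediaWiki's maintenance framework with batching:

<?php
// Recompute rev_len for existing rows as a character count instead of a byte count.
$db = new PDO( 'mysql:host=localhost;dbname=wikidb;charset=utf8', 'wiki', 'secret' );
$update = $db->prepare( 'UPDATE revision SET rev_len = :len WHERE rev_id = :id' );

foreach ( $db->query( 'SELECT rev_id FROM revision' ) as $row ) {
    $text = loadRevisionText( (int)$row['rev_id'] );   // hypothetical: fetch and decompress the stored text
    $update->execute( [
        ':len' => mb_strlen( $text, 'UTF-8' ),          // characters rather than bytes
        ':id'  => (int)$row['rev_id'],
    ] );
}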
On 8/3/10, Aryeh Gregor Simetrical+wikilist@gmail.com wrote:
No, we'd just have to repurpose rev_len to mean "characters" instead of "bytes", and update all the old rows. We don't actually need the byte count for anything, do we?
Byte count is used. For example in Chinese Wikipedia, one of the criteria of "Did you know" articles is ">= 3000 bytes".
This is a policy requirement, not a technical requirement, and can surely be adjusted.
On 03.08.2010 at 07:14, Liangent wrote:
On 8/3/10, Aryeh Gregor Simetrical+wikilist@gmail.com wrote:
No, we'd just have to repurpose rev_len to mean "characters" instead of "bytes", and update all the old rows. We don't actually need the byte count for anything, do we?
Byte count is used. For example in Chinese Wikipedia, one of the criteria of "Did you know" articles is ">= 3000 bytes".
On 8/3/10, ChrisiPK chrisipk@gmail.com wrote:
This is a policy requirement, not a technical requirement, and can surely be adjusted.
It seems 1 zh character = 3 bytes gives a kind of proper weighting among characters: obviously, zh characters carry more of the content than en characters, which on zh.wp are usually just wiki syntax...
Ahem.
The revision size (and the page size, meaning that of the last revision) in bytes is available in the API. If you change the definition, there is no telling what you will break. Essentially, you can't.
A character count would have to be another field.
best, Robert
On Tue, Aug 3, 2010 at 9:53 AM, ChrisiPK chrisipk@gmail.com wrote:
This is a policy requirement, not a technical requirement, and can surely be adjusted.
On 03.08.2010 at 07:14, Liangent wrote:
On 8/3/10, Aryeh Gregor Simetrical+wikilist@gmail.com wrote:
No, we'd just have to repurpose rev_len to mean "characters" instead of "bytes", and update all the old rows. We don't actually need the byte count for anything, do we?
Byte count is used. For example in Chinese Wikipedia, one of the criteria of "Did you know" articles is ">= 3000 bytes".
On Tue, Aug 3, 2010 at 1:14 AM, Liangent liangent@gmail.com wrote:
Byte count is used. For example in Chinese Wikipedia, one of the criteria of "Did you know" articles is ">= 3000 bytes".
I mean, is byte count used for anything where character count couldn't be used just about as well? Like is there some code that uses rev_len to figure out whether an article can fit into a field limited to X bytes, or whatever? (That's probably unsafe anyway.)
On Tue, Aug 3, 2010 at 3:48 AM, Robert Ullmann rlullmann@gmail.com wrote:
The revision size (and page size, meaning that of last revision) in bytes, is available in the API. If you change the definition there is no telling what you will break.
The same could be said of practically any user-visible change. I mean, maybe if we add a new special page we'll break some script that was screen-scraping Special:SpecialPages. We can either freeze MediaWiki and never change anything for fear that we'll break something, or we can evaluate each potential change on the basis of how likely it is to break anything. I can't see anything breaking too badly if rev_len is reported in characters instead of bytes -- the only place it's likely to be useful is in heuristics, and by their nature, those won't break too badly if the numbers they're based on change somewhat.
Just butting in here: if I recall correctly, both the PHP-native mb_strlen() and the MediaWiki fallback mb_strlen() functions are considerably slower than strlen() (1.5 to 5 times as slow). Unless there's another way to count characters in multibyte UTF-8 strings, this would not be a feasible idea.
-X!
On Tue, Aug 3, 2010 at 10:59 AM, soxred93 soxred93@gmail.com wrote:
Just butting in here: if I recall correctly, both the PHP-native mb_strlen() and the MediaWiki fallback mb_strlen() functions are considerably slower than strlen() (1.5 to 5 times as slow).
They only have to be run once, when the revision is saved. It's not likely to be a noticeable cost.
Aryeh Gregor wrote:
On Tue, Aug 3, 2010 at 10:59 AM, soxred93 soxred93@gmail.com wrote:
Just butting in here: if I recall correctly, both the PHP-native mb_strlen() and the MediaWiki fallback mb_strlen() functions are considerably slower than strlen() (1.5 to 5 times as slow).
They only have to be run once, when the revision is saved. It's not likely to be a noticeable cost.
Yup, though we might as well remember that not everyone has the mb_ functions installed. MediaWiki is intended to be functional both with and without the mb_ functions. That's another point towards storing both counts and falling back to bytes when the char field isn't populated.
~Daniel Friesen (Dantman, Nadir-Seen-Fire) [http://daniel.friesen.name]
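A sketch of that fallback, assuming a hypothetical rev_char_len column alongside the existing rev_len (bytes); this is not actual MediaWiki code:

<?php
// Prefer the character count when it has been populated; otherwise fall back
// to the byte count that every old row already has.
function formatRevisionSize( $row ) {
    if ( isset( $row->rev_char_len ) ) {
        return number_format( $row->rev_char_len ) . ' characters';
    }
    // Old rows, or rows saved on installs without mbstring support.
    return number_format( $row->rev_len ) . ' bytes';
}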
On Tue, Aug 3, 2010 at 5:09 PM, Daniel Friesen lists@nadir-seen-fire.com wrote:
Yup, though we might as well remember that not everyone has mb_ functions installed.
if ( !function_exists( 'mb_strlen' ) ) {
    /**
     * Fallback implementation of mb_strlen, hardcoded to UTF-8.
     * @param string $str
     * @param string $enc optional encoding; ignored
     * @return int
     */
    function mb_strlen( $str, $enc = "" ) {
        $counts = count_chars( $str );
        $total = 0;

        // Count ASCII bytes
        for ( $i = 0; $i < 0x80; $i++ ) {
            $total += $counts[$i];
        }

        // Count multibyte sequence heads
        for ( $i = 0xc0; $i < 0xff; $i++ ) {
            $total += $counts[$i];
        }
        return $total;
    }
}
(just remember that it's 1.5 to 5 times slower, like I said earlier. Whether or not that's an issue will have to be decided by higher powers)
On Aug 3, 2010, at 5:54 PM, Aryeh Gregor wrote:
On Tue, Aug 3, 2010 at 5:09 PM, Daniel Friesen lists@nadir-seen-fire.com wrote:
Yup, though we might as well remember that not everyone has mb_ functions installed.
if ( !function_exists( 'mb_strlen' ) ) {
    /**
     * Fallback implementation of mb_strlen, hardcoded to UTF-8.
     * @param string $str
     * @param string $enc optional encoding; ignored
     * @return int
     */
    function mb_strlen( $str, $enc = "" ) {
        $counts = count_chars( $str );
        $total = 0;

        // Count ASCII bytes
        for ( $i = 0; $i < 0x80; $i++ ) {
            $total += $counts[$i];
        }

        // Count multibyte sequence heads
        for ( $i = 0xc0; $i < 0xff; $i++ ) {
            $total += $counts[$i];
        }
        return $total;
    }
}
On Tue, Aug 3, 2010 at 8:12 PM, soxred93 soxred93@gmail.com wrote:
(just remember that it's 1.5 to 5 times slower, like I said earlier. Whether or not that's an issue will have to be decided by higher powers)
This is not some question that has to be decided by specially-appointed performance gurus -- just do some quick testing. Like so:
$ echo '<?php $str = str_repeat( "aאπ", 200000000 ); $start = microtime( true ); mb_strlen( $str ); var_dump( microtime( true ) - $start );' | php
float(1.1920928955078E-5)
Note that this string is one *billion* bytes long, and the mb_strlen() still takes only about 10 *microseconds*. If you look at our own mb_strlen() implementation, the only non-O(1) part is count_chars(), and for that we find:
$ echo '<?php $str = str_repeat( "aאπ", 200000000 ); $start = microtime( true ); count_chars( $str ); var_dump( microtime( true ) - $start );' | php
float(1.8740479946136)
I.e., less than two seconds for a one-billion-byte string. This is about 100,000 times worse than native mb_strlen(), and about 200,000 times worse than strlen(), but on a sub-megabyte article, it's still only a millisecond or so in absolute terms.
In the future, remember that you can run this kind of order-of-magnitude performance assessment yourself very easily. You *have* to, to write code that performs decently -- you can't just push all performance considerations off to reviewers. Thankfully, it's easy to answer this kind of performance question. Things that involve nontrivial scalability, like database operations, are considerably harder, and you do need to develop specific expertise to easily estimate what performance will be like, but that's not the case here.
Aryeh Gregor wrote:
The same could be said of practically any user-visible change. I mean, maybe if we add a new special page we'll break some script that was screen-scraping Special:SpecialPages. We can either freeze MediaWiki and never change anything for fear that we'll break something, or we can evaluate each potential change on the basis of how likely it is to break anything. I can't see anything breaking too badly if rev_len is reported in characters instead of bytes -- the only place it's likely to be useful is in heuristics, and by their nature, those won't break too badly if the numbers they're based on change somewhat.
This is problematic logic for a few reasons. I see a change to the rev_len logic as being similar to a change in article count logic. The same arguments work in both places, specifically the "step problem" that will cause nasty jumps in graphs.[1]
In some cases, as you've noted, we're talking about a change by a factor of three. Plenty of scripts rely on hard-coded values to determine size thresholds for certain behaviors. While these scripts may not have the best implementations, I don't think it's fair to say that they're worth breaking.
The comparison to screen-scraping seems pretty spurious as well. The reason it's acceptable to break screen-scraping scripts is that there's a functioning API alternative that is designed for bots and scripts. One of the design principles is consistency. Altering a metric by up to a factor of three (and even worse, doing so in an unpredictable manner) breaks this consistency needlessly.
Is it worth the cost of populating a new field across 300 million+ rows to easily have a character count? I don't know. Personally, I don't mind rev_len being in bytes; it makes more sense from a database and technical perspective to me. Admittedly, though, I deal mostly with English sites.
MZMcBride
On Wed, 04-08-2010, at 04:17 +0000, MZMcBride wrote:
Aryeh Gregor wrote:
The same could be said of practically any user-visible change. I mean, maybe if we add a new special page we'll break some script that was screen-scraping Special:SpecialPages. We can either freeze MediaWiki and never change anything for fear that we'll break something, or we can evaluate each potential change on the basis of how likely it is to break anything. I can't see anything breaking too badly if rev_len is reported in characters instead of bytes -- the only place it's likely to be useful is in heuristics, and by their nature, those won't break too badly if the numbers they're based on change somewhat.
This is problematic logic for a few reasons. I see a change to the rev_len logic as being similar to a change in article count logic. The same arguments work in both places, specifically the "step problem" that will cause nasty jumps in graphs.[1]
In some cases, as you've noted, we're talking about a change by a factor of three. Plenty of scripts rely on hard-coded values to determine size thresholds for certain behaviors. While these scripts may not have the best implementations, I don't think it's fair to say that they're worth breaking.
The comparison to screen-scraping seems pretty spurious as well. The reason it's acceptable to break screen-scraping scripts is that there's a functioning API alternative that is designed for bots and scripts. One of the design principles is consistency. Altering a metric by up to a factor of three (and even worse, doing so in an unpredictable manner) breaks this consistency needlessly.
Is it worth the cost of populating a new field across 300 million+ rows to easily have a character count? I don't know. Personally, I don't mind rev_len being in bytes; it makes more sense from a database and technical perspective to me. Admittedly, though, I deal mostly with English sites.
I"m all for the change, but it would have to be announced well in advance of rollout and coordinated with other folks. For example, I have a check against rev_len (in bytes) when writing out XML dumps, in order to avoid rev id and rev content out of sync errors that we have run into multiple times in the past. That code would need to be changed to count characters of the text being used for prefetch instead of bytes.
Ariel
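Not Ariel's actual dump code, but a sketch of the kind of consistency check described, with the adjustment a character-based rev_len would require:

<?php
// The prefetched text must match the stored rev_len, or the revision is
// refetched instead of being written into the dump.
function textMatchesStoredLength( $prefetchedText, $storedRevLen ) {
    // Today rev_len is bytes; if it were redefined as characters, this would
    // become: return mb_strlen( $prefetchedText, 'UTF-8' ) === (int)$storedRevLen;
    return strlen( $prefetchedText ) === (int)$storedRevLen;
}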
Ariel T. Glenn wrote:
I"m all for the change, but it would have to be announced well in
advance of rollout and coordinated with other folks. For example,
I
have a check against rev_len (in bytes) when writing out XML dumps,
in
order to avoid rev id and rev content out of sync errors that we
have
run into multiple times in the past. That code would need to be
changed
to count characters of the text being used for prefetch
instead of
bytes.
Are character counts between programming languages generally consistent? And is there a performance concern with counting characters vs. counting bytes? Another post in this thread suggested that it might be up to five times slower when counting characters. I've no idea if this is accurate, but even a small increase could have a nasty impact on dump-processing scripts (as opposed to the negligible impact on revision table inserts).
MZMcBride
On Wed, Aug 4, 2010 at 7:38 AM, MZMcBride z@mzmcbride.com wrote:
Ariel T. Glenn wrote:
I"m all for the change, but it would have to be announced well in
advance of rollout and coordinated with other folks. For example,
I
have a check against rev_len (in bytes) when writing out XML dumps,
in
order to avoid rev id and rev content out of sync errors that we
have
run into multiple times in the past. That code would need to be
changed
to count characters of the text being used for prefetch
instead of
bytes.
Are character counts between programming languages generally consistent? And is there a performance concern with counting characters vs. counting bytes? Another post in this thread suggested that it might be up to five times slower when counting characters. I've no idea if this is accurate, but even a small increase could have a nasty impact on dump-processing scripts (as opposed to the negligible impact on revision table inserts).
Well, why not add the field, but not populate it for old entries? That way new revisions get both character and byte counts, while old revisions only have the byte count populated. Then, when you edit a page, MediaWiki checks: does the now-old revision have its charcount field populated? If not, backfill it; if yes, just save the new revision.
Marco
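A sketch of that lazy backfill, assuming a hypothetical rev_char_len column and a loadRevisionText() helper; this is not actual MediaWiki code:

<?php
// On every edit, fill in the character count of the revision being superseded
// if it is still missing; new revisions are saved with both counts up front.
function backfillCharCount( PDO $db, $oldRevId ) {
    $text = loadRevisionText( $db, $oldRevId );   // hypothetical: fetch the old revision's text
    $stmt = $db->prepare(
        'UPDATE revision SET rev_char_len = :chars
         WHERE rev_id = :id AND rev_char_len IS NULL'
    );
    $stmt->execute( [
        ':chars' => mb_strlen( $text, 'UTF-8' ),
        ':id'    => $oldRevId,
    ] );
}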
On Wed, Aug 4, 2010 at 12:17 AM, MZMcBride z@mzmcbride.com wrote:
This is problematic logic for a few reasons. I see a change to the rev_len logic as being similar to a change in article count logic. The same arguments work in both places, specifically the "step problem" that will cause nasty jumps in graphs.[1]
We'd presumably change the rev_len's that are already in the database, so the charts would just have to be regenerated using a new dump. Article count is different, because we don't store historical article counts in a way that we could retroactively change the way they're computed.
In some cases, as you've noted, we're talking about a change by a factor of three. Plenty of scripts rely on hard-coded values to determine size thresholds for certain behaviors. While these scripts may not have the best implementations, I don't think it's fair to say that they're worth breaking.
They wouldn't break, though. They'd just work a bit differently (the cutoff being somewhat lower than expected).
The comparison to screen-scraping seems pretty spurious as well. The reason it's acceptable to break screen-scraping scripts is that there's a functioning API alternative that is designed for bots and scripts.
It's not acceptable to break screen-scraping bots, actually. Otherwise we'd probably just stop emitting well-formed XML. But we don't worry about them where we don't have specific reason to suspect there's an actual problem.
On Wed, Aug 4, 2010 at 1:29 AM, Ariel T. Glenn ariel@wikimedia.org wrote:
For example, I have a check against rev_len (in bytes) when writing out XML dumps, in order to avoid rev id and rev content out of sync errors that we have run into multiple times in the past. That code would need to be changed to count characters of the text being used for prefetch instead of bytes.
That's an interesting use-case. Okay, so it looks like people are really relying on the current semantics, and we'd have to be careful changing them.
On Wed, Aug 4, 2010 at 1:38 AM, MZMcBride z@mzmcbride.com wrote:
Are character counts between programming languages generally consistent?
If you equate "character" to "code point in NFC", then yes.
And is there a performance concern with counting characters vs. counting bytes?
No, not realistically. Counting characters is a lot slower, relatively speaking, but counting bytes is so ridiculously fast in absolute terms that this makes no difference in a practical sense for our purposes. If you're dealing with so many articles that the strlen()s add up to a lot in absolute terms, as dump processes might, you'll be bottlenecked by disk reads anyway, so it will make no difference. You can do the strlen() on the current pages while you wait for the next ones to be read off disk, and you lose no time even if the strlen() takes a hundred times longer. As noted in the other thread, I just found that our home-brewed mb_strlen() takes ~100,000 times as long as the native one for at least some sample input, and it's still a trivial amount when applied to things the length of actual articles.
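A small example of why the "code point in NFC" caveat matters, assuming PHP with the mbstring and intl extensions:

<?php
// "é" can be stored as one precomposed code point (NFC) or as "e" plus a
// combining acute accent (NFD); the counts only agree after normalization.
$nfc = "\xC3\xA9";          // U+00E9, precomposed é
$nfd = "e\xCC\x81";         // "e" + U+0301 COMBINING ACUTE ACCENT

var_dump( mb_strlen( $nfc, 'UTF-8' ) );   // int(1)
var_dump( mb_strlen( $nfd, 'UTF-8' ) );   // int(2)

// Normalizing to NFC first (intl extension) makes the counts consistent:
var_dump( mb_strlen( Normalizer::normalize( $nfd, Normalizer::FORM_C ), 'UTF-8' ) );   // int(1)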
On Mon, Aug 2, 2010 at 5:48 PM, Ariel T. Glenn ariel@wikimedia.org wrote:
On Mon, 02-08-2010, at 17:36 -0400, Aryeh Gregor wrote:
On Mon, Aug 2, 2010 at 5:28 PM, Jason A. Spiro jasonspiro4@gmail.com wrote:
Does the difference really matter so much that we must really use the more-obscure and more-technical term "bytes"?
In English, maybe not. In a lot of languages, they'll differ by a somewhat unpredictable factor that can be as high as three. The sane thing would be to just make the counts be in characters rather than bytes to begin with, of course -- it's hardly difficult. I imagine Chinese people are puzzled when RC reports +3 and there was only one character added.
I would love it if the indicator was in characters instead of bytes. That's more meaningful for almost every project. Readers are looking at text after all, not at raw strings.
Ariel and Aryeh: I've just reported your mutual wish at https://bugzilla.wikimedia.org/show_bug.cgi?id=25198
And at https://bugzilla.wikimedia.org/show_bug.cgi?id=25199 I've reported my original idea of showing the number of added or removed characters on more pages.
To all who replied, thank you for your feedback. I am now unsubscribing from wikitech-l. Please CC me on all replies.
On Mon, Aug 2, 2010 at 5:36 PM, Aryeh Gregor Simetrical+wikilist@gmail.com wrote:
On Mon, Aug 2, 2010 at 5:28 PM, Jason A. Spiro jasonspiro4@gmail.com wrote:
Does the difference really matter so much that we must really use the more-obscure and more-technical term "bytes"?
In English, maybe not. In a lot of languages, they'll differ by a somewhat unpredictable factor that can be as high as three. The sane thing would be to just make the counts be in characters rather than bytes to begin with, of course -- it's hardly difficult. I imagine Chinese people are puzzled when RC reports +3 and there was only one character added.
A question for the non-English wiki contributors out there: Do you honestly care that MediaWiki shows byte counts and not character counts? If so, why do you care?
Best regards, -Jason
On Tue, Aug 3, 2010 at 2:53 PM, Jason A. Spiro jasonspiro4@gmail.com wrote:
A question for the non-English wiki contributors out there: Do you honestly care that MediaWiki shows byte counts and not character counts? If so, why do you care?
If the count itself is useful (I don't think it is), then it is probably way more useful when it's remotely accurate.
Of course, if the inaccuracy doesn't matter, then perhaps we could just display random numbers next to the changes. That might be just as helpful, and will save us a lot of trouble.
2010/8/2 Aryeh Gregor Simetrical+wikilist@gmail.com:
That I don't know. I don't know if descriptions of the Usability Initiative's studies are all public, or what. Maybe one of them could fill us in.
There are videos around, yes, but I'm not sure we have reports. Digging around on usabilitywiki should turn stuff up, or maybe someone closer to these tests (both geographically and in terms of expertise) can provide more exact links.
The tests were specifically focused on editing and general navigation, and did not test the history view AFAIK.
Roan Kattouw (Catrope)
Roan Kattouw wrote:
2010/8/2 Aryeh Gregor:
That I don't know. I don't know if descriptions of the Usability Initiative's studies are all public, or what. Maybe one of them could fill us in.
There are videos around, yes, but I'm not sure we have reports. Digging around on usabilitywiki should turn stuff up, or maybe someone closer to these tests (both geographically and in terms of expertise) can provide more exact links.
The tests were specifically focused on editing and general navigation, and did not test the history view AFAIK.
Roan Kattouw (Catrope)
Were they asked to "Make this page appear under this different name"?
I think that's something whose usability *decreased* with Vector. Maybe a tab is not the best interface, but even having it on the sidebar would have been preferable.
2010/8/3 Platonides Platonides@gmail.com:
Were they asked to "Make this page appear under this different name"?
No, they were not. As I said, the focus was editing and general site navigation, not viewing history, moving pages or a zillion other things that, while they may appear elementary to us, are rather advanced actions from a new user's perspective. I don't doubt that the usability of all those things could be improved, but then looking for features needing usability improvement in MediaWiki is about as hard as looking for guns at an NRA rally, a Starbucks in central London, things named after Robert C. Byrd in West Virginia or islands you never knew existed in the South Pacific: they're everywhere you look. The usability initiative had to limit its scope.
I can't really offer an informed opinion on where the move link belongs usability-wise, I'll leave that to the people that actually know stuff about user experience and user interface design. What I can do is point out that the studies were limited to asking people to find a page (both in terms of navigation and search) and edit it (both in terms of finding the edit button and doing various things on the edit page). If an action doesn't take place on the edit page and isn't one that is commonly used to get to an edit page, it probably wasn't tested.
Roan Kattouw (Catrope)