I'm not 100% sure that a MediaWiki extension is the best way to go, but it seems clear that many of the features of MediaWiki would also be needed for a world-editable question bank platform. In particular, users need the ability to note issues with questions (including explanations of correct solutions), make changes to problematic questions, revert incorrect changes to questions, and discuss controversial changes to questions with interested parties. There also seems to be a need to curate questions (to help select the most interesting or useful) and remove redundant or very poor questions.

There is a ton of adaptive technology that can be brought to bear on which questions are presented in what order based on student models (essentially trying to maintain a challenging but not impossible level of difficulty). There's also a need to be able to get summary statistics about one's own performance, including estimates of aptitude in various topic areas - this might require rating questions for difficulty within topic areas, or difficulty might be inferred using IRT (see the sketch below). Statistics would also be needed about individual questions and topic areas to help evaluate how to improve them.

Recruiting would be necessary to get critical mass going, and there would need to be incentives to contribute content (I imagine perhaps a reputation system in which people receive more points for questions with more upvotes). And naturally all the response data would be made publicly available under CC0 for analysis.
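To make the IRT idea concrete, here is a rough sketch (in Python) of fitting the one-parameter Rasch model to response data by gradient ascent. The (user, question, correct) record format and all the names here are illustrative assumptions, not a design:

    # Rough sketch: infer question difficulty with the Rasch (1PL) IRT model
    # via joint maximum-likelihood gradient ascent. The (user, question,
    # correct) triple format is a made-up stand-in for real response logs.
    import math
    from collections import defaultdict

    def sigmoid(x):
        return 1.0 / (1.0 + math.exp(-x))

    def fit_rasch(responses, iters=200, lr=0.05):
        """responses: list of (user_id, question_id, correct), correct in {0, 1}.
        Returns dicts mapping ids to ability and difficulty estimates."""
        ability = defaultdict(float)     # a_u: student ability
        difficulty = defaultdict(float)  # d_q: question difficulty
        for _ in range(iters):
            for u, q, correct in responses:
                # P(correct) = sigmoid(a_u - d_q); step along the
                # gradient of the log-likelihood for this observation
                p = sigmoid(ability[u] - difficulty[q])
                ability[u] += lr * (correct - p)
                difficulty[q] -= lr * (correct - p)
        return ability, difficulty

    # Tiny example: "q2" should come out harder than "q1".
    responses = [("u1", "q1", 1), ("u2", "q1", 1), ("u3", "q1", 0),
                 ("u1", "q2", 0), ("u2", "q2", 1), ("u3", "q2", 0)]
    ability, difficulty = fit_rasch(responses)

In practice one would anchor the scale (e.g., center abilities at zero) and regularize, but even a crude fit like this orders questions by difficulty, which is exactly what the adaptive selection and per-topic aptitude estimates would consume.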
In short - there is a lot of interesting stuff to do here, but I think a mere MediaWiki extension would face some daunting limitations in what is possible (and in the UI), which is why ideally I'm considering some kind of standalone site that could be integrated with Wikiversity or any other site. One way to do this is to have the question bank site host modules corresponding to units/subtopics in courses that use it, each specified either by an explicit list of questions or by lists of tags.
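For instance, a module record on the question bank side might look something like this (Python literals; every field name here is invented for illustration):

    # Hypothetical module specifications - field names are made up.
    # A module pinned to an explicit list of questions:
    module_by_list = {
        "course": "Introductory Statistics",
        "unit": "Hypothesis testing",
        "questions": [1042, 1077, 1203],  # explicit question ids
    }

    # The same module specified by tags, letting the bank pick the questions:
    module_by_tags = {
        "course": "Introductory Statistics",
        "unit": "Hypothesis testing",
        "tags": {"all_of": ["statistics", "hypothesis-testing"],
                 "none_of": ["graduate-level"]},
        "limit": 20,
    }

The tag-based form would stay current as the bank grows, while the explicit list gives instructors exact control over what their students see.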
Just some thoughts. :-)
-Derrick
On Thu, Mar 7, 2013 at 5:14 AM, James Salsman <jsalsman@gmail.com> wrote:
Derrick, it looks like Tim Hunt, one of the Moodle core developers, is planning to try to get something similar going for Moodle during this year's Google Summer of Code:
http://docs.moodle.org/dev/Projects_for_new_developers#Self-assessment_activ...
Moodle has 65 million unique users per year, which sounds impressive compared to Wikiversity's roughly 800,000 unique visitors per month (comScore, January 2010, adjusted by page views), but it really isn't, because almost all Moodle users are in very structured course situations where instructors are unlikely to add non-core modules such as a question bank. So getting an extension going for MediaWiki and opening up a global shared question bank on Wikiversity would be *far* superior, and a much larger good.
Please do go for it! Use Moodle's GIFT question format for interoperability.
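For reference, a minimal multiple-choice question in GIFT looks like this (the question itself is just an illustration):

    // A question in Moodle GIFT format: '=' marks the correct answer,
    // '~' marks distractors, '#' adds per-answer feedback.
    ::Capital of France::What is the capital of France? {
        =Paris
        ~London #London is the capital of the UK, not France.
        ~Berlin
    }

Since GIFT is plain text, questions could round-trip between a wiki question bank and any Moodle course.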
On Thu, Mar 7, 2013 at 12:44 AM, Derrick Coetzee <dc@moonflare.com> wrote:
On Wed, Feb 27, 2013 at 1:01 PM, James Salsman <jsalsman@gmail.com> wrote:
In short, PeerWise is an automated self-study, low-stakes assessment system in which both questions and answers are edited and reviewed by anyone with access (in practice, usually anyone enrolled in a course or major at an institution), much like textual content in a wiki. It is already being used successfully at hundreds of higher-education and other institutions. But sadly it's closed source. Since 2009 I have been trying to encourage the Foundation to build an open source version of such a system.
Is there anyone else interested in this?
@James: I'm intrigued by this system, and I've talked to a Wikipedian in Auckland who used it and liked it. However, I think blanking the database at the beginning of each course is a big mistake, as is limiting it to a small class audience.

I imagine building a similar system that is monolithic (a single database for all topics), accepts contributions from the general public, accumulates over time like Wikipedia, and is moderated by experienced users using tags and/or a hierarchy. A sort of "Wikipedia of assessment," if you will. In principle it's even possible to incorporate short answer and essay questions by leveraging some mixture of machine learning and peer review - positive and negative examples could then be highlighted with comments to help provide feedback to others (see the sketch below).
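Here's a very rough sketch of that last idea - token overlap stands in for a real machine-learning model, and all the example data is invented:

    # Rough sketch (not PeerWise's actual mechanism): grade a short answer
    # by similarity to peer-reviewed example answers. Jaccard token overlap
    # is a placeholder for a trained model; the example data is invented.
    def jaccard(a, b):
        ta, tb = set(a.lower().split()), set(b.lower().split())
        return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

    def score_answer(answer, positive_examples, negative_examples):
        """Return (similarity, matched example, 'positive'/'negative') for
        the best-matching peer-reviewed example answer."""
        return max(
            ((jaccard(answer, ex), ex, label)
             for label, exs in (("positive", positive_examples),
                                ("negative", negative_examples))
             for ex in exs),
            key=lambda t: t[0])

    positives = ["natural selection favors traits that increase reproduction"]
    negatives = ["animals choose to evolve new traits when they need them"]
    print(score_answer("selection favors traits that help reproduction",
                       positives, negatives))

The matched example's reviewer comments would be shown to the student as feedback; a real system would swap the overlap measure for a trained classifier and fall back to peer review when confidence is low.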
I'm equipped to prototype a system like this and it would mesh well with my research, but I'd like to know your thoughts, as well as whether there are other interested parties you might recruit. Let me know. :-)
--
Derrick Coetzee
http://www.cs.berkeley.edu/~dcoetzee/