Developing a metric for determining student impact on Wikipedia processes.
1. Identify classes that have been involved on English Wikipedia. Divide
them into five groups:
A) Classes that participated within the USA/Canada framework and had a
campus or online ambassador.
B) Classes that participated within the USA/Canada framework and had an
instructor with extensive editing experience on Wikipedia.
C) Classes that participated within the USA/Canada framework but had no
guidance and did not use an ambassador.
D) Classes that participated independently, with their work clearly
structured around an instructor user page or some other instructor-created
space outside of the programme.
E) Classes that neither utilized the programme nor utilized any other
space.
These groups will be used for comparisons to measure the relative success
of each group.
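Since every later step sorts its responses by these five classroom types, the grouping could be pinned down in code up front. A minimal sketch (the enum names and record shape are illustrative, not part of the proposal):

```python
from enum import Enum

class ClassGroup(Enum):
    """The five classroom types used for all comparisons below."""
    A_PROGRAM_WITH_AMBASSADOR = "A"        # USA/Canada framework, campus/online ambassador
    B_PROGRAM_EXPERIENCED_INSTRUCTOR = "B" # USA/Canada framework, experienced-editor instructor
    C_PROGRAM_NO_GUIDANCE = "C"            # USA/Canada framework, no guidance, no ambassador
    D_INDEPENDENT_STRUCTURED = "D"         # independent, instructor-created space
    E_INDEPENDENT_UNSTRUCTURED = "E"       # neither programme nor other space

# Example: tag a class record with its group for later sorting
course = {"name": "Example course", "group": ClassGroup.C_PROGRAM_NO_GUIDANCE}
print(course["group"].value)
# C
```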
2. Amongst these five groups, identify whether a class was involved in any
of the following processes: In The News, Did You Know, Good Article,
Featured Article, Featured Picture, Peer Review, WikiProject assessment,
Articles for Creation and Articles for Deletion. In these categories, do
the following:
A) For instructors:
i) Get the instructor's instructional objectives and lesson plans
specifically as they pertain to this assessment task, including the
criteria used for measuring those objectives. Analyze each instructional
objective for how it aligns with the objectives of the assessment process.
How well do they align? Compare the differences across all five groups.
ii) Get the instructor's syllabus, the overall course objectives and,
where possible, the curriculum standards for the course. Analyze the
instructor's instructional objectives for the assessment process as they
relate to the overall syllabus and curriculum standards. How well do they
align? Compare the differences across all five groups.
iii) Find the instructor's Wikipedia account. Track the volume of the
instructor's edits before, during and after the period when their course
was live. Track the number of edits instructors made in the assessment
processes, and how many were made to their student-related pages versus
other pages.
iv) Survey the instructor, asking how they felt English Wikipedia assisted
them in meeting the core instructional objectives for their course. Also
ask about their editing experiences in assessment processes.
v) Chart how instructors were involved with student work that went through
assessment. How often did the instructor edit the articles? Did they
review a GA/DYK? If yes, what was the pass/fail rate by assessment type?
Was the instructor overturned? (A GA pass taken to reassessment, a DYK
ultimately rejected, a C-class article downgraded to Start, tags removed
by an instructor restored.)
vi) Chart how often the instructor voted in discussions related to student
work (AfD, merge discussions, etc.) and how often this supported or
opposed the final consensus view.
SORT RESPONSES BY FIVE CLASSROOM TYPES.
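The before/during/after edit-volume tracking in (iii) amounts to bucketing timestamped edits by the course's live dates. A minimal sketch, assuming edit dates have already been pulled (e.g. from the MediaWiki API's usercontribs list; the function and field names here are illustrative):

```python
from datetime import date

def bucket_edits(edit_dates, course_start, course_end):
    """Count edits falling before, during, and after a course's live period."""
    counts = {"before": 0, "during": 0, "after": 0}
    for d in edit_dates:
        if d < course_start:
            counts["before"] += 1
        elif d <= course_end:
            counts["during"] += 1
        else:
            counts["after"] += 1
    return counts

# Example: four edits against a course running 1 Feb - 15 May 2011
edits = [date(2011, 1, 5), date(2011, 2, 10), date(2011, 3, 1), date(2011, 6, 9)]
print(bucket_edits(edits, date(2011, 2, 1), date(2011, 5, 15)))
# {'before': 1, 'during': 2, 'after': 1}
```

The same bucketing serves the student (B ii), ambassador (C i) and other-contributor (E i-ii) tracking steps.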
B) For students:
i) Get all the support materials students were given by the instructor
prior to being required to work on an assessment-related task. Ask
students what they were given.
ii) Track student edits before, during and after the course.
iii) How many total edits did a student make to their user page, to the
article's talk page, and to the article itself before submitting it for
assessment?
iv) Track the success percentage of students going through an assessment
process. (Did their DYK appear on the main page? Did their GA pass?) If
they failed the assessment process, identify the cause. For example: the
assessment nomination was malformed, the article had copyvios, the article
was not long or new enough, the article was not fully sourced, the article
was not reliably sourced, or the article was not notable enough.
SORT RESPONSES BY FIVE CLASSROOM TYPES.
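The success-rate tally in (iv) can be sketched as a pass percentage plus a count of failure causes; the cause labels below mirror the examples above, and the record shape is an assumption:

```python
from collections import Counter

def summarize_outcomes(submissions):
    """submissions: list of (passed: bool, failure_cause: str or None).
    Returns (pass percentage, Counter of failure causes)."""
    total = len(submissions)
    passed = sum(1 for ok, _ in submissions if ok)
    causes = Counter(cause for ok, cause in submissions if not ok and cause)
    pass_rate = 100.0 * passed / total if total else 0.0
    return pass_rate, causes

subs = [
    (True, None),
    (False, "copyvio"),
    (False, "not reliably sourced"),
    (True, None),
    (False, "copyvio"),
]
rate, causes = summarize_outcomes(subs)
print(round(rate), causes.most_common(1))
# 40 [('copyvio', 2)]
```

Sorting the submission lists by the five classroom types before calling this gives the per-group comparison.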
C) For ambassadors:
i) Graph their edits to various assessment processes before, during and
after the course.
ii) Chart how ambassadors were involved with student work that went
through assessment. How often did the ambassador edit the articles? How
many comments did they make to a student's talk page? How many comments
did they make to an article talk page? Did they review a GA/DYK? If yes,
what was the pass/fail rate by assessment type? Was the ambassador
overturned? (A GA pass taken to reassessment, a DYK ultimately rejected,
a C-class article downgraded to Start, tags removed by an ambassador
restored.)
iii) Chart how often ambassadors voted in discussions related to student
work (AfD, merge discussions, etc.) and how often this supported or
opposed the final consensus view.
iv) Survey ambassadors for their views on the various assessment processes
and how often they participated in them prior to the class.
v) Collect all materials the ambassadors were given before and during the
course by the instructor to help them support the class.
vi) Ask ambassadors if they believe the student work helped students meet
the stated course objectives. Ask ambassadors what percentage of student
contributions they feel worked towards Wikipedia's ideals for content
improvement.
SORT RESPONSES BY FIVE CLASSROOM TYPES.
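Charting votes against the closed outcome (item iii here, and the matching instructor item) reduces to an agreement rate per discussion type. A sketch with made-up records (the dict keys are assumptions):

```python
from collections import defaultdict

def agreement_by_type(votes):
    """votes: list of dicts with 'type', 'vote' and 'outcome' keys.
    Returns {discussion_type: fraction of votes matching the final consensus}."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for v in votes:
        totals[v["type"]] += 1
        if v["vote"] == v["outcome"]:
            hits[v["type"]] += 1
    return {t: hits[t] / totals[t] for t in totals}

votes = [
    {"type": "AfD", "vote": "keep", "outcome": "keep"},
    {"type": "AfD", "vote": "keep", "outcome": "delete"},
    {"type": "Merge", "vote": "merge", "outcome": "merge"},
]
print(agreement_by_type(votes))
# {'AfD': 0.5, 'Merge': 1.0}
```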
D) For people involved with assessment processes:
i) Get a list of people involved in an assessment area at the time a class
was active. Find out what percentage of these editors were involved in
classroom work. Find out the editing patterns for people involved in an
assessment process: What percentage of their assessment work involved
students? What was that percentage before the class was involved? What
was it afterwards? What were their edit counts in their main contribution
areas before, during and after a class was active? This is trying to
determine the impact of student involvement on normal editing processes.
(Did they neglect other work because of students? Did they contribute
less because of student supervision? Did they decrease editing as a
result?)
ii) Survey the people involved with assessment and ask how they felt about
being involved with coursework. Ask what they feel it did to their other
editing. Ask about their motives and whether those motives changed
because of the possibility of a student being graded on the assessor's
work.
iii) Determine how often the person passed or failed a student's work.
Track the reasons why they did not pass a student's work.
iv) Compare the assessor's student pass/fail rate to the assessor's
non-student pass/fail rate. Track the reasons they did not pass a
contributor's work.
SORT RESPONSES BY FIVE CLASSROOM TYPES.
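The percentages in (i) and the comparison in (iv) could be computed from per-review records like the tuples below (the record shape is an assumption):

```python
def student_share_and_pass_rates(reviews):
    """reviews: list of (period, is_student, passed) tuples,
    with period in {'before', 'during', 'after'}.
    Returns (student share of reviews per period,
             pass rate for student work, pass rate for non-student work)."""
    share = {}
    for p in ("before", "during", "after"):
        rows = [r for r in reviews if r[0] == p]
        student = [r for r in rows if r[1]]
        share[p] = len(student) / len(rows) if rows else 0.0

    def pass_rate(rows):
        return sum(1 for r in rows if r[2]) / len(rows) if rows else 0.0

    student_rate = pass_rate([r for r in reviews if r[1]])
    other_rate = pass_rate([r for r in reviews if not r[1]])
    return share, student_rate, other_rate

reviews = [
    ("before", False, True),
    ("during", True, False),
    ("during", True, True),
    ("during", False, True),
    ("after", False, True),
]
share, s_rate, o_rate = student_share_and_pass_rates(reviews)
print(round(share["during"], 2), s_rate, o_rate)
# 0.67 0.5 1.0
```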
E) Other contributors:
i) Identify contributors to the articles students used in the assessment
process. Track those contributors' edits to those articles before, during
and after course involvement. The purpose is to detect article-specific
changes in editing.
ii) Track the overall edit count totals across all articles, before,
during and after a course, for contributors who had edited articles now
being worked on by students. The purpose is to measure the impact on
their overall editing.
iii) Survey these contributors to ask how a class working on the article
impacted their willingness to edit the article.
iv) Ask contributors where they would find information on student
coursework if they had questions about what was happening to an article.
SORT RESPONSES BY FIVE CLASSROOM TYPES.
F) The assessment space:
i) Identify the total volume of contributions to an assessment process
before, during and after a course. What percentage was student work?
ii) Identify the length of time assessment took for student work versus
non-student work. How long before a work was assessed, and by whom was it
assessed? How long did the assessment take from start to finish?
iii) Identify the overall pass/fail rate for articles before, during and
after a course, for student versus non-student work.
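Item (ii)'s timing question is a difference between nomination and close dates; the median is more robust than the mean for skewed review queues. A sketch, assuming those dates have been collected per work:

```python
from datetime import date
from statistics import median

def assessment_days(records):
    """records: list of (nominated, closed) date pairs.
    Returns the median number of days from nomination to close."""
    return median((closed - nominated).days for nominated, closed in records)

records = [
    (date(2011, 3, 1), date(2011, 3, 8)),   # 7 days
    (date(2011, 3, 2), date(2011, 3, 4)),   # 2 days
    (date(2011, 3, 5), date(2011, 4, 4)),   # 30 days
]
print(assessment_days(records))
# 7
```

Computing this separately for student and non-student records gives the comparison the item asks for.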
3. Analyze the above by comparing the five different groups.
This will indicate whether students are disruptive, how they are
potentially disruptive, which groups are the least disruptive, how normal
assessment compares to assessment of student work, and how this impacts
other contributors' edits.
--
twitter: purplepopple
blog:
ozziesport.com