Pursuant to prior discussions about the need for a research
policy on Wikipedia, WikiProject Research is drafting a
policy regarding the recruitment of Wikipedia users to
participate in studies.
At this time, we have a proposed policy, and an accompanying
group that would facilitate recruitment of subjects in much
the same way that the Bot Approvals Group approves bots.
The policy proposal can be found at:
http://en.wikipedia.org/wiki/Wikipedia:Research
The Subject Recruitment Approvals Group mentioned in the proposal
is being described at:
http://en.wikipedia.org/wiki/Wikipedia:Subject_Recruitment_Approvals_Group
Before we move forward with seeking approval from the Wikipedia
community, we would like additional input about the proposal,
and would welcome additional help improving it.
Also, please consider participating in WikiProject Research at:
http://en.wikipedia.org/wiki/Wikipedia:WikiProject_Research
--
Bryan Song
GroupLens Research
University of Minnesota
Hi, I'm a student planning on doing GSoC this year on MediaWiki.
Specifically, I'd like to work on data dumps.
I'm writing this to gauge what would be useful to the research
community. Several ideas thrown about include:
1. JSON dumps
2. SQLite dumps
3. Daily dumps of revisions from the last 24 hours
4. Dumps optimized for very fast import into various external storage
engines, and smaller in size (diffs)
5. JSON/CSV support for Special:Import and Special:Export
Would any of these be useful? Or is there anything else I'm
missing that you would consider much more useful?
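As a minimal sketch of idea 1, here is what a JSON-dump converter might look like, assuming a much-simplified page/revision layout (real dumps are XML-namespaced and run to many gigabytes, so a production tool would need continuation, compression, and fuller schema handling):

```python
import io
import json
import xml.etree.ElementTree as ET

def _local(tag):
    # Strip any "{namespace}" prefix that real dump files carry.
    return tag.rsplit("}", 1)[-1]

def pages_to_jsonl(xml_stream, out):
    """Stream <page> elements from a MediaWiki-style XML dump and write
    one JSON object per page (JSON Lines), keeping the first title, id,
    and text seen inside each page."""
    for _, elem in ET.iterparse(xml_stream, events=("end",)):
        if _local(elem.tag) == "page":
            record = {}
            for child in elem.iter():
                name = _local(child.tag)
                if name in ("title", "id", "text") and name not in record:
                    record[name] = (child.text or "").strip()
            out.write(json.dumps(record) + "\n")
            elem.clear()  # free memory; full dumps are far too big to hold

# Toy input standing in for a real dump file:
sample = io.BytesIO(b"""<mediawiki>
  <page><title>A</title><id>1</id>
    <revision><id>10</id><text>hello</text></revision></page>
  <page><title>B</title><id>2</id>
    <revision><id>11</id><text>world</text></revision></page>
</mediawiki>""")
buf = io.StringIO()
pages_to_jsonl(sample, buf)
print(buf.getvalue())
```

Streaming with iterparse plus `elem.clear()` is what makes this shape viable on full-history dumps, where building the whole tree in memory is not an option.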
Feedback would be invaluable :)
Thanks :)
--
Yuvi Panda T
http://yuvi.in/blog
Hi Reid,
I'm involved with AcaWiki, so I'll start answers to your questions here. Hopefully others will comment, too.
<Resending from the subscribed address...>
On 23 Mar 2011, at 23:49, Reid Priedhorsky wrote:
> On 3/22/11 4:28 PM, Chitu Okoli wrote:
>>
>> Reid wrote:
>>>
>>> There also appear to be various options for Semantic MediaWiki hosting:
>>> Wikia, Referata, etc. It would be nice to not have to deal with the
>>> sysadmin aspects of the project.
>>
>> I agree that going with a reliable host would be the way to go. I think
>> that for the nature of our project, choosing a paid Referata plan would
>> probably be better than going for Wikia. I for one could probably easily
>> find grant funding to keep it going.
>
> Sure. If nothing else I'd be happy to chip in personally. I could also
> ask around for funding here at IBM, but I'm quite pessimistic on that.
>
> Paid plans run from $240 to $960/year, and we could certainly get
> started for free (http://www.referata.com/wiki/Referata:Features).
>
> I'm not ready to write off AcaWiki, but I have a number of significant
> concerns. Some of these I've mentioned before. I'd really like someone
> from that project to comment on these.
>
> * Is the project dead? The mailing list is pretty much empty and the
> amount of real editing activity in the past 30 days is pretty low.
Definitely not dead!
>
> * It appears that the project self-hosts - this means that the project
> has to do its own sysadmin work,
Neeru & Mike, can you comment on who's doing sysadmin work now?
> which appears to have been a problem
> (e.g., the domain expired earlier this month and no one noticed until
> the site went down!).
>
> * Is the target audience correct? I think we want to specifically target
> our annotated bibliography to researchers, but AcaWiki appears to be
> targeting laypeople as well as researchers (and IMO it would be very
> tricky to do both well).
The main interest, from my perspective (others may be able to add their own), is in making research more accessible. Several AcaWiki users are grad students who are writing summaries in order to consolidate their own knowledge or prepare for qualifier exams.
>
> * I don't think the focus on "summaries" is right. I think we need a
> structured infobox plus semi-structured text (e.g. sections for
> contributions, evidence, weaknesses, questions).
I agree! Right now there's some structured information, but that could be readily changed. I'm definitely open to restructuring AcaWiki, so do propose this on the mailing list (acawiki-general(a)lists.ibiblio.org), and we can discuss further.
One ongoing issue is the best way to handle bibliographic information, which has subtle complexities that we're only partly handling now.
>
> * It doesn't look like a MediaWiki. Since the MW software is so
> dominant, that means pretty much everyone who knows about editing wikis
> knows how to use MW - and not looking like MW means there's no immediate
> "aha! I can edit this". There's a lot of value in familiarity.
Actually, AcaWiki uses MediaWiki -- specifically Semantic MediaWiki. For full details, see
http://acawiki.org/Special:Version
>
> I will post an invitation on the AcaWiki mailing to come here and
> participate.
>
>>> One final note on bibliographic software: many of these claim to do
>>> automatic import of a reference simply by pointing the software at the
>>> publisher's web page for the references. But I have never seen this work
>>> correctly; always, the imported data needs significant cleanup, enough
>>> that personally I'd rather type it in manually anyway. For example,
>>> titles of ACM papers aren't even correctly cased on the official ACM
>>> pages (e.g., http://dx.doi.org/10.1145/1753326.1753615)!
>>
>> My only experience with "scraping" pages is with Zotero, and it does it
>> beautifully. I assume (but don't know) that the current generation of
>> other bibliography software would also do a good job. Anyway, Zotero has
>> a huge support community, and scrapers for major sources (including
>> Google Scholar for articles and Amazon for books) are kept very well up
>> to date for the most part.
>
> Perhaps I'm just unlucky, then - I've only ever tried it on ACM papers
> (which it failed to do well, so I stopped).
Zotero used to scrape quite well from the ACM digital library -- now that they've changed their site again the scraper needs to be updated (not hard to do). Last time I tried, Zotero scraped ok from certain ACM pages (item pages) but not from search results: YMMV.
-Jodi
>
>>> Bi-directional synchronization is hard to get right, particularly when
>>> the two sides have different data models. I think we are much
>>> better off declaring one or the other to be the master and the rest
>>> should remain read-only (i.e. export rather than synchronization).
>>
>> I like this idea; with SMW as the primary, editable source, a read-only
>> Zotero library imported from the SMW would work well. The problem,
>> though, is that duplicate detection would need to prevent imports from
>> adding existing articles. A complete overwrite would not work, since
>> this would break article IDs for word processor integration. Zotero has
>> been slow on implementing duplicate detection, but they finally have a
>> very impressive solution in alpha
>> (http://www.zotero.org/blog/new-release-multilingual-zotero-with-duplicates-…).
>
> I don't know anything about how article IDs works in Zotero, but how to
> build a unique ID for each is an interesting, subtle, and important
> problem. Others have suggested using opaque IDs such as DOI. I think
> this is a mistake, because it means that they are utterly meaningless to
> people when creating citations. For example, consider the following two
> citations that I might put in my LaTeX code.
>
> \cite{10.1145/1753326.1753615}
> \cite{Panciera2010Lurking}
>
> The first means nothing to me, but the second is a useful reminder as to
> the paper I'm citing. That's what CiteULike does, and it's built from
> first author, year, first meaningful word of title. In the tiny
> percentage of cases where this is not unique, a disambiguation digit
> could be added.
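The scheme Reid describes above (first author + year + first meaningful title word, with a disambiguation digit in the rare collision case) can be sketched roughly as follows; the function name and stopword list are illustrative, not any actual CiteULike or Zotero API:

```python
import re

# Common words skipped when picking the "first meaningful word" of a title.
STOPWORDS = {"a", "an", "the", "on", "of", "in", "and", "for", "to", "with"}

def cite_key(first_author_surname, year, title, taken=()):
    """Build a human-readable citation key: author + year + first
    meaningful title word, appending a digit if the key is taken."""
    word = next((w for w in re.findall(r"[A-Za-z]+", title)
                 if w.lower() not in STOPWORDS), "untitled")
    base = f"{first_author_surname}{year}{word.capitalize()}"
    key, n = base, 2
    while key in taken:  # only a tiny fraction of cases need this
        key, n = f"{base}{n}", n + 1
    return key

print(cite_key("Panciera", 2010, "Lurking? A quantitative analysis"))
# -> Panciera2010Lurking
```

The point of the readable key, as the email argues, is that `\cite{Panciera2010Lurking}` reminds the author which paper is meant, where an opaque DOI does not.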
>
> I don't know how citation works in Word et al., but I would hope you're
> not stuck with opaque numeric IDs and/or that Zotero doesn't force you
> to use integers or something like that.
>
> Reid
>
> _______________________________________________
> Wiki-research-l mailing list
> Wiki-research-l(a)lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wiki-research-l
On the advice of one Wikipedia admin I've already recruited, I'm
emailing this list to seek a few others to help me with a research
experiment I'm hoping to undertake. Basically, I'm interested in
trying to graft Wikipedia's highly effective consensus-editing model
onto some currently-jammed political discourse.
I've become interested in (the lack of) productive political discourse
on the web---the polarization and flame wars that seem to be the
norm---and I'm beginning to think about ways to improve the situation.
In particular, I'm thinking about social tools that can encourage and
help groups to understand their differences and reach agreement _at
least_ on the underlying facts of an issue even if they draw different
conclusions.
A particular example that I find quite interesting is on Wikipedia.
There is an article about the Obama health care plan and, more
interestingly, an article about the debate over the plan. The latter
allows users to "go meta" and present some highly non-NPOV opinions
about the plan by writing neutrally about the fact that some people hold
those opinions. This seems like a really good model for how people who
strongly disagree could nonetheless work in good faith to map out the
issues of the disagreement.
Recently I met the founder of PolitiFact---http://politifact.com/ ---a
cool site that "fact checks" statements by politicians. Apparently,
each time they publish one, they get a slew of angry responses on
twitter/facebook about how wrong they are---from _both sides_ of the
discussion.
Since this is a clear case of an underlying _fact_ that is being
checked, it seems like an obvious target for reaching consensus.
So we brainstormed an experiment. We recruit those angry folk who think
politifact got it wrong, and see if they can work together to figure out
the "right" answer. Ultimately, we imagine creating some tools to help
do this. But we wanted to start with some experiments to understand how
the discussion process might play out. So we thought to start simply by
locking all participants in a room---i.e., a wiki page---and letting them
hash things out there. Obviously that's going to require some ground
rules---which we can lift straight out of Wikipedia, taking the page
about debate about Obama's plan as a model. But equally obviously,
it's going to require some _enforcement_ of those ground rules. Since
Wikipedia admins are familiar with the ground rules, the enforcement
process, and the tools supporting it, it seemed natural to see if we
could recruit some of you as the policemen to walk that beat.
So that's the story. If you're interested in participating, please
contact me.
thanks
--
David Karger
Professor, EECS
MIT Computer Science and Artificial Intelligence Lab
32 Vassar St.
Cambridge, MA 02138
http://people.csail.mit.edu/karger
Hi all - I have poked around on toolserver but I don't seem to see a
tool that generates a list/count of user edits for a particular date range.
Does anyone know if such a tool exists?
i.e.: Show me andicat's edits/stats for July 2005 through September 2007.
(Unless I missed something, none of these do it:
http://en.wikipedia.org/wiki/Wikipedia:WikiProject_edit_counters)
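In the absence of a toolserver tool, the MediaWiki API's list=usercontribs can be bounded by timestamps directly; a rough sketch of building such a query and counting per-month edits (the counting step assumes a single page of results and omits uccontinue pagination):

```python
from urllib.parse import urlencode

API = "https://en.wikipedia.org/w/api.php"

def contribs_query(user, start, end, limit=500):
    """Build a MediaWiki API query for one user's edits in a date range
    (list=usercontribs). With ucdir=newer, ucstart is the older bound."""
    params = {
        "action": "query", "format": "json",
        "list": "usercontribs",
        "ucuser": user,
        "ucstart": start, "ucend": end,
        "ucdir": "newer",
        "uclimit": limit,
    }
    return API + "?" + urlencode(params)

url = contribs_query("Andicat", "2005-07-01T00:00:00Z", "2007-09-30T23:59:59Z")
print(url)

# Counting edits per month from one page of results (sample structure
# mirrors the API's JSON response):
sample = {"query": {"usercontribs": [
    {"timestamp": "2005-07-03T12:00:00Z"},
    {"timestamp": "2005-07-20T09:30:00Z"},
    {"timestamp": "2006-01-02T08:00:00Z"},
]}}
counts = {}
for edit in sample["query"]["usercontribs"]:
    month = edit["timestamp"][:7]  # "YYYY-MM"
    counts[month] = counts.get(month, 0) + 1
print(counts)  # {'2005-07': 2, '2006-01': 1}
```

The username and date range here just echo the example in the email; the actual fetch (and uccontinue looping for users with more than 500 edits in the window) is left out of the sketch.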
-andrea
--
:: Andrea Forte
:: Assistant Professor
:: College of Information Science and Technology, Drexel University
:: http://www.andreaforte.net
Hi everyone,
This is related to the Wikipedia literature review that Chitu Okoli described earlier. Part of our study is to identify the data collection methods and data sets used in Wikipedia studies. We are aware of available tools to download Wikipedia dumps, such as wp-download and other tools from https://wiki.toolserver.org. Nevertheless, we are wondering whether there exists a list of pre-compiled data sets of Wikipedia articles that you know about. If no such list exists, we would also appreciate it if you could send us names of or references to such data sets.
Thank you,
Mohamad
Hi all, after following these links:
> * Top Tier and 2nd tier conferences from
> http://webdocs.cs.ualberta.ca/~zaiane/htmldocs/ConfRanking.html
> * A-ranked conferences in Information and Computing Sciences from
> http://lamp.infosys.deakin.edu.au/era/?pageÏorsel10
I discovered that CHI is not a top-tier HCI conference. Or even 2nd or
3rd. :) Neither is CSCW, Group, etc.
(For those not in CS/HCI, CHI is *the* top tier conference with
acceptance rates lower than most journals in the area and the others I
listed are nearly as competitive. And venues like WikiSym that have
higher acceptance rates attract top-tier work as well - many papers of
equivalent quality have been published there.)
So... I would suggest that, to review the Wikipedia literature, you
need to choose a field (or set of fields) you know and become deeply
knowledgeable about the literature in that field by reading it and
following citations, etc. Searching for every paper ever written that
mentions "Wikipedia" in the title/abstract will catch you lots of
peripheral work that is not *about* Wikipedia, but that uses Wikipedia
as a context for studying something else. Machines aren't good at
literature reviews. :)
That said, it would be incredibly useful to have a common repository
of citations that can be annotated, discussed etc. Reid Priedhorsky,
Phoebe Ayers, Brent Hecht, Darren Gergle, and Mako Hill and I have
been talking about doing a literature review as well and have come to
the conclusions that A) there's too much to cover in just one paper
and B) a place to collaboratively assemble knowledge about the
literature is a prerequisite for such an endeavor. We were thinking a
MediaWiki with templates to structure citation data for export would
be better than any of the bib software out there. But actually... I
remember reading that Tiki Wiki now explicitly supports citation, I
think?
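As a rough illustration of the "templates to structure citation data for export" idea, here is a sketch that pulls fields out of a {{Cite journal}}-style template call and emits BibTeX; the parsing is deliberately naive (no nested templates, wiki links, or multiline values), so treat it as a shape, not a parser:

```python
import re

def parse_template(wikitext):
    """Pull name=value fields out of a single {{Cite ...}} template call.
    Returns (template_name, {field: value})."""
    body = re.search(r"\{\{(.*?)\}\}", wikitext, re.S).group(1)
    parts = [p.strip() for p in body.split("|")]
    fields = {}
    for part in parts[1:]:
        if "=" in part:
            k, v = part.split("=", 1)
            fields[k.strip()] = v.strip()
    return parts[0], fields

def to_bibtex(key, fields):
    """Render the extracted fields as a BibTeX entry for export."""
    lines = [f"  {k} = {{{v}}}," for k, v in sorted(fields.items())]
    return "@article{%s,\n%s\n}" % (key, "\n".join(lines))

wikitext = ("{{Cite journal | title = Measuring Wikipedia"
            " | author = Voss, J. | year = 2005}}")
name, fields = parse_template(wikitext)
print(to_bibtex("Voss2005Measuring", fields))
```

Structuring citations as template fields means a bot or script can round-trip them into whatever export format a collaborator's reference manager wants, which is the advantage over free-text bibliographies.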
The bottom line is, (B) is something this community could really do a
great job of developing, and it would be mutually beneficial. No one of
us is going to cover all the Wikipedia literature alone, unless we
make it a full time job. :)
FYI, we chose Travis's citation filtering method as a starting point
and came up with the list below as our common starter reading list and
planned to divvy things up from there.
Andrea
== Top Citations in Google Scholar ==
* '''449 cites''' Fernanda B. Viegas, Martin Wattenberg, and Kushal
Dave. 2004. [http://alumni.media.mit.edu/~fviegas/papers/history_flow.pdf
Studying cooperation and conflict between authors with history flow
visualizations]. In Proceedings of the SIGCHI conference on Human
factors in computing systems (CHI '04). ACM, New York, NY, USA,
575-582.
* '''288 cites''' Bryant, Susan, Andrea Forte and Amy Bruckman.
(2005). [http://www.andreaforte.net/BryantForteBruckBecomingWikipedian.pdf
Becoming Wikipedian: transformation of participation in a
collaborative online encyclopedia]. Proceedings of GROUP International
Conference on Supporting Group Work, Sanibel Island, FL, pp. 1-10.
** recent counterpoint: Katherine Panciera, Aaron Halfaker, and Loren
Terveen. 2009. [http://www.grouplens.org/system/files/Group09WikipediansPanciera.pdf
Wikipedians are born, not made: a study of power editors on
Wikipedia]. In Proceedings of the ACM 2009 international conference on
Supporting group work (GROUP '09). ACM, New York, NY, USA, 51-60.
* '''193 cites''' Voß, J.
[http://hapticity.net/pdf/nime2006_180-works_cited/MeasuringWikipedia2005.pdf
Measuring Wikipedia]. Proceedings of 10th International Conference of
the International Society for Scientometrics and Informetrics,
(Stockholm, Sweden), 2005.
* '''187 cites''' Lih, A.
[http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.117.9104&rep=rep1&…
Wikipedia as Participatory journalism: reliable sources? metrics for
evaluating collaborative media as a news resource]. Proceedings of
Fifth International Symposium on Online Journalism, April 16-17, 2004,
(Austin, TX), 2004.
* '''145 cites''' Fernanda B. Viegas, Martin Wattenberg, Jesse Kriss,
Frank van Ham, [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.84.6907&rep=rep1&t…
Talk Before You Type: Coordination in Wikipedia] hicss, pp.78a, 40th
Annual Hawaii International Conference on System Sciences (HICSS'07),
2007
* '''140 cites''' Kittur, A.; Chi, E. H. ; Pendleton, B. A. ; Suh, B.
; Mytkowicz, T. Power of the few vs. wisdom of the crowd: Wikipedia
and the rise of the bourgeoisie. Alt.CHI at CHI 2007; 2007 April 28 -
May 3; San Jose, CA.
* '''138 cites''' Aniket Kittur, Bongwon Suh, Bryan A. Pendleton, and
Ed H. Chi. 2007. He says, she says: conflict and coordination in
Wikipedia. In Proceedings of the SIGCHI conference on Human factors in
computing systems (CHI '07). ACM, New York, NY, USA, 453-462.
* '''101 cites''' Reid Priedhorsky, Jilin Chen, Shyong (Tony) K. Lam,
Katherine Panciera, Loren Terveen, and John Riedl. 2007.
[http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.123.7456&rep=rep1&…
Creating, destroying, and restoring value in wikipedia]. In
Proceedings of the 2007 international ACM conference on Supporting
group work (GROUP '07). ACM, New York, NY, USA, 259-268.
== Other Significant Papers ==
* '''Best Paper Award''' Ivan Beschastnikh, Travis Kriplean and David
W. McDonald [http://www.aaai.org/Papers/ICWSM/2008/ICWSM08-011.pdf
Wikipedian Self-Governance in Action: Motivating the Policy
Lens]. International Conference on Weblogs and Social Media, 2008.
* '''Honorable Mention''' Travis Kriplean, Ivan Beschastnikh and David
W. McDonald [http://dub.washington.edu/djangosite/media/papers/tmpZ77p1r.pdf
Articulations of WikiWork: Uncovering Valued Work in Wikipedia through
Barnstars] Computer Supported Cooperative Work, 2008.
== Relevant Scholarly Books ==
Reagle's book, Lih's book, Sunstein, Benkler
On Wed, Mar 16, 2011 at 7:07 PM, Chitu Okoli <Chitu.Okoli(a)concordia.ca> wrote:
> Hi Jack,
>
> Actually, the reason we're talking about top-tier is based on the same
> reason we talk about peer-reviewed versus non-peer-reviewed. No one can
> argue that non-peer-reviewed work (such as working papers) often has
> completely novel ideas. The problem is that someone has to wade through tens
> of thousands of works of hugely varying quality to find a few pearls. The
> peer-review process does this wading; while it might miss a few novel items,
> it would probably get most of the high-quality ones. Similarly, there are at
> least 2,000 Wikipedia studies. Since we can't go through all of them, we
> hope that most of the high-quality novel ideas do appear in publication
> outlets that are universally recognized to be of higher quality than
> average.
>
> Thanks,
> Chitu
>
>
> -------- Message original --------
> Sujet: Re: [Wiki-research-l] Wikipedia literature review - include or
> exclude conference articles (was Request to verify articles for Wikipedia
> literature review)
> De : Jack Park <jackpark(a)gmail.com>
> Pour : Research into Wikimedia content and communities
> <wiki-research-l(a)lists.wikimedia.org>
> Date : 15/03/2011 5:26 PM
>
> When you consider a "top tier conference", how do you know you are not
> excluding contributions that might be not just novel but also truly
> important?
>
> It seems that page rank plays the role of beauty contest in the sense
> that top-ranked pages are those already in the view of others. I have
> seen comments that this filters against novelty, possibly crucial
> novelty.
>
> Jack
>
> On Tue, Mar 15, 2011 at 11:56 AM, Chitu Okoli <Chitu.Okoli(a)concordia.ca>
> wrote:
>
> We considered including top-tier conferences, but the question is, what is a
> "top conference"? In trying to answer this, we looked at a couple of
> sources:
> * Top Tier and 2nd tier conferences from
> http://webdocs.cs.ualberta.ca/~zaiane/htmldocs/ConfRanking.html
> * A-ranked conferences in Information and Computing Sciences from
> http://lamp.infosys.deakin.edu.au/era/?pageÏorsel10
> * We also considered including all WikiSym articles on Wikipedia
>
>
> _______________________________________________
> Wiki-research-l mailing list
> Wiki-research-l(a)lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wiki-research-l
>
>
--
:: Andrea Forte
:: Assistant Professor
:: College of Information Science and Technology, Drexel University
:: http://www.andreaforte.net
Somehow I lost this thread - this is great, Finn. I agree that a
shared bibliographic resource need not be restricted to conferences,
journals, etc., although specific meta-reviews might be.
The main obstacle for this problem of reviewing the WP literature seems
to be agreeing on a common method for assembling our disparate efforts
into something bigger. In another thread I echoed Reid's ideas about
using a wiki to accomplish this; a MediaWiki instance would be ideal.
Andrea
On Fri, Mar 18, 2011 at 10:37 AM, Finn Aarup Nielsen <fn(a)imm.dtu.dk> wrote:
>
>
>
>> 1. Create a public Mediawiki instance.
>> 2. Decide on a relatively standardized format of reviewing each paper
>> (metadata formats, an infobox, how to write reviews of each, etc.)
>> 3. Upload your existing Zotero database into this new wiki (I would be
>> happy to write a script to do this).
>> 4. Proceed with paper readings, with the goal that every single paper is
>> looked at by human eyes.
>> 5. Use this content to produce one or more review articles.
>
> There has been some talk of a wiki for papers - also on this list as far
> as I remember. There is Bibdex (http://www.bibdex.com/), AcaWiki
> (http://acawiki.org) and I have the "Brede Wiki"
> (http://neuro.imm.dtu.dk/wiki/). AcaWiki uses Semantic MediaWiki
> (AFAIK) and I use MediaWiki templates. You can see an example here:
>
> http://neuro.imm.dtu.dk/wiki/Putting_Wikipedia_to_the_test:_a_case_study
>
> There is an infobox with citation information and sections on "related
> studies" and "critique".
>
> It is a question, though, whether such generally targeted wikis are
> appropriate for composing a collaborative paper.
>
>
> I have also begun a small Wikipedia review that I uploaded to our server
> yesterday:
>
> http://www2.imm.dtu.dk/pubdb/views/edoc_download.php/6012/pdf/imm6012.pdf
>
> I think I will never be able to do an exhaustive review of all papers, but
> my idea was to give an overview of as many aspects as possible. I think
> that some research published outside journals and conferences is
> interesting, e.g., surveys and some of the statistics performed by Erik
> Zachte. I don't think that Pew's survey has been peer-reviewed, so "just"
> including journal and conference papers is in my opinion not quite
> enough to give a complete picture.
>
>
> /Finn
>
> ___________________________________________________________________
>
> Finn Aarup Nielsen, DTU Informatics, Denmark
> Lundbeck Foundation Center for Integrated Molecular Brain Imaging
> http://www.imm.dtu.dk/~fn/ http://nru.dk/staff/fnielsen/
> ___________________________________________________________________
>
>
> _______________________________________________
> Wiki-research-l mailing list
> Wiki-research-l(a)lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wiki-research-l
>
--
:: Andrea Forte
:: Assistant Professor
:: College of Information Science and Technology, Drexel University
:: http://www.andreaforte.net
Hi everyone,
We are a research group conducting a systematic literature review on Wikipedia-related peer-reviewed academic studies published in the English language. (Although there are many excellent studies in other languages, we unfortunately do not have the resources to systematically review these at any kind of acceptable scholarly level. Also, our study is about Wikipedia only, not about other Wikimedia Foundation projects. However, we do include studies about other language Wikipedias, as long as the studies are published in English.) We have completed a search using many major databases of scholarly research. In a separate thread, we will also talk about research questions related to our review.
As of the end of November 2010, when we stopped searching, we had identified over 2,100 peer-reviewed studies that have "wikipedia", "wikipedian", or "wikipedians" in their title, abstract or keywords. As this number of studies is far too large for conducting a review synthesis, we have decided to focus only on peer-reviewed journal publications and doctoral theses; we identified 625 such studies. In addition, we identified around 1,500 peer-reviewed conference articles; we will discuss these in a separate thread.
In addition to the scholarly databases that we searched, we have very carefully compared the lists of studies from the following Wikimedia pages to verify what we may have missed:
* http://en.wikipedia.org/wiki/Wikipedia:Academic_studies_of_Wikipedia
* http://meta.wikimedia.org/wiki/Wiki_Research_Bibliography
* http://en.wikipedia.org/wiki/Academic_studies_about_Wikipedia
* http://en.wikipedia.org/wiki/Wikipedia:Wikipedia_in_research
* http://meta.wikimedia.org/wiki/Research
From these pages, we identified an additional 13 journal articles and 3 doctoral theses that we had not previously identified. These were articles published after November 2010, articles in journals indexed in very few scholarly databases (including a few European journals), and doctoral theses from outside North America. After adding these, we have identified a total of 638 publications, of which 610 are journal articles and 28 are doctoral theses. (However, as we begin to read these, we will remove some from our lists if we find that they are really not about Wikipedia.)
We have now updated the following page with the peer-reviewed journal articles and doctoral theses we have identified: http://en.wikipedia.org/wiki/Wikipedia:Academic_studies_of_Wikipedia. Please note that we have only updated the sections on peer-reviewed journal articles and on theses; we have not updated other sections with newly identified studies, except for correcting some misclassified items.
To help us in identifying all eligible studies, we would really appreciate it if you could look at the sections on peer-reviewed journal articles and theses in http://en.wikipedia.org/wiki/Wikipedia:Academic_studies_of_Wikipedia, and send us any citations (by yourself or others) that you know are missing. In particular, please inform us of:
* Doctoral theses conducted outside North America
* Peer-reviewed articles in journals not well indexed by North American databases
* Peer-reviewed journal articles and doctoral theses published or accepted and forthcoming after November 2010.
Thanks for your help.
Chitu Okoli, Concordia University, Montreal, Canada
(http://chitu.okoli.org/professional/open-content/wikipedia-and-open-content…)
Arto Lanamäki, University of Agder, Kristiansand, Norway
Mohamad Mehdi, Concordia University, Montreal, Canada
Mostafa Mesgari, Concordia University, Montreal, Canada