Thank you for your questions, Jan.
Is this about questions on Wikipedia articles that ask for an
estimate of good, neutral, or bad assertions (or sentiment
generally) about a subject?
After the Signpost ran a blurb last month on research successfully
predicting company stock price changes using pageviews (confirming
similar work from 2013), I tried to find anyone using the textual
substance of edits to do the same thing. I found this:
http://community.wolfram.com/groups/-/m/t/882612
It finds small but consistently positive correlations between the
sentiment of companies' article edit summaries, as classified by the
text sentiment model that ships with Wolfram Mathematica, and their
daily stock price changes. The significance is low, in part because
using the sentiment of edit summaries is a very naive approach. So I
wonder if anyone has tried to train a sentiment analysis model to
address the task directly with full diffs.
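The pipeline in that Wolfram Community post can be sketched roughly as follows. This is a minimal illustration, not the post's actual code: the word lists, edit summaries, and price changes are all hypothetical, and a trivial lexicon scorer stands in for Mathematica's sentiment classifier.

```python
from statistics import mean

# Hypothetical stand-in lexicons for a real sentiment classifier.
POSITIVE = {"expand", "award", "growth", "improve", "profit"}
NEGATIVE = {"vandalism", "lawsuit", "scandal", "loss", "remove"}

def summary_sentiment(summary):
    """Score one edit summary in [-1, 1] by counting lexicon hits."""
    score = sum((w in POSITIVE) - (w in NEGATIVE)
                for w in summary.lower().split())
    return max(-1, min(1, score))

def daily_sentiment(summaries_by_day):
    """Average the sentiment of all edit summaries made each day."""
    return [mean(summary_sentiment(s) for s in day) if day else 0.0
            for day in summaries_by_day]

def pearson(xs, ys):
    """Pearson correlation between two equal-length series."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Hypothetical data: edit summaries per trading day, and the same-day
# percentage change in the company's stock price.
summaries = [["expand history section", "award coverage added"],
             ["revert vandalism"],
             ["lawsuit section added", "remove puffery"],
             ["growth figures updated"]]
price_changes = [0.8, -0.2, -1.1, 0.5]

r = pearson(daily_sentiment(summaries), price_changes)
```

A full-diff version would feed the added/removed text, rather than the summary line, into the classifier; the correlation step stays the same.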
Or are you more interested in the subject of lobbying,
company-directed edits, and the like?
I'm more interested in identifying organized advocacy, and I suspect
such models would help with that, too, especially if brand and
product articles are included along with companies.
2016-12-01 4:12 GMT+01:00 James Salsman <jsalsman(a)gmail.com>:
Who, if anyone, is examining crowdsource survey
questions such as, "Look at the text added or
removed in this edit to [Company]'s Wikipedia
article. Was the editor saying [ ] good things, [ ]
bad things, or [ ] was neutral about [Company]'s
financial prospects?"?