Hi Robert Stojnic and Gautham Shankar,
I wanted to let Gautham know that he has written a great proposal, and
thank you for the feedback as well.
I wanted to point out that, in my view, the main goal of this
multilingual wordnet isn't query expansion, but rather a means to ever
greater cross-language capabilities in search and content analytics. A
wordnet seme can be further disambiguated by a topic-map algorithm run
that considers all the contexts, as you suggest. But this is planned for
later, so the wordnet itself would be a milestone.
To further clarify: Gautham's integration will place XrossLanguage-seme
WordNet tokens during indexing for words it recognises, allowing the
ranking algorithm to use knowledge drawn from all the Wikipedia articles.
(For example, one part of the ranking would peek into a featured article
on "A" in German, rank it >> a featured article on "B" in Hungarian, and
use them as oracles to rank A >> B >> ... in English, where the picture
might now be X >> Y >> Z >> ... B >> A ...)
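To make the indexing step concrete, here is a minimal sketch (the lexicon, the seme IDs, and the analyzer function are all invented for illustration; the real integration would hook into Lucene's analysis chain):

```python
# Hypothetical sketch: inject language-neutral "seme" tokens at index
# time so that documents in different languages share searchable
# identifiers. The seme IDs and lexicon below are invented.

SEME_LEXICON = {
    # surface form -> cross-language seme ID (hypothetical values)
    "dog": "seme:02084071",
    "hund": "seme:02084071",   # German
    "kutya": "seme:02084071",  # Hungarian
}

def analyze(text: str) -> list[str]:
    """Tokenize and append a seme token after each recognised word."""
    tokens = []
    for word in text.lower().split():
        tokens.append(word)
        seme = SEME_LEXICON.get(word)
        if seme is not None:
            # In a real analyzer this extra token would share the
            # position of the surface word.
            tokens.append(seme)
    return tokens

print(analyze("the dog barked"))
# -> ['the', 'dog', 'seme:02084071', 'barked']
```

A query analyzed the same way in another language (e.g. German "hund") would then match through the shared seme token.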
I mention in passing that I have begun to develop a dataset for use with
Open Relevance, to systematically review and evaluate dramatic changes in
relevance due to changes in the search engine. I will post on this in due
course as it matures, since I am working on a number of smaller projects
I'd like to demo at Wikimania.
On Fri, Apr 6, 2012 at 6:01 PM, Gautham Shankar <
gautham.shankar(a)hiveusers.com> wrote:
Robert Stojnic <rainmansr <at> gmail.com> writes:
Hi Gautham,
I think mining wiktionary is an interesting project. However, about the
more practical Lucene part: at some point I tried using wordnet to
expand queries however I found that it introduces too many false
positives. The most challenging part I think it *context-based*
expansion. I.e. a simple synonym-based expansion is of no use because it
introduces too many meanings that the user didn't quite have in mind.
However, if we could somehow use the words in the query to find a
meaning from a set of possible meanings that could be really helpful.
You can look into existing lucene-search source to see how I used
wordnet. I think in the end I ended up using it only for very obvious
stuff (e.g. 11 = eleven, UK = United Kingdom, etc..).
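That "very obvious stuff only" approach could be sketched as a small whitelist lookup (illustrative only, not the actual lucene-search code; the `expand_term` helper and the table shape are invented, though the two entries mirror the examples above):

```python
# Sketch: restrict expansion to a whitelist of unambiguous
# equivalences instead of full WordNet synonym expansion, which
# pulls in unintended senses.

SAFE_EXPANSIONS = {
    "11": ["eleven"],
    "uk": ["united kingdom"],
}

def expand_term(term: str) -> list[str]:
    """Return the term plus any whitelisted equivalents."""
    return [term] + SAFE_EXPANSIONS.get(term.lower(), [])

print(expand_term("UK"))    # -> ['UK', 'united kingdom']
print(expand_term("bank"))  # -> ['bank']  (ambiguous terms left alone)
```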
Cheers, r.
On 06/04/12 01:58, Gautham Shankar wrote:
Hello,
Based on the feedback I received, I have updated my proposal page.
https://www.mediawiki.org/wiki/User:Gautham_shankar/Gsoc
There are about 20 hours until the deadline, and any final feedback would
be useful.
I have also submitted the proposal on the GSoC page.
Regards,
Gautham Shankar
_______________________________________________
Wikitech-l mailing list
Wikitech-l <at> lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l
Hi Robert,
Thank you for your feedback.
As you pointed out, query expansion using the wordnet data directly
reduces the quality of the search.
I found this research paper very interesting.
www.sftw.umac.mo/~fstzgg/dexa2005.pdf
They have built a TSN (Term Semantic Network) for the given query based
on the usage of words in the documents. The expansion words obtained from
the wordnet are then filtered based on the TSN data.
I did not add this detail to my proposal since I thought it deals more
with the creation of the wordnet. I would love to implement the TSN
concept once the wordnet is complete.
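A rough sketch of the TSN idea, with simple co-occurrence sets standing in for the paper's network construction (all names and data below are invented; this is an approximation of the concept, not the paper's actual algorithm):

```python
from collections import defaultdict

# Sketch: build a co-occurrence network from the documents, then keep a
# wordnet expansion candidate only if it shares context with the rest of
# the query. Approximates the TSN filtering concept; data is invented.

def build_cooccurrence(docs):
    """Map each word to the set of words it appears with in a document."""
    co = defaultdict(set)
    for doc in docs:
        words = set(doc.lower().split())
        for w in words:
            co[w] |= words - {w}
    return co

def filter_expansions(query_terms, candidates, co):
    """Keep candidates that co-occur with the rest of the query."""
    kept = list(query_terms)
    context = set(query_terms)
    for term in query_terms:
        for cand in candidates.get(term, []):
            if co[cand] & (context - {term}):
                kept.append(cand)
    return kept

docs = [
    "the river bank was muddy",
    "we walked along the shore of the river",
    "the lender raised its interest rates",
]
co = build_cooccurrence(docs)
# WordNet would offer both senses of "bank"; the network keeps only the
# sense whose neighbours overlap with "river".
print(filter_expansions(["river", "bank"], {"bank": ["shore", "lender"]}, co))
# -> ['river', 'bank', 'shore']
```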
Regards,
Gautham Shankar
--
Oren Bochman
Office tel. 061 4921492
Mobile +36 30 866 6706
skype id: orenbochman
e-mail: oren(a)romai-horizon.com
site: http://www.riverport.hu