The 2nd International Conference on Applications of Intelligent Systems,
*APPIS 2019* <http://appis.webhosting.rug.nl/2019/> will be held on
*7-12 January 2019* in *Las Palmas de Gran Canaria*, Spain.
APPIS 2019 is organized by the University of Groningen and the
University of Las Palmas de Gran Canaria, and includes a *Winter School
on Machine Learning (WISMAL 2019)*.
APPIS 2019 welcomes contributions related (but not limited) to the
following topics:
Images, videos and time-series analysis
Machine learning and representation learning
Statistical and structural pattern recognition
Data visualization and dimensionality reduction
Robotics
Intelligent systems in health and medicine
Cyber computing and security
Bio-informatics
Data mining
Cognitive discovery
Algorithms for embedded and real-time systems
Semantic technologies
Intelligent buildings
Intelligent sensors and sensor networks
Augmented reality
Adaptive systems
Fuzzy systems
Human-machine interaction
Natural language processing
Situation awareness systems
Recommender systems
=============================================================
Conference proceedings will be published in the *ACM International
Conference Proceedings Series*.
The registration fee includes conference materials, tutorials, the
conference dinner, the welcome reception, another social event (a guided
tour of the old town of Las Palmas or a visit to the only coffee
plantation in the EU) and daily coffee breaks. The prices are:
Early registration: *250 Euro*
Late registration: *325 Euro*
The cost for each additional paper is *100 Euro*.
=============================================================
*Winter School on Machine Learning - WISMAL 2019*
APPIS includes a short winter school consisting of several tutorials
that present different machine learning techniques. Please find more
information on the WISMAL 2019 page
<http://appis.webhosting.rug.nl/2019/tutorials-appis-2019/>.
Participation in the winter school is free of charge for registered
APPIS 2019 participants. The number of participants in the winter
school is limited to 100, and early registration is encouraged.
=============================================================
Paper submission – *Oct 26, 2018*
Paper acceptance notification – *Dec 1, 2018*
Early registration – *Dec 8, 2018*
Camera ready – *Dec 15, 2018*
Conference – *7-9 Jan 2019*
Winter school on Machine Learning – *10-12 Jan 2019*
Please find more information on the conference website
<http://appis.webhosting.rug.nl/2019/>.
The conference co-chairs:
/Nicolai Petkov/
/Nicola Strisciuglio/
/Carlos Travieso-Gonzalez/
I don't have enough knowledge about neural nets to evaluate the email
below, but I'm forwarding it in case it's of interest to others on two
relevant lists.
Pine
( https://meta.wikimedia.org/wiki/User:Pine )
---------- Forwarded message ---------
From: John Erling Blad <jeblad(a)gmail.com>
Date: Wed, Sep 26, 2018 at 6:23 PM
Subject: [Wikimedia-l] Captioning Wikidata items?
To: Wikimedia Mailing List <wikimedia-l(a)lists.wikimedia.org>
Just a weird idea.
It is very interesting how neural nets can caption images. It is done by
building a state model of the image, which is fed into a kind of neural
net (an RNN), and that net (a black box) transforms the state model into
running text. In some cases the neural net is steered; that is called
attention control, and it creates relationships between parts of the
image.
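As a rough illustration of that setup (an assumed sketch, not the exact
system described here): the encoder's feature vectors play the role of the
state model, a soft-attention weighting picks out the relevant parts at
each step, and a GRU emits the next word. All class names, dimensions and
wiring below are assumptions.

```python
# Minimal sketch of an attention-based caption decoder (assumed setup).
# Requires PyTorch.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionCaptioner(nn.Module):
    def __init__(self, feat_dim, embed_dim, hidden_dim, vocab_size):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)         # word embeddings
        self.att_feat = nn.Linear(feat_dim, hidden_dim)          # project features for attention
        self.att_hid = nn.Linear(hidden_dim, hidden_dim)         # project decoder state
        self.att_score = nn.Linear(hidden_dim, 1)                # scalar attention score
        self.rnn = nn.GRUCell(embed_dim + feat_dim, hidden_dim)  # one decoding step
        self.out = nn.Linear(hidden_dim, vocab_size)             # next-word logits

    def forward(self, feats, tokens):
        # feats: (batch, n_regions, feat_dim) -- the "state model" of the image/item
        # tokens: (batch, seq_len) -- reference caption tokens (teacher forcing)
        batch = feats.size(0)
        h = feats.new_zeros(batch, self.rnn.hidden_size)
        logits = []
        for t in range(tokens.size(1)):
            # soft attention: how much does each region matter right now?
            scores = self.att_score(torch.tanh(
                self.att_feat(feats) + self.att_hid(h).unsqueeze(1)))  # (batch, n_regions, 1)
            alpha = F.softmax(scores, dim=1)
            context = (alpha * feats).sum(dim=1)                       # weighted feature vector
            h = self.rnn(torch.cat([self.embed(tokens[:, t]), context], dim=1), h)
            logits.append(self.out(h))
        return torch.stack(logits, dim=1)  # (batch, seq_len, vocab_size)
```

Training would minimise cross-entropy against reference captions; at
inference time the reference tokens are replaced by the model's own
previous predictions (greedy or beam search).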
Swap out the image for an item, and a virtually identical setup can
generate captions for items. The caption for an item is what's called the
description in Wikidata. It is also the first sentence of the lead-in in
Wikipedia articles. It is possible to steer the attention, that is, to
tell the network which items should be used, and thus the later sentences
will be meaningful.
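A rough sketch of what the swap could look like (again an assumed
illustration, not something from the mail): each (property, value)
statement of an item becomes one annotation vector, and that set of
vectors replaces the image regions fed to a decoder like the one above.
All IDs and sizes below are placeholders.

```python
# Hypothetical example: build a "state model" for a Wikidata item by
# embedding each (property, value) statement as one annotation vector.
import torch
import torch.nn as nn

class ItemEncoder(nn.Module):
    def __init__(self, n_entities, n_properties, feat_dim):
        super().__init__()
        self.ent = nn.Embedding(n_entities, feat_dim // 2)
        self.prop = nn.Embedding(n_properties, feat_dim // 2)

    def forward(self, prop_ids, value_ids):
        # prop_ids, value_ids: (batch, n_statements) integer indices;
        # one annotation vector per statement, analogous to one image region
        return torch.cat([self.prop(prop_ids), self.ent(value_ids)], dim=-1)

# e.g. Douglas Adams (Q42): instance of (P31) human (Q5), occupation (P106) writer (Q36180)
encoder = ItemEncoder(n_entities=50000, n_properties=2000, feat_dim=256)
props = torch.tensor([[31, 106]])   # placeholder integer indices, not real ID lookups
values = torch.tensor([[5, 36180]])
feats = encoder(props, values)      # (1, 2, 256) -> feed to the caption decoder above
```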
What that means is that we could create meaningful stub entries for the
article placeholder, that is, the "AboutTopic" special page. We can't
automate this for very small projects, but somewhere between small and
mid-sized languages it will start to make sense.
To make this work we need some very special knowledge, which we probably
don't have, like how to turn an item into a state model using the highly
specialized rdf2vec algorithm (hello Copenhagen) and how to verify the
stateful language model (hello Helsinki and Tromsø).
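For context, the core idea of rdf2vec is to generate random walks over the
RDF graph and train word2vec on the walk sequences, so each entity and
property ends up with a dense vector. The toy sketch below (illustrative
triples, walk length and parameters, not a faithful reimplementation of
the published algorithm) uses gensim's skip-gram word2vec.

```python
# Toy sketch of the rdf2vec idea: random walks over triples + word2vec on the walks.
# Requires gensim >= 4. Graph and parameters are illustrative only.
import random
from collections import defaultdict
from gensim.models import Word2Vec

triples = [
    ("Q42", "P31", "Q5"),           # Douglas Adams -> instance of -> human
    ("Q42", "P106", "Q36180"),      # Douglas Adams -> occupation -> writer
    ("Q36180", "P279", "Q482980"),  # writer -> subclass of -> author
]

# adjacency list: subject -> [(predicate, object), ...]
graph = defaultdict(list)
for s, p, o in triples:
    graph[s].append((p, o))

def random_walks(graph, n_walks=10, depth=4):
    walks = []
    for start in list(graph):
        for _ in range(n_walks):
            walk, node = [start], start
            for _ in range(depth):
                neighbours = graph.get(node)
                if not neighbours:
                    break
                p, o = random.choice(neighbours)
                walk += [p, o]
                node = o
            walks.append(walk)
    return walks

walks = random_walks(graph)
# treat each walk as a "sentence"; skip-gram gives every entity/property a vector
model = Word2Vec(sentences=walks, vector_size=64, window=5, min_count=1, sg=1, epochs=50)
vec_q42 = model.wv["Q42"]  # item vector that could serve as (part of) a state model
```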
I wonder if the only real problems are what the community wants and what
the acceptable error limit is.
John Erling Blad
/jeblad/
_______________________________________________
Wikimedia-l mailing list, guidelines at:
https://meta.wikimedia.org/wiki/Mailing_lists/Guidelines and
https://meta.wikimedia.org/wiki/Wikimedia-l
New messages to: Wikimedia-l(a)lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l,
<mailto:wikimedia-l-request@lists.wikimedia.org?subject=unsubscribe>