That's very cool! To get an idea, how big is your dataset?
On Tue Sep 30 2014 at 12:06:56 PM Daniel Kinzler <daniel.kinzler@wikimedia.de> wrote:
What makes it so slow?
Note that you can use wbeditentity to perform complex edits with a single API call. It's not as straightforward to use as, say, wbcreateclaim, but much more powerful and efficient.
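For instance, here is a minimal sketch (Python with the requests library, purely illustrative and not ProteinBoxBot code; the session, CSRF token, item ID and property ID are placeholders) of bundling several statements into one wbeditentity call instead of issuing one claim-creation call per statement:

    import json
    import requests

    API = "https://www.wikidata.org/w/api.php"

    session = requests.Session()       # would carry login cookies in a real bot run
    csrf_token = "CSRF_TOKEN_HERE"     # placeholder; fetch via action=query&meta=tokens&type=csrf

    # Several statements expressed as a single entity "data" document.
    data = {
        "claims": [
            {
                "type": "statement",
                "rank": "normal",
                "mainsnak": {
                    "snaktype": "value",
                    "property": "P351",    # e.g. Entrez Gene ID (placeholder property)
                    "datavalue": {"value": "1017", "type": "string"},
                },
            },
            # ... further claims for the gene's other identifiers ...
        ]
    }

    response = session.post(API, data={
        "action": "wbeditentity",
        "id": "Q42",                       # placeholder item ID
        "data": json.dumps(data),
        "token": csrf_token,
        "bot": 1,
        "format": "json",
    })
    print(response.json())

One POST like this writes the whole bundle of identifiers for an item at once, which is where the efficiency gain over per-claim calls comes from.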
-- daniel
On 30.09.2014 at 19:00, Andra Waagmeester wrote:
Hi All,
I have joined the development team of the ProteinBoxBot
(https://www.wikidata.org/wiki/User:ProteinBoxBot). Our goal is to make Wikidata the canonical resource for referencing and translating identifiers for genes and proteins from different species.
Currently, adding all genes from the human genome and their related identifiers to Wikidata takes more than a month to complete. With the objective of adding other species, as well as having frequent updates for each of the genomes, it would be convenient if we could increase this throughput.
Would it be accepted if we increased the throughput by running multiple instances of ProteinBoxBot in parallel? If so, what would be an accepted number of parallel instances of a bot to run? We can run multiple instances from different geographical locations if necessary.
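(Purely as an illustration of the kind of parallelism meant here, and not actual ProteinBoxBot code: a hypothetical Python sketch that shards the gene list across a few worker processes, each acting as an independent bot instance. Names such as edit_gene and GENES are made up; a real run would also need to respect maxlag and whatever edit rate the bot policy allows.)

    from concurrent.futures import ProcessPoolExecutor

    def edit_gene(gene_record):
        """Placeholder for one gene's edit, e.g. one wbeditentity call for that gene's item."""
        ...

    GENES = []          # the full list of gene records (placeholder)
    N_WORKERS = 4       # number of parallel bot instances

    if __name__ == "__main__":
        with ProcessPoolExecutor(max_workers=N_WORKERS) as pool:
            # map() hands each worker its share of the gene list
            list(pool.map(edit_gene, GENES))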
Kind regards,
Andra
--
Daniel Kinzler
Senior Software Developer
Wikimedia Deutschland
Gesellschaft zur Förderung Freien Wissens e.V.