Are Wikipedia images blocked from being downloaded? I tried transferring one of mine with "Save image as", but when I try to upload it back into my wiki, I get an error: "Upload warning: The file is corrupt or has an incorrect extension. Please check the file and upload again." I get an "unknown file type" error if I try to open the image in Photo Editor.
James Birkholz
admin, Posen-L mailing list and website
http://www.Posen-L.com
On Dec 30, 2004, at 11:27 PM, =James Birkholz= wrote:
Are WikiPedia images blocked from being downloaded?
No.
I tried transferring one of mine with "Save image as", but when I try to upload it back into my wiki, I get an error: "Upload warning: The file is corrupt or has an incorrect extension. Please check the file and upload again." I get an "unknown file type" error if I try to open the image in Photo Editor.
Did you try downloading it again? Did you look at the file to confirm whether it is corrupt, correct, missing, zero-length, etc.?
-- brion vibber (brion @ pobox.com)
I was trying to download from the article pages, but had success when I downloaded from the image's page, as per the previous answer.
At 06:18 PM 1/2/05, you wrote:
On Dec 30, 2004, at 11:27 PM, =James Birkholz= wrote:
I tried transferring one of mine with "Save image as", but when I try to upload it back into my wiki, I get an error: "Upload warning: The file is corrupt or has an incorrect extension. Please check the file and upload again." I get an "unknown file type" error if I try to open the image in Photo Editor.
Did you try downloading it again? Did you look at the file to confirm whether it is corrupt, correct, missing, zero-length, etc.?
James Birkholz
admin, Posen-L mailing list and website
http://www.Posen-L.com
On Jan 2, 2005, at 6:35 PM, =James Birkholz= wrote:
I was trying to download from the article pages, but had success when I downloaded from the image's page, as per the previous answer.
You probably downloaded the image page's HTML instead of the image, by using "Save Linked File As..." or equivalent instead of "Save Image As..." or equivalent.
-- brion vibber (brion @ pobox.com)
I've been really pleased with my progress as a newbie, especially with incorporating my database into displayed tables using the extensions that this group told me about.
Here are some additional needs/questions that I would appreciate pointers to from those who've gone before...
1) I'll want to merge the registration for the wiki portion with the existing non-wiki site. I'm thinking that I need to have all users go through the wiki registration and then ask them if they want to record the additional info required for the non-wiki portions. So I'll need to modify the non-wiki portion to recognize users who have logged in to the wiki.
2) What is the best way to modify the page templates? For example, if I wanted to add a "Donations" link like Wikipedia's? Or links to the non-wiki sections? Or display the time in Timbuctoo in the bottom left corner? Or have a second image, randomly selected, that morphs out of the main logo image? Obviously I'm looking for general pointers, not specific solutions here. (That may come later :] )
3) I've been transferring articles by hand from Wikipedia to my GenWiki, but am wondering if there's a better way. I've looked at Wikipedia's data-dump download instructions, but frankly they don't make it seem too easy, especially since I'm using a remote web server that I don't admin. Is this the case? Besides, even if I could magically transfer the data, I'd need to weed out most of it. Also, the storage space would be a problem until I weeded. I doubt that it exists, but what would best serve my needs would be some spider that, given a start page, crawls my GenWiki looking for empty articles and then goes to Wikipedia (using approved behavior of one page per second, etc.) and copies the edit-box material and pastes it back into my GenWiki. It should be configurable to limit the levels of linkage so that it only drills down that many links. Given enough time, I could create such a spider in Perl (but I'm not set up right now, nor am I proficient in Perl; I played with it a couple of years ago), but I wonder if someone has a better, more immediate solution to point me to.
TIA,
James Birkholz
admin, Posen-L mailing list and website
http://www.Posen-L.com
On Sun, 02 Jan 2005 22:09:32 -0600, =James Birkholz= j.birchwood@verizon.net wrote:
- I'll want to merge the registration for the wiki portion with the
existing non-wiki site.
You're not the first to consider this, but I can't remember off-hand anyone reporting that (and how) they achieved it successfully. You could try searching the archives (Google site:mail.wikimedia.org +your query), but I guess the easiest approach is to hack up includes/SpecialUserlogin.php to access the extra bits of database and whatever else the other parts of the site use. I think there may be an authentication plugin system "on the way" which could be useful in this regard, but don't quote me on that.
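For what it's worth, here's the shape such a hack might take: a minimal sketch, assuming the existing site keeps MD5'd passwords in its own user table. The table, column, and function names here are all made up for illustration - substitute whatever your site actually uses - and you'd wire the call into the password check in includes/SpecialUserlogin.php yourself:

  <?php
  # Hypothetical helper for a hacked-up includes/SpecialUserlogin.php:
  # returns true if the username/password pair matches the non-wiki
  # site's user table. All names below are illustrative assumptions.
  function authenticateAgainstMainSite( $username, $password ) {
      $db = mysql_connect( 'localhost', 'siteuser', 'secret' );
      mysql_select_db( 'maindb', $db );
      $res = mysql_query( sprintf(
          "SELECT 1 FROM site_users
           WHERE username = '%s' AND password_md5 = '%s'",
          mysql_real_escape_string( $username ),
          mysql_real_escape_string( md5( $password ) )
      ), $db );
      return mysql_num_rows( $res ) > 0;
  }
  ?>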
- What is the best way to modify the page templates? For example, if I
wanted to add a "Donations" link like Wikipedia's? Or links to the non-wiki sections? Or display the time in Timbuctoo in the bottom left corner? Or have a second image, randomly selected, that morphs out of the main logo image? Obviously I'm looking for general pointers, not specific solutions here. (That may come later :] )
Well, there are several solutions:
- the PHPTAL templates (*.pt) are theoretically there to make things like this easy, but in practice everyone who tries seems to just break their wiki :(
- as of 1.4, you can add things to the sidebar by editing the $wgNavigationLinks variable in LocalSettings.php (you may have to manually copy the default section from includes/DefaultSettings.php, I'm not sure; see the example after this list)
- to be really adventurous, you can write a Skin.php sub-class of your own; in fact, 1.4 includes a version of MonoBook written entirely in PHP, which may in fact be *easier* to hack up than the PHPTAL one.
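For the second option, the sidebar list in LocalSettings.php looks something like this - copied from memory of 1.4's includes/DefaultSettings.php, so check your own copy for the exact default. The 'text' and 'href' values name MediaWiki: messages rather than literal strings, and the 'sitesupport' entry is the one behind Wikipedia's "Donations" link:

  # MediaWiki 1.4, in LocalSettings.php (format from memory --
  # verify against includes/DefaultSettings.php):
  $wgNavigationLinks = array(
      array( 'text' => 'mainpage',      'href' => 'mainpage' ),
      array( 'text' => 'currentevents', 'href' => 'currentevents-url' ),
      array( 'text' => 'recentchanges', 'href' => 'recentchanges-url' ),
      array( 'text' => 'randompage',    'href' => 'randompage-url' ),
      array( 'text' => 'help',          'href' => 'helppage' ),
      # the entry behind Wikipedia's "Donations" link:
      array( 'text' => 'sitesupport',   'href' => 'sitesupport-url' ),
  );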
- I've been transferring articles by hand from Wikipedia to my GenWiki,
but am wondering if there's a better way. I've looked at Wikipedia's data-dump download instructions, but frankly they don't make it seem too easy, especially since I'm using a remote web server that I don't admin. Is this the case? [...] I doubt that it exists, but what would best serve my needs would be some spider that, given a start page, crawls my GenWiki looking for empty articles and then goes to Wikipedia (using approved behavior of one page per second, etc.) and copies the edit-box material and pastes it back into my GenWiki.
Well, I can point you to several things that might make this job a little easier:
- Special:Export can be used to grab an XMLified version of a page, optionally including history; unfortunately, the equivalent Special:Import has never quite been finished.
- You can get the raw wikitext of a page by using "pagename?action=raw" / "?title=pagename&action=raw" (see the example after this list)
- Best of all, there is a bot framework, written in Python, specifically for interacting with Wikipedia (and, by extension, any MediaWiki site), so hopefully you needn't write a whole spider from scratch: see http://pywikipediabot.sourceforge.net/
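To make the action=raw trick concrete, a few lines of PHP (or anything else that can fetch a URL) will pull a page's wikitext down. The page title here is just an example, and file_get_contents() needs allow_url_fopen enabled to fetch URLs:

  <?php
  # Fetch the raw wikitext of one page via action=raw.
  $title = 'Southern_Prussia';   // example title only
  $url = 'http://en.wikipedia.org/w/index.php?title='
       . urlencode( $title ) . '&action=raw';
  echo file_get_contents( $url );
  ?>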
HTH, and good luck!
Rowan Collins wrote:
On Sun, 02 Jan 2005 22:09:32 -0600, =James Birkholz= j.birchwood@verizon.net wrote:
- I'll want to merge the registration for the wiki portion with the
existing non-wiki site.
You're not the first to consider this, but I can't remember off-hand anyone reporting that (and how) they achieved it successfully. You could try searching the archives (Google site:mail.wikimedia.org +your query), but I guess the easiest approach is to hack up includes/SpecialUserlogin.php to access the extra bits of database and whatever else the other parts of the site use. I think there may be an authentication plugin system "on the way" which could be useful in this regard, but don't quote me on that.
Unified login will be extremely hard, as it requires changes to lots of functions.
In the future we should have a way for scripts to 'talk' to each other and synchronize login information. I'm working on something for my own separate project, which won't be done for a while.
- What is the best way to modify the page templates? For example, if I
wanted to add a "Donations" link like Wikipedia's? Or links to the non-wiki sections? Or display the time in Timbuctoo in the bottom left corner? Or have a second image, randomly selected, that morphs out of the main logo image? Obviously I'm looking for general pointers, not specific solutions here. (That may come later :] )
Well, there are several solutions:
- the PHPTAL templates (*.pt) are theoretically there to make things
like this easy, but in practice everyone who tries seems to just break their wiki :(
- as of 1.4, you can add things to the sidebar by editing the
$wgNavigationLinks variable in LocalSettings.php (you may have to manually copy the default section from includes/DefaultSettings.php, I'm not sure)
- to be really adventurous, you can write a Skin.php sub-class of your
own; in fact, 1.4 includes a version of MonoBook written entirely in PHP, which may in fact be *easier* to hack up than the PHPTAL one.
We should really make it easier to modify templates without breaking everything.
- I've been transferring articles by hand from Wikipedia to my GenWiki,
but am wondering if there's a better way. I've looked at Wikipedia's data-dump download instructions, but frankly they don't make it seem too easy, especially since I'm using a remote web server that I don't admin. Is this the case? [...] I doubt that it exists, but what would best serve my needs would be some spider that, given a start page, crawls my GenWiki looking for empty articles and then goes to Wikipedia (using approved behavior of one page per second, etc.) and copies the edit-box material and pastes it back into my GenWiki.
Well, I can point you to several things that might make this job a little easier:
- Special:Export can be used to grab an XMLified version of a page,
optionally including history; unfortunately, the equivalent Special:Import has never quite been finished.
- You can get the raw wikitext of a page by using
"pagename?action=raw" / "?title=pagename&action=raw"
- Best of all, there is a bot framework, written in Python,
specifically for interacting with Wikipedia (and, by extension, any MediaWiki site), so hopefully you needn't write a whole spider from scratch: see http://pywikipediabot.sourceforge.net/
Wikipedia specifically recommends NOT spidering the site. I would grab the dump, install it into a local MySQL, then just grab the ones you want. Or you can try loading the entire database into your webserver via phpMyAdmin. But that may take a while... or overload your server... or just eat up all your disk quota.
HTH, and good luck!
On Wed, Jan 05, 2005 at 03:05:01PM -0500, David Wendt wrote:
# Unified login will be extremely hard, as it requires changes to lots
# of functions.
#
# In the future we should have a way for scripts to 'talk' to each other
# and synchronize login information. I'm working on something for my own
# separate project, which won't be done for a while.
Isn't authentication to a directory server (via LDAP) on its way? Couldn't that solve this problem?
-Sam
Sam Rowe wrote:
[*snip*]
Isn't authentication to a directory server (via LDAP) on its way? Couldn't that solve this problem?
-Sam
Probably, although you would need to set up a directory server on a different port, meaning it wouldn't work for anything other than a colocation.
Interesting idea though.
On Wed, Jan 05, 2005 at 03:53:56PM -0500, David Wendt wrote:
# Probably, although you would need to set up a directory server on a
# different port, meaning it wouldn't work for anything other than a
# colocation.
#
# Interesting idea though.
A directory server is another service, so naturally it would require a different port. I'm sorry, but I don't understand what your sentence means after the word 'port.' Could you please explain?
Thanks, Sam
Sam Rowe wrote:
[*snip*]
A directory server is another service, so naturally it would require a different port. I'm sorry, but I don't understand what your sentence means after the word 'port.' Could you please explain?
Thanks, Sam
You would need a host that provides LDAP services, or your own server, as you can't install an LDAP server yourself on a shared host and bind to the required port.
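For anyone curious what the check itself involves once you do have an LDAP server to talk to, PHP's ldap extension keeps it short. The hostname, port, and DN layout below are illustrative assumptions, not anything MediaWiki ships with:

  <?php
  # Minimal "does this username/password bind?" check via LDAP.
  # Server name and DN layout are made-up examples.
  $conn = ldap_connect( 'ldap.example.com', 389 );
  $dn   = 'uid=jbirkholz,ou=people,dc=example,dc=com';
  if ( $conn && @ldap_bind( $conn, $dn, 'the-password' ) ) {
      echo "authenticated\n";
  } else {
      echo "bind failed\n";
  }
  ?>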
- I've been transferring articles by hand from Wikipedia to my GenWiki,
but am wondering if there's a better way. I've looked at Wikipedia's data-dump download instructions, but frankly they don't make it seem too easy, especially since I'm using a remote web server that I don't admin.
[*snip*]
I've looked around, checked the FAQ, and Googled, and this is the closest I've seen to a question I have. Is there an easy way (i.e. programmatically) to import a lot of information INTO a wiki from another source? I promise to document the answer in the metawiki. :)
I have a wiki set up internally for company documentation, and I'd like to grab the data from the current documentation site and plug it into the wiki. I've had a hard time understanding how Special:Import might work for this. I'm fine with spidering my existing site with wget or something similar, but how to get those files into the wiki easily has escaped me.
I'm running 1.3.7 on an internal network on a Linux box.
TIA.
-Jot
On Wed, 5 Jan 2005 13:50:38 -0700, Jot Powers misc@bofh.com wrote:
I've looked around, checked the FAQ, and Googled, and this is the closest I've seen to a question I have. Is there an easy way (i.e. programmatically) to import a lot of information INTO a wiki from another source? I promise to document the answer in the metawiki. :)
Well, there are two ways, basically:
1) using pywikipediabot (or a bot you write from scratch) to "pretend to be a user" - i.e. access "...&action=edit" pages, fill in the contents you want to be there, and submit the form back.
2) using some *server-side* scripts to push the data into the database directly. Basically (in 1.3.x), you need to create each page as an entry in the 'cur' table, but for things to behave properly you need to observe the other things the "normal" page editing code does - off the top of my head, that includes moving links *to* a new page from 'brokenlinks' to 'links' (so they don't appear as redlinks any more) and checking the new text for links *from* the page, to correct the various links tables (e.g. for "What links here" listings) - that's one of the jobs of Parser::preSaveTransform(), I think. A rough sketch of this approach follows below.
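To make option 2 concrete, here is a minimal sketch of the direct-insert approach. The 'cur' column names are from memory of the 1.3.x schema, so verify them against maintenance/tables.sql; your schema may also insist on values for other NOT NULL columns (cur_touched, cur_inverse_timestamp, etc.). Note that it deliberately skips all the link-table bookkeeping described above:

  <?php
  # Rough sketch: push one page straight into the 1.3.x 'cur' table.
  # Column names are from memory -- check maintenance/tables.sql.
  # Skips link-table maintenance, so "What links here" and redlink
  # status will be stale until those tables are rebuilt.
  $db = mysql_connect( 'localhost', 'wikiuser', 'secret' );
  mysql_select_db( 'wikidb', $db );

  $title = 'My_New_Page';   // underscores, no namespace prefix
  $text  = 'Imported page text goes here.';

  mysql_query( sprintf(
      "INSERT INTO cur (cur_namespace, cur_title, cur_text, cur_comment,
                        cur_user, cur_user_text, cur_timestamp)
       VALUES (0, '%s', '%s', 'Bulk import', 0, 'ImportScript', '%s')",
      mysql_real_escape_string( $title ),
      mysql_real_escape_string( $text ),
      gmdate( 'YmdHis' )    // MediaWiki's timestamp format
  ), $db );
  ?>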
In short, for the moment at least, running a bot is probably the only sane way to do it. (Be sure to run it under a user with the 'bot' user_rights flag, so the flood can be hidden from RecentChanges - see http://meta.wikimedia.org/wiki/Setting_user_rights_in_MediaWiki)
On Wikipedia, this:
#redirect [[foo]]
gives a nice bent arrow graphic and a hyperlinked "foo"
On my wiki, it displays the numbered list item "1. redirect foo"
What am I doing wrong?
On Wed, 05 Jan 2005 22:00:26 -0600, =James Birkholz= j.birchwood@verizon.net wrote:
On Wikipedia, this:
#redirect [[foo]]
gives a nice bent arrow graphic and a hyperlinked "foo"
On my wiki, it displays the numbered list item "1. redirect foo"
What am I doing wrong?
Probably using a version other than 1.4 beta.
I believe it also does that if the target page does not exist. Check on that...
#redirect [[foo]]
gives a nice bent arrow graphic and a hyperlinked "foo"
On my wiki, it displays the numbered list item "1. redirect foo"
At 11:10 PM 1/5/05, you wrote:
I believe it also does that if the target page does not exist. Check on that...
#redirect [[foo]]
gives a nice bent arrow graphic and a hyperlinked "foo"
On my wiki, it displays the numbered list item "1. redirect foo"
No, I can click on the "foo" link and it goes to the article to which it is supposed to go. I didn't check the action before, but just did, and it works OK - just not as "purty" as Wikipedia's. Does Wikipedia run off *beta* versions? The arrow graphic seems new; Dori must be right.
it has to be uppercase.
#REDIRECT [[foo]]
otherwise, it's a one-item numbered list.
On Wed, 05 Jan 2005 22:00:26 -0600, =James Birkholz= j.birchwood@verizon.net wrote:
On Wikipedia, this:
#redirect [[foo]]
gives a nice bent arrow graphic and a hyperlinked "foo"
On my wiki, it displays the numbered list item "1. redirect foo"
What am I doing wrong?
On Thu, 6 Jan 2005 07:36:56 -0500, Jamie Bliss astronouth7303@gmail.com wrote:
it has to be uppercase.
#REDIRECT [[foo]]
otherwise, it's a one-item numbered list.
No, that's an urban myth; the check for "#redirect" is case insensitive. Dori has it right, this is new to v1.4 (currently in late beta).
While we're on the subject, though, there is something that annoys me about that feature: because it's been placed as special-case code in the article handling, rather than just added to the "parser", redirects still show up as numbered lists when you do a preview. This rather defeats what would otherwise be a big advantage of the feature: the ability to easily spot whether you've typed the redirect in correctly. If I get round to it, I'll write an alternative implementation where it's in the parser, where it should have been all along (as soon as a "don't follow redirect" condition is detected, the page should just be rendered as it is, with "#redirect" being rendered as the arrow).
Nope, lowercase also works on Wikipedia, and uppercase doesn't fix my wiki.
See: http://en.wikipedia.org/w/index.php?title=Greater_Poland_Voivodship&oldi... and http://www.birchy.com/GenWiki/index.php?title=Southern_Prussia&redirect=...
I'm wondering if there isn't a config setting, or if it's just a version difference.
At 07:36 AM 1/6/05, you wrote:
it has to be uppercase.
#REDIRECT [[foo]]
otherwise, it's a one-item numbered list.
On Wed, 05 Jan 2005 22:00:26 -0600, =James Birkholz= j.birchwood@verizon.net wrote:
On Wikipedia, this:
#redirect [[foo]]
gives a nice bent arrow graphic and a hyperlinked "foo"
On my wiki, it displays the numbered list item "1. redirect foo"
What am I doing wrong?
--
http://endeavour.zapto.org/astro73/
Thank you to JosephM for inviting me to Gmail!
James Birkholz
admin, Posen-L mailing list and website
http://www.Posen-L.com
On Thu, 06 Jan 2005 17:33:58 -0600, =James Birkholz= j.birchwood@verizon.net wrote:
Nope, lowercase also works on Wikipedia, and uppercase doesn't fix my wiki.
http://www.birchy.com/GenWiki/index.php?title=Special:Version "... MediaWiki: 1.3.9 ..."
http://en.wikipedia.org/w/index.php?title=Special:Version "... MediaWiki: 1.4beta3 ..."
You can't see the feature, because you're running a version that predates it. Simple as that.
Redirects were around in 1.3. See http://endeavour.zapto.org/furc/Wiki/index.php?title=Compiling_without_Visua...
On Thu, 6 Jan 2005 23:37:18 +0000, Rowan Collins rowan.collins@gmail.com wrote:
On Thu, 06 Jan 2005 17:33:58 -0600, =James Birkholz= j.birchwood@verizon.net wrote:
Nope, lowercase also works on Wikipedia, and uppercase doesn't fix my wiki.
http://www.birchy.com/GenWiki/index.php?title=Special:Version "... MediaWiki: 1.3.9 ..."
http://en.wikipedia.org/w/index.php?title=Special:Version "... MediaWiki: 1.4beta3 ..."
You can't see the feature, because you're running a version that predates it. Simple as that.
-- Rowan Collins BSc [IMSoP]