Hello,
I'm not sure whether *Htmldoc* is installed on the shared server I'm using,
as I don't have root access. I installed Extension:Pdf Export using the
source code from the link below:
http://www.mediawiki.org/w/index.php?title=Extension:Pdf_Export/Source_Code…
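Since I believe the extension shells out to htmldoc, I thought I might be
able to test for the binary myself with a small throwaway script, assuming
PHP on this host is allowed to run shell commands. This is only a sketch,
and the file name is my own:

  <?php
  // check-htmldoc.php: crude test for the htmldoc binary without root access
  $path = trim( shell_exec( 'which htmldoc 2>/dev/null' ) );
  if ( $path === '' ) {
      echo "htmldoc not found in PATH\n";
  } else {
      echo "htmldoc found at $path\n";
      echo trim( shell_exec( 'htmldoc --version' ) ) . "\n";
  }

I don't know whether that is a reliable check on shared hosting, though.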
When I go to the Special:Version page on my wiki, it shows *PdfExport*
(http://www.mediawiki.org/wiki/Extension:Pdf_Export) under the Installed
extensions section.
But when I enter http://www.google.com on the Special:PdfPrint page and
click Make PDF, Adobe Reader says the file has been damaged. I'm not sure
whether I'm following the right procedure to generate the PDF file, or what
is wrong with the generated file.
Also, I couldn't find the Print as PDF link under the toolbox.
Please let me know how to fix these issues.
Any help will be appreciated.
Thanks
~Vineeth
Hi,
Thank you for reading my post.
I am wondering whether there exists a "grammar" for the "Wikicode"/"Wikitext"
language (or an *exhaustive* (and formal) set of rules describing how a
"Wikitext" document is constructed).
I've looked for such a grammar/set of rules on the Web but couldn't find
one...
I need to automatically extract the first paragraph of a Wiki article...
I managed to do it from the HTML version of an article (I noticed the first
paragraph is the first <p> element child of the <div> element whose id is
"bodyContent"...), but I need to work with the "Wikitext" itself...
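For reference, the HTML-side extraction I described looks roughly like the
sketch below, written here with PHP's DOM extension; the article URL is only
an example:

  <?php
  // Sketch: grab the first <p> child of <div id="bodyContent"> from a
  // rendered article.
  $html = file_get_contents( 'http://en.wikipedia.org/wiki/Example' );
  $doc = new DOMDocument();
  @$doc->loadHTML( $html );  // suppress warnings from imperfect HTML
  $xpath = new DOMXPath( $doc );
  $nodes = $xpath->query( '//div[@id="bodyContent"]/p[1]' );
  if ( $nodes->length > 0 ) {
      echo trim( $nodes->item( 0 )->textContent ) . "\n";
  }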
- Is a grammar available somewhere?
- Do you have any idea how to extract the first paragraph of a Wiki article?
- Any advice?
- Does a Java "Wikitext" "parser" exist which would do it?
Thank you for your help.
All the best,
--
Lmhelp
I'm going to be attempting to make MediaWiki use Harvard's authentication
system. The Harvard system handles login and then passes a
verified, user-unique token back to the caller. It's the caller's
responsibility to look up the token in whatever their own authorization
system is, and say, "Oh, yeah, that's this user, he has these rights"
and proceed accordingly.
The idea here is that a user comes to the MediaWiki top URL, gets passed to
Harvard auth, and then back to the wiki, which does the right thing and logs
him in.
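Very roughly, I picture the glue looking something like the sketch below,
built on the stock AuthPlugin class. It is only a sketch: lookupHarvardToken()
and the header name are placeholders for whatever the real token hand-off and
lookup turn out to be.

  <?php
  require_once( 'includes/AuthPlugin.php' );

  class HarvardAuthPlugin extends AuthPlugin {
      function userExists( $username ) {
          // Accounts are driven entirely by the external system.
          return true;
      }
      function authenticate( $username, $password ) {
          // The token has already been verified by Harvard auth.
          // lookupHarvardToken() is hypothetical: it would consult our own
          // authorization system and return the matching wiki username.
          $token = isset( $_SERVER['HTTP_X_HARVARD_TOKEN'] )
              ? $_SERVER['HTTP_X_HARVARD_TOKEN'] : '';
          return lookupHarvardToken( $token ) === $username;
      }
      function autoCreate() {
          return true;  // create local accounts on first login
      }
      function strict() {
          return true;  // never fall back to local password checks
      }
  }

  // In LocalSettings.php:
  // $wgAuth = new HarvardAuthPlugin();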
I realize there's going to need to be some PHP work here, and that's
fine - but I'd rather try to hack an existing auth extension that's
semi-close to what I'm trying to do than start from scratch.
Unfortunately I'm sort of lost in the thick woods of all the auth
extensions out there. Would anyone care to make recommendations for what
I should use as a starting point?
Hello,
I get an error when I use $wgUseInstantCommons or $wgForeignFileRepos[].
Content: Warning: curl_setopt_array() [function.curl-setopt-array]:
CURLOPT_FOLLOWLOCATION cannot be activated when in safe_mode or an
open_basedir is set in
/home/____/domains/____/public_html/w/includes/HttpFunctions.php on
line 751
Line 751-752 of includes/HttpFunctions.php:
751. $curlHandle = curl_init( $this->url );
752. curl_setopt_array( $curlHandle, $this->curlOptions );
I have safe_mode off
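Since the warning mentions both safe_mode and open_basedir, perhaps a one-off
test like this sketch would show what PHP itself reports for the two settings:

  <?php
  // Sketch: open_basedir alone is enough to trigger that warning,
  // so check both values as PHP sees them.
  var_dump( ini_get( 'safe_mode' ) );
  var_dump( ini_get( 'open_basedir' ) );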
The error only appears when I search for File:PD-icon-info.png (for example),
or when I edit or view that page.
MediaWiki: 1.16.0beta3 (r67278)
PHP: 5.2.13 (cgi-fcgi)
MySQL: 5.1.47
Server: Apache
SSH access to server
Extensions:
CheckUser, Flagged Revisions, Renameuser, HTMLets, ParserFunctions,
Google Analytics Integration, Liquid Threads, SecurePoll,
UsabilityInitiative, Vector
Best regards,
Marcin Łukasz Kiejzik
Greetings,
I'm new to this list. We keep the user and internal documentation for
our software product in Mediawiki. It works very well, thanks. What we
would like to do is take a snapshot of the user doc tree rooted at a
specific Mediawiki page. We would like the snapshot to be a complete
self-contained set of HTML pages we could ship with the product --
possibly with a Mediawiki reading tool if necessary. The user could
point his browser at the snapshot and get the same pages we see. I
realize that there could be many difficulties creating such a snapshot,
but I'm hoping the tool would have constraining options (e.g., a list of URL
prefixes: only pages matching one of these prefixes are included in the
snapshot, and others are noted as warnings by the tool).
Is there an existing tool or script that does this?
Thanks.
-Sam
I have a wiki that uses the AWC Forum extension which, in turn,
creates new tables in the wiki database.
I'd like to create a family of such wikis, but the database changes in
particular make me suspect that family creation might be more complex than
the examples at http://www.mediawiki.org/wiki/Manual:Wiki_family.
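For context, the sort of setup from that page I had in mind is the
table-prefix approach, roughly like the sketch below; the hostnames and
prefixes are invented:

  <?php
  // LocalSettings.php sketch: one code base, one database,
  // per-wiki table prefix.
  switch ( $_SERVER['SERVER_NAME'] ) {
      case 'wiki1.example.org':
          $wgSitename = 'Wiki One';
          $wgDBprefix = 'wiki1_';
          break;
      case 'wiki2.example.org':
          $wgSitename = 'Wiki Two';
          $wgDBprefix = 'wiki2_';
          break;
      default:
          die( 'Unknown wiki.' );
  }

My worry is whether the extra tables the extension creates would end up with
the right prefix for each wiki in a setup like that.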
Is anyone else using AWC Forum in a wiki family, perhaps? And, if so,
which family creation mechanism did you use, please?
Thanks for any clues you might have.
Kevin
I want to make an article link, much like Wikipedia makes to its own
articles, but instead of using [[whatever]] as I would for an article on the
same site, I want it to link over to the real en.wikipedia.org site, to the
article there. Putting the HTML for such a link inside <nowiki> tags just
escapes the HTML so it is displayed as text (i.e. it's nowiki and no-HTML
combined).
Example. I have the term Postfix in a local wiki page. If I wrote a
page about Postfix on my local wiki, I could just use [[Postfix]] and
be done. But I don't want to have to load all the templates from
Wikipedia, or maintain that article here. I just want to link over to
Wikipedia and let people read it there.
--
sHiFt HaPpEnS!
Greetings,
So I've recently been hitting occasional deadlocks on the Recent changes
table when running multiple simultaneous import jobs into a MW 1.15.1
installation. The import jobs are going through a custom extension that
executes Article::doEdit() to import the pages one by one -- and
unfortunately, the extension runs once for every page imported.
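For reference, what the extension does per page boils down to something like
this simplified sketch ($titleText, $wikitext and the summary are
placeholders):

  <?php
  $article = new Article( Title::newFromText( $titleText ) );
  $article->doEdit( $wikitext, 'Bulk import' );
  // The questions below are about passing EDIT_DEFER_UPDATES and/or
  // EDIT_SUPPRESS_RC as the third ($flags) argument to doEdit().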
1) Would adding EDIT_DEFER_UPDATES to the doEdit arguments make any
difference? (I presume not, since index.php is getting executed for every
page.)
2) If I set EDIT_SUPPRESS_RC for the edits, is there any way I can run the
updates as a batch after the import job is finished?
3) Any other ideas for reducing load on the system? (Other than doing a
'real' XML bulk import, that is...)
4) The actual deadlock seems to be because of conflicting gap locks on the
RC table. Has anybody seen this before (MySQL 5.1.49), and any ideas for
ways around it? It's a fairly beefy DB cluster and the load just isn't
that high.
[2010-08-04 13:27:28 +1000] FATAL - RuntimeError: API error: code
'internal_api_error_DBQueryError', info 'Exception Caught: A database
error has occurred
Query: INSERT INTO `recentchanges`
(rc_timestamp,rc_cur_time,rc_namespace,rc_title,rc_type,rc_minor,rc_cur_id,rc_user,rc_user_text,rc_comment,rc_this_oldid,rc_last_oldid,rc_bot,rc_moved_to_ns,rc_moved_to_title,rc_ip,rc_patrolled,rc_new,rc_old_len,rc_new_len,rc_deleted,rc_logid,rc_log_type,rc_log_action,rc_params,rc_id)
VALUES
('20100804032731','20100804032731','100','Vancouver_4/Background/History/Cinema_Tv','3','0','0','1','Atlasmw','content
was: \'The film industry has a starring role in Vancouvers Hollywood
North economy, and the city ranks third in North American film production
(behind the obvio \' (and the only contributor was
\'[[Special:Contributions/Atlasmw|Atlasmw]]\')','0','0','0','0','','10.61.53.254','1','0',NULL,NULL,'0','298386','delete','delete','',NULL)
Function: RecentChange::save
Error: 1213 Deadlock found when trying to get lock; try restarting
transaction (dbmaster.mediawiki.load)
And from MySQL:
*** (1) WAITING FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS space id 0 page no 26771 n bits 448 index `rc_cur_id` of
table `mediawiki`.`recentchanges` trx id 0 1432732 lock_mode X locks gap
before rec insert intention waiting
Record lock, heap no 2 PHYSICAL RECORD: n_fields 2; compact format; info
bits 0 0: len 4; hex 000245e7; asc E ;; 1: len 4; hex 8003c873; asc
s;;
Cheers,
-jani