David Friedland wrote:
> Anyhow, it seems that just using the HTML entities for the Unicode IPA extensions is not an acceptable solution, because it leaves IE users with lovely but useless rectangles where there ought to be IPA characters. There is a LaTeX extension called TIPA that provides the complete set of IPA characters and diacritics. If this were installed into the TeX math extensions, then a similar syntax could be used to generate images of the IPA from LaTeX input. I see the following possible solutions (in order of my preference):
> 1.) Auto-detect the browser and send IPA Unicode to browsers that support it and TIPA LaTeX images to those that don't. (Pros: attractive display of IPA for all users. Cons: lots of programming)
> 2.) Just send TIPA LaTeX images. (Pros: attractive display of IPA. Cons: uses images in text where, for some users, embedded IPA Unicode would look better)
> 3.) Store the IPA in a special format or in a special tag, auto-detect the browser, and send IPA Unicode to browsers that support it and SAMPA to the rest. (Pros: doesn't require inserting images or using TeX. Cons: SAMPA is ugly and hard to read)
> 4.) Render IPA into GIFs or PNGs and just insert them as images. (Pros: compatible with everything. Cons: time-consuming and difficult to change)
> 5.) Devise a Wikipedia-specific pronunciation scheme and just use that (blech!). (Pros: no coding required. Cons: YAAHPS (Yet Another Ad Hoc Pronunciation Scheme))
> 6.) Do nothing and continue to allow people to use ad-hoc pronunciation schemes (BLECH!!). (Pros: no action required. Cons: maintains the status-quo harms described above)
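The auto-detection in options 1 and 3 could be sketched like this, in illustrative Python rather than the wiki's PHP. The User-Agent substrings and the fallback choice are assumptions for illustration, not a tested browser table:

```python
# Hypothetical sketch of browser auto-detection for IPA output.
# Assumption: Gecko-based browsers can render IPA Unicode with the
# right fonts, while IE (which never says "Gecko" in its UA string)
# gets the image fallback.

def ipa_output_format(user_agent: str) -> str:
    """Return 'unicode' for browsers assumed to render IPA, else 'image'."""
    ua = user_agent.lower()
    if "gecko" in ua and "msie" not in ua:  # Mozilla-family browsers
        return "unicode"
    return "image"  # fall back to TIPA-generated images (e.g. for IE)
```

The same dispatch point could send SAMPA instead of images, as in option 3; only the fallback branch changes.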
I was just thinking about this problem, and the idea I came up with was to have an option in user preferences, something like "Display pronunciations in: o Unicode IPA o SAMPA", and then anything in an article that begins with "SAMPA " would be detected and displayed correctly (converting SAMPA to IPA if necessary), similarly to the idea with the magic ISBNs. I think this is probably the simplest solution to get working quickly, and it can easily be expanded to include additional ASCII IPA schemes (there are several) or auto-generated IPA images if someone implements that. Also, someone using IE who has the correct fonts installed would be able to see IPA.
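A sketch of that magic-prefix idea, in illustrative Python rather than the wiki's PHP. The table covers only a handful of X-SAMPA symbols, and the "SAMPA " prefix handling is a guess at how ISBN-style detection might look:

```python
# Illustrative sketch (not MediaWiki code): detect a "SAMPA " prefix in
# article text and convert it to IPA for users whose preference is Unicode.
# The mapping below covers only a fraction of X-SAMPA.

SAMPA_TO_IPA = {
    "tS": "tʃ", "dZ": "dʒ",  # affricates (two-character symbols)
    "T": "θ", "D": "ð", "S": "ʃ", "Z": "ʒ", "N": "ŋ",
    "I": "ɪ", "E": "ɛ", "U": "ʊ", "O": "ɔ", "A": "ɑ",
    "{": "æ", "@": "ə", "V": "ʌ", "r\\": "ɹ",
}

def sampa_to_ipa(sampa: str) -> str:
    """Greedy longest-match transliteration of a SAMPA string to IPA."""
    symbols = sorted(SAMPA_TO_IPA, key=len, reverse=True)
    out, i = [], 0
    while i < len(sampa):
        for sym in symbols:
            if sampa.startswith(sym, i):
                out.append(SAMPA_TO_IPA[sym])
                i += len(sym)
                break
        else:  # unmapped characters pass through unchanged
            out.append(sampa[i])
            i += 1
    return "".join(out)

def render_pronunciation(text: str, prefer_ipa: bool = True) -> str:
    """Handle the magic 'SAMPA ' prefix, like the magic ISBN links."""
    if text.startswith("SAMPA "):
        body = text[len("SAMPA "):]
        return sampa_to_ipa(body) if prefer_ipa else body
    return text
```

For example, `render_pronunciation("SAMPA D@")` yields "ðə" for a user who prefers Unicode IPA, while the raw SAMPA is passed through for everyone else.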
You malign ad hoc pronunciation schemes, but they do have *some* redeeming value. You can use a single ad-hoc system to represent different dialects more easily than you can use IPA for the same purpose, since users will read their own dialect into the pronunciation guide for the ad-hoc system. Still, I can't imagine making up an ad-hoc scheme for wikipedia; IPA is probably best for us.
I'm digging around the code to see how this could be done (and learning PHP in the process), but in the meantime, any comments?
(Anything more on this should probably go to wikitech-l.)
-- Adam Raizen
Adam Raizen wrote in part:
> You malign ad hoc pronunciation schemes, but they do have *some* redeeming value. You can use a single ad-hoc system to represent different dialects more easily than you can use IPA for the same purpose, since users will read their own dialect into the pronunciation guide for the ad-hoc system. Still, I can't imagine making up an ad-hoc scheme for wikipedia; IPA is probably best for us.
This is what morphophones are all about -- a scheme where all dialects read in their own sound. We don't have to invent our own ad-hoc scheme, since linguists have been studying morphophones, and quite often in the context of English, since 1962. (IPA, in contrast, does phonemes, or even lower-level structures.)
The "Webster's Dictionary" systems often seen in US dictionaries are roughly morphophonic, but not very sophisticated linguistically. (But Merriam-Webster's current system is phonemic, despite its old-fashioned, non-IPA, Webster's-ish look. Therefore the worst of them all, IMO.)
-- Toby
Toby Bartels wrote:
> This is what morphophones are all about -- a scheme where all dialects read in their own sound. We don't have to invent our own ad-hoc scheme, since linguists have been studying morphophones, and quite often in the context of English, since 1962. (IPA, in contrast, does phonemes, or even lower-level structures.)
> The "Webster's Dictionary" systems often seen in US dictionaries are roughly morphophonic, but not very sophisticated linguistically. (But Merriam-Webster's current system is phonemic, despite its old-fashioned, non-IPA, Webster's-ish look. Therefore the worst of them all, IMO.)
The American Heritage Dictionary gives the following explanation of their pronunciation scheme:
"For most words a single set of symbols can represent the pronunciation found in each regional variety of American English. You will supply those features of your own regional speech that are called forth by the pronunciation key in this Dictionary"
And it seems like a panacea for the pronunciation problem. But it's not, because some words simply have different underlying representations in different dialects, and the system only works for dialects that are roughly the same except for a few sound changes. It fails for wildly or even mildly divergent dialects. The American Heritage Dictionary system sweeps this problem under the rug by saying "The pronunciations are exclusively those of educated speech", which, to my mind, is a cop-out, and not a satisfactory solution for Wikipedia.
However, the question of dialect remains. Obviously, listing pronunciations in all possible dialects is not a reasonable solution, and indeed, neither are any of the systems used in American dictionaries. I recognize that the general task of specifying a pronunciation that speakers of any dialect will automatically render in their own dialect is not ideally handled by IPA. However, I do not know of any system advocated by linguists other than what phonologists call "broad transcription" using IPA. Can you point me to a book or paper, written by linguists, that specifies such a system for English, and advocates its use by and for general (non-academic) readers?
I have never encountered such a system, and I doubt that one exists. Barring the existence of a standard system, I don't really see that Wikipedia has any other options besides IPA for specifying pronunciations. Certainly I hope no one thinks Wikipedia should invent its own system. When it comes to standards, it should be our job to follow them and describe them, not create them.
So I advocate having IPA transcriptions for standard dialects (like Standard American English and Received Pronunciation), and having special pages describing how the various nonstandard dialects differ, both phonetically and phonemically, from the standards. I don't know much about morphophones, and I'm not sure it's a concept widely accepted by linguists.
PS: I have made a page on meta called [[Pronunciations]] and am going through the list archives and posting links to relevant discussions there. I'm not sure what the policy should be regarding where further discussion should occur, so if you want to respond, do so either here or on the list.
-- David [[User:Nohat]]
David Friedland wrote about morphophones:
> And it seems like a panacea for the pronunciation problem. But it's not, because some words simply have different underlying representations in different dialects, and the system only works for dialects that are roughly the same except for a few sound changes. It fails for wildly or even mildly divergent dialects. The American Heritage Dictionary system sweeps this problem under the rug by saying "The pronunciations are exclusively those of educated speech", which, to my mind, is a cop-out, and not a satisfactory solution for Wikipedia.
How do you mean that morphophones fail for mildly divergent dialects? What is your reason for thinking such a thing? Surely not that the American Heritage Dictionary didn't take much effort? I already said that these dictionaries have unsophisticated systems. The AHD states its limitations: educated American speech only. This allows them to cut corners on their implementation.
> However, I do not know of any system advocated by linguists other than what phonologists call "broad transcription" using IPA. Can you point me to a book or paper, written by linguists, that specifies such a system for English, and advocates its use by and for general (non-academic) readers?
I've cited the original 1962 paper introducing morphophones before; I'd have to look up the citation in the archives to repeat it, but you're already going through those so I'll refrain for now. But that was an academic paper; what I should do now is try to track down a more recent (1980s) book that I've read, written by linguists, which advocates its use outside academic settings.
> I have never encountered such a system, and I doubt that one exists. Barring the existence of a standard system, I don't really see that Wikipedia has any other options besides IPA for specifying pronunciations. Certainly I hope no one thinks Wikipedia should invent its own system. When it comes to standards, it should be our job to follow them and describe them, not create them.
I'm not sure to what extent there is a /single/ standard system. There certainly is at least one system in use by linguists. Probably with variations due to improved understanding over time, but whether these are coordinated by a single standards body I don't know. I will try to track this down too.
> PS: I have made a page on meta called [[Pronunciations]] and am going through the list archives and posting links to relevant discussions there. I'm not sure what the policy should be regarding where further discussion should occur, so if you want to respond, do so either here or on the list.
OK, I'll watch it.
-- Toby
Toby Bartels wrote:
> David Friedland wrote about morphophones:
> > And it seems like a panacea for the pronunciation problem. But it's not, because some words simply have different underlying representations in different dialects, and the system only works for dialects that are roughly the same except for a few sound changes. It fails for wildly or even mildly divergent dialects. The American Heritage Dictionary system sweeps this problem under the rug by saying "The pronunciations are exclusively those of educated speech", which, to my mind, is a cop-out, and not a satisfactory solution for Wikipedia.
> How do you mean that morphophones fail for mildly divergent dialects? What is your reason for thinking such a thing? Surely not that the American Heritage Dictionary didn't take much effort? I already said that these dictionaries have unsophisticated systems. The AHD states its limitations: educated American speech only. This allows them to cut corners on their implementation.
The reasoning behind morphophones is that even though people speak with different regional dialects, the way pronunciations are stored in each person's internal lexicon is the same, or can be represented symbolically in ways that are equivalent. The morphophonic system taps into this internal consistency between different dialects, and thus a single symbolic form can represent the different (but equivalent) pronunciations for speakers of different dialects.
For example, in such a system we would have a single symbol for the sound represented by the final "er" in the word "runner". A speaker of a non-rhotic Boston dialect, for example, would then always produce this sound as a plain schwa, and a speaker of, say, standard American would produce it as a rhoticized schwa. In the morphophonic system, a single written form would suffice to specify both of the resulting pronunciations.
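The "runner" example can be sketched as a tiny realization table. The symbol name and dialect labels here are invented for illustration; real morphophonic analyses are far richer:

```python
# Toy illustration of the morphophone idea: one stored form per word,
# plus per-dialect realization rules. "{R}" is an invented symbol for
# the final sound of "runner".

REALIZATIONS = {
    "general_american": {"{R}": "ɚ"},  # rhoticized schwa
    "boston":           {"{R}": "ə"},  # plain schwa (non-rhotic)
}

def realize(morphophonic: str, dialect: str) -> str:
    """Expand morphophone symbols into dialect-specific phones."""
    out = morphophonic
    for symbol, phone in REALIZATIONS[dialect].items():
        out = out.replace(symbol, phone)
    return out

# "runner" is stored once as "ɹʌn{R}" and pronounced per dialect:
# realize("ɹʌn{R}", "general_american") -> "ɹʌnɚ"
# realize("ɹʌn{R}", "boston")           -> "ɹʌnə"
```

The point of the scheme is that the lexicon stores only the single form; the dialect table is small and shared across all words.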
The problem with this system is that the fundamental assumption that internal representations of pronunciations are equivalent is false. This is what I meant by "mildly divergent" dialects. Besides regular sound change, dialects also differ in some cases in how pronunciations are represented in the lexicon. It is simply the case that some dialects have fundamentally different internal representations for the pronunciations of some words.
If you don't agree, then how would you specify a single pronunciation using a morphophonic system for the words "almond", "apricot", "aunt", "controversy", "clerk", "creek", "Florida", "garage", "greasy", "lieutenant", "mayonnaise", "mischievous", "pecan", and "tour", just for starters? I just don't see how a simple system could capture all these variants with a single representation. You're not advocating a system that has a symbol that corresponds to /u/ in AmE and /Ef/ in BrE, so that "lieutenant" is represented with one set of symbols, are you?
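To make the objection concrete, here is a sketch of what the lexicon has to look like for such words: one entry per dialect, because the stored forms themselves differ. The dialect labels are illustrative, and the transcriptions are broad and approximate:

```python
# Sketch of the objection: for words like these, the dialects differ in
# the underlying representation itself, so no single morphophonic form
# plus realization rules will do. Broad, approximate transcriptions.

DIVERGENT_LEXICON = {
    "lieutenant": {"general_american": "luˈtɛnənt", "rp": "lɛfˈtɛnənt"},
    "aunt":       {"general_american": "ænt",       "rp": "ɑːnt"},
}

def pronunciation(word: str, dialect: str) -> str:
    """Look up a dialect-specific pronunciation; no shared form exists."""
    return DIVERGENT_LEXICON[word][dialect]
```

Contrast this with the "runner" case, where one stored form plus a small realization table suffices; here the table would need a word-specific rule for every such word, which is just a per-dialect lexicon by another name.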
> > However, I do not know of any system advocated by linguists other than what phonologists call "broad transcription" using IPA. Can you point me to a book or paper, written by linguists, that specifies such a system for English, and advocates its use by and for general (non-academic) readers?
> I've cited the original 1962 paper introducing morphophones before; I'd have to look up the citation in the archives to repeat it, but you're already going through those so I'll refrain for now. But that was an academic paper; what I should do now is try to track down a more recent (1980s) book that I've read, written by linguists, which advocates its use outside academic settings.
OK. I'd be really interested to learn how the above problem is solved.
> > I have never encountered such a system, and I doubt that one exists. Barring the existence of a standard system, I don't really see that Wikipedia has any other options besides IPA for specifying pronunciations. Certainly I hope no one thinks Wikipedia should invent its own system. When it comes to standards, it should be our job to follow them and describe them, not create them.
> I'm not sure to what extent there is a /single/ standard system. There certainly is at least one system in use by linguists. Probably with variations due to improved understanding over time, but whether these are coordinated by a single standards body I don't know. I will try to track this down too.
- David [[User:Nohat]]
wikitech-l@lists.wikimedia.org