On 2015-07-23, at 8:49 PM, Jon Robson wrote:
>> This sounds like a problem with process, not a problem with Vector.
>
> And this is the crux of the matter in my opinion and what I am asking.
> How do people think we should improve this process? We do a lot of
> lamenting and defending on this list but never seem to offer action
> items... any bold offers about how we reverse this anti-pattern?
We need the *process* to be more obvious, and the *principles* behind changes
to be agreed upon. I'll elaborate, but first, some context…
On 2015-07-24, at 1:40 AM, Ryan Lane wrote:
> What I'm saying is that there should be a process to make an interface
> change directed at readers, with stated test results, A/B tested, and
> adopted if testing meets the criteria of the test results. The editor
> community should have little to no say in the process, except to suggest
> experiments or question obviously incorrect test results.
>
> The basic idea is that through proper testing of features you should be able
> to know an experience is better for the readers without them having a direct
> voice.
>
> An example: Make search more discoverable. Add a feature or make an
> interface change to test this. A/B test it. See if the frequency of search
> usage increased. See if it adversely affected other metrics. If it helped
> search usage and didn't negatively affect other metrics, adopt the change.
>
> The issue is that there will be a vocal minority of people who absolutely
> hate this change, no matter what it is. These people should be ignored.
This is *exactly* the sort of issue that leads to conflict. Some parts emphasized:
> The editor community should have little to no say in the process

or

> a vocal minority

or, worst,

> These people should be ignored.
A/B testing is one thing, but our problems are *social* and *political*, and that's
precisely what I see above. This is not a productive approach, because it pits
stakeholders against one another. Wikipedia is not a *competition*; it's supposed to
be a *collaboration*. It's even worse when framed in the otherwise reasonable
context of A/B testing, because that framing conceals the step where one particular
subset of stakeholders decides which metrics (e.g. search usage) matter. While I do
disagree, I don't mean to argue specifically against Ryan Lane's position
here; I'm just using it as an example of the kind of position that exacerbates our
social problems. It doesn't matter in what ways he or I are right or wrong on the
approach if it's going to lead to another conflict.
If we ignore people, or worse, specifically disenfranchise them, we guarantee
conflict: the interested stakeholders will pursue their interests and thereby
become that "vocal minority". Instead, we need an obvious process, backed by
principles that nearly everyone can agree on, so that we don't run into snags like
one-sided priorities. Yes, we still need to figure out how to make sure that reader
interests are represented in those principles. If the shared process and shared
principles lead us to something that some people don't agree with, *then* there
might be a justification to tell that minority to stuff it in the name of progress.
I'll leave off there, because the next thing I intuitively want to go on to
involves my personal views, and those aren't relevant to this point (they can wait
for later). Instead, a question: what *principles* ought to underpin designs moving
forward from Vector? If we can't work through disagreements there, we're going to
see objections once an unbalanced set of principles is implemented in design
patterns.
Nihiltres