Nihiltres <nihiltres@...> writes:
>> An example: Make search more discoverable. Add a feature or make an
>> interface change to test this. A/B test it. See if the frequency of search
>> usage increased. See if it adversely affected other metrics. If it helped
>> search usage and didn't negatively affect other metrics, adopt the change.
>> The issue is that there will be a vocal minority of people who absolutely
>> hate this change, no matter what it is. These people should be ignored.
> This is *exactly* the sort of issue that leads to conflict. Some parts
> emphasized: "The editor community should have little to no say in the
> process", or "a vocal minority", or, worst of all, "These people should be
> ignored."
> A/B testing is one thing, but our problems are *social*, are *political*,
> and that's precisely what I see above. This is not a productive approach,
> because it pits stakeholders against one another. Wikipedia is not a
> *competition*, it's supposed to be a *collaboration*. It's even worse when
> it's framed in the otherwise reasonable context of A/B testing, because
> that conceals the part of it that has one particular subset of
> stakeholders decide which metrics (e.g. search) are important.
> While I do disagree, I don't mean to argue specifically against Ryan
> Lane's position here; I'm just using it as an example of positions that
> exacerbate the social problems. It doesn't matter in what ways either of
> us is right or wrong on the approach if it's going to lead to another
> conflict.
>> The idea is to remove the social or political problems from the process.
>> Define the goals and feature sets (this is the part of the process that
>> requires community interaction), implement and test the changes, review
>> the results. The data is the voice of the community. It's what proves
>> whether an idea is good or bad.
>> As I said before, though, there's always some vocal minority that will
>> hate change, even when it's presented with data proving it to be good.
>> These people should be ignored at this stage of the process. They can
>> continue to provide input to future changes, but the data should be
>> authoritative.
> If we ignore people, or worse, specifically disenfranchise them, that's
> sure to lead to conflict when the interested stakeholders pursue their
> interests and thus become that "vocal minority". Rather, we need an
> obvious process, backed by principles that most everyone can agree on, so
> that we don't hit snags like one-sided priorities. Yes, we do need to
> figure out how to make sure that reader interests are represented in
> those principles. If the shared process and shared principles lead us to
> something that some people don't agree with, *then* there might be a
> justification to tell that minority to stuff it in the name of progress.
> I'll leave off there, because the next thing I intuitively want to go on
> to involves my personal views, and those aren't relevant to this point
> (they can wait for later). Instead, a question: what *principles* ought
> to underpin designs moving forward from Vector? If we can't work through
> disagreements there, we're going to see objections once an unbalanced set
> of principles is implemented in design patterns.
There's not really a lack of principles; there's a lack of reasonable
process. What's wrong with change guided by data science? We know the
scientific process works. The current process is design by a committee
composed mostly of people untrained in the field, with no data proving
anyone's case. Even when there is data, it's often ignored in favor of the
consensus of the editor community.
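To make the "guided by data" step concrete, here's a minimal sketch of how
the "did search usage increase?" question from the A/B example could be
answered. The numbers and metric are purely hypothetical, and this uses a
plain two-proportion z-test rather than whatever analysis the analytics
team would actually choose:

```python
import math

def two_proportion_z(successes_a, n_a, successes_b, n_b):
    """Two-sided two-proportion z-test; returns (z, p_value)."""
    p_a = successes_a / n_a
    p_b = successes_b / n_b
    # Pooled proportion under the null hypothesis of no difference
    pooled = (successes_a + successes_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value via the standard normal CDF (math.erf)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical counts: sessions that used search, out of total sessions
z, p = two_proportion_z(successes_a=4200, n_a=50000,   # control (A)
                        successes_b=4650, n_b=50000)   # new interface (B)
print(f"z = {z:.2f}, p = {p:.4g}")
if p < 0.05 and z > 0:
    print("Search usage increased; check guardrail metrics before adopting.")
```

The same test would then be run against the other metrics to confirm the
change didn't adversely affect them, per the example above.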
- Ryan