[Foundation-l] Fundraising Letter Feb 2012

Yaroslav M. Blanter putevod at mccme.ru
Fri Feb 10 11:35:24 UTC 2012


Florence, I think you gave a great description of the process, and I agree
that we should aim for the degree of transparency it achieves. Actually,
if I were in charge of setting up such a review panel, this is close to
how I would do it. However, I also have similar personal experience. I am
an academic researcher, and I have participated in such panels on many
occasions - as a panel member, as a referee, and, obviously, as an
applicant. I do not claim to fully understand all the details, which also
vary from case to case, but there are a couple of things I learned from my
participation, and it is probably worth listing them here.

1) If I am a panel member and I want to kill a proposal (for instance, for
personal reasons), it is fairly easy. I just need to find an issue the
experts did not comment on and present it as a very serious one. If
there is another panel member who wants to kill the same proposal, the
proposal is dead. You do not even need to conspire.

2) If I want to promote a proposal, it is very difficult, since everybody
assumes I am somehow personally involved. The most efficient way to
promote a borderline proposal is to kill the competitors (see point 1).

3) If I see that there are panel members with a personal agenda, trying to
kill good proposals, it is usually very difficult to stand up to them. The
majority of the panel does not care, and if one panel member is killing a
proposal and another is defending it, the proposal is most likely dead.

4) You mentioned that there are "other issues", like opening a new lab,
which come on top of the proposal quality and can change the ranking. My
experience is that these issues often tend to dominate, and they are the
easiest lever for panel members who want to change the ranking relative
to the expert evaluation.

Even such an open procedure can be subject to manipulation, and one has
to be extra careful here. I hope this helps.

Cheers
Yaroslav

On Thu, 09 Feb 2012 23:52:20 +0100, Florence Devouard <anthere9 at yahoo.com>
wrote:
> I wanted to share an experience with regards to a future FDC.
> 
> For two years, I was a member of the "comité de pilotage" (which I
> will translate here as "steering committee") of the ANR (the National
> Research Agency in France).
> 
> The ANR distributes about 1000 M€ (roughly €1 billion) every year to
> support research in France.
> 
> The ANR programmatic activity is divided into 6 clearly defined themes
> plus one non-thematic ("white") area. Some themes are further divided
> for more granularity.
> For example, I was on the steering committee of CONTINT, which is one
> of the four programs of the main theme "information and communication
> technologies". My program was about "production and sharing of content
> and knowledge (creation, edition, search, interfaces, use, trust,
> reality, social networks, future of the internet), associated services
> and robotics".
> 
> Every year, the steering committee of each group defines the strategic
> goals of the year and lists keywords to better refine the description
> of what is or is not covered.
> 
> Then a public call for projects is made. People have 2 months to
> present their projects. From memory, CONTINT received perhaps 200
> projects.
> 
> The projects are peer-reviewed by community members (just as research
> articles are reviewed by peers), and annotations/recommendations for
> or against support are provided by the peers. There is no
> administrative filter at this point.
> 
> Then a committee made up of peers reviews all the projects and their
> annotations/comments and ranks them in three groups: C, rejected; B,
> why not; A, proposed. Still no administrative filtering at this point.
> 
> The steering committee, about 20 people made up of community members
> (volunteers) and ANR staff, reviews the As and Bs. The steering
> committee is kindly asked to try to keep A projects on the A list and
> B projects on the B list. However, various considerations will make it
> so that some projects are pushed up and others pushed down. It may
> range from "this lab is great, they need funding to continue
> long-running research" to "damned, we did not fund any robotics
> project this year even though it is within our priorities; which would
> be the best one to push up?" or "if we push down this rather costly
> project, we could fund these three smaller ones". We may also make a
> recommendation to a project team to rework its budget if we think it
> was a little too costly compared to the expected impact.
> 
> At the end of the session, we have a brand new list of As followed by
> Bs. All projects are ranked. At this point the budget is only an
> approximation, so we usually know that all the As will be covered, but
> anywhere from zero to a few Bs may be.
> 
> The budget is known slightly later, and the exact list of funded
> projects is published.
> 
> How do we make sure that what we fund is the best choice?
> Not by administrative decision.
> By two rounds of independent peer review, which can estimate the
> quality of the proposed project and the chances of the organisations
> doing it well.
> And by a further round in which we know that all projects are
> interesting and feasible, but are selected according to strategic
> goals defined a year earlier.
> 
> There are also "special calls" if there is a budget to support a
> highly specific issue. Project leaders have to decide whether their
> project belongs to a "regular theme", the "white" area, or a "special
> call".
> 
> The idea behind this is also that they have to make the effort to
> articulate their needs clearly and to show what the outcome would be.
> 
> The staff do not really make decisions. The staff are there to make
> sure the whole process runs smoothly, to receive the proposals and
> make sure they fit the basic requirements, to recruit peers for the
> reviews (upon suggestions made... by the steering committee or other
> peers), to organise the meetings, to publish the results, and so on.
> Of course, some of them do influence the process because of their
> strong inside knowledge of all the actors involved. The staff is about
> 30 people overall.
> 
> How do we evaluate afterwards that we made the right choice and funded
> the right projects?
> First, because as with any research funding, there are deliverables;
> second, because once a year there is a sort of conference where all
> funded organizations participate and show their results. If an
> organization does not play the game or repeatedly fails to produce
> results, it inevitably falls into the C range at some point in the
> peer-review process.
> 
> I have presented a simplified process, but that is generally it. I am
> not saying that it is a perfect system; it is not. But from what I
> hear, the system works fairly well and is not manipulated as much as
> other funding systems may be ;)
> 
> Last, members of the steering committee may only serve a two-year
> mandate. No more. There is also a conflict-of-interest agreement to
> sign and respect. Thanks to the various steps in the process and to
> the good (heavy) work done by the staff, the workload of the
> volunteers is entirely acceptable.
> 
> Note that this is government money, but the government does not review
> each proposal. The government set up a process in which there is
> enough trust (through the peer-review system and through the themes
> and keywords methodology) to delegate the decision-making. The program
> is a 3-year program defined by the board of the organization. The
> majority of the board are high-level members of the government
> (Education, Budget, Research, Industry, etc.). This board does define
> the program and the allocation of the budget between the various
> themes. But the board does not make the decisions about which projects
> are accepted or not.
> 
> Florence