
short additional remarks



Hi -- 

Sorry to take up so much bandwidth; here are a few more short ideas...


>
> - political issues:
>    -what to do with laboratories not performing well?
>        -problem with European situation: several countries with
> various research agenda in different fields
>       -competitive approach in cooperative European framework?

Speaking as an outsider, and someone who is still learning about the
cooperative European framework, I'd like to respectfully submit the
following remarks:

It's true that the LE research community in North America can be
described as existing in a "competitive" framework.  Different labs do
compete for grants, publication space, good people, and general fame.

Competitive evaluation, however, has produced something that never existed
before:  an identifiable and highly cooperative *research community*.
Since the beginning of organized evaluations we've seen a huge increase in
the amount of sharing and interchange of scientific ideas, technical ideas,
data, and people.  At the regular evaluation meetings of this community
people who live and work thousands of miles apart have actually learned how
to work together to solve problems.  "Competitive collaboration" may seem
like an oxymoron, but I've been a part of it for many years and can report
that overall it's a very positive experience.  I go out of my way to remain
a peripheral part of the North American evaluation scene while working here
in France.  Several European labs (including my current lab, LIMSI)
participate as volunteers for the same reasons.

I'd like to stress that it's not *laboratories* that are evaluated, but
*implemented techniques*.  It's true that, due to poor performance, some
laboratories (including the one I came from in the US) have been forced to
drop certain lines of research in the context of evaluations.  But those
same labs have been able to pick up on better-performing techniques,
reorient themselves, and subsequently do very well in follow-on efforts.
It seems to me that this is a good thing -- methods that work can be sorted
out from methods that don't work.  This is, after all, our goal as
scientists, is it not?

There is always the danger that some promising but immature technique will
never have a chance to prove itself against an entrenched behemoth.  This
is especially dangerous if all available funding becomes tied to evaluation
results.  But it seems to me that the danger of this happening in Europe is
low, as many researchers have their salaries covered by direct grants from
their own countries.  This guaranteed minimum level of funding is rare in
the US, where the complaint that evaluation stifles innovation is often
heard. The Europeans have a distinct advantage on this count.

Thanks for listening,
Lin Chase
chase@limsi.fr