Google’s Panda and the research assessment

Search engine optimization is a fascinating field. I played around with the concept a long time ago, looking for holes in search algorithms or in browsers' source code, and I still try to catch up with developments a few times a year. Most Google users haven't noticed its new algorithm for scoring pages, called Panda. It is based on extensive manual assessment of sample websites and a quite detailed questionnaire covering aspects such as trust, authority, presence of ads, and quality of writing. Looking at the approach and the complaints it generated on many SEO-related forums, I couldn't help but laugh, recalling this cartoon from XKCD:

Now, a cool follow-up to this story would be an attempt to use search result rank to evaluate research manuscripts. Imagine listing all keywords for which your papers hit the first 10 pages (100 results) of a Google Scholar search. As an example, my own papers rank 2, 8, 21, and 22 for "trimeric autotransporter adhesins".
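The rank-listing step above can be sketched in a few lines. This is a minimal illustration, not a working tool: Google Scholar has no official API, so it assumes the first 100 result titles have already been collected into an ordered list by some other means, and the helper name and sample data are hypothetical.

```python
def paper_ranks(results, own_papers, max_rank=100):
    """Return the 1-based ranks (within the first max_rank results)
    at which any of one's own papers appear.

    results    -- ordered list of result titles from a keyword search
    own_papers -- titles of one's own papers (matched case-insensitively)
    """
    own = {title.lower() for title in own_papers}
    return [rank
            for rank, title in enumerate(results[:max_rank], start=1)
            if title.lower() in own]

# Hypothetical result list for some keyword query:
results = ["Someone else's paper", "My adhesin paper"] + ["filler"] * 98
print(paper_ranks(results, ["My Adhesin Paper"]))  # → [2]
```

Repeating this over a list of candidate keywords would yield exactly the kind of context-specific rank profile described above.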

Arbitrary? Of course, like all other metrics. It's not going to do wonders, but it has a nice feature: because Google's algorithm doesn't seem to care much about citations (my most cited paper has rank 21 for the phrase above, which I would say is quite accurate, as the crystal structure has appeared since then and our work is obsolete), we avoid the rich-get-richer effect. Other good features are that such a measure is harder to game than others, and that it provides a rather accurate, context-specific description of one's research area.

But seriously, the beauty of the idea lies elsewhere. If average scientists care about minimal differences between them (whether in a journal's impact factor or in article-level metrics), let's push the field of research assessment further towards absurdity. Let's exhaust the field with such ideas, so that enough people lose patience, implement whatever can be functional today, and move forward. For how many years can one keep discussing the question "what's wrong with [scholarly communication/research assessment/science funding] today?"

To fully understand the reasoning behind this post, please note that it was posted in the Memetics notebook.