
In celebration of Open Access Week, a few days ago I led a workshop entitled “Open Science. What’s in it for me?”. The workshop was organized by Kaunas University of Technology & The Lithuanian Society of Young Researchers and took place in the beautiful city of Kaunas. The slides are below:

 

I started with a brief overview of what Open Science is and how it relates to the way we conduct and communicate research. In essence, I argued that Open Science is basically an intrinsic feature of science, codified by a set of good research practices. Therefore, it’s hard to be ‘fundamentally against’ Open Science; however, not every practice of Open Science makes sense to everyone. As many of these practices take researchers off the beaten path (in some fields there’s virtually no social or technical infrastructure for certain sharing practices), in our region (CEE) it makes a lot of sense to be a pragmatic open scientist – to embrace only those activities which don’t jeopardize one’s scientific career.

The participants of the workshop were young scientists (pre- and post-doctoral), and most of them had not previously been exposed to such a wide definition of Open Science (although they were quite aware of what Open Access is and what its benefits are). I therefore first asked them to brainstorm questions to which “Open Science”, or some more precise term, is an answer. For example, a valid question would be “What is the mechanism ensuring wide availability of scholarly works?”, to which Open Access is surely one good answer. Then we collectively created a set of personal recommendations for young Lithuanian researchers based on the brainstormed questions. We took the questions one by one and tried to formulate a good research practice that would address the problem in each question, while keeping the recommendation as achievable as possible. Finally, I asked each of them to choose the recommendation that would be the easiest for them to actually implement.

Given that most of the participants had a very vague idea of what Open Science is, I was very satisfied with the final outcome. Here is the list of recommendations they developed:

  • Submit published works to OA repository
  • Suggest and embrace use of plagiarism checking software at research institutions [this was to provide additional leverage to promote OA; without OA plagiarism checking has limited functionality]
  • Use Open Source to develop new products and services
  • Use social networks, subject repositories, teaching materials and press releases [for distributing one's research outcomes]
  • Use CC licences and contribute to open networks [this related to maximizing the added value of research outcomes]
  • Build a professional profile on social networks; actively get in touch with other researchers and attend conferences (including online ones)
  • Embrace alternative metrics
  • Communicate ideas through as many channels as possible
  • Earn the trust of online scientists and ask them to comment on your paper [this related to improving scholarly works before final publication]
This was all their work – I only helped to put their ideas into words.

It is a very practical list of activities that would substantially advance Open Science in a transparent and energy-efficient manner. Openness has a bright future if young scientists are thinking like that.

Search Engine Optimisation officially entered academia when publishers started to provide guides to SEO (here’s an example of such a guide from Wiley). Of course, authors of scholarly literature have used SEO (often under other names) for much longer, but now it’s official and almost recommended.

I’ve been interested in SEO for more than 10 years, often experimenting with various techniques (for example, if you search for CLANS, the Java software for clustering protein sequences, my blog post describing the software ranks higher in Google than CLANS’ homepage for the majority of keywords). Most of the tricks people use, like the ones described in Wiley’s guide, are pretty trivial. Scientists are usually smarter than that.

I wonder how sophisticated the techniques researchers develop to make their research more visible will become. The traditional way of getting published in the most prominent journals isn’t for everyone (it’s clear that getting in does not depend on the quality of one’s research). Alternative metrics relying on clicks, visits and downloads are easy to manipulate, but hardly anyone uses them to decide which paper to read. There are a few other things to try, but ultimately artificial hype has the most potential.

There are already a few examples of people who have built a high position in academia using a smart marketing strategy on top of quite good research output. As far as I can tell, in all these cases it was a matter of catching the wave of already growing hype rather than engineering it from scratch. I haven’t seen anyone in science build hype from scratch, but there are already examples from industry. It will happen eventually. Open Science will help with that ;).

Reader beware, a rant ahead.

Believe me, I waited 24 hours to calm down before writing this text. But a day has passed and I’m still outraged by the recent posts of Peter Murray-Rust entitled “Open Research Reports: What Jenny and I said (and why I am angry)” and “Open Access saves lives“. There he made the following assertions:

  • closed-access publishing restricts access to information
  • no access to information means suboptimal decisions, for example in choosing medical treatment
  • therefore closed access means people die
  • which means that open access saves lives

And then he offers some anecdotal evidence supporting this claim.

I would like to offer an alternative view:

  • scientific papers sometimes contain false conclusions (whether by mistake or by fraud)
  • untrained people can use scientific papers with false conclusions as support for wrong decisions (as many people did with Wakefield’s Lancet paper by not vaccinating their kids)
  • open access means that there will be more potentially harmful papers available to general public
  • therefore open access means people will die

You see? Both claims are based on anecdotal evidence. Both are easy to falsify if you try (and it’s not hard). But there’s more – claiming that Open Access will save lives suggests that access to literature is currently the most crucial problem, at least in medicine. There are far more important problems in health care, many of which could be solved much faster. If you want to have an impact, make summaries of the primary literature freely available, translated into the 60 most popular languages – you don’t need to make them open, free is enough. If you want to have an impact, get physicians to adopt clinical decision support systems – there are studies showing that between 50,000 and 100,000 people die in the US from diagnostic errors alone, of which up to 75% could be preventable, and these mistakes happen before a physician even has a chance to make his suboptimal decision. While I’m not sure either solution would have substantially more impact in the long run, both would have an impact much faster, because Open Access simply needs time.

An unscientific approach plus ideologization makes Open Access a religion. And I’m not interested in joining religious wars. I’m interested in fixing the problems of scholarly publishing, such as lack of access and outrageous costs. Dear Peter, while I admire your work on openness, by making your language “less nuanced”, as you wrote (and less thought-through, I would say), you’re making my work much more difficult.


Search engine optimization is a fascinating field. I played around with the concept – looking for holes in search algorithms or browsers’ source code – a long time ago, and I still try to catch up with developments a few times a year. Most Google users haven’t noticed its new page-scoring algorithm, called Panda. It is based on extensive manual assessment of sample websites and a quite detailed questionnaire covering such aspects as trust, authority, presence of ads and quality of writing. Looking at the approach and the complaints it generated on many SEO-related forums, I couldn’t help but laugh, recalling this cartoon from XKCD:

Now, a cool follow-up to this story would be an attempt to use search result ranks to evaluate research manuscripts. Imagine listing all keywords for which your papers hit the first 10 pages (100 results) of a Google Scholar search. As an example, my own papers rank 2, 8, 21 and 22 for “trimeric autotransporter adhesins”.

Arbitrary? Of course – like all other metrics. It’s not going to do wonders, but it has a nice feature: because Google’s algorithm doesn’t seem to care much about citations (my most cited paper has rank 21 for the phrase above, which I would say is quite accurate, as the crystal structure has appeared since then and our work is obsolete), we avoid the rich-get-richer effect. Other good features are that such a measure is harder to game than others, and that it provides a rather accurate, context-specific description of one’s research area.
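For the curious, here is a minimal sketch in Python of how such a metric could be computed. It deliberately skips the scraping step – assume the result titles for each keyword have already been obtained in rank order somehow (Google Scholar offers no official API, so that part is left out) – and every title below is made up for illustration:

```python
def rank_metric(results_by_keyword, own_titles, cutoff=100):
    """For each keyword, list the 1-based ranks at which one's own
    papers appear within the first `cutoff` search results."""
    metric = {}
    for keyword, titles in results_by_keyword.items():
        ranks = [pos + 1 for pos, title in enumerate(titles[:cutoff])
                 if title in own_titles]
        if ranks:  # keep only keywords where one's papers show up at all
            metric[keyword] = ranks
    return metric

# Toy example with made-up titles:
results = {"trimeric autotransporter adhesins":
           ["someone else's review", "my paper A"] + ["other work"] * 5 + ["my paper B"]}
print(rank_metric(results, own_titles={"my paper A", "my paper B"}))
# -> {'trimeric autotransporter adhesins': [2, 8]}
```

Note that the function looks only at positions in the result list, never at citation counts – the citation-blindness praised above comes for free.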

But seriously, the beauty of the idea lies elsewhere. If average scientists care about minimal differences between themselves (whether it’s a journal’s impact factor or article-level metrics), let’s push the field of research assessment further towards absurdity. Let’s exhaust the field with such ideas, so that enough people lose patience, implement whatever can be functional today, and move forward. For how many years can one keep discussing the question “what’s wrong with [scholarly communication/research assessment/science funding] today?”.

To fully understand the reasoning of this post, please note that it has been posted in the Memetics notebook.

This post summarizes my current focus in the area of open science. Whether the concepts presented here can make their way into practice is still to be seen. So, consider this a work in progress. I’m happy to be proven wrong, to change my opinion or to be sold on other ideas.

On (open) science

Science (like almost everything else in the world) needs a major transformation, as it no longer serves its purpose in the most efficient manner. For that reason I was attracted to open science several years ago – openness is the mode of the future in science. However, the adoption rate of open science has turned out to be rather low. One reason is that scientists, from some point in their careers, are like CEOs – they believe they have the ability to make good decisions, and no amount of facts/data/reports/suggestions is going to make them change their minds (Thomas Kuhn described that phenomenon more precisely, but I find this analogy a bit better suited to modern times). But there’s also another layer: the organization of science is a cultural artifact, and we are all culturally conditioned to accept certain approaches to research. “It’s always been that way” is the reason why we don’t have a “fooling around” grant for scientists with a proven track record, to explore different areas without being bound to deliver results. It’s the reason why the principal investigator of a grant cannot be a team (such as the Polymath Project; after all, some teams of great-but-not-geniuses can outperform ones led by a genius). It’s the reason we massively subscribe to beauty contests such as the Nobel Prize (or any other ranking system). Cultural conditioning goes even further – we are conditioned to accept or reject certain scientific questions (for example, fishing expeditions such as metagenomics studies would have had a hard time gaining acceptance in this part of the world before they became hot in the West, as Central and Eastern European researchers usually start with the hypothesis, not the data; on the other hand, proposals for really complex studies are often turned down in the West). So, in practice, only the results of research (assertions) are trans-cultural (a concept borrowed from Gregory Lent) – the rest is a cultural artifact.

At the OAI7 conference there was an open science breakout session. Among other things, we divided ourselves into groups, and each group had to come up with a single action that would move openness in science forward. Not surprisingly, 3 out of 4 groups proposed some kind of incentive. Carrots are one way of changing this cultural construct. Another way is adjusting the stick, in the form of a mandate, or changing processes (such as publishing) in such a way that they require a certain amount of openness. Interestingly, we don’t need to take care of sustaining the cultural change – once a certain mode is adopted, scientists will develop sustaining arguments and procedures by themselves.

Transformation, growth, evolution

We (scientists in general) do not talk about “personal growth” or anything remotely similar. It’s personal, it’s about growth (and assumes something about us is not OK), and by all means it’s something stupid, related to banging one’s own chest and believing everything is possible. However, when you ask people whether all the problems we have in science could be solved in a short time provided scientists grew up, you get a definite “yes” almost every time. You can look at this in a slightly different way. At the very same open science session at OAI7 I mentioned above, we were asked to provide one keyword/phrase that describes open science. We got a collection of 30-40 keywords, and hardly anything repeated. Surprisingly (or not), about one third of them actually described science as it is “supposed” to be, but apparently is not. Open science as a moral transformation movement? Growing up? Come on. This is not the kind of thing reasonable people talk about, right?

While I’m not going to start a new religion, I want you to re-frame the concept of “growth” into something more scientific. The basis for such a framework might be the work of the father of so-called positive psychology, Mihaly Csikszentmihalyi. He is best known for the book “Flow”, about the state of full immersion in a task and the conditions required to achieve it. In his next book, “The Evolving Self”, he started to look at patterns of “flow” across history and showed several examples of how “flow” was the basis for substantial shifts in communities across the centuries. He also traced how our “self” evolved over millennia. His main point is that the large transformations that happened in the past required a complex self and a laser focus of psychic energy, and that to deal with several issues of the present, we need more of the same. He argues that the best idea would be to form a creative minority, called an “evolutionary cell”, whose special design (more on that in a minute) will allow its members to transcend their “selves”.

While it may sound a bit abstract (not surprisingly – the issue is quite complex and hard to simplify), let’s look at an example: the El Sistema program in Venezuela. It differs quite substantially from most other music school systems around the world. There’s an unprecedented amount of group work. Kids play together or form orchestras from very early stages. Because they work together, “flow” is much easier to achieve than when playing alone. Because music is a relatively neutral topic, participants face no resistance from their own communities, which helps them grow out of the social conditioning of those communities (studies note improvements in school attendance and declines in juvenile delinquency among El Sistema participants). And almost by the way, these kids at the end of the day form one of the five best youth orchestras in the world.

Fellowship for the future

I think that carefully designed open citizen science projects are our fellowship for the future, as Csikszentmihalyi calls these future-oriented creative minorities. And here’s what I understand by careful design:

  • It’s a group effort (there are plenty of venues for individual performance elsewhere – here only the “orchestra” matters).
  • It has a clear goal that is ideologically neutral (a participant avoids heated discussions about the project in his/her local community) and that the group attacks from many different perspectives (systems thinking). Scientific questions fit perfectly, as long as they are not in the area of climate change, sociology or GMOs.
  • The goal has to be large, difficult and open (in the sense that the group invents a new solution to the problem). While it’s easiest to grab people’s attention for a short amount of time, there must be an element of a long journey in the process.
  • The project should be hands-on – there has to be real contact with the issue (it can even be abstracted in the lab, but not virtual – cyber-science doesn’t work here).
  • The difficulty of the tasks follows the skills of the participants – you cannot do the same thing for 8 months and expect enthusiasm or “flow” (unless you are given some space to introduce changes).
  • The group has a natural leader (a scientist/teacher) but is nevertheless built on the basis of meritocracy (you don’t want participants to compete for the alpha position, but you don’t want them to simply obey either).
  • There must be close and relatively frequent contact with the leader; therefore the groups have to be small and local.
  • It should teach logical and creative thinking, but also integrity, accountability, interconnectedness and systems thinking. In other words, let’s do interesting science, let’s report it well, and let’s make sure we understand the different sides of the problem.

In essence, the project allows participants to have their own “hero’s journey” (the basic pattern of narratives from all over the world, also called the monomyth; the term was coined by Joseph Campbell). The journey allows the hero to destroy the old self and build a new one. While it sounds dramatic (some call it the “tragic” mode of self-development), it’s actually not. It all happens almost transparently, because this process is not the main purpose of the journey – the main purpose is to kill the dragon or to solve the scientific problem.

It’s not a religion in disguise – it’s an idea to transform science and society, to increase wisdom level just a little bit.

What next?

This approach definitely needs more work. I’m not yet sure how to avoid common mistakes, such as pseudoteaching or shallowness. I’m not yet sure whether, within such a short time (between 1 and 3 years per cycle), I would manage to convey the basics of scientific and systems thinking (it took me a while to get there). The project will present a big challenge to participants on many different levels. It’s also relatively easy to attack the project and hard to defend it – an especially easy critique is small results vs. a large investment (by design, the project requires a relatively low ratio of scientists to participants, and not all of the participants will be willing to lead the next iterations; obviously the project has no short-term appeal).

However, such an idealistic stance (framed in different words) is going to come back into science repeatedly. Have a look at the couple of links below (most are quite recent) – the framework appears again and again. But it’s not going to be easy to defend or implement. Depending on the angle, it will be seen as a dangerous ideology bordering on cybercommunism, a nice societal idea disconnected from reality, or something that produces completely unwanted results. But systems thinking (a kind of prerequisite for becoming a responsible adult) is the skill of the future, and we need to teach it, in whatever form.

You know the news already:

The Howard Hughes Medical Institute, the Max Planck Society and the Wellcome Trust announced today that they are to support a new, top-tier, open access journal for biomedical and life sciences research. The three organisations aim to establish a new journal that will attract and define the very best research publications from across these fields. All research published in the journal will make highly significant contributions that will extend the boundaries of scientific knowledge.

Cameron has already commented on the issue, but I would like to add a few more thoughts. The concept of a “top-tier journal” obviously drew some negative comments from the open science community, with which I fundamentally agree. In practical terms, however, it is actually a step in the right direction – it will create direct and indirect incentives to publish in open journals. As nobody expects scientists to grow up (well, I do, but that’s a topic for another post), we need more carrots to shift the system towards openness.

But that’s not the end of the story. As I wrote last year, we needed some large-scale experiment in Open Science – not to make some significant change by itself, but to significantly change the dynamics of the system. The launch of a new OA journal by science funders may directly produce several positive outcomes, but at the same time it is going to spark interesting discussions on obvious topics such as the sustainability of OA, the future of journals, marketing costs, quality filtering, the impact of publications, etc. However, given that these areas aren’t unexplored (OA publishing isn’t something exotic or new, and PLoS has been heavily experimenting with OA for the last couple of years), I look forward to discussions that go beyond the topic of publishing the narratives of research. As people acknowledge OA and start to think about the next step, open data and open notebook science will get way more attention than they receive at the moment.

I recently attended a meeting of Polish homeschoolers. Homeschooling is not as popular in Poland as, for example, in the US. One of the reasons is that it’s basically illegal here to teach kids at home, and a waiver is given only in special cases. Nevertheless, there were about a hundred parents there, and I’m sure that number is going to grow over the next few years (unfortunately, to understand the discrepancy between the illegality of homeschooling and its growing popularity, you’d have to live here for quite a while – we were saying “Pole can” ages before Obama’s presidential campaign).

My presentation was largely about citizen science. I argued that all three approaches to answering a question in traditional education (recalling, guessing and cheating) are almost always preferred to deducing the answer or finding it out by experiment. And the easiest way to avoid that trap is to ask a question that nobody has an answer for – that is, a scientific question. Then I went on to describe various open citizen science projects and their different levels of participation.

My talk followed another one, on open educational resources. Obviously that other presentation received more attention from the parents and resulted in many more questions than mine, but the concept of citizen science still rang a bell with a few of the participants.

However, what I realized was that homeschooling parents are the best allies open science advocates have within the general public. Open citizen science might play a huge role in early education – not only in teaching how to think, but also how to work together and cooperate. Obviously, parents will be interested – but only if we can give them access to resources (publications, data) that is equal to the access we have.


No, this is not “Ten simple rules of open science” (although it would be nice if we could write such an article and publish it in PLoS Computational Biology) – this is the list of TEN COMMANDMENTS of open science:

1. You shall give everything away free (do not over-protect your research); do not patent – sell your expertise.

Release source code under an open source license, publish OA articles, and do not abuse intellectual property rights.

2. You shall change the world, not just sell things.

Seek to make a contribution to scientific knowledge; do not choose trendy research topics if the results may contribute little.

3. You shall be sharing, aware of social responsibility.

Explain your work in simple language.

4. You shall be creative.

Do not repeat someone’s old experiments using new technology.

5. You shall tell all: have no secrets, endorse transparency and the free flow of information; all scientists should collaborate and interact.

Follow the practice of Open Notebook Science.

6. You shall not work: have no fixed 9 to 5 job, but engage in smart, dynamic, flexible communication.

Get familiar with bursty work and “just-in-time” research – do not let your research fall into any schema.

7. You shall return to school: engage in permanent education.

No step of scientific career should prevent you from asking stupid questions.

8. You shall act as an enzyme: work not only on the research, but trigger new forms of scientific collaboration.

Any kind of communication channel is a potential way of collaboration.

9. You shall leave research silently.

Do not make yourself impossible to replace.

10. You shall be the state: companies should be in partnership with the state.

Be independent – do not endorse technology transfer centers.

Sounds nice, doesn’t it? Now, let’s re-label things. The ten commandments above are an almost exact copy (the original list can be found, for example, here) of Olivier Malnuit’s ten commandments for the liberal communist. In isolation, many of the above are quite reasonable. All of them together, stamped as “commandments for the liberal communist”, trigger violent or at least negative responses in many places.

Why did I put this here? Because the label of “liberal communism” or “cyber-communism” is used to describe the Open Access initiative here in Poland. Not frequently, not very visibly, but you can guess how difficult it is to get past such a label.

I will write my comments in another post, but I’m of course interested in yours.


The other day, Michael Nielsen shared a story on how researchers are using Twitter to predict stock market movements. The idea isn’t actually that new – there have been other attempts to use Twitter (and not only Twitter) to predict stock prices. And it’s actually fairly easy to come up with yet another source of signal that will correlate nicely with the market (if you don’t believe that, look up the performance figures of astrology-based trading – the numbers are quite amazing). However, one of the key issues in algorithmic trading is not how good the method is, but how costly the mistakes are. And I believe this is also the key issue in the majority of real-life data science applications.

If overprediction in a diagnostic procedure makes somebody lose a kidney, it’s not that good a procedure. If a trading system makes you lose more money on every 10th trade than you earned in the previous 9, it’s not a good system either. Assessment of false positives and false negatives (type I and type II errors) is a standard element of statistical hypothesis testing, but real-life applications require weighting the mistakes to understand whether the algorithm is actually usable.
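To make that last point concrete, here is a minimal sketch of such weighting – a cost-sensitive counterpart of plain accuracy. All counts and cost figures below are made up for illustration:

```python
def expected_cost(fp, fn, tp, tn, cost_fp, cost_fn):
    """Average per-case cost of a decision procedure, given the counts
    of false/true positives/negatives and the cost of each mistake
    (correct decisions are assumed to cost nothing)."""
    return (fp * cost_fp + fn * cost_fn) / (fp + fn + tp + tn)

# Hypothetical diagnostic setting: a false positive means an unnecessary
# surgery (expensive), a false negative a delayed diagnosis (cheaper here).
a = expected_cost(fp=5,  fn=20, tp=70, tn=905, cost_fp=100.0, cost_fn=10.0)
b = expected_cost(fp=18, fn=2,  tp=88, tn=892, cost_fp=100.0, cost_fn=10.0)
print(a, b)  # 0.7 vs 1.82: procedure B makes fewer mistakes overall
# (20 vs 25), yet is costlier, because its mistakes are the expensive kind.
```

Accuracy alone would pick B; the cost-weighted view picks A.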

You may or may not know that I’m the head of Systems Institute, a non-profit research organization founded in 2009 in Poland. The Institute is located in Poland but operates internationally. There’s not much you can learn about this initiative yet, as we haven’t put up a real website (although there’s a placeholder). There was a reason for that lack of online presence – instead of wasting time on hyping our vision, we decided to start working and brag about the results once we had them.

Today FigShare, created by Mark Hahnel, went beta, and as of today Systems Institute is officially supporting FigShare’s operations. In other words, there’s a real effort to keep FigShare alive for an indefinite period of time. Mark is making sure your figures and data are searchable, citable, and whatever else is going to be implemented in the near future. We are making sure the service keeps operating and your figures are backed up.

There are a few lessons to learn from Mark’s great work on FigShare and the other services he has started within the last half year (especially about the doing-vs-talking ratio). But I will share another one, related to our cooperation. Being the head of any institution means one has decisive power. That’s something online communities don’t have. Let me explain. I had been paying attention to Mark’s work on FigShare from the very beginning, because it was an implementation of the ideas on nanopublishing I heard from Fabiana and others at Science Online 2010. It’s great to see this finally materialize. But when certain performance improvements or features of FigShare required some investment, there was the question of how to cover it (without stretching Mark’s PhD-student budget too much), and we had a discussion or two about this on FriendFeed. In my view, microfinancing of such projects is not going to work – at least not in the near future. Given the importance of FigShare (really, it’s very important to have such a service), I decided to step in and offer support from Systems Institute. No online community could provide financial support over the long term. Which is why Systems Institute was started, and why I recently complained that too few people try to institutionalize their work. Beautiful and brilliant things emerge from online collaborations. But to make them sustainable, we need a supporting institution, even one as virtual as SI is.

Now, go to FigShare and upload some data or never-to-be-published figures. And at the end of the day, please think long-term about the ideas we discuss online and the work we do every day. And read again Deepak’s post on Abundance.
