I’ve had a number of technical issues with this site, but they all seem to be solved now (errors caused by WordPress consuming more memory than allowed appeared to be related to the IntenseDebate plugin – now deactivated). While cleaning up, I also removed the Twitter and FriendFeed RSS feeds from the sidebar – the first one regularly times out, and the second one serves a malformed feed. Sorry for the inconvenience.
I’ve recently looked a bit at clinical decision support systems (CDSS). These are software systems that assist physicians and other health professionals in decision-making tasks. Automated diagnosis based on a patient’s data is the most prominent example of what such systems can do. It turns out that the idea isn’t particularly new – the first systems were built as early as the 1970s. However, when you look at the history of CDSS, or more specifically their diagnostic subset, it’s clear that most of them were discontinued quite early or were never really deployed in a clinical setting.
An often-mentioned challenge to the adoption of CDSS is so-called alert fatigue. Pulling in large amounts of information – test results, correlations in genetic data, drug effects and interactions, and so on – produces a large number of false alerts that quite often aren’t relevant to the particular situation. Physicians who are constantly warned about possible and even plausible effects or diseases (everybody has a non-zero chance of dying of cancer, but these systems do not suggest cancer in every case) stop paying attention to the alerts after some time. On the other hand, diagnostic errors alone are believed to cause between 50,000 and 100,000 deaths per year in the US (some studies suggest that ca. 75% of them were preventable) – there’s definitely room for improvement.
I see that as an opportunity for bioinformatics and healthcare to learn a bit from each other. Specialists working on diagnostic decision support systems seem to say out loud: “it’s way too complex”. Bioinformaticians working on data-intensive areas such as genome-wide association studies seem to say out loud: “it’s not that simple”. Hopefully they are going to meet halfway.
There’s an interesting relation between this comment by Richard Gordon and Bryan J. Poulin, entitled “There is but one journal: the scientific literature” (posted under an essay in PLoS Medicine entitled “Why current publication practices may distort science”):
(…) Frankly, when downloading a paper, we pay more attention to the contents than the name of the journal, which has become incidental. “Artificial scarcity” is a frame of mind. The result of all these technologies and attitudes is that we are rapidly approaching the concept that there is but one journal: the scientific literature. (…)
and a recent discussion about the interdisciplinarity of science, summarized by Abhishek Tiwari in his recent post, where he cites Robert Phair, a systems biologist:
If I could change one thing in modern science, I would stop the fragmentation and specialization. To do synthetic work, we need more broadly based professional societies, academic departments, and journals, NOT more specialized ones. Current academic department structures evolved from the reductionist paradigm and simply do not support synthesis effectively.
Sounds nice: one scientific journal, one scientific community. I’d love to see that in practice. However, I feel that the resistance is going to be huge. Not only because it may seem at first like throwing the baby out with the bathwater (framing and labeling help people organize and understand the world), but also because I have an impression that this wholeness requires a new kind of mind – one capable of synthesis and of dealing with complex problems.
It seems obvious that science depends on communication, although we tend to limit the scope of this word to the communication of (raw) results and knowledge. Moreover, we rarely pay any attention to the way a message is transmitted, how it is perceived by the public, and finally, how strong it is. We are rational and we don’t need these “marketing” tricks, right? Well, that doesn’t seem to be true. Perception of a message can vary dramatically depending on many different variables, including such a simple thing as who its author is (that’s why there’s strong resistance against anonymizing papers sent out for review). And we should start paying attention to these variables if we want to move Open Science forward.
I’ve become interested in strategies for promoting open science for two reasons. The first is quite obvious – I’ve already written on the other blog that together with friends we’re trying to find funding for a campaign for open science and open education. We haven’t been successful yet, but we don’t give up. If we do get the funding, it’s going to be very important to make few or no mistakes, because I don’t think another chance would come again soon. The second reason is that I’d like to see an experiment in doing science in the open on some really large scale – say, a couple of universities, a small country, etc. Large experiments tend to expose all kinds of issues we didn’t expect, and at the same time they allow new opportunities to emerge, because they change the dynamics of the environment. The academic environment isn’t going to be the same anymore if, for example, half of the UK or all Max Planck Institutes open up their research process.
In the comments on a recent post on rankings, I mentioned that I would consider (among other things) using, or rather abusing, ranking systems available on the net when executing a campaign promoting Open Science. Maybe it’s a wrong idea – Bjorn, Bill and William have pointed out a number of its flaws. Maybe it’s a bit unethical. However, when I look at how “Open Access” is marketed (it’s useful to know that Harvard or the NIH went OA, but it’s not a meritorious argument, even in the US; also, people still use “increased citation ratio” as an argument in favor of OA, glossing over the fact that the effect will basically disappear when the majority of papers are OA), I’m not so sure abusing a ranking system is much worse than that. As Bill suggested, we might do it openly, so everybody interested knows how and why we’re doing it.
Things became even more interesting in the light of the paper entitled The Dueling Loops of the Political Powerplace, published on thwink.org, a well-known portal promoting analytical activism and systems thinking. The paper and the whole site have a strong political bias, but they contain a very good point concerning pushing a sustainability agenda. The main point is that environmental activists focus on low-leverage actions, such as spreading “true memes”, while actions with much higher leverage are spreading “repulsion memes” and the ability to detect “false memes”. As I understood it, for open science activists this would mean writing more about why closed science is bad (wasting money, stifling innovation, etc.) and about how scientists, agencies and institutions are closing down research, instead of writing about the good sides of opening up research.
I have written on the Freelancing Science blog that Open Science needs better targeting and a simple message. What if we add to that selecting the topics of posts and publications, and creating artificial hype around the idea? Are you ready to try something like that? Openly?
After a few years I’ve finally got my old camera back. I don’t have enough time yet to go out and do some serious shooting, so I decided to try shooting from the window, just to remind myself how to hold a camera again.
I tried the standard late-night shot – nothing spectacular (other than the view from the 9th floor):
The next thing I tried was a zoomed-in view of the street, to capture in detail the movement of the cars. Here’s an example of such an attempt.
I didn’t have a tripod – but if you have a solid support (a window sill, in my case) and hold your breath, handholding the camera for a couple of seconds can produce a reasonably sharp image even at a focal length of 300mm (at least for a 6 MP sensor). I was shooting photo after photo while standing on a couch when my son came in and asked what I was doing. I told him about my long-time-no-see with the camera, adding that he should stand still on the couch, because jumping on it would produce blurry images. Of course, it resulted in exactly the opposite behaviour:
When reviewing the images, I realized that the last photo was exactly the flow of time I had been trying to capture – but it represents the flow of time inside our heads. We go back in time or plan for the future in a chaotic manner, in the way documented in Joyce’s Ulysses.
Of course I know that there are millions of almost identical photos out there. But sometimes you need to redo the obvious thing yourself to discover something new about it.
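As a side note, the handholding at 300mm mentioned above can be put into numbers with the classic “1 / focal length” rule of thumb for the slowest safe handheld shutter speed. A minimal sketch – the 1.5 crop factor is my assumption for a typical 6 MP APS-C body, not something stated in the post:

```python
def max_handheld_shutter(focal_length_mm, crop_factor=1.5):
    """Slowest shutter speed (seconds) usually considered safe handheld,
    per the classic reciprocal rule: 1 / (effective focal length)."""
    return 1.0 / (focal_length_mm * crop_factor)

# At 300mm on an assumed 1.5x crop body, the rule suggests ~1/450 s.
safe = max_handheld_shutter(300)
exposure = 2.0  # a "couple of seconds" exposure, as in the post

print(f"handheld limit: 1/{1 / safe:.0f} s")
print(f"a {exposure:.0f} s exposure is ~{exposure / safe:.0f}x over the limit")
```

Which is just to say that a multi-second exposure at 300mm is hundreds of times longer than the rule allows – hence the need for the window sill and the held breath.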
This is the video of my talk on Open Science, given at TEDxWarsaw on the 5th of March 2010. Slides from this and other talks are available on the Presentations page.
Talking about openness in science to non-scientists was a big challenge, and I’m not sure I got it right. I hope to learn from the other speakers and from the TEDx community how to shape the message so it gets through more efficiently.