Filters

The Britannica is doing something very daring: they are starting a discussion about Web 2.0 (like what the relation is between Wikipedia and the Britannica) on a Web 2.0 platform (blogs). Sun Tzu, who is no doubt also in the Britannica, would have taught never to fight on the enemy's terrain. Like I said: daring.
The opening piece is from Michael Gorman, where he more or less attacks collective intelligence and defends the traditional way of finding and selecting information. The reactions from the blogging crowd are what you would expect.
However, I think there is one point overlooked by the people reacting to what Gorman says (like Clay Shirky). Of course unlimited information flow is good. People can express themselves and lots of different opinions are available. However, filtering and rating information is important, in science as well as in everyday news. We have to know how far we can trust the information and the source. In the “old days” our filters were based on the production side. Production was costly because of printing and distribution. To make the filtering decisions we relied on professionals hired by the firms that printed the books and magazines. We as users could vote with our feet by buying the magazine or not. Maybe not the best model overall, but the best of all the inadequate models available to us at that time. The nice thing about this model is that the information presented to us by trusted sources is usually fairly good. We know that Nature (almost always) uses a rigorous process before publishing.
Enter the Internet and Web 2.0. With Web 2.0 our filters on the production side have been removed. It has been said often: unlimited copies, free distribution. So now we are flooded with information whose objective quality is hard to judge. One of the risks for science is that this leads to the use of information that fits your a priori thinking, without a check on the validity of its conclusions. A bit of web surfing always leads to articles that support your suspicion. For example, I very much like the writings of Clay Shirky, but most of the information I find is from his blogs. He is eloquent, he is convincing, but in many cases also totally unsubstantiated. Maybe it is based on facts, but I have no way of knowing.
Science does need a thorough process of checks to determine quality. The work I do builds upon the work of others. If I cannot trust my sources, how can I trust my results? And again, peer review is the best of all the inadequate models we have for this. I agree with Gorman that, in the end, science needs facts, not hearsay. I wonder how many of the facts in Wikipedia are based on (checked) information people found in one of those bulky paper encyclopedias.
We have to understand why and how our quality mechanisms work in physical production and distribution in order to translate them to the digital world. The goals remain the same (quality and trustworthy information) but the mechanisms will be fundamentally different because of the new possibilities Web 2.0 gives us. Exciting new possibilities, and maybe even better ones than we had in the physical domain. There is the possibility of more transparency in the peer review process. The use of scientific papers is seriously hampered by the fact that commercial organisations run the publications. It would be better if this peer review process were an open one (open science?) and if the scientific community were responsible for it.
However, I think that in that case too we will need all kinds of governance structures. More open, more democratic and more transparent. It will not be “like the mind of god”, nor will it be like the Hyves mind. Just work, but a bit less inadequate than it is now. Small steps.
That’s how progress works.

Living Labs

Over the last 10 years we have created much more technology than we are using today. We have invested enormous amounts of money in mobile broadband infrastructure, and what is the most important mobile application today? Texting (or txtspk), the most basic application in mobile technology. We are investing heavily in fiber to the home, and what are people watching? Low-res videos on YouTube. And they (we?) are loving it.
For many research projects the user was not part of the equation. Technology had its own goal: more broadband, more mobile, more functions: more is good. And as all Unix users know: more is less…
Today I had a conversation with people from CETIM (Bernhard Katzy, Benoit Dutilleul and Jean-Marc Verlinden) about Living Labs. For technology research it is more and more important to get out into the field. Because social aspects and user experience are the next frontier, we need to do the research in close contact with users. A new version of ADSL can be developed in the lab; a wiki can only be created through constant iteration with users. The possible functions are many, but only a few will catch on with users. We cannot use a stub for the user the way we do in software development when testing functions.
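For readers who are not programmers: a stub is a canned stand-in for a real dependency, predictable enough to let us test a piece of code in isolation. A minimal Python sketch, with names (StubUser, average_rating) made up purely for illustration:

    class StubUser:
        """Stand-in for a real user: always gives the same, predictable answer."""
        def rate(self, item):
            return 5  # a real user's rating depends on context we cannot script

    def average_rating(items, user):
        """Function under test: average the ratings a user gives to some items."""
        ratings = [user.rate(item) for item in items]
        return sum(ratings) / len(ratings)

    # The stub lets us verify the arithmetic in isolation...
    assert average_rating(["a", "b", "c"], StubUser()) == 5
    # ...but whether a feature catches on can only be learned from real people.

The stub works precisely because its behaviour is fully scripted; real users are the unscripted part, which is why we have to go out and observe them.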
Living Labs is a (somewhat hyped) term for large-scale, in situ testing of new developments. Tribler is an example: data is collected from the users of this BitTorrent client to understand why and how people use the software. Even Google Mail is an example, because of the constant measurement and adaptation of the software to the way people are using it. CETIM is involved in the “Knowledge Workers Living Lab” and Telematica Instituut is involved in the “Freeband Experience Lab”.
A lot of these labs in Europe are starting to cooperate. Many have their own expertise and infrastructure, so working together leads to better possibilities for user-focused research and new paths to (open) innovation. As CETIM and Telematica Instituut we are looking into the possibilities of working together on these issues. Maybe you will soon be part of this great living lab we call Earth…