Reading the Le Soir blog entry about the CopiePresse court case against Google, I was very disappointed by the role of CopiePresse and by how the classical editors still don't understand the Internet. Any indexer (Google, but others too) provides a service (I'm not discussing here the various search engine functionalities) that helps people search for information. It's a benefit for the authors of the content AND the readers: it provides better access to the works made by the authors. Search engines provide a means to search better and, sometimes, to classify information. This is the next step after the initial step of printing (from monks, to Gutenberg, to digital information, to organized digital information). CopiePresse is stuck at the stage of digital information stored on a single personal computer without any network connectivity. I'm not very proud of being Belgian after seeing that (a part of?) the Belgian press has still not understood the Internet.
The approach taken by CopiePresse of fighting a legal battle instead of simply using the robots exclusion standard is very dangerous. I don't think that fighting legal battles over digital information is a good idea. It will create more barriers to the distribution of information instead of promoting new ways of distributing it. So the editors are not playing their role of editors in this specific case.
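To be concrete about how simple the robots exclusion standard is: a site that does not want Google to index its pages only needs a plain-text `robots.txt` file at the site root. A minimal example excluding Google's crawler (identified by its documented `Googlebot` user-agent) from the whole site:

```
User-agent: Googlebot
Disallow: /
```

Using `User-agent: *` instead would exclude all compliant crawlers. No lawyers needed.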
So maybe it's time to build an RSS scraper that downloads the daily full articles from lesoir.be and stores them on a publicly accessible (for educational purposes) server where any search engine (like Google, Google News, Yahoo!,…) could access them? That could be a nice example showing that all the legal action taken by CopiePresse is full of nonsense.
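A minimal sketch of what such an RSS scraper could look like, using only the Python standard library. The feed layout here is generic RSS 2.0; I'm not assuming anything about lesoir.be's actual feed structure, and the sample data at the bottom is purely illustrative:

```python
# Sketch of an RSS scraper: parse a feed and extract (title, link) pairs.
# A real version would then fetch each linked article and store it.
import urllib.request
import xml.etree.ElementTree as ET

def parse_feed(xml_text):
    """Extract (title, link) pairs from an RSS 2.0 document."""
    root = ET.fromstring(xml_text)
    return [(item.findtext("title"), item.findtext("link"))
            for item in root.iter("item")]

def fetch_feed(url):
    """Download a feed over HTTP and parse it."""
    with urllib.request.urlopen(url) as resp:
        return parse_feed(resp.read())

# Offline demonstration on a small sample RSS 2.0 snippet:
SAMPLE = """<rss version="2.0"><channel>
  <item><title>Article one</title><link>http://example.org/1</link></item>
  <item><title>Article two</title><link>http://example.org/2</link></item>
</channel></rss>"""

print(parse_feed(SAMPLE))
# → [('Article one', 'http://example.org/1'), ('Article two', 'http://example.org/2')]
```

From there, a daily cron job calling `fetch_feed` and archiving each article to a public directory would be enough for any search engine to pick the content up again.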