Blog

  • A bunch of nerds with maps

    …I think I can define GeoMob this way and I fit this definition perfectly πŸ™‚

    A nice meeting of the London Geo/Mobile Developers Meetup Group yesterday at City University. The talks were of a high level, providing vision, reporting experiences, and showing technologies and clever uses of them. Here’s a short summary.

    Andrew Eland – Mobile Team Lead for Google UK

    A very Google-like talk, showing off pieces of technology together with the company vision. Of course, disappointing if you were expecting an in-depth market analysis, novel ideas, or anything beyond currently public work. But we’re used to that, and it was not a bad talk at all πŸ™‚
    Best quote: “Tokyo is a vertical city”. That’s absolutely true, and it has a direct impact on geo-apps: with shops, clubs, and bars spread vertically across the different levels of buildings (here’s a picture I took of the Keio Sky Garden, for example, and there are hundreds of beer gardens up on the roofs of skyscrapers!), there’s a real need for accurate altitude information and 3D mapping, or at least altitude-enabled maps. The interesting question for me is how to show multi-floor information on the 2D maps currently in use.

    Julianne Pearce, Blast Theory
    An artists’ collective perspective on geo-development. Absolutely intriguing, as it was not the average techie talk you would expect at a GeoMob. I found it personally interesting, as I played the Can You See Me Now? game and even created a modified version of it at the UbiComp Spring School at the Mixed Reality Lab, University of Nottingham, in April 2009, during a workshop on locative game authoring.

    PublicEarth
    They introduced their concept of a web 2.0 site for creating a personal atlas. Basically, it’s about putting photographs and commercial activities of interest on a personal map. They seem to be developing APIs and the possibility of creating widgets, and they deal directly with small businesses (hotels, B&Bs, restaurants, bars) to get them into their database. The idea is that users will be able to tell the (possibly intelligent) system which categories of data they’re most interested in, leading to a kind of customised Michelin guide.
    On monetization, they have a three-fold strategy:
    – contextual advertising, made more effective by the fact that users are genuinely interested in what they put in their atlas
    – share of profit on direct bookings
    – [long-term] user base providing more content, improving quantity and quality of contextual data in a positive feedback loop, possibly making it interesting to other companies

    Laurence Penney, SnapMap
    My favourite talk of the night. Laurence has been longing for a way of placing photographs precisely on a map for more than ten years.
    I was astonished to see him doing many of the things I would have liked to see in websites like Flickr, things I’ve been discussing for ages with my friends and colleagues! Using GPS data, a compass, waypoints, directions, focal length, and all the other data associated with a photograph, Laurence is developing a website that lets users navigate those pictures, even creating 3D views of them like the University of Washington team did with Building Rome in a Day. Funnily, he started all of this before GPS/compass-enabled devices were available, writing down all of his data in a notebook, and he even had the police inquiring why he was taking pictures of Parliament (unfortunately, I have to say, he’s not alone -_-).

    Mikel Maron – Haiti Earthquake OpenStreetMap Response
    Mikel explained what OpenStreetMap did to help in Haiti. Disaster response relies heavily on up-to-date maps of buildings, streets, and resources, and OSM quickly managed to get that done. Many thanks to him and to all the OSM folks for showing the world that mapping can help people even when profit is left out of the picture.

  • HootMonitor: a Twitter app with a strategy

    Ollie Parsley is a developer from Dorset whom I’ve been following with much interest since his first appearance at the London Twitter Devnest last May (you might remember I blogged about it), as his work often highlights mind-boggling problems in a developer’s everyday life (read about his Cease & Desist experience, for example).

    HootMonitor is his latest Twitter application, even if I would say it’s reductive to call it a “Twitter application”. As introduced at the last Devnest, HootMonitor is, simply speaking, a website monitoring tool that uses Twitter as a communication channel. That is:

    • you get an account on HootMonitor linked to your Twitter account
    • you add a website you want monitored
    • HootMonitor periodically checks the website for you
    • the service sends you a Twitter direct message, e-mail, or SMS if the website goes down
    • you also get aggregate status reports (uptime and downtime, average response time, etc.).
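
    Just to make the mechanics concrete, here’s a minimal sketch of such a check loop in TypeScript (my own illustration, not Ollie’s actual code; it assumes Node 18+ for the global fetch, and the notify function merely stands in for Twitter’s direct-message delivery):

    ```typescript
    // Minimal sketch of a HootMonitor-style uptime checker (illustrative only).
    type CheckResult = { up: boolean; responseMs: number };

    async function checkSite(url: string, timeoutMs = 10_000): Promise<CheckResult> {
      const started = Date.now();
      try {
        const res = await fetch(url, { signal: AbortSignal.timeout(timeoutMs) });
        return { up: res.ok, responseMs: Date.now() - started };
      } catch {
        return { up: false, responseMs: Date.now() - started };
      }
    }

    // Placeholder: a real service would call Twitter's DM endpoint, e-mail, or SMS here.
    async function notify(message: string): Promise<void> {
      console.log(`[DM] ${message}`);
    }

    // Check every minute and send a notification only when the state changes.
    function monitor(url: string, intervalMs = 60_000): void {
      let wasUp = true;
      setInterval(async () => {
        const { up, responseMs } = await checkSite(url);
        if (up !== wasUp) {
          await notify(up ? `${url} is back up (${responseMs} ms)` : `${url} is DOWN`);
          wasUp = up;
        }
      }, intervalMs);
    }

    monitor("https://example.com");
    ```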

    As there has been much interest lately in the use of Twitter as a corporate tool, and never-ending discussion about a business model that would let Twitter monetise its success, it looks like Ollie has once again touched on live issues, and he has handled the whole process of bringing this service to users in a way that resembles a classic case study from the literature. I believe HootMonitor is going to be an interesting and possibly successful experiment, for the following reasons:

    • Mashup use of Web 2.0 technologies: HootMonitor is not the first attempt at building an application on top of Twitter, and many such mashups have received extensive press coverage. Nonetheless, HootMonitor is, as I’m going to explain, the first application to deliver a service over Twitter that combines intrinsic usefulness, a business model, and a good “marketing” strategy.
    • Useful service: HootMonitor adds value to the user experience by solving a real problem without disrupting users’ lives. There are plenty of monitoring tools out there, but not many generate reports in a way that integrates seamlessly into people’s lives and jobs.
    • Freemium model: this is the most interesting aspect of HootMonitor. It can be used for free, but there are premium functionalities you can get by paying a (reasonably priced) subscription. As far as I’m aware, this is the first application with such a business model to emerge on top of the Twitter API. There are plenty of ways to try the service for free: you can experience all of its usefulness without paying a single penny. The functionalities you pay for, though, are worth the price (for example, personalised statistics or mobile text messages). Many other successful Twitter applications have no business model at all, and it’s hard to imagine how they will ever generate profit (unless they’re used as advertising for other products or services).
    • Marketing strategy: Ollie had been developing HootMonitor for some months, letting the users of his other apps and his Twitter followers know about the idea. The steps here were a “corporate” HootMonitor blog, a Twitter account to engage with potential users, and a small company under whose name to operate (HootWare). Moreover, HootMonitor launched the very night after its presentation at the Devnest. I believe this was a smart marketing move that got the service the highest possible exposure.

    Naturally, I can’t forecast whether or not HootMonitor will be a successful venture, but I’m optimistic about it, and of course I wish Ollie well. As I’m finding it very useful for my own websites, and I’m aware of many other people trying it, given its strategy and model it’s likely we’ll be hearing more about it in the short (and maybe longer) term.

  • The impossibility of appropriate recommendations

    I’ve recently finished reading Hofstadter’s “Gödel, Escher, Bach”, after three years and a number of failed attempts and restarts. Of the main topics it touches, I found its approach to the problem of automatic natural language understanding and generation the most interesting. And I feel that this problem is intrinsically related to that of generating recommendations for users (OK, this is not a great discovery, I must admit).

    The problem can be put simply as follows. Imagine we have a language generator we can ask to create sentences. We could:

    • ask it to create correct sentences (i.e. grammatically correct sentences – this is somewhat possible)
    • ask it to create meaningful sentences
    • ask it to create funny sentences

    The three requests above carry different attributes, whose exact meaning is open to discussion. As you can imagine, funny implies meaningful, and meaningful implies correct, which means that generating such sentences is increasingly hard. Moreover, while almost everyone can, within certain boundaries, generate a correct sentence, the characteristics of a meaningful sentence are much hazier (what is meaningful to me may not be meaningful to you), and a funny sentence needs its real, underlying meaning to differ from its apparent one. Notice that attributing these properties to a correct sentence is increasingly personal, too. The attribution of meaning is an intrinsically human activity, as is well known to programming language designers and logicians, who deal with concepts such as syntax and semantics.
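
    To make the first point concrete, here is a toy sketch (my own illustration, not from the book): a context-free grammar can reliably produce grammatically correct sentences, yet nothing in it enforces meaning, let alone humour.

    ```typescript
    // Toy context-free grammar: every output is grammatically correct English,
    // but any meaning is accidental -- "correct" is the only enforceable property.
    const grammar: Record<string, string[][]> = {
      S: [["NP", "VP"]],
      NP: [["the", "N"]],
      VP: [["V", "NP"]],
      N: [["idea"], ["map"], ["sentence"]],
      V: [["recommends"], ["generates"], ["surprises"]],
    };

    const pick = <T>(xs: T[]): T => xs[Math.floor(Math.random() * xs.length)];

    function expand(symbol: string): string {
      const rules = grammar[symbol];
      if (!rules) return symbol; // terminal word
      return pick(rules).map(expand).join(" ");
    }

    console.log(expand("S")); // e.g. "the map recommends the idea"
    ```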

    How all of this relates to the field of recommender systems should be obvious by now. A RS is a tool that, more or less, tries to understand what is meaningful to a user in order to provide him or her with suggestions. What a general-purpose RS should do is understand the meaning of objects and find similar objects. The thing is, the meaning of objects, especially when expressed in natural language, is not easy to establish, and in general cannot be established at all.
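
    In practice, RS sidestep meaning altogether: a typical item-based approach simply compares rating vectors, as in this minimal sketch (illustrative only, not any particular production system):

    ```typescript
    // Item-based similarity: "meaning" is reduced to co-occurring user ratings,
    // which is precisely the limitation discussed above.
    type Ratings = Map<string, number>; // userId -> rating of one item

    function cosine(a: Ratings, b: Ratings): number {
      let dot = 0, normA = 0, normB = 0;
      for (const [user, ra] of a) {
        const rb = b.get(user);
        if (rb !== undefined) dot += ra * rb;
        normA += ra * ra;
      }
      for (const rb of b.values()) normB += rb * rb;
      return normA && normB ? dot / (Math.sqrt(normA) * Math.sqrt(normB)) : 0;
    }

    // Recommend the k items whose rating vectors most resemble the target's.
    function similarItems(target: string, items: Map<string, Ratings>, k = 3) {
      return [...items.keys()]
        .filter((id) => id !== target)
        .map((id) => ({ id, score: cosine(items.get(target)!, items.get(id)!) }))
        .sort((x, y) => y.score - x.score)
        .slice(0, k);
    }
    ```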

    I recently reviewed a paper for a friend doing research in RS that reported an example similar to this: “I’m at home, and would like to find a restaurant in Angel Islington for tonight”. Contextual information (and the subsequent activity and intent inference) is the interesting part of this request for a recommendation: what matters is not where I am now, but where I would like to go. This particular case is simple to deal with, but what about all those situations in which context is implicit?

    You will object that a general-purpose RS cannot exist and wouldn’t be that useful. The truth is, however, that even a limited-domain RS, such as one for books or DVDs, may encounter similar problems. I’ve been discussing the possibility of a “surprise me” button, proposed by Daniele Quercia. The idea is that sometimes, as a user, I would like to be suggested something new, rather than something similar to what I’ve done in the past or to what my friends like. But this concept opens a very deep question: to what extent should the surprise go? In other words, it’s not possible to understand what kind of recommendation the user would like to receive. What a RS can do is detect users’ habits or activities, and always provide a similarity-based suggestion.

    So here’s my view of the limitation of current RS: they cannot – as of today – provide a recommendation to a user who likes to try something new. RS are for habitués.

    A stupid example: I read four books in a row by the English author Jonathan Coe. After that, Amazon kept on recommending me other books by Coe, whilst of course I wanted a break from him.

    Any objections? E.g.:
    – meaning in current RS is not expressed in natural language: true, but this is itself a limitation of those systems. In practice it means they cannot give suggestions other than those based on rating values. For example, “rate your liking of the book from 1 to 5” will never express whether the user would actually read it again, or would recommend it to others. A structured representation does not capture real meaning, and it restricts the gamut of representable information about the user.
    – no RS is general purpose: I think even limited-domain RS suffer from the same problem, as no RS can infer a user’s feelings.

    I’m not proposing silver bullets here, and of course not all research and applications in RS should be trashed. Some possible research and development directions:
    – use direct social suggestions: whom would you suggest this to? (similar to direct invitations on Facebook – where, nonetheless, all the limitations of this approach are evident)
    – deal with changes in user tastes and try to predict them
    – use more contextual information
    – try inference from natural language, for example inferring a user’s tastes from his or her long reviews
    – better user profiling based on psychological notions and time variance: TweetPsych, for example, has tried profiling users based on their tweets, which are short and scattered across time.

  • At the #GeoMob

    Hey folks, it’s been a long time since I blogged – I’ve been very busy at work and at home! Let me resume my techie stuff by summarising some of my thoughts after the #GeoMob night at the British Computer Society on 30 July.
    The #GeoMob is the London Geo/Mobile Developers Meetup Group, and it organises meetings of developers interested in the geo/social/mobile field, usually with participation from industry leaders (Yahoo!/Google), businesses, and startups.

    These are my thoughts about the night, grouped by talk:

    Wes Biggs, CTO Adfonic

    • Adfonic is a mobile advertising provider that launched on 1/7/09 (their home page doesn’t work, though; you need to go to http://adfonic.com/home)
    • what about user interaction and privacy? If I haven’t got it completely wrong (reading here, it seems I haven’t), the actual user experience is some kind of advertisement bar in your mobile application. If it’s just that, it’s simply the porting of an old desktop idea to the mobile environment. The problem is that that was not a hugely successful idea, and here the user is rewarded even less than with the desktop bars (I guess by getting the app for free?). I’m not sure this can be a really successful venture unless the ads are smartly disguised as “useful information” – but, hey, I’m here to be refuted πŸ˜›
    • getting contextual information is difficult: even if you know the location of the user, you don’t know what he or she is doing. Good motto from the talk: “advertisers are not interested in where you are, but in where you’re at”. But how to get and use this contextual information was not really clear from the talk. From their website’s FAQ, I read:
      • You can target by country or region.
      • You can target by mobile operator.
      • You can define the days of the week and the time of day you wish your ad to be displayed in the local market.
      • You can choose to target by demographics by selecting gender and age range profiles.
      • You can choose devices by platform, brand, features and individual models.
      • You can also choose to assign descriptive words for your campaign using tags. We compare these tags to sites and apps in the Adfonic network where your ad could be displayed, improving your ad’s probability of being shown on a contextually relevant site.

      This raises a couple of privacy concerns, as well as technical ones πŸ˜‰ (a sketch of how such targeting rules might be modelled follows this list)

    • I would say this talk raised more questions than it answered – nonetheless it was, at least for me, good for brainstorming about mobile targeting
    • some of the issues with this service – which I’m really interested in watching, to see where it heads – are, interestingly, the same as those of a paper about leisure mobile recommender systems that I reviewed for MobBlog
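
    As a brainstorming aid, here is how targeting rules like the ones in that FAQ might be modelled (purely my own sketch of the concept; Adfonic’s actual system is unknown to me, and all names are invented):

    ```typescript
    // Hypothetical targeting model: a campaign matches a request when every
    // criterion it specifies matches; unspecified criteria match anything.
    interface AdRequest {
      country: string;
      operator: string;
      hour: number;              // local time of day, 0-23
      gender?: "m" | "f";
      age?: number;
      tags: string[];            // descriptive tags of the hosting site/app
    }

    interface Campaign {
      countries?: string[];
      operators?: string[];
      hours?: [number, number];  // inclusive local-time window
      gender?: "m" | "f";
      ageRange?: [number, number];
      tags?: string[];
    }

    function matches(c: Campaign, r: AdRequest): boolean {
      if (c.countries && !c.countries.includes(r.country)) return false;
      if (c.operators && !c.operators.includes(r.operator)) return false;
      if (c.hours && (r.hour < c.hours[0] || r.hour > c.hours[1])) return false;
      if (c.gender && r.gender !== c.gender) return false;
      if (c.ageRange && (r.age === undefined || r.age < c.ageRange[0] || r.age > c.ageRange[1]))
        return false;
      // Tag overlap is what makes the ad "contextually relevant".
      if (c.tags && !c.tags.some((t) => r.tags.includes(t))) return false;
      return true;
    }
    ```

    Note how little of this says anything about what the user is doing right now – which is exactly the “where you’re at” gap from the talk.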

    Henry Erskine Crum, @henryec, Co-founder of Spoonfed

    • Spoonfed is a London-based web startup (September 2008) that focuses on location-based event listings
    • 12 people work there – which makes it interestingly big for a startup
    • very similar to an old idea of mine (geo-events, but in a more social-networking fashion) – which reminds me that I need to act fast when I have such ideas πŸ™‚
    • I would have liked the talk to dig deeper into details about the user base, mobile apps, and HCI issues, but it was not a bad talk, and it provided a very operational and yet open-minded view of how the service works and evolves
    • oh, and Henry was congratulated as the only guy in a suit (:P lol – credits to Christopher Osborne)

    Gary Gale, @vicchi, Director of Engineering at Yahoo! Geo Technologies, with a talk about Yahoo! Placemaker

    • get the slides for this talk here
    • Yahoo! Placemaker is a useful service for extracting location data from virtually any document – a task also known as geoparsing. As the website says: “Provided with free-form text, the service identifies places mentioned in text, disambiguates those places, and returns unique identifiers for each, as well as information about how many times the place was found in the text, and where in the text it was found.” (a rough sketch of a call follows this list)
    • I find it very interesting, especially as it is usable with tweets and blog posts, and it can help create very interesting mashups
    • the only issue: its granularity stops at the neighbourhood level – which is perfectly good for some applications, but I’m not sure it is for real-time, location-intensive mobile apps
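
    From memory, Placemaker was a simple REST call; the sketch below reflects my recollection of it (the endpoint and parameter names may be off, and the service requires a Yahoo! application id), so treat it as pseudocode for the request shape:

    ```typescript
    // Hedged sketch of a Placemaker geoparsing call; endpoint and parameter
    // names are from memory and may not be exact.
    async function geoparse(text: string, appid: string): Promise<string> {
      const body = new URLSearchParams({
        documentContent: text,      // the free-form text to geoparse
        documentType: "text/plain",
        appid,                      // Yahoo! developer application id
      });
      const res = await fetch("http://wherein.yahooapis.com/v1/document", {
        method: "POST",
        body,
      });
      return res.text(); // XML listing each place found, disambiguated, with identifiers
    }
    ```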

    Steve Coast, @SteveC, founder of OpenStreetMap and CloudMade, with a talk about Ubiquitous GeoContext

    • OpenStreetMap can be considered, in a way, the community response to Google Maps: free maps, community-created and maintained, freely usable – with CloudMade being a company focused on using map data to let developers go geo
    • the motto from this talk is “map, please get me to the next penguin in this zoo” – that is, extreme geolocation and contextual information
    • the success factors of a geo app – in my view also applicable to many Internet startups – summarised in three points:
      • low cost to start
      • no licensing problems
      • openness / community driven effort
    • it was an absolute delight to listen to this talk, as it was fun but also rich in content – the highly visual presentation was extremely cool; I hope Steve is going to put it online!

    Oh, and many thanks to Christopher Osborne, @osbornec, for organising an amazing night!

  • Aggregated values on a Google Map

    UPDATE 27/08/09: the functionality of my version of MarkerClusterer has been included in the official Google code project; you can find it in gmaps-utility-library-dev. The most interesting part of that library, for me, was the so-called MarkerClusterer.

    Imagine you need to show thousands of markers on a map. There may be many reasons for doing so, for example temperature data, unemployment distributions, and the like. You want a precise view, hence the need for a marker in every town or borough. What Xiaoxi and others developed is a marker able to group all the markers in a certain area: this is the MarkerClusterer. Your map gets split into clusters (whose size you can specify – though hopefully more fine-grained ways of defining areas will become available), and for every cluster you show a single marker, labelled with the total count of markers in that cluster.

    I thought this opened the way to something more precise, capable of supporting reasoning over map data. Once you have a MarkerClusterer, wouldn’t it be wonderful to display some other datum on it, rather than the simple count? For example, in the temperature-distribution case, I would be interested in seeing the average temperature of the cluster.

    That’s why I developed this fork of the original class (and I’ve applied to get it into the main project – fingers crossed!), which lets you do the following:

    • create a set of values to tag the locations with (so that you technically attach a value to each marker)
    • define a function that returns an aggregate value over the values you passed, computed automatically for each cluster
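
    In code, the idea boils down to something like the following (a simplified sketch of the concept rather than the actual class, which of course works on top of the Google Maps API):

    ```typescript
    // Simplified sketch of value-aggregating clustering: markers carry values,
    // clusters are grid cells, and each cluster's label comes from a
    // user-supplied aggregate function instead of the bare marker count.
    interface Marker { lat: number; lng: number; value: number }

    type Aggregate = (values: number[]) => number;

    function clusterLabels(
      markers: Marker[],
      cellSizeDeg: number,
      aggregate: Aggregate,
    ): Map<string, { count: number; label: number }> {
      // Bucket markers into grid cells, like MarkerClusterer's grid-based clustering.
      const cells = new Map<string, number[]>();
      for (const m of markers) {
        const key = `${Math.floor(m.lat / cellSizeDeg)}:${Math.floor(m.lng / cellSizeDeg)}`;
        let bucket = cells.get(key);
        if (!bucket) { bucket = []; cells.set(key, bucket); }
        bucket.push(m.value);
      }
      // One marker per cluster, labelled with the aggregate value.
      const labels = new Map<string, { count: number; label: number }>();
      for (const [key, values] of cells) {
        labels.set(key, { count: values.length, label: aggregate(values) });
      }
      return labels;
    }

    // e.g. show the average temperature (or death rate) per cluster:
    const average: Aggregate = (vs) => vs.reduce((a, b) => a + b, 0) / vs.length;
    ```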

    That’s all. The result is very simple, but I believe it is a good way to start thinking about how the visualisation of distributed data affects the usability of a map and the understanding of the information it carries. Here’s a snapshot of the two versions, the old one on the left (bearing just the count) and the new one on the right (with average data). The data here refer to NHS hospital death rates, as published here. If you want to see the full map for this example, click here.

  • Who wants to be recommended?

    There’s a lot of ongoing research on recommender systems, fostered by the Netflix Prize.

    Recommender systems are basically pieces of software that offer users suggestions in a given domain. Usually they are specialised: Amazon’s recommender system recommends books, Last.fm’s recommends songs, and the like.

    The key to recommendation lies in a few different aspects: I may be suggested things similar to those I previously chose, or things my friends like. There’s a whole theory behind this, so I won’t bore you; to learn more, use this site as a starting point.
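
    As a toy illustration of the “things my friends like” flavour (my own sketch, not any production system): score the items the user hasn’t seen by how many friends liked them.

    ```typescript
    // Friend-based recommendation: rank unseen items by how many of the
    // user's friends liked them -- exactly the "more of the same" logic
    // questioned in this post.
    type Likes = Map<string, Set<string>>; // userId -> liked item ids

    function fromFriends(user: string, friends: string[], likes: Likes): string[] {
      const mine = likes.get(user) ?? new Set<string>();
      const scores = new Map<string, number>();
      for (const f of friends) {
        for (const item of likes.get(f) ?? []) {
          if (!mine.has(item)) scores.set(item, (scores.get(item) ?? 0) + 1);
        }
      }
      return [...scores.entries()]
        .sort((a, b) => b[1] - a[1])
        .map(([item]) => item);
    }
    ```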

    My problem with RS is this post’s title: who wants, or needs, recommendations? Is it always true that I like the same kind of things? I’m surely a good counterexample. I love Star Trek. I have watched, and would happily watch again, every single episode. Nonetheless, I hate Star Wars. I find it boring. I don’t like sci-fi in general. No Terminator, no Robocop; I can’t even name other non-Trek sci-fi. So my hypothetical RS should know that I don’t like every kind of sci-fi film, only Star Trek. Maybe my friends share this view (though as far as I know, no one really does), so it could try checking my friends’ profiles first.
    If you take a look at my music library (or simply explore my Last.fm profile), you could define it as eclectic, at the very least. Someone would say it’s schizoid.
    Moreover, sometimes I might want to do things my friends don’t. Negative recommendations could be part of the solution, but the underlying algorithm would be just the same.

    So what would be a good recommendation for me? Well, usually what is important to me is surprise. I like many different things; the parameters that capture what I like are maybe originality, quality, …, but maybe they are simply unknown. Some people have suggested a “surprise me” button to accomplish this task. But it’s not that easy, even if I know what I don’t like.
    Hence, the final questions: how can we represent the tastes of a user? How can we represent his or her reactions (or feelings) towards something expected or unexpected? How can I represent what I would like recommendations on, and what I wouldn’t?

    Stay tuned to the RecSys conferences to see whether someone comes up with an answer; my guess is that we’ll be seeing lots and lots of new recommender systems in the coming years, and each one will be confronted with these issues.

  • Wolfram Alpha and user experience

    There are a lot of ongoing discussions about the power of Wolfram Alpha. I think most of these conversations are flawed by the argument that Wolfram Alpha does not find you enough information.

    I believe the mistake lies in the way the press has commonly introduced the service. Wolfram himself has not been clear enough, and when he has, the press has of course misinterpreted him. Wolfram Alpha is not a search engine.

    Many articles and blog posts have been published on the topic “will Wolfram Alpha be the end of Google?”
    The problem is that the two services are actually very different. Wolfram Alpha is a self-described computational knowledge engine, not a search engine like Google. Google is able to return millions of results for a single search, whilst Alpha returns a single, often aggregated, result about a topic.

    Alpha is basically an aggregator of information: it selects information from different data sources and presents it to the user in a nice, understandable way. Google is more like searching a phone directory. So you’re supposed to ask the two services different questions.

    Of course, Alpha makes mistakes. A curious example I’ve found is the search for the keyword “Bologna”. Bologna is primarily the name of a town in northern Italy (the one where I attended university); it is also the name of a kind of cured meat, commonly known as “Mortadella”, especially outside Bologna itself. In Milan, for example, Mortadella is commonly called Bologna.

    Well, search for Bologna on Google, and compare it with results on Alpha.

    Google returns mostly pages about the town of Bologna and its football team, whereas Alpha gives you nutritional information about Mortadella.

    Is this a ‘mistake’? I think the only mistake is in the expectations users have of Alpha: it yields results from a structured knowledge base, hence its index is not as general as Google’s. Nonetheless, I believe there is at least one problem in the user interface that should be corrected: the search box. It’s exactly the same as Google’s – same shape, same height, same width. But is there any alternative way of presenting an answering engine on the Internet?

    What I think is that more HCI research is needed to help users understand the goals and capabilities of a service like Alpha. If users keep thinking of it as a search engine, it will never succeed.

    Just to have a hint of what Alpha should be about, try this search.

  • The hunt for a Google job

    The first time I got in touch with a Google recruiter was more or less a week after I’d decided to enrol for a PhD. Apparently this – very kind, I must say – recruiter was browsing university pages and found my profile. At the time, apart from telling her that I was due to start a PhD in a few months, I was very interested in systems administration. She did her best to convince me to apply as a developer. Weird, but probably there are some rules at work here. Of course, after a couple of interviews in which I told them that I was not interested in moving to Switzerland and that I wanted to do a PhD, they decided not to pursue my profile.

    Now a recruiter has contacted me again – a week after I started a new job. This time I’ve moved on to being a developer. Guess what? She wants me to apply for a systems administration position.

    Google, you have knowledge of everything on the network. How about tuning your timing and selecting me for something I’m actually skilled at? πŸ™‚

  • Twitter and the future of RSS

    I read some interesting thoughts on the Mashable blog about the relationship between RSS and microblogging. If you think about the two technologies, there are certainly some evident similarities: they both deliver a stream of short items with high semantic concentration (*).

    With RSS you usually also get a larger amount of text, but to keep things simple, we don’t lose generality if we see RSS as just a list of links about some topic.

    Microblogging is in fact more general: it allows personal communication as well as link sharing. The content of a message must necessarily be compressed into 140 characters. That’s why I think every message can be seen as a set of keywords – leaving out common words, articles, prepositions, and the like.
    What you can do with these keywords – for tweets containing URLs, of course – is use them as tags for the URL. Hence, you can basically build, over Twitter (**), an RSS-like feed for whatever topic you like; moreover, you can build a folksonomy of tags for it. Not bad, don’t you think?
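
    A rough sketch of that idea (the stopword list and heuristics are purely illustrative):

    ```typescript
    // Treat a tweet as tagged links: extract URLs, drop stopwords, and use
    // the remaining words as tags for those URLs.
    const STOPWORDS = new Set(["a", "an", "the", "of", "in", "on", "to", "is", "and", "for"]);

    function tagTweet(tweet: string): { urls: string[]; tags: string[] } {
      const urls = tweet.match(/https?:\/\/\S+/g) ?? [];
      const tags = tweet
        .replace(/https?:\/\/\S+/g, " ")
        .toLowerCase()
        .split(/[^a-z0-9#]+/)
        .filter((w) => w.length > 2 && !STOPWORDS.has(w));
      return { urls, tags };
    }

    // A topic feed is then just: every tweeted URL whose tag set contains the topic.
    const { urls, tags } = tagTweet("Great intro to geoparsing http://example.com #geo");
    // urls -> ["http://example.com"]; tags -> ["great", "intro", "geoparsing", "#geo"]
    ```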

    The question here is: what is the future of RSS, given the increasing diffusion of Twitter, Twitter-like sites, and services built over Twitter? (Yet again, take a look at @footytweets or @bakertweet and you will see the potential.)

    Many newspapers already use Twitter as a means of broadcasting updates and news alerts (the two important examples here: @bbcnews and @cnn). Thousands of users already treat these tweets as a replacement for their RSS aggregators. The success of Twitter as a news-alert broadcaster relies on its higher versatility compared with its RSS counterpart: you can use keywords, hashtags, and comments together with URLs. You may object that all of these features are more or less already present in RSS. Nonetheless, their usage is not as immediate as in Twitter, and there is no single point of aggregation like the one Twitter offers.

    Naturally, there’s always a dark side πŸ™‚ Finding relevant content on Twitter is not an easy task. There are plenty of services claiming to recommend users you might be interested in (see Mr. Tweet for the most popular example, and an interesting application of a recommender system). However, a killer application here has yet to appear, and no single recommender can get you the real number of interesting tweets you would like to receive (also partially due to serious limitations in the search APIs that Twitter makes available).

    This is what I would call filtering good content in; we also need to mention that filtering bad content out of tweets is an issue that hasn’t been solved either. Bad users are easy enough to manage: spammers usually get identified and blocked quickly, thanks to the intrinsically tight interest twitterers have in the content of what they read (in other words, as soon as they realise it’s a spammer, they report it to Twitter). But what about filtering out content that is simply not interesting? A user you follow may write something irrelevant to the reasons you usually read his or her tweets, and you might like to read only those you are interested in. This is still an open issue.

    Addressing the bad points is not an easy task. Nonetheless, I must say that I already see microblogging as a good replacement for RSS. Many users are starting to use Twitter this way. And I realise, as I write this post, that I’m slowly doing the same: removing RSS feeds I don’t read anymore because I follow their updates on Twitter.

    (*) this is a definition I coined for a research proposal I still like a lot πŸ™‚
    (**) OK, I’m using the terms Twitter and microblogging interchangeably, but if you think about the expression “Google it” you’ll realise that the winner takes it all – including the right to name the appropriate service.

  • Old media censorship on new media?

    I took part in that nice event called the “Twitter Developer Nest”, which is basically a meetup for people interested in developing applications over Twitter.

    There were nice talks and presentations of old and new Twitter apps, including an amazingly funny presentation by @aszolty on his BakerTweet system (which quite interestingly merges three things I’ve been looking at lately: Twitter, Arduino, and the idea of bringing pieces of technology to uncommon areas).

    What I found very mind-stimulating was the talk by Ollie Parsley (@ollieparsley) about his FootyTweets service. Basically, the service sent out tweets with live match updates, using accounts related to football teams.

    Having become hugely successful (more than 4,000 followers for the Manchester United account alone), he received a “Cease and Desist” notice from Football DataCo (read here and here for some coverage), who are the “owners” of football fixture updates.

    Except for a couple of naive issues (e.g. Ollie used copyrighted club logos to represent the teams, which he promptly replaced with self-designed images), the C&D notice was focused on the live-update service itself. Which raises several interesting points.

    I see Twitter (and many other people do, too) as a shout-from-your-window-and-see-who-listens service. That is, Ollie was basically telling everyone “look, Arsenal scored a minute ago”. Needless to say, he didn’t charge any money (though from the point of view of Football DataCo this does not matter, as it’s lost profit anyway).

    Legally speaking, it’s an interesting issue, as several questions can be raised:
    – what if I text a friend from the stadium, telling him live that our team has scored a goal?
    – what if I do the same using Twitter?
    – what if Ollie retweets me, rather than running the service on his own? Who is infringing data ownership then, me or him?
    – are we sure that letting people know fixtures live actually represents a threat to Football DataCo’s profits? What if it instead drives *more* people to be interested in getting live fixtures, with better service levels, turning into more profit?

    No answers for now, and Ollie has had to stop the match-update service.

    What is evident to me, though, is that this is the quintessential legal case for the Web 2.0, which is by nature social/collaborative, real-time, and built on mash-ups. I honestly think that old-media law about data ownership and copyright can be applied to Web 2.0 only by blocking all such services, tout court. In fact, if you focus on the example above – retweeting from other people, which is exactly the form of crowdsourcing Ollie is thinking of – you will soon realise that you cannot identify who is committing the infringement of which law. It’s the crowd, in a certain sense, but your acts alone are not against the law.
    Given that one of the fundamental principles of law is that responsibility for an unlawful act is personal, who should be sued?

    I don’t have much hope here, but I think we need some form of Law 2.0 – which does not mean, as web opponents usually claim, that the Web doesn’t want rules.