Highlights from WhereCampEU 2012

UPDATED: added the names of two speakers: Hugues Bouchard and Diego Arechaga.

We learn from Ed Parsons, Google’s Geo man, that Google Maps and OSM are not fierce enemies. Au contraire, “Google wants OpenStreetMap to succeed”. So I took a picture of Ed presenting OSM, announcing it cheekily as “Ed’s defection to OSM” (no, that’s not true).

All of this might have earned him the title of Saint Parsons (thanks, Henk Hoff).

I agree on one thing: Google Maps and OSM are not competitors. OSM is a database providing geographical data. You can use it to create a cartographic rendering, but that is not its main goal; people seem to forget this.


Yahoo’s Adam Rae showed how scale-space theory can be used to infer regions in a map from the geographical distribution of tags. Although Adam’s results are great in their own right, and the mere ability to detect boundaries via tags is a fascinating possibility, I find this extremely interesting also for its implicit dynamism: tags about a location can change.

Ethnographers who wish to study how a place is perceived, how it changes, and how fluid its boundaries are might use this technique, in both space and time. An interesting, if intentionally funny, example of such a perception shift shows that the boundaries of a place change according to where the observer stands. Moreover, a temporal component might be added to the model, assessing how people’s perception of a given place changes over time. Take, for example, an expanding neighbourhood.
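As a toy illustration of the general idea (not Adam’s actual scale-space method), one can bin geotagged occurrences of a tag into a lat/lon grid and treat the unusually dense cells as the tag’s implied region. Everything here, from the grid size to the thresholds, is an assumption made for the sketch:

```python
from collections import Counter

def infer_region(tagged_points, tag, cell=0.01, min_count=3):
    """Toy region inference: bin geotagged occurrences of `tag` into a
    lat/lon grid and keep the cells where the tag appears often enough.
    Returns a (min_lat, min_lon, max_lat, max_lon) bounding box of the
    dense cells, or None if the tag has no dense cell at all."""
    counts = Counter()
    for lat, lon, t in tagged_points:
        if t == tag:
            counts[(round(lat / cell), round(lon / cell))] += 1
    dense = [c for c, n in counts.items() if n >= min_count]
    if not dense:
        return None
    lats = [i * cell for i, _ in dense]
    lons = [j * cell for _, j in dense]
    return (min(lats), min(lons), max(lats), max(lons))
```

Running something like this over successive time slices of the tag stream would give a crude view of exactly the temporal drift discussed above.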


Jan Nowak, an engineer at Nokia, showed how the Nokia Maps API can be used to go “from data to meaning in 20 minutes”. Someone complained that “Nokia doesn’t do anything that Google hasn’t already done”. I don’t think this holds true: first of all, there are some genuinely innovative UI elements in Nokia Maps (see the WebGL version). More importantly, they are adding a welcome element of competition to existing services. Even if Nokia Maps were just another map service with features comparable to Google’s, that would still benefit developers and the geo community at large. It’s a challenge that will bring innovation. Why complain?


Jeremy Morley from the University of Nottingham gave attendees the latest updates on the OSM-GB project, whose aim is to measure and improve the quality of OpenStreetMap in Great Britain. This is a very interesting project with useful outcomes in both academic and applied areas. The project aims to answer questions such as how authoritative a crowd-sourced map can be, and to show how multiple sources, including crowd-sourced maps, can be combined to improve the overall quality of geographical information. One very important aspect of the project is precisely that it contributes to OSM while at the same time offering a model for keeping non-crowdsourced mapping efforts up to date.


Having contributed to the project and to the presentation, I’m very happy with the reception that Taarifa has received, mostly thanks to a great presentation by Mark Iliffe, a PhD student at the University of Nottingham. His public engagement skills are well known and they earned Taarifa promises of contribution by many of the people attending. There was also a mini hack-session with over 10 people willing to discuss Taarifa and envision ways to improve it and put it to good use.


Day two opened with Jan Van Eck telling the audience why Esri is interested in WhereCamp and how they think they can contribute to the geo-computing community.

He showed some of Esri’s work in map production and user experience, with the goal of making a case for their philosophy, which is not always in tune with that of OpenStreetMap. However, it must be said that Esri’s contribution to public awareness of the “geographical issue” is certainly a big one.


Another interesting piece of work by Yahoo, led by Vanessa Murdock with contributions by Hugues Bouchard and Diego Arechaga (pictured below), showed how hash tags can be used to infer hyperlocal trends, similarly to what Adam presented earlier on.
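A minimal sketch of the underlying intuition (not the method from the talk): a hashtag is “hyperlocally trending” when it is over-represented in one area compared with the whole corpus. All names and thresholds below are my own assumptions:

```python
from collections import Counter

def hyperlocal_trends(local_tags, global_tags, min_local=2):
    """Toy trend scoring: rank hashtags by how over-represented they
    are in one area relative to the whole corpus (a frequency lift)."""
    local, global_ = Counter(local_tags), Counter(global_tags)
    n_local, n_global = sum(local.values()), sum(global_.values())
    scores = {}
    for tag, c in local.items():
        if c < min_local:
            continue  # ignore one-off tags
        # Assumes the global stream contains the local one; fall back
        # to the local count if a tag is somehow missing globally.
        lift = (c / n_local) / (global_.get(tag, c) / n_global)
        scores[tag] = lift
    return sorted(scores, key=scores.get, reverse=True)
```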


Goldsmiths design student Francisco Dans led a very practical session on crunching real-time data and displaying the results on a map, with tips and tricks on using Processing to parse XML/KML. He showed how he used this framework to display San Francisco’s trams on a map in real time. Francisco recommends UnfoldingMaps as a tool for creating interactive thematic maps and live geovisualizations.
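The same real-time plumbing is easy to reproduce outside Processing. Here is a hedged Python sketch that parses a hypothetical vehicle-location XML feed (the element and attribute names are assumptions; real feeds differ) into tuples ready to be drawn on a map:

```python
import xml.etree.ElementTree as ET

# Hypothetical feed shape, for illustration only.
SAMPLE = """<body>
  <vehicle id="1010" routeTag="F" lat="37.7793" lon="-122.4193"/>
  <vehicle id="1021" routeTag="J" lat="37.7481" lon="-122.4299"/>
</body>"""

def parse_vehicles(xml_text):
    """Extract (id, route, lat, lon) tuples from a vehicle feed."""
    root = ET.fromstring(xml_text)
    return [(v.get("id"), v.get("routeTag"),
             float(v.get("lat")), float(v.get("lon")))
            for v in root.iter("vehicle")]
```

Polling a feed like this every few seconds and redrawing the points is essentially all a live tram map needs.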


Ollie O’Brien is a well-known attendee of WhereCamps and GeoMobs, working as a Research Associate at CASA. He introduced his recent work on creating City Dashboards: web aggregators of several data sources about a single city. These are extremely useful central points of information for citizens and tourists. Ollie also showed a fun and mind-boggling piece of software, developed at CASA, that lets users fly the sky as if they were pigeons. Using a Microsoft Kinect controller and Google Earth, the user can “play” with the system by flying among buildings using gestures.


Laurence Penney always draws the more cartographically minded of us to his enlightening talks. This time, he came up with a very insightful talk about unidimensional (or linear) maps. A good example of such a map is Trajan’s Column in Rome, which tells a story in time and space along a single dimension. Of course, a similar technique can be used to represent geographical information. In the past such maps were used by seamen as “rolling maps”, in a way not too dissimilar to what current GPS navigators do when displaying your next step. What was very interesting in Laurence’s talk is how he conveyed the high aesthetic value of these maps.

There were many other open discussions, including Nutiteq’s Jaak Laineste showing their latest SDK for creating and navigating 3D maps, and ITO World’s Peter Miller showing how ITO uses and contributes to OSM by adding layers of data. Philip Kandall of Skobbler introduced some advanced routing features (“routing from A to B is boring”) that will appear in future navigators, including route calculation by the time available for the journey (i.e. treating that time not as a maximum, but as a minimum).
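My reading of that last feature, sketched as code (this is not Skobbler’s algorithm, and the route data is invented): treat the available time as a minimum to fill, and pick the shortest candidate route that still takes at least that long.

```python
def route_filling_time(routes, minutes_to_fill):
    """Among candidate routes (dicts with a 'minutes' duration), pick
    the shortest one that takes at least `minutes_to_fill` minutes:
    the time budget is a floor to fill, not a ceiling to beat."""
    feasible = [r for r in routes if r["minutes"] >= minutes_to_fill]
    if not feasible:
        return None  # no route is long enough for the time available
    return min(feasible, key=lambda r: r["minutes"])
```

With a 60-minute budget and candidates of 30, 75 and 180 minutes, this picks the 75-minute route: long enough to fill the hour, without committing you to the grand tour.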


As is traditional at WhereCamps, a couple of light-hearted moments closed the unconference: Tim Waters‘ geo-yoga, involving stretching positions that mimic country shapes (I guess we could work on a KML representation of such positions?), and Mark Iliffe’s Geo-Locating Geobeers, suggesting how the Taarifa Platform can be used to report beer-drinking locations and levels of… happiness.

(full set of photos)


The past (and future?) of location

I must say, without making it too emotional, that I feel somewhat attached to geo-events at the BCS: my first contact with the London geo-crowd was there over a year ago, at a GeoMob that included a talk by the same Gary Gale who spoke last night. That was, at least for him, one company and one whole continent ago. For the rest of us the “agos” include new or matured geo-technologies: Foursquare, Gowalla, Latitude, Facebook and Twitter places, plus our very own London-based Rummble, and minus some near-casualties (FireEagle).

Some highlights/thoughts from his talk:

The sad story of early and big players
– early players are not always winners: this can happen in a spectacular way (Dodgeball) or more quietly (Orkut has not technically been a commercial success, for example) – but also
– big players are not always winners: it’s all just a little bit of history repeating, isn’t it? Remember the software revolution? The giant IBM didn’t understand it, and a small, agile company called Microsoft became the de facto monopolist. OS/2 is still remembered as one of the epic fails of software. Remember the Internet revolution? The giant Microsoft had its very own epic fail, called Microsoft Network. It took them ages to create a search engine, and in the meantime an agile young company with a Big G became the search giant. Some years later, the aforementioned Orkut, started by Google as a side project, didn’t have the agility or the motivation to resist Facebook. The same might happen with location services.

Power to the people
The problem with big players is that they take the quality of their databases for granted. Foursquare et al. found a way to motivate users to keep the POI database constantly updated by using a form of psychological reward. Something Google hasn’t quite done.

Now monetize, please
Ok, we can motivate users by assigning mayorships and medals. Having a frequently refreshed database is a step ahead. But how do you make money out of it? “Let’s get in touch with the companies and ask for a share of the profit” can work for some brave early adopters, but it won’t take long for companies to realize they can use the data, for free, to do business analysis without even contacting Foursquare. “Become mayor and get a 10% discount”. What other data analysis would motivate them to pay for it? Knowing where a customer goes next? Where they’ve been before? Maybe getting a higher profile in searches, as in Google searches? In the ocean of possibilities, the one certainty is that there isn’t yet an idea that works well. “Even Facebook lacks the time to contact the big players to negotiate discounts”. And for the small players it’s even more difficult (but if Monmouth offers me a free espresso I’ll work hard to become their Mayor!).
The way many companies are trying to sell it is still pretty much old economy: sell the check-ins database to a big marketing company, and so on. Cf. the next point.

Dig out the meaningful data
Ok, we have motivated users to keep our POIs fresh. But they want to be mayors, so they exploit the APIs. Their favourite bar already has a Mayor? They create another instance of the same place. They create their own homes; I’ve seen a “my bed”. Is there an algorithmic way to filter out the meaningless data? Surely not in the general case. Moreover, as Gary stressed, simply “selling your database starts eroding its value”, because the buyer needs to find a use for that mountain of data. As of now, such a use is not evident, because most of the data is not meaningful at all.
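For the easy cases, a heuristic filter can catch duplicate POI instances: flag two entries as suspect when they sit very close together and fuzzily share a name. The thresholds below are illustrative guesses and, as said above, this breaks down in the general case:

```python
import difflib
import math

def is_duplicate(poi_a, poi_b, radius_m=75, name_sim=0.8):
    """Heuristic duplicate check: two POIs are suspect duplicates if
    they are within `radius_m` metres and their names are similar."""
    lat1, lon1 = poi_a["lat"], poi_a["lon"]
    lat2, lon2 = poi_b["lat"], poi_b["lon"]
    # Equirectangular approximation is fine at these tiny distances.
    x = math.radians(lon2 - lon1) * math.cos(math.radians((lat1 + lat2) / 2))
    y = math.radians(lat2 - lat1)
    dist_m = 6371000 * math.hypot(x, y)
    sim = difflib.SequenceMatcher(
        None, poi_a["name"].lower(), poi_b["name"].lower()).ratio()
    return dist_m <= radius_m and sim >= name_sim
```

It would never catch “my bed” inside someone’s flagged-as-a-bar home, which is exactly the point: the hard part is meaning, not geometry.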

“If Augmented Reality is Layar, I’m disappointed”
Some time ago I noticed a strange lack of overlap between the geo-crowd and the AR-crowd. The latter presents as a “revolution” ideas that have been discussed for years by the former. One problem is that maybe we have augmented reality but not a realistic augmentation, mostly because of the limited processing power of mobile devices. Ideally you would like to walk down Broadway, see a SuperMario-like green mushroom that gives you an extra shot of espresso (to me that’s like getting an extra life), catch it, and claim the coffee in the shop around the corner. Unfortunately, GPS is not accurate enough (Galileo might solve this problem soon), and walking around pointing your phone’s camera at the road all the time will only drain your battery (and probably get you killed before you manage to catch the mushroom). It’s not just an issue of processing power and battery life, though. Even with those, there’s a serious user-interaction issue. AR glasses might partially solve it, but I can’t really believe that augmenting reality is *just* that and not something that empowers a user’s imagination. Geo-AR sits on the boundary between novelty (“oh look, it correctly puts a label on St Paul’s Cathedral!”) and utility. And currently on the wrong side of it.

The director’s cut will (not) include recommendations
“I’m sure we’ll make it to the director’s cut”, Alex Housley complained, in the typical flamboyant way of the Rummble crowd, about being left out of the presentation. “We believe trust networks are the future”. Yes and no. I agree with Alex in the sense that providing appropriate recommendations is an interesting research problem (but also here) and the key to monetizing any such service. It’s technically not the future, though: Amazon has been using recommendations for years, and I’ve made purchases myself prompted by them. Trust networks have been used extensively in services like Netflix. What Rummble is trying to do is exploit trust networks more directly to enrich recommendations, bringing them to the heart of the application. I’m sure recommendations will play a role in monetizing the geo-thing, and trust networks may too. What I’m not sure about is whether recommendations will stay as they are now. Without a revolution in the way users perceive local recommendations, that is, a user-interaction revolution, they’re not gonna make it. Users need a seamless way of specifying the trust network, and a similarly seamless way of receiving recommendations.
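The core of a trust-network recommender can be stated in a few lines. This is the general idea only, not Rummble’s algorithm: a friend’s rating of a place counts in proportion to how much you trust that friend.

```python
def trust_weighted_score(ratings, trust):
    """Trust-weighted recommendation scores. `ratings` maps each friend
    to their {place: rating} dict; `trust` maps each friend to a weight
    in [0, 1]. Returns {place: trust-weighted average rating}."""
    scores = {}
    for friend, place_ratings in ratings.items():
        w = trust.get(friend, 0.0)  # unknown friends count for nothing
        for place, r in place_ratings.items():
            tot, wsum = scores.get(place, (0.0, 0.0))
            scores[place] = (tot + w * r, wsum + w)
    return {p: tot / wsum for p, (tot, wsum) in scores.items() if wsum > 0}
```

The hard problems are exactly the ones named above: getting users to specify `trust` seamlessly, and surfacing the resulting scores without friction.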


Free data: utility, risks, opportunities

Some random thoughts after The possibilities of real-time data event at the City Hall.

Free your location: you’re already being photographed
I was not surprised to hear the typical objection (or rant, if you prefer) of institutions’ representatives when asked to release data: “We must comply with the Data Protection Act!”. Although this is technically true, I’d like to remind these bureaucrats that in the UK it is legal to be photographed in a public place. In other words, if I’m in Piccadilly Circus and someone wants to take a portrait of me, and possibly use it for profit, they are legally allowed to do so without my authorization.
Hence, if we’re talking about releasing Oyster data, I can’t really see bigger problems than those posed by photographs: where Oyster data makes public where you are and, possibly, when, a photograph might reveal where you are and what you are doing. I think that where+what is intrinsically more dangerous (and misleading, in most cases) than where+when, so what’s the fuss about?

Free our data: you will benefit from it!
Bryan Sivak, Chief Technology Officer of Washington DC (yes, they have a CTO!), clearly showed this in an impressive talk: freeing public data improves service levels and saves public money. This is a powerful concept: if an institution releases data, developers and businesses will start building enterprises and applications on top of it. But more importantly, the institution itself will benefit from better accessibility, data standards, and fresh policies. That’s why the OCTO has released data and fostered competition by offering cash prizes to developers: the government gets expertise and new ways of looking at its data in return for technological free speech. It’s something the UK (local) government should seriously consider.

Free your comments: the case for partnerships between companies and users
Jonathan Raper, known to the Twitter crowd as @MadProf, is sure that partnerships between companies and users will become more and more popular. Companies, in his view, will let the cloud generate and manage a flow of information about their services, and possibly integrate it into their reputation-management strategy.
I wouldn’t be too optimistic, though. Although it’s true that many long-sighted companies have started engaging with the cloud and welcome autonomous, independently run Twitter service updates, most of them will try to dismiss any reference to bad service. There are also issues with data covered by licences (see the case of FootyTweets).
I don’t know why I keep using trains as an example, but would you really think that, say, Thameslink would welcome the cloud tweeting about constant delays on their Luton services? Not to mention that National Rail forced a developer to stop offering a free iPhone application with train schedules, only to start selling their own, non-free one (yes, charging £4.99 for data you can get from their own mobile website for free, with the same ease of use, is indeed a stupid commercial strategy).

Ain’t it beautiful, that thing?
We’ve seen many fascinating visualizations of free data, both real-time and not. Some of these require a lot of work to develop. But are they useful? What I wonder is not just whether they carry any commercial utility, but whether they can actually be useful to people by improving their life experience. I have no doubt, for example, that ITO World’s visualizations of transport data, especially those about Congestion Charging, are a great tool for letting people understand policies and for helping authorities plan better. But I’m not sure that MIT SenseLab’s graphs of phone calls during the World Cup Final, despite being beautiful to look at, fun to think about, and technically accurate, bring any improvement to user experience. (This may be the general difference between commercial and academic initiatives, but I believe the point applies more generally in the area of data visualization.)

Unorthodox uses of locative technologies
MIT SenseLab‘s Carlo Ratti used GSM cell-association data to approximate the density of people in streets. This is an interesting use of the technology. Nonetheless, unorthodox uses of technologies, especially locative technologies, must be approached carefully. Think about using the same technique to estimate road traffic density: you would have to account for single- and multiple-occupancy vehicles, which behave differently on city roads and motorways. Using technology in unusual ways is fascinating and potentially useful, but matching the appropriate technique to the right problem must be carefully gauged.
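The basic arithmetic of the approximation is simple; the hard part is everything this sketch assumes away (phone ownership rates, overlapping cell footprints, people indoors versus on the street). All constants here are invented, and this is the broad idea only, not SenseLab’s model:

```python
def street_density(cell_counts, cell_area_km2):
    """Toy people-density estimate from GSM cell association: phones
    associated with each cell, scaled by an assumed phones-per-person
    factor and spread uniformly over the cell's footprint."""
    PHONES_PER_PERSON = 0.9  # assumed ownership rate, pure guess
    return {cell: (n / PHONES_PER_PERSON) / cell_area_km2[cell]
            for cell, n in cell_counts.items()}
```

A cell with 90 associated phones and a 2 km² footprint comes out at roughly 50 people per km², which shows how strongly the answer depends on both invented constants.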

Risks of not-so-deep research
This is generally true in research, but I would say it’s becoming more evident in location-based services research and commercial activities: targeting marginally interesting areas of knowledge and enterprise. In Ratti’s words: “One PhD student is currently looking at the correlations between Britons and parties in Barcelona… no results yet”. Of course, this was told as a half-joke. But in many contexts it’s still a half-truth.


Cold thoughts on WhereCampEU

What a shame having missed last year’s WhereCamp. The first WhereCampEU, in London, was great and I really want to be part of such events more often.

WhereCampEU is the European version of the popular unconference about all things geo. It’s a nonplace where you meet geographers, geo-developers, geo-nerds, businesses, the “evil” presence of Ordnance Survey (brave, brave OS guys!), geo-services, and so on.

I’d just like to write a couple of lines to thank everyone involved in the organisation of this great event: Chris Osborne, Gary Gale, John Fagan, Harry Wood, Andy Allan, Tim Waters, Shaun McDonald, John McKerrell, Chaitanya Kuber. Most of them are people I had been following on Twitter for a while, or whose blogs are amongst those I read daily; some of them I had already met at other meetups. Either way, it was nice to make eye contact again, or for the first time!

Some thoughts about the sessions I attended:

  • Chris Osborne‘s Maps, data and democracy. Mr Geomob gave an interesting talk on democracy and open data. His trust in democracy and transparency is probably quintessentially British; in Italy I wouldn’t be so sure about openness and transparency as examples of democratic involvement (e.g. the typical “everyone knows things that are not changeable even when a majority don’t like them”). The talk was mind-boggling especially on the impact of the heavy deployment of IT systems meant to facilitate public service tasks: supposed to increase the level and transparency of such services, they instead had a strong negative impact on the perceived service level (cost and time).
  • Gary Gale‘s Location, LB(M)S, Hype, Stealth Data and Stuff
    and Location & Privacy; from OMG! to WTF?. Although his job title includes the word “engineering”, Gary is very good at giving talks that make his audience think and feel involved. Two great talks on the value of privacy with respect to location. How much do you think your privacy is worth? Apparently, the average person would sell all of his or her location data for £30; Gary managed to spark controversy with the fairly uncontroversial claim that “£30 for all your data is actually nothing”. A very funny moment (some people should rethink their sense of value, at least when talking about the UK, or postpone the philosophical arguments to the pub).
  • Martin Lucas-Smith‘s CycleStreets Cycle Routing: a useful service developed by two very nice and inspired guys, providing cycling route maps over OpenStreetMap. Their strength is that the routes are calculated using rules that mimic what cyclists actually do (their motto being “For cyclists, By cyclists”). Being a community service, they have tried (and partially managed) to receive funding from councils. An example of an alternative, but still viable, business model.
  • Steven Feldman‘s Without a business model we are all fcuk’d. Apart from the lovely title, whoever starts a talk by saying “I love the Guardian and hate Rupert Murdoch” gains my unconditional appreciation 🙂 Steven gave an interesting talk on what I might call “viable business model detection techniques”. As in a “business surgery”, he let some of the people in the audience (Ordnance Survey, CycleStreets, etc.) analyse their own businesses and see their weaknesses and strengths. A hands-on workshop that I hope he’s going to repeat at other meetings.
  • OpenStreetMap: a Q&A session with a talk from Simone Cortesi (whom I finally managed to meet in person) showing that OSM can be a viable and profitable business model. He even stressed that they are partially funded by Google.

Overall level of presentations: very, very good, and much better organised than I was expecting. Unfortunately I missed the second day, due to a trip booked at an unfortunate time 🙂

Maybe some more involvement from big players would be interesting. Debating face to face about their strategy, especially when the geo-community is (constructively) critical on them, would benefit everyone.

I mean, something slightly more exciting than a bunch of Google folks using a session to say “we are not that bad” 🙂