Author: puntofisso

  • Vacuum tubes can be art

    I rarely watch online videos that last more than 30 seconds. This time, however, I couldn’t stop watching for the whole 17 minutes the video lasts.

    It’s about a French ham radio operator who makes his own vacuum tubes. The great thing about this video is that it merges great technique (the guy really knows what he’s doing) with an almost hypnotic jazz soundtrack. The elegant, delicate way he executes the whole process is, in its own very nerdy way, totally artistic. No surprise: he’s French.

    The making of the tubes is explained on his website. It’s in French, but Google Translate renders it fairly accurately.


    Video: Fabrication d’une lampe triode, uploaded by F2FO.

  • Christmas Time

    This blog, as usual, has been dormant for a while. I’m not one of those bloggers who spit out everything that passes through their minds; I generally prefer to report on events, technologies, and research ideas that I really enjoy and understand.

    So let me deviate a little from my usual scope and report a bit about myself and my expectations for the new year.

    Firstly, in my day job, I was promoted from my previous post. I had been working for a year and a half at St George’s, University of London as a Systems Developer and Administrator. Last July a colleague left, so I applied to take over his post of Senior Systems Analyst, which I finally got in November. I’m now in charge of the mail and backup servers, and of looking after the storage systems on our distributed network. Most interestingly, after having the chance to deal with the implementation of our Common Research Information System using Symplectic, I’ve been able to initiate a couple of projects that I believe will greatly improve our services and our positioning as an educational institution in 2011:

    – the development of a new process for service support, using Request Tracker
    – the design, deployment, and marketing of a mobile portal.

    I believe that both projects will help – given the cuts we’ll be experiencing – to improve the quality of our services and reach a wider audience. Internal behavioural changes and a lot of inter-departmental cooperation will be needed for everyone to accept the changes, and I’m already working on the advocacy sub-projects.

    Secondly, 2010 has been a great year of geo development. After getting interested in the topic a couple of years ago, I got in touch with some great people who are really helping me expand my knowledge and views. In 2011 I expect to improve my practical skills and manage to do some work in the area – the first opportunity being precisely our corporate mobile portal, which will have extensive location-aware capabilities.

    Finally, as a photographer I managed to experiment with some techniques like HDR and to do some nature photography in the salt ponds of Margherita di Savoia. In 2011 I’m planning to do all I can to turn semi-pro, launching a photography website and organising my first themed exhibition in a local cafe. I created my own Christmas cards this year with the photo you see below: a picture I took in Bologna, where I lived until 2008, of the Christmas tree the city puts up every year in the main square.

    That’s all for the moment. Enjoy your holidays, whatever you wish to celebrate 🙂

    Christmas in Bologna

  • The several issues of geo development: a chronicle of October's GeoMob

    GeoMob has returned after a longer-than-usual hiatus due to other – and definitely very interesting – commitments of our previous Mr GeoMob, Christopher Osborne. It was a very interesting night with the usual format of four presentations covering aspects of research, development and business. Here’s my summary and comments.

    Max Howell, @mxcl – TweetDeck

    I’m a bit unsure how to comment on TweetDeck’s involvement in the GeoSocial business.
    Max’s presentation focused on the integration of their application with Foursquare. It’s a tightly coupled integration that lets users follow their Twitter friends using the locative power of Foursquare, i.e. putting them on a map. Max gave us some food for thought when he commented that “Google Latitude is not good for us because it gives out location continuously, whereas we are looking for discrete placement of users on POIs”: this is a real example of why more is not necessarily better and, in my opinion, the main reason why, to date, Latitude has been less successful at catalysing users’ attention on locative services.

    However, I’m not totally sure why TweetDeck sees its future in becoming a platform that integrates Twitter and Foursquare into a single framework. “Other apps put Foursquare functions in a separate window and this is distasteful”. Is it really? And how exactly will TweetDeck benefit, financially or otherwise, from this integration? “We spent a lot of time on Foursquare integration but unfortunately it’s not much used”. They should ask themselves why.
    Their TODO list includes geofencing, which might be interesting, so let’s wait and see.

    Matthew Watkins, @mazwat – Chromaroma by Mudlark

    For those of you who don’t know it yet: Chromaroma is a locative game based on your Oyster card touch-ins and touch-outs. It’s still in closed alpha, but the (not so many?) lucky users (I’ve asked to join the alpha 3-4 times, but they never replied) can connect their Oyster account to the game and take part in a kind of Gowalla for transport, based on the number of journeys, the stations visited, and personal and team targets.

    Two things to consider:
    – Open data and privacy: upon joining the service, the user’s account page is scraped for their journeys. Matthew explained that they approached TfL to ask for APIs/free access to the journey data but, “due to budget cuts, we’re low priority”. Apparently they have been allowed to keep on scraping. The obvious issue is one of trust: why should someone give their Oyster account details to a company that, technically, hasn’t signed any agreement with TfL? This is worrying because, to get journey history data, you need to activate Auto Top-up; you’re basically allowing a third party to access an account connected to automatic payments from your payment card. Secondly, I can’t understand TfL’s open data strategy here: if they are not worried about the use Mudlark is making of this data, why not provide developers with an API to query the very same data? Users’ consent can be embedded in the API, so I’m a bit worried that Chromaroma is actually exposing TfL’s lack of strategy rather than their willingness to work with developers. I hope I’m wrong.
    – Monetising: I’m not afraid of putting the very same question to any company working in this space. What is Mudlark’s monetisation strategy, and how viable is it as a business? It can’t simply be “let’s build travel profiles of participating users and sell them to advertisers”, as TfL would have done that already. And if TfL haven’t thought about this, or if they’re letting Mudlark collect such data without even making them adhere to some basic T&Cs, we are in serious trouble. It is, however, the strategy Mudlark actually declares that does not convince me. Matthew suggests it might be based on targets like “get from Warren Street to Kings Cross by 10 am, show your touch-ins and get a free coffee”, or on the idea of “sponsor items” you can buy. Does this strategy have a big enough market? And, as I’ve already asked, why should a company pay for this kind of advertising when it is potentially available for free? If the game is successful, however, it will be chaos in the Tube – and I’m really looking forward to it 🙂

    Oliver O’Brien, @oobr – UCL CASA Researcher

    Oliver has recently had his 15 minutes of glory thanks to some amazing live map visualisations of London Barclays Cycle Hire availability. He went on to develop visualisation pages for different bicycle hire schemes all around the world – before receiving a cease-and-desist request from one of the companies involved. As a researcher, he gave the GeoMob some interesting insights, showing some geo-demographic analysis. For example, weekday vs weekend usage patterns differ according to the part of the world involved. London is very weekday-centric, showing that the bicycles are mainly used by commuters. I wonder whether this analysis can also provide commercial insight, much as Chromaroma intends with its use of Oyster data.
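
    As a side note, this weekday/weekend split is straightforward to reproduce. Here is a minimal sketch, assuming a hypothetical CSV of hire records with a start_time column (the file and column names are mine, for illustration – not Oliver’s actual dataset):

    ```python
    # Weekday vs weekend usage split for a bicycle hire scheme.
    # Assumes a hypothetical hires.csv with a 'start_time' column.
    import pandas as pd

    hires = pd.read_csv("hires.csv", parse_dates=["start_time"])
    hires["is_weekend"] = hires["start_time"].dt.dayofweek >= 5  # 5=Sat, 6=Sun

    share = hires["is_weekend"].value_counts(normalize=True)
    print(f"weekday share: {share.get(False, 0):.1%}")
    print(f"weekend share: {share.get(True, 0):.1%}")
    ```

    A weekday-heavy split would point at commuters, as Oliver observed for London.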

    Thumbs up for the itoworld-esque animation visualising bike usage over the last 48 hours – stressing that properly done geo-infographics can be extremely useful for problem analysis. Oliver’s future work seems targeted at this, and ideally we’ll hear more about travel patterns and how they affect the usability of bicycle hire schemes. I can’t really understand why he was asked to take some of the maps down.

    Eugene Tsyrklevich, @tsyrklevich – Parkopedia

    The main lesson of this presentation: stalk your iPhone app users, find them on the web, question them, and get them to change their negative reviews.
    An aggressive strategy that can probably work – and I would actually describe Parkopedia’s strategy as positively aggressive. They managed to get a deal with the AA to brand their parking-space-finding app in exchange for a share of the profits.
    Eugene’s presentation was more about business management than development. Nonetheless, it was incredibly full of insight, especially on how to market an iPhone application successfully. “Working with known brands gives you credibility, and it opens doors”. The main door this opened was Apple’s interest in featuring their app on the App Store, leading to an almost immediate 30-fold increase in sales. This leads to further credibility and good sales: “Being featured gets you some momentum you never lose”. A good lesson for all our aspiring geo-developers.

  • The past (and future?) of location

    I must say – without getting too emotional – that I feel somewhat attached to geo-events at the BCS, as my first contact with the London geo-crowd was there over a year ago, at a GeoMob that included a talk by the same Gary Gale who spoke last night. That was, at least for him, one company and one whole continent ago – for the rest of us, the “agos” include new or matured geo-technologies: Foursquare, Gowalla, Latitude, Facebook and Twitter Places, plus our very own London-based Rummble, minus some near-casualties (FireEagle).

    Some highlights/thoughts from his talk:

    The sad story of early and big players
    – early players are not always winners: this can happen in a spectacular way (Dodgeball) or more quietly (Orkut, for example, has not technically been a commercial success) – but also
    – big players are not always winners: it’s all just a little bit of history repeating, isn’t it? Remember the software revolution? The giant IBM didn’t understand it, and a small, agile company called Microsoft became the de facto monopolist. OS/2 is still remembered as one of the epic fails in software. Remember the Internet revolution? The giant Microsoft had its very own epic fail, called Microsoft Network. It took them ages to create a search engine, and in the meantime an agile young company with a big G became the search giant. Some years later, the aforementioned Orkut, started by Google as a side project, didn’t have the agility or the motivation to resist Facebook. The same might happen with location services.

    Power to the people
    The problem with big players is that they take the quality of their databases for granted. Foursquare et al. found a way to motivate users to keep the POI database constantly updated, using a form of psychological reward. Something Google hasn’t quite done.

    Now monetize, please
    Ok, we can motivate users by assigning mayorships and medals. Having a frequently refreshed database is a step ahead. But how do you make money out of it? “Let’s get in touch with the companies and ask for a share of the profit” can work for some brave early adopters, but it won’t take long for companies to realise they can use the data – for free – to do business analysis without even contacting Foursquare. “Become mayor and get a 10% discount”. What other data analysis would motivate them to pay? Knowing where a customer goes next? Where they’ve been before? Maybe a higher profile in searches, as in Google searches? In the ocean of possibilities, the one certainty is that there isn’t yet an idea that works well. “Even Facebook lacks the time to contact the big players to negotiate discounts”. And if you think about the small players, it’s even more difficult (but if Monmouth offers me a free espresso, I’ll work hard to become their mayor!).
    The way many companies are trying to sell it is still pretty much old economy: sell the check-in database to a big marketing company, and so on. Cf. the next point.

    Dig out the meaningful data
    Ok, we have motivated users to keep our POIs fresh. But they want to be mayor, so they exploit the APIs. Does their favourite bar already have a mayor? They create another instance of the same place. They create their own home. I’ve seen a “my bed”. Is there an algorithmic way to filter out the meaningless data? Surely not in the general case. Moreover, as Gary stressed, simply “selling your database starts eroding its value”, because the buyer needs to find a use for that mountain of data. For now, no such use is evident, because most of the data is not meaningful at all.
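
    To make the point concrete, here is a naive duplicate filter, sketched with made-up sample data and arbitrary thresholds (an illustration, not how Foursquare actually does it): it flags venue pairs that are close in space and similar in name, which catches lazy duplicates but does nothing about a legitimately unique “my bed”.

    ```python
    # Naive duplicate-POI detector: flag venue pairs that are close together
    # and have similar names. Sample data and thresholds are illustrative.
    from difflib import SequenceMatcher
    from math import atan2, cos, radians, sin, sqrt

    def haversine_m(lat1, lon1, lat2, lon2):
        """Great-circle distance in metres."""
        R = 6371000.0
        dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
        a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
        return 2 * R * atan2(sqrt(a), sqrt(1 - a))

    venues = [  # (name, lat, lon) -- made-up sample data
        ("The Crown", 51.5210, -0.0980),
        ("The Crown Pub", 51.5211, -0.0979),
        ("my bed", 51.5300, -0.1000),
    ]

    for i in range(len(venues)):
        for j in range(i + 1, len(venues)):
            (n1, la1, lo1), (n2, la2, lo2) = venues[i], venues[j]
            close = haversine_m(la1, lo1, la2, lo2) < 50  # within 50 metres
            similar = SequenceMatcher(None, n1.lower(), n2.lower()).ratio() > 0.6
            if close and similar:
                print(f"possible duplicate: {n1!r} / {n2!r}")
    ```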

    “If Augmented Reality is Layar, I’m disappointed”
    Some time ago I noticed a strange lack of overlap between the geo-crowd and the AR-crowd. The latter presents as a “revolution” ideas that the former has been discussing for years. One problem is that maybe we have augmented reality, but not realistic augmentation, mostly because of the limited processing power of mobile devices. Ideally you would like to walk down Broadway, see a Super Mario-like green mushroom that grants you an extra shot of espresso (to me, that’s like getting an extra life), catch it, and claim the coffee in the shop around the corner. Unfortunately, GPS is not accurate enough (Galileo might solve this problem soon), and walking around all the time pointing your phone camera at the road will only drain your battery (and probably get you killed before you manage to catch the mushroom). It’s not just an issue of processing power and battery life, though. Even with those solved, there’s a serious user-interaction issue. AR glasses might partially solve it, but I can’t really believe that augmenting reality is *just* that, and not something that empowers a user’s imagination. Geo-AR sits on the boundary between novelty (“oh look, it correctly puts a label on St Paul’s Cathedral!”) and utility – and currently on the wrong side of it.

    The director’s cut will (not) include recommendations
    “I’m sure we’ll make it to the director’s cut”, complained Alex Housley, in the typically flamboyant way of the Rummble crowd, about being left out of the presentation. “We believe trust networks are the future”. Yes and no. I agree with Alex that how to provide appropriate recommendations is an interesting research problem, and the key to monetising any such service. Technically, though, it’s not the future: Amazon has been using recommendations for years, and I’ve made purchases myself prompted by its suggestions. Trust networks have been used extensively in services like Netflix. What Rummble is trying to do is exploit trust networks more directly to enrich recommendations, bringing them to the heart of the application. I’m sure that recommendations will play a role in monetising the geo-thing, and trust networks may too. What I’m not sure about is whether recommendations will look the way they do now. Without a revolution in the way users perceive local recommendations – that is, a user-interaction revolution – they’re not gonna make it. Users need a seamless way of specifying the trust network, and a similarly seamless way of receiving recommendations.

  • Luttazzi, bad faith, and reason

    [This post was originally written in Italian.]

    UPDATE 2 (14/6/2010):

    A page called “Caccia al tesoro” already appears on the Web Archive in January 2006: http://web.archive.org/web/20060112195056/http://www.danieleluttazzi.it/?q=node/144. So it existed in January 2006, and was indexed roughly two months after its creation.

    Note one detail: node=144 instead of node=285, and a fundamentally different URL format. In other words, there was a change of CMS at some point.

    This clearly takes nothing away from the discussions about plagiarism, copying, and so on, but it at least defuses the accusation of conspiracy, which I personally found annoying (and useless for the “moral” purpose of the discussion, namely establishing whether and to what extent “copying”/“quoting” is legitimate, with or without attribution). There was no backdating, at least for this post: it already existed in 2005.

    UPDATE (14/6/2010):

    – In fairness, the owner of the ntvox blog, the first to write about this affair, asked me to point out that the web.archive.org question is not the key argument of his blog, which is more interested in the general discussion of whether copying jokes is legitimate, and in the sheer number of jokes apparently copied by Luttazzi. Although the topic is mentioned on his blog, it is true that it is not the fundamental question there.

    – Just to repeat ad nauseam: I am still forming my own view of the whole affair, and that view is obviously personal. This is, however, a technical blog, and this post deals only with the technical aspects of a piece of evidence that has been used in a way that is, in my opinion, technically wrong. It is not meant as a comment on other evidence, real or alleged. There is much to discuss about what constitutes a clue and what constitutes incontrovertible proof, and about the technical requirements for something to be admissible as “evidence”. In this post I focus on why this specific item cannot be admitted as evidence, for lack of those technical requirements. Full stop.

    (end)

    I won’t hide it: until yesterday morning I “was” a fan of Daniele Luttazzi.
    After reading the news about the alleged “plagiarism”, I became a disappointed ex-fan.

    Still, something pushed me to verify the reported information, in particular what is held to be the “smoking gun” proving the comedian’s bad faith.
    I believe there are purely technical reasons that instead defend his good faith, or at least show that the evidence brought against him is, at best, inconclusive.

    Let me say upfront: I am an IT professional, I work with the internet and networking, and I have some personal experience running websites.

    The accusation: Luttazzi allegedly copied jokes from famous satirical authors and, to avoid being unmasked as a plagiarist, wrote two posts on his blog inviting readers to a “quotation treasure hunt”, backdating both posts so as not to arouse “suspicion”.

    The prosecution’s exhibits: the two posts in question can be retrieved from Luttazzi’s blog:
    http://www.danieleluttazzi.it/node/285 dated 9 June 2005
    http://www.danieleluttazzi.it/node/324 dated 10 January 2006

    The prosecution’s evidence: the website http://web.archive.org, which lets you retrieve previous versions of a web page. Looking the two pages up on web.archive.org returns their alleged “creation dates”:
    – for post 285, that date would be 9 October 2007 (over 2 years after the date shown by Luttazzi)
    – for post 324, that date would be 13 December 2007 (just under 2 years after the date shown by Luttazzi)

    From a technical point of view, however, those two dates are misleading.
    What the accusers are missing is a small technical detail: the date reported by web.archive.org is NOT the page’s creation date. It is the date on which the page was first reached by web.archive.org’s crawlers. If a web page is created today, it will take some time, shorter or longer, to be “found” by web.archive.org. That time can indeed run to years.

    One might ask, then, whether two years is a plausible indexing delay for a site as popular as Luttazzi’s. Strictly speaking, a certain answer is impossible. Statistically speaking, though, we have fairly strong clues that the posts were not backdated by Luttazzi. Just take a few pages at random from the blog and compare the date they show with their Web Archive date (the sketch after this list shows one way to automate the check):

    http://www.danieleluttazzi.it/node/277, blog date: 3 April 2007, NEVER archived on the Web Archive (would this prove that the page doesn’t exist at all?)
    http://www.danieleluttazzi.it/node/286, blog date: 10 January 2006, first Web Archive date: 9 October 2007
    http://www.danieleluttazzi.it/node/289, blog date: 1 November 2006, first Web Archive date: 9 October 2007
    http://www.danieleluttazzi.it/node/291, blog date: 14 March 2007, first Web Archive date: 9 October 2007 (for this page a modification dated 2 August 2008 is also recorded, proof that from 2007 onwards Luttazzi’s site was constantly tracked by the Web Archive)
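
    As an aside, this kind of check is easy to automate. Below is a minimal sketch that queries the Wayback Machine’s CDX endpoint for the first capture of each page; the endpoint and parameters are the publicly documented ones, but treat the exact response shape as an assumption to verify:

    ```python
    # Fetch the first Wayback Machine capture timestamp for a URL via the CDX API.
    # NB: this timestamp is the date of the first *crawl*, not the page's creation.
    import json
    import urllib.parse
    import urllib.request

    def first_capture(url):
        api = ("http://web.archive.org/cdx/search/cdx?url="
               + urllib.parse.quote(url)
               + "&output=json&limit=1&fl=timestamp")
        with urllib.request.urlopen(api) as resp:
            body = resp.read()
        rows = json.loads(body) if body.strip() else []
        # rows[0] is a header row; rows[1][0] is the earliest capture timestamp.
        return rows[1][0] if len(rows) > 1 else None

    for node in (277, 285, 286, 289, 291, 324):
        url = "danieleluttazzi.it/node/%d" % node
        print(url, "->", first_capture(url) or "never archived")
    ```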

    Note that many of these dates fall in October 2007 – in fact, on the very same October date: the 9th. The same date as the allegedly incriminating post. Why? Because the whole site was indexed starting in October 2007. Before then it was not on web.archive.org.
    As further confirmation, just look at http://web.archive.org/web/*/danieleluttazzi.it/* – that page lists ALL the pages of danieleluttazzi.it held by web.archive.org. It is easy to verify that up to 9 October 2007 the site was NOT indexed; indeed, on that date literally hundreds of its pages were added to web.archive.org.

    The same holds for other blogs.

    Take, for example, another very popular comedian, Beppe Grillo:
    http://www.beppegrillo.it/2005/01/il_papa_e_infal.html blog date: 31 January 2005, first Web Archive date: 7 February 2006 (over a year later)

    or the blog of “hoax hunter” Paolo Attivissimo:
    http://attivissimo.blogspot.com/2005/12/come-sta-valentin-bene-grazie-e-ha.html blog date: 31 December 2005, first Web Archive date: 16 January 2006

    The same happens to the well-known online newspaper repubblica.it, albeit with a shorter wait:
    http://www.repubblica.it/ambiente/2010/04/27/news/marea-_nera-3646349/index.html?ref=search was published on 27 April 2010 and is not yet on the Web Archive; the archive, incidentally, reports a wait of about 6 months before pages are included (that is the 2010 figure; it may have been longer in 2007).

    If this does not prove that the two posts really were written in 2005 and 2006, it is at the very least a rather strong clue that the dates were not manually altered. And it clearly shows that web.archive.org cannot be used, as it has been, as evidence that Luttazzi defended himself in bad faith, because its indexing of the site starts far too late.

    Judging whether or not it is legitimate to use other people’s jokes is not a technical matter for me to settle; it is up to Daniele’s audience. Of which, admiring above all his performing style, I am once again a “fan”, since this little round of technical verification has restored my trust in the good faith of his defence.

    I wish that bloggers, journalists, and other accusers would check how a technical tool works – a tool they have evidently understood very little – before brandishing it as proof of bad faith.

  • 4G? No, really?

    Vic Gundotra on Android 2.2:

    • 2x-5x increase in speed (due to Just-in-time compilation)
    • tethering and portable hotspot
    • impressive voice recognition capabilities
    • cloud/app communication with instant mobile/desktop synchronisation
    • Adobe Flash (“It turns out that on the Internet, people use Flash.” is my favourite quote ever…)

    Steve Jobs on iPhone 4G:

    • You can play Farmville

    Do I need to add anything more? 🙂

  • If only…

    I think I sometimes express my childhood dreams even when using MySQL…

  • Free data: utility, risks, opportunities

    Some random thoughts after “The possibilities of real-time data” event at City Hall.

    Free your location: you’re already being photographed
    I was not surprised to hear the typical objection (or rant, if you don’t mind) of institutions’ representatives when asked to release data: “We must comply with the Data Protection Act!”. Although this is technically true, I’d like to remind these bureaucrats that in the UK it is legal to be photographed in a public place. In other words, if I’m in Piccadilly Circus and someone wants to take a portrait of me, and possibly use it for profit, they are legally allowed to do so without my authorisation.
    Hence, if we’re talking about releasing Oyster data, I can’t really see bigger problems than those related to photographs: where Oyster data makes public where you are and, possibly, when, a photograph can reveal where you are and what you are doing. I think that where+what is intrinsically more dangerous (and misleading, in most cases) than where+when, so what’s the fuss about?

    Free our data: you will benefit from it!
    Bryan Sivak, Chief Technology Officer of Washington DC (yes, they have a CTO!), showed this clearly with an impressive talk: freeing public data improves service levels and saves public money. This is a powerful concept: if an institution releases data, developers and businesses will start building enterprises and applications on top of it. More importantly, the institution itself will benefit from better accessibility, data standards, and fresh policies. That’s why the OCTO has released data and encouraged competition by offering cash prizes to developers: the government gets expertise and new ways of looking at data in return for technological free speech. It’s something the UK (local) government should seriously consider.

    Free your comments: the case for partnerships between companies and users
    Jonathan Raper, our Twitter’s @MadProf, is sure that partnerships between companies and users will become more and more popular. Companies, in his view, will let the cloud generate and manage a flow of information about their services, and possibly integrate it into their reputation-management strategy.
    I wouldn’t be too optimistic, though. Although it’s true that many far-sighted companies have started engaging with the cloud and welcome autonomous, independently run Twitter service updates, most of them will try to dismiss any reference to bad service. There are also issues with data covered by licences (see the case of FootyTweets).
    I don’t know why I keep using trains as an example, but would you really expect, say, Thameslink to welcome the cloud tweeting about constant delays on their Luton services? Not to mention that National Rail forced a developer to stop offering a free iPhone application with train schedules – in order to start selling their own, non-free one (yes, charging £4.99 for data you can get from their own mobile website for free, with the same ease of use, is indeed a stupid commercial strategy).

    Ain’t it beautiful, that thing?
    We’ve seen many fascinating visualisations of free data, both real-time and not. Some of them require a lot of work to develop. But are they useful? What I wonder is not just whether they carry any commercial utility, but whether they can actually be useful to people by improving their life experience. I have no doubt, for example, that itoworld’s visualisations of transport data, especially those about Congestion Charging, are a great tool for helping people understand policies and for helping authorities plan better. But I’m not sure that MIT SenseLab’s graphs of phone calls during the World Cup final, despite being beautiful to look at, fun to think about, and technically accurate, bring any improvement to the user experience. (This may be the general difference between commercial and academic initiatives – but I believe the point applies more generally in the area of data visualisation.)

    Unorthodox uses of locative technologies
    MIT SenseLab’s Carlo Ratti used GSM cell-association data to approximate people density in streets. This is an interesting use of the technology. Nonetheless, unorthodox uses of technologies, especially locative technologies, must be handled carefully. Think about using the same technique to estimate road traffic density: you would have to account for single- and multiple-occupancy vehicles, which mean different things on city roads and on motorways. Using technology in unusual ways is fascinating and potentially useful, but matching the right technique to the right problem must be carefully gauged.
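
    The arithmetic behind such an approximation is simple enough to sketch. The figures and the ownership ratio below are made up, and real cell coverage areas overlap, so treat this as a rough proxy at best:

    ```python
    # Toy sketch: approximate people density from GSM cell-association counts.
    # All numbers are illustrative; phones != people and coverage areas overlap.
    cells = [
        # (cell_id, associated_devices, coverage_area_km2)
        ("cell_A", 1200, 0.8),
        ("cell_B", 300, 2.5),
    ]

    PHONES_PER_PERSON = 0.9  # assumed ownership ratio

    for cell_id, devices, area in cells:
        density = devices / PHONES_PER_PERSON / area  # people per km^2
        print(f"{cell_id}: ~{density:.0f} people/km^2")
    ```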

    Risks of not-so-deep research
    This is generally true in research, but I would say it’s getting more evident in location-based services research and commercial activities: targeting marginally interesting areas of knowledge and enterprise. Ratti’s words: “One PhD student is currently looking at the correlations between Britons and parties in Barcelona… no results yet“. Of course, this was told as a half-joke. But in many contexts, it’s still a half-truth.

  • Cold thoughts on WhereCampEU

    What a shame to have missed last year’s WhereCamp. The first WhereCampEU, in London, was great, and I really want to be part of such events more often.

    WhereCampEU is the European version of this popular unconference about all things geo. It’s a nonplace where you meet geographers, geo-developers, geo-nerds, businesses, the “evil” presence of Ordnance Survey (brave, brave OS guys!), geo-services, and so on.

    I’d just like to write a couple of lines to thank everyone involved in the organisation of this great event: Chris Osborne, Gary Gale, John Fagan, Harry Wood, Andy Allan, Tim Waters, Shaun McDonald, John McKerrell, Chaitanya Kuber. Most of them are people I had been following on Twitter for a while, or whose blogs are amongst the ones I read daily; some of them I had already met at other meetups. Either way, it was nice to make eye contact, again or for the first time!

    Some thoughts about the sessions I attended:

    • Chris Osborne’s Data.gov.uk – Maps, data and democracy. Mr GeoMob gave an interesting talk on democracy and open data. His trust in democracy and transparency is probably quintessentially British; in Italy I wouldn’t be so sure about openness and transparency as examples of democratic involvement (e.g. the typical “everyone knows things that are not changeable even when a majority don’t like them”). The talk was mind-boggling, especially on the impact of the heavy deployment of IT systems meant to facilitate public-service tasks: supposed to increase the level of service and its transparency, they instead had a strong negative impact on the perceived service level (cost and time).
    • Gary Gale’s Location, LB(M)S, Hype, Stealth Data and Stuff and Location & Privacy; from OMG! to WTF?. Despite having the word “engineering” in his job title, Gary is very good at giving talks that make his audience think and feel involved. Two great talks on the value of privacy with respect to location. How much would you say your privacy is worth? Apparently, the average person would sell all of his or her location data for £30; Gary managed to spark controversy amidst uncontroversial claims that “£30 for all your data is actually nothing” – a very funny moment (some people should rethink their sense of value, at least when talking about the UK, or postpone the philosophical arguments to the pub).
    • Martin Lucas-Smith’s CycleStreets Cycle Routing: a useful service developed by two very nice and inspired guys, providing cycle routing maps on top of OpenStreetMap. Their strength is that routes are calculated using rules that mimic what cyclists actually do (their motto being “For cyclists, By cyclists”). Being a community service, they have tried (and partially managed) to obtain funding from councils. An example of an alternative – but still viable – business model.
    • Steven Feldman’s Without a business model we are all fcuk’d. Apart from the lovely title, whoever starts a talk by saying “I love the Guardian and hate Rupert Murdoch” gains my unconditional appreciation 🙂 Steven gave an interesting talk on what I might call “viable business model detection techniques”. As in a “business surgery”, he let some of the people in the audience (Ordnance Survey, CycleStreets, etc.) analyse their own businesses and see their weaknesses and strengths. A hands-on workshop that I hope he’ll repeat at other meetings.
    • OpenStreetMap: a Q&A session, with a talk from Simone Cortesi (whom I finally managed to meet in person) showing that OSM can be a viable and profitable business model – even stressing that they are partially funded by Google.

    Overall level of presentations: very, very good – much better organised than I was expecting. Unfortunately I missed the second day, due to an untimely booked trip 🙂

    Maybe some more involvement from the big players would be interesting. Debating their strategy face to face, especially when the geo-community is (constructively) critical of them, would benefit everyone.

    I mean, something slightly more exciting than a bunch of Google folks using a session to say “we are not that bad” 🙂