Categories: smart devices, Web 2.0

The smart thermostat is hot

“Of all possible devices”, remarks a friend, “a thermostat is a curious choice for the first mass-marketed smart device”.

His analysis is about Nest, recently acquired by Google and now widely advertised on billboards everywhere, including in the Tube. “This is not the kind of object you use every day; it’s also too simple – you just switch it on when it’s cold and off when it’s warm. People know how to programme a thermostat, my grandma does it.”

One cannot but agree with the observation about simplicity, except that I think this is exactly what makes a thermostat a great choice for a smart device.

First of all, it’s obvious that a thermostat makes your life better by allowing you to pre-heat your home at given times of the day. A standard home thermostat is not particularly flexible, though. Some thermostats allow different programming for weekdays and weekends, but any such complication is seen as clunky and requires the user to operate a somewhat tricky interface. Curiously, mechanical thermostats (those on which you just pull a little lever up and down) are amazingly simple to use, but have you ever had to programme one of those digital thermostats? I still find it difficult to do without a couple of failed attempts.

Nest goes just one step further: after what is, all things considered, a short learning period, it removes the need to programme the thermostat at all, because it learns its users’ preferences (a toy sketch of the idea follows). By doing this, it has further streamlined an already simple process. That’s what makes it a winner: it makes your life better and easier.
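
Nest’s actual algorithm is proprietary, so purely as an illustration of the general idea of learning a schedule from manual adjustments, here is a minimal sketch in Python; the class and its simple averaging rule are my own invention, not Nest’s method:

```python
from collections import defaultdict

class LearningThermostat:
    """Toy schedule learner: average the manual setpoint adjustments
    seen for each (weekday, hour) slot and replay them later."""
    def __init__(self, default=18.0):
        self.default = default
        self.history = defaultdict(list)  # (weekday, hour) -> setpoints

    def record_adjustment(self, weekday, hour, setpoint):
        # Each time the user touches the dial, remember the chosen setpoint.
        self.history[(weekday, hour)].append(setpoint)

    def target(self, weekday, hour):
        # Replay the learned preference, or fall back to a safe default.
        seen = self.history.get((weekday, hour))
        return sum(seen) / len(seen) if seen else self.default

t = LearningThermostat()
t.record_adjustment(0, 7, 21.0)  # Monday 7am: user turns the heat up
t.record_adjustment(0, 7, 20.0)
print(t.target(0, 7))  # 20.5, the learned preference
print(t.target(0, 3))  # 18.0, the default
```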

Another simple observation is that Nest is not a dangerous device. It replaces a well-understood process, and it’s hard to operate it in a way that causes real damage. Compare it with the other possible “smart” devices and it’s pretty obvious that the balance between danger and functionality is another win for Nest. The fact that it learns also means it will correct any statistically abnormal configuration pretty quickly. (Now, please, don’t use it to kill your great-grandmother by overheating her room.)

Needless to say, using a smart thermostat can also have a big impact on your heating bills. I think this impact will be positive for most users, resulting in savings. Other smart devices cannot make a similar claim, and this is another reason why Nest makes sense as the first smart device to go mass-market.

A smart thermostat is not a curious choice at all, because it is all about simplification, improving life, and allowing savings. Not many other smart devices could claim the same, and I reckon that Nest is the Trojan horse that will finally make the general public appreciate the need for smart devices.

Categories: social media, Web 2.0

Instagram’s T&C and the hipsterization of the digital economy

The twittersphere has gone mad after Instagram announced changes to their T&C, sparking a user revolt. The issue revolves around the possibility for Instagram to sell photographs without the author’s consent.
Needless to say, I had been expecting this announcement for quite a long time.


In the beginning was the dot-com boom
Once upon a time, Flickr had won the competition among photo-sharing web services. I say “web service” because that is what Flickr was: not an “app” in today’s sense. After the dot-com boom, with all of its “join us for free, one day we’ll have a product” promises that failed to deliver a viable business model, history brought us a new concept: Freemium. Flickr embodied that concept perfectly: it offered a very basic service for free (only 200 pictures) and a paid-for photo storage option at a reasonable price. What could beat them?


Hey, hipsters
Years passed, and something began to shift. Freemium started to look dull. There was a big return to the very idea that had created the dot-com bubble: you don’t need to concentrate on the business model, just execute the idea. Plenty of VC money fostered this attitude. The $41,000,000 funding of the product-less start-up color.com (and Google’s attempt to buy them for $200,000,000) is recounted as an example to follow (or, by outsiders, as a total joke).

What were the reasons behind this return? I think it’s down to two facts:

  • the progress in technology, which made mobile devices available to an increasing number of people, with penetration reaching ratios never seen before: this convinced VCs that nothing could possibly go wrong once you had the right mass of users (i.e. a model based on conversion rates);
  • a culture shift in the type of people setting out to create businesses: from product nerds (think of Bill Gates) to hipsters.


Growth is the word
Hipsters have the great merit of having turned entrepreneurship into something cool, fashionable, and tasteful. Unfortunately this came with the attached condition that good ideas don’t need a money-making component straight away. No, you first need to concentrate on growth. Growth became the mantra, the magic word that could move capital. Companies started to be valued no longer on their profitability, but on their growth rate. Sometimes not even on their growth rate, but on their growth expectations. Instagram itself was acquired by a panicking Facebook for a whopping $1,000,000,000. At the same time, Flickr became unfashionable.


Murder by growth
There is one problem with growth, however, and it’s rarely spoken about: growth can kill. A profit-less company can’t sustain growth. Well, it can, provided it constantly finds VCs to back it. As a consequence, a relevant part of the business ends up concentrating on appealing to VCs rather than to paying users. The two aren’t necessarily planets apart, but they’re not exactly the same.


If the product is free, you are the product
This is one of my favourite quotes, because it’s so true. When a product is free, you are, consciously or not, accepting T&C that allow the company to use your data. It’s what Google does, making money on ads made relevant to the user by what they know about them. Think about it for a second: Instagram and Flickr offer basically the same service, the storage and sharing of photographs. Surely Instagram’s offering is cooler and sleeker, and the lack of a proper mobile app for Flickr has never been received positively by the market. When Yahoo acquired Flickr, they paid $35,000,000; Instagram cost considerably more than that. However, Flickr had a product and sold it, becoming profitable very quickly and in a very tangible way. Will Instagram ever be able to get to the green valley?


Flickr is the new Flickr?
There are many ways Instagram might get out of the PR fiasco that the T&C change has become. Certainly, they could just wait and hope that people will forget about it. They could offer an opt-out fee, but this would create two sets of T&C for two categories of users. They could start charging for the app, following the example of the very popular Whatsapp; what’s even more interesting is Whatsapp’s explanation of why they refuse to display ads (because they want to concentrate on selling a product and a service). I think there’s a lot Instagram can still learn from Flickr. For example, Flickr offers the option to license images to third parties, making a commission in the process. This is smart, and it leaves users feeling better about the service than being forced to give up all their pictures.

Instagram was the new Flickr. But with Flickr re-entering the photo-sharing game through the front door (with an incredibly, and unexpectedly, good app) it might place itself as the new Instagram. In two steps, Flickr might be the new Flickr, teaching digital entrepreneurs that having a clear idea of how to generate revenue can be what keeps a “startup” successful, renewing and innovating, for years.

Categories: social media, Web 2.0

Is Social Media changing our relationship with Death?

“This morning I’ve started the most amazing journey”. These are the opening words of a Facebook status update by a friend of mine, announcing her own death. The status goes on to explain her terminal illness (many of her friends, like me, were unaware of it), to explain why she kept it private, and to say goodbye, all in the first person. How peculiar, and yet how powerful.

Surely my friend did not belong to the “Internet generation”, being in her seventies. Still, she was a moderately active Facebook user. Especially as someone living far away from many of her friends, she used Facebook as a way to keep in touch. And when she understood that her illness was terminal, she decided to arrange what was going to happen on her profile. Hence my question: is social media changing our relationship with death?

There are many examples of how death is represented on social media, and of how social media becomes the first point of arrival for mourners (and the curious). Especially with famous people, their Facebook profiles become sources of photographs for the press. Their final status updates are used to show how sudden their departure was. On Twitter there is a flow of messages from mourners using hashtags such as #RIP. Steve Jobs’ death became a trending topic on Twitter, as did Amy Winehouse’s. But all of this is just the digital transposition of a non-digital process.

Announcing one’s own death is a new kind of behaviour, a novel need emerging as a consequence of the perceived importance of social media in our everyday lives. Just as we have become less worried about posting photos of our children and displaying our location to a level of accuracy that would have scared us ten years ago, so we have started experiencing death in unexpected ways. We can still see the profiles of dead friends as if they were still part of our daily lives. An eternal memorial to their lives, they stay with us, presumably forever – or until Mr Facebook decides to delete them, sometimes at the request of friends and relatives. But unlike on Twitter, where the profile of someone who dies simply disappears from their followers’ timelines, Facebook profiles stay there and occasionally make a comeback in the most painful way: “today is your dead friend’s birthday – write happy birthday on their wall!”.

In a new model of human relationships, friends end up writing that well-wishing message. The birthday of a dead friend becomes the occasion to revive them. Hundreds of messages from mutual friends spread across all the shared connections’ timelines. New behaviours, shaping a new attitude towards death. It’s collective mourning. Death has become social. Can you think of an equivalent in a non-digital context? Have you ever been to a cemetery to pay a mass visit to a dead loved one? I don’t think so, and that’s where social media is changing our relationship with death. It makes people remember once more, and at the same time it provides ways to celebrate a person within their social circle.

Given that death has become so relevant to our daily digital lives, it’s not surprising that people are starting to make arrangements for their digital afterlife. Expressing the need to make your death manifest on your Facebook profile is acknowledging that your digital self is an important part of your life. Some people might behave differently; some might decide to close their profiles, or ask relatives to delete them. As in a non-digital context, for some life will go on, for others it will not. But there’s no denying that social media is affecting our perception of and reaction to death in the same public, open way in which it has changed other previously private aspects of our lives.


Categories: gov, open data, policy, Web 2.0

Making Open Data Real, episode 3: the corporation

I have submitted my views to the Public Data Corporation consultation. Here are my answers.

Charging

Q1 How do you think Government should best balance its objectives around increasing access to data and providing more freely available data for re-use year on year within the constraints of affordability? Please provide evidence to support your answer where possible.

I strongly believe that the Government should do its best to keep as much data free as possible. In all honesty, I believe that all data should be kept free, as there are two possible situations:

– data are already available, or refer to processes that already produce data, in which case the cost of publishing can be kept relatively low;

– data are not available, in which case one should ask why this dataset is required.

In the second case, I would suggest that the agency releasing such a dataset could gain in efficiency, justifying the release of the data to the public for free.

There is also a consideration of what a data-based business model should look like. I think companies and individuals using public data as a basis for their business are finding it very hard to generate ongoing profit based on data alone. This brings me to the idea that charging for such data might actually make such companies lose interest in using them, with a loss of business and service to the community.

A good example of this point is real-time transport mobile apps: they provide, often for a very low price, an invaluable service to the public. These are data that are already available to some agencies, as they are generated in the process of driving the transport business to higher efficiency and effectiveness by knowing the location of the transport agents (buses, trains, etc…). Although in some cases this requires spending on servers to support high demand, in absolute and relative terms we are talking about limited resources. Such limited resources create a great service to the public, effectiveness for the transport company, and possibly some profit for the entity releasing the software. The wider benefit of releasing these data for free is much more important than the recovery of costs through a charge. That’s why I question in the first place the need for a Public Data Corporation, if its goal is just that of charging for access to data.

Q2 Are there particular datasets or information that you believe would create particular economic or social benefits if they were available free for use and re-use? Who would these benefit and how? Please provide evidence to support your answer where possible.

Surely, transport and location-based datasets are the most important: they allow careful planning by the public and, as a result, a more efficient society. But I would not single out specific datasets. I would rather suggest that the Government maintain an ongoing relationship with the data community: hear what developers, activists, volunteers, and charities ask for, and see whether such requests can be satisfied by releasing an appropriate dataset.

Q3 What do you think the impacts of the three options would be for you and/or other groups outlined above? Please provide evidence to support your answer where possible.

As I outlined in Question 1, I think data should be kept free. Hence, the best option is Option 1, provided that there is a genuine commitment to release more data for free. As I said, the real question is whether data are available or not. When data are available, publishing them and managing their updates is a marginal cost on top of the initial process. When data are not available, the focus should move to understanding whether their publication can improve ongoing processes.

The freemium model works on the assumption that there is a big gap between the provision of a basic version of the data and a more advanced service. I do not believe that this assumption holds for most of the datasets in the public domain.

Q4 A further variation of any of the options could be to encourage PDC and its constituent parts to make better use of the flexibility to develop commercial data products and services outside of their public task. What do you think the impacts of this might be?

I think that organisations involved in the PDC should keep to their public task. 

The risk in letting them develop commercial data products outside the public task is that the quality of the free portion of the data would plummet.

Q5 Are there any alternative options that might balance Government’s objectives which are not covered here? Please provide details and evidence to support your response where possible. 

I cannot see any other viable alternative, unless we consider the very unpopular idea of asking developers for part of their profit, if any, in a way that mirrors the mobile apps market. However, I think the overhead of doing so is not worth setting up such a system.


Licensing

Q1 To what extent do you agree that there should be greater consistency, clarity and simplicity in the licensing regime adopted by a PDC?

I think that, realistically, developers and other people interested in getting access to public data want clear and simple terms and conditions. I am not a legal expert and cannot possibly comment on the content of such a licensing regime, but I would like it to be clear, short, and understandable to people who are not lawyers. The Open Government Licence, like any Creative Commons derivative, is a good example.

Q2 To what extent do you think each of the options set out would address those issues (or any others)? Please provide evidence to support your comments where possible.

Once again, I would like to stress that the Open Government Licence is the ideal licence for any open data. This would suit Option 3: creating a single PDC licence agreement, with a simple, clear, short licence to cover all situations. Option 2, an overarching PDC licence agreement that groups the commonalities of a number of licences, is possibly second best, but it comes with a great risk of complexity and confusion.

Option 1, a use-based portfolio of standard licences, would possibly make sense in terms of clarity, but it greatly complicates the management of legal issues for licensees. The consultation highlights that “rights and associated charges [would be] tailored to specific markets”, which would make such licences very difficult to understand.

Naturally, if these licences need to be more restrictive than the Open Government Licence, I still think that a single restrictive licence, on the model of what the State of Queensland in Australia has done, would be the best idea for maintaining clarity and simplicity.

Q3 What do you think the advantages and disadvantages of each of the options would be? Please provide evidence to support your comments.

It’s very hard to tell at this stage, but I think that overcomplicated licences would greatly slow down access to the data and, consequently, delay the development of services to the community and the possibility of creating sustainable businesses. That’s why my choice goes to a single PDC licence agreement, possibly the Open Government Licence itself, in order to get services quickly developed and made available.

Q4 Will the benefits of changing the models from those in use across Government outweigh the impacts of taking out new or replacement licences?

I reckon there will be situations in which changing the models will have a positive impact as well as some cases in which there will be a local negative impact. We need to look at the overall benefit to society.


Oversight

Q1 To what extent is the current regulatory environment appropriate to deliver the vision for a PDC?

I would say the current regulatory environment is appropriate and ready to deliver the vision for a PDC, having already produced a very effective OGL. The problem is not delivering the PDC; it is rather questioning the need for the corporation at all.

Q2 Are there any additional oversight activities needed to deliver the vision for a PDC and if so what are they?

The only oversight activity needed at this stage is a deep analysis questioning the need for a PDC. I would strongly recommend questioning the need for charging and for using licences other than the OGL. A PDC charging for data risks destroying the thriving open data ecosystem and depriving the community of great services. The development of a rich ecosystem will generate, at some point, an income for the Government through taxation. It’s just not the moment to think about directly charging for data.

Q3 What would be an appropriate timescale for reviewing a PDC or its constituent parts’ public task(s)?

I would recommend an ongoing review, held no more often than every 7-8 months and no less often than every 18 months.

Categories: gov, open data, open source, policy, Web 2.0

Making Open Data Real, episode 2: the consultation

This is my response to the Open Data Consultation run by the Cabinet Office:

My name is Giuseppe Sollazzo and I work as a Senior Systems Analyst at St. George’s, University of London, dealing with projects both as a consumer and a producer of Open Data. In a previous job I dealt with clinical databases, so I would say I have developed a certain feel for the issues around the topic of this consultation, from both a technical and a policy perspective.


An enhanced right to data

I believe this is the crucial point of the consultation: the Government and the Open Data community need to work side by side in developing a culture that fosters openness in data. The consultation asks specifically what can be done to ensure that Open Data standards are embedded in new ICT contracts, and I think three important points need to be made:

1) Independent consultants/advisors need to be brought on board of new ICT projects when the tendering process starts; such consultants need to be recognised leaders of the Open Data community, and their presence should ensure the project has enough drive in its Open Data aspects.

2) Open Source solutions need to be favoured over proprietary software. There are Open Source alternatives to virtually any software package. Should one not be available, a project should be initiated to develop such a solution in-house under an Open Source licence. Albeit not always free, Open Source solutions offer a standard solution for a lower price, and create possibilities for resource-sharing and business creation.

3) ICT procurement needs to be made easier. The current focus of ICT procurement in the public sector is mostly on the financial stability of the contractor. I argue it should rather be on the reliability and effectiveness of the proposed solution. Concentrating on financial stability is a serious mistake, mainly caused by the fact that contractors develop proprietary solutions: a bankruptcy becomes a terrible risk because of the closedness of the solution, since no other company would be able to take it over where the former contractor left off; hence the need for strict financial requirements in tenders. I object to this. In my view, relaxing the financial requirements and moving the focus to the quality of the solution, its openness, its capacity to create an ecosystem and be shared, and its compatibility with open standards will improve the overall effectiveness of any ICT solution. Moreover, should the main contractor go bankrupt, someone else would be able to take their place, provided the solution was developed in the way I envision: consequently, there would be no need for strict financial requirements.


Setting Open Data Standards

As I have already stressed in the previous paragraph, the Government will need to change its rules of access to ICT procurement. Refocusing attention on openness, standards, and the ability to re-share software is the way to start setting a new model in the Open Data area. Web standards can be used, and they represent an example to follow in creating new data standards. Recognised community leaders can help in this process.


Corporate and personal responsibility

It is absolutely important that common-sense rules are established and made into law. The goal of this is not to slow operations down, but to ensure that the right to data mentioned earlier is actually enforced.

The consultation asks explicitly how to ensure public sector bodies’ commitment to Open Data. I believe that, despite many people feeling that the Government should “stay away”, there is a strong need for smart, effective regulation in this area. Think about the Data Protection Act and the Freedom of Information Act. Current legislation requires many public bodies to deal with data-sensitive operations, and most do so by having a Data Protection Officer and a Freedom of Information Officer. I believe that an Open Data Officer should operate in conjunction with these two, and that this would not require many more resources than already allocated. The Open Data Officer should drive the publication of data, and inspire the institution they work for to embrace the Open Data culture.

The Government should devolve its regulatory powers in this area to an independent authority established to deal with such regulatory issues. I envision the creation of an Ofdata, on the model of Ofcom for communication and Ofsted for education.


Meaningful Open Data

A lot of discussion has been going on about the issue of data quality. Surely, the whole community wants data to be informative, high-quality, meaningful, and complete. Unfortunately, especially at the beginning of the process, this is hard to achieve.

I think that lack of quality should never be a reason to withhold publication: where data is available, it should be published. However, I also believe that quality is important, and that is why the Government should publish datasets in conjunction with a statistical analysis and independent review (maybe run by the authority I introduced in the previous paragraph) that assesses the quality of the dataset (a toy example of such an assessment follows). This would serve two goals: firstly, it would allow open data consumers to deal with errors and with the interpretation of the data; secondly, it would help the open data producer to investigate problems in the process leading to publication and to set goals in its open data strategy.
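
Purely as an illustration of what even a minimal automated assessment could report, here is a sketch in Python; the CSV file name is a placeholder, and the missing-value rate is just one of many possible quality indicators:

```python
import csv
from collections import Counter

def quality_report(path):
    """Toy dataset assessment: the rate of missing values per column,
    a crude proxy for the statistical review proposed above."""
    with open(path, newline="", encoding="utf-8") as f:
        reader = csv.DictReader(f)
        missing = Counter()
        rows = 0
        for row in reader:
            rows += 1
            for col, value in row.items():
                if value is None or not value.strip():
                    missing[col] += 1
        return {col: missing[col] / rows for col in reader.fieldnames} if rows else {}

print(quality_report("dataset.csv"))  # e.g. {'postcode': 0.02, 'amount': 0.15}
```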

The final outcome of this publish-and-assess procedure would be a refined publication process that informs the consumers and the public about what to expect. Setting a frequency of update should be part of this process. Polishing the data should not: data should always be made available as it is, and if deemed low quality it should be improved at the next iteration.

There are questions about how to prioritise the publication of data. I believe that in this respect, and without neglecting the requirements of the FoIA, the only prioritisation strategy should be request numbers: the more the public requests a dataset, the higher the priority it should be given for publication, improvement, and updates.


Government sets the example

I think the Government is already doing a good job with this Open Data consultation, and I hope it will be able to take the lessons learnt and develop legislation accordingly.

Unfortunately, in many areas of the public sector there is still a “no culture” responsible for data not being released, Freedom of Information requests going unanswered, and general hostility towards transparency. I have heard an FoI officer comment “this is stuff for nerds, we don’t need to satisfy this kind of need” about Open Data requests. This is a terrible cultural problem that prevents a lot of good from being done.

I believe that the Government should set the example by reviewing and refining its internal procedures for the release of data, and by responding to FoI requests in a simpler, more compassionate way, stressing collaboration with the requestor rather than antagonism.

Moreover, it should be the Government’s mission to organise workshops and meetings with Open Data stakeholders in the public sector, to try and create a deeper perception of the issues around Open Data and its benefits. Being on http://data.gov.uk should be standard for any public sector institution, and represent an assessment of their engagement with the public.


Innovation with Open Data

The Government can stimulate innovation in the use of Open Data in some very simple ways. Surely it can speed up awards and access to funding for individuals and enterprises willing to build applications, services, and businesses around Open Data. This should apply to both for-profit and not-for-profit ventures, with the only discriminating factor being the social benefit delivered to their communities or to the wider public.

The most important action the Government can take to stimulate innovation is, however, the simplification of bureaucracy. Making Company Law requirements easier to satisfy, as already discussed for ICT procurement, is vital to bringing ideas to life quickly. Limiting legal liability for non-profit ventures is also a big step forward. Funding and organising “hackathons”, barcamps, unconferences, and any other kind of sponsored event where developers, policy makers, charities, and volunteers can work together is also a very interesting way of pushing innovation and making it happen.


Open data offers an amazing opportunity to create “improvement by knowledge”. Informed choice, real-time analysis, and accurate facts can all be part of a new way of understanding democracy and innovation, and the UK can lead the way if its leaders are able to understand the community and provide it with the appropriate rules that make its tools work and the results happen. This way, we will have a situation where services can be discussed and improved, and public bodies have a chance to adjust their strategy; where citizens can develop their ideas, change the way they vote, and hold their leaders to account; and, as a result, communities can work together, and society can be improved.

Categories: policy, Web 2.0

How Data saved my flatmate from Police questioning (or how online services have become our memory)

A couple of weeks ago my flatmate, Claudio, was called by a very angry Police officer who wanted to question him as a possible suspect for breaking another man’s ribs at Leicester Square tube station one night last December. Everyone who knows him would find the very possibility almost funny. However, the whole episode was a great occasion to think about… privacy and data!

The first thing that comes to the mind of a person being questioned, confident of their own innocence, is “I need an alibi”. “It was not me” is not sufficient for the Police; they will ask “OK – so what were you doing that day at that time?”.
Of course, when asked this question you are surprised enough, and possibly shocked and worried about the consequences, that you don’t necessarily remember. It’s easy to fall into despair. “What will I say?”, he asked me.

This is when I thought about the magic word: “data“.

Each of us disseminates data about ourselves, especially heavy Internet users like the two of us. So, the first thing I did was check my Gmail account: e-mails and chats for that specific date. A day like any other suddenly became meaningful and full of memories.

For example, there was a significantly lower number of e-mails than on my average day, a sign that I was mostly out, not at work. It turned out it was a Saturday. It seemed from my e-mails that I was heading to a party that night: it turned out I was at my rugby team’s Christmas party.

Why is what *I* was doing useful to Claudio? Very simply, because we spend most Saturday evenings with the same group of friends. Apparently I was not with him that Saturday – I could not be his alibi. It also seemed I checked in back at my station around 11pm: a very early time to head home on a Saturday after a party. What happened?
Before despairing, I took a step back through the e-mail flow and found a very peculiar e-mail at about 12:30 saying just “Klaus” and a phone number. Funnily enough, I don’t know anyone called Klaus, so… who is Klaus? Why did I mail his number?

You should know that I live between Alexandra Palace and Wood Green, and I have a habit of walking to Muswell Hill for a coffee on Saturday mornings. I also tend to have lunch at home. So, 12:30… I was probably on my way back from Muswell Hill. I started checking – using OpenStreetMap – all the possible locations I tend to visit on the way back: there’s a shop, a tennis club I’ve played at, a friend who lives on the corner by the tennis club, a couple of cafes, and a piano shop.
That’s when the Eureka bulb switched on.

I headed to Facebook: not many status updates, but one very important one, with a photograph. A single photograph showing me on a bus, on the way back home, with heavy snow outside!
I remembered that to avoid the snow I had entered the piano shop. I was moving to another flat at the time, and was investigating the possibility of getting a free piano from Freecycle. I entered the piano shop to enquire about the cost of piano removals – Klaus being the name of the van man. That’s also why I headed back home very early after the party: I was worried about transport not working because of the snow. I remembered I found Claudio at home when I got back.

I checked my e-mails and chats again: the flow stopped around 6pm. Basically, there was a hole of about 4 hours in which I didn’t know where Claudio was. Still, we had a trail of where and when to look. There was no chat, e-mail, or Facebook status update from him that night, suggesting he had been out, too. Hence, we contacted all the common friends he could have been with that night. All of a sudden he said: “Now I remember it all! After you went out, I called Jasmin and went with her for dinner at Satsuma, with her friend visiting from America”. In less than 20 minutes, photos showing him in the restaurant were in his e-mail account.

More interestingly, it turned out he actually was at Leicester Square tube station at the time the Police claimed he was. More worryingly, he had been alone for some time before meeting his friend.
I’m not a good thriller writer; the finale is probably obvious to you by now. He had touched out with his Oyster card at the same time the person the Police were looking for was in view of a CCTV camera. Of course, when the Police showed us the pictures, it was obviously not him: they had called him because his Oyster card is registered.
But can you see why I’m amazed by this story? A day of which we couldn’t remember anything is now a story full of details.

Moreover: there was an accusation built on data (coming from the Oyster card system), to which we found a defence built on data (coming from Gmail, Foursquare, and Facebook).

I began to think about what this story would have been like before Gmail, before Facebook, before check-ins. I know the answer: Claudio would have gone to the Police scared, unable to answer the questions in all honesty, almost sure he had no way of defending himself. Instead, thanks to this data society, he was able to go there without any fear, ready to hear their story and respond to their questions, sure that he knew every move of that day.
Online services have become our memory.

Don’t get me wrong: I still find the use of user data, and the way most online companies deal with privacy, problematic. It is perhaps scary that online services managed by strangers have become a replacement for our own memory. However, the mountain of data they give us access to can be useful and helpful. The question is how to make good use of these data, and how to store them in a secure, private way that allows us to decide who we want to share them with (luckily not the Police).

Personal lesson learnt: I will now save all my Oyster history (before it expires every 3 months), my check-ins, and my Latitude data. I want to be ready for questioning.

Categories: geo, Web 2.0

Wherecamp, Therecamp

Disclaimer: This is a dashboard/notepad-like stream of ideas and questions, rather than a proper blog post 🙂

WherecampEU, Berlin

An amazing time with some of the best minds around. Some points I’d like to put down and think about later:

1) Ed Parsons (@edparsons) ran a very interactive session about what kind of open data developers expect from public authorities and companies. One of the questions asked was “would you pay to get access to open data?”. This issue has long been overlooked. Consider for a moment just public authorities: they are non-profit entities. Attaching an open licence to data is quick and cheap. Maintaining those data and making them accessible to everyone is not. As developers and activists we need to push the Government to publish as much data as they can. However, we want data to be sustainable. We don’t want to lose access to data for lack of resources (think about TfL’s TrackerNet). Brainstorming needed…

2) Gary Gale (@vicchi) and his session on mapping as a democratic tool can be reduced to a motto: we have left OS times, we are in OSM times. Starting from the consideration that we talk not about addresses but about places, part of the talk was dedicated to the effort Gary and others are putting into defining a POI standard. The idea is to let the likes of Foursquare, Gowalla, Facebook Places, etc… store their places in a format that makes importing and exporting easy. Nice for neogeographers like us, but does the market really want it? Some big players are part of the POI WG, some are not.

3) I really enjoyed the sessions on mobile games, especially the treasure hunt run by Skobbler. However, some of these companies seem to suffer from the “yet another Starbucks voucher” syndrome. I’m sure that vouchers and check-ins can be part of a business plan, but when asked how they intend to monetise their effort, some of these companies reply with a standard “we have some ideas, we are holding some meetings”. Another issue that needs to be addressed carefully – and it seems to be a hard one – is how to ensure that location is reported accurately and honestly. It doesn’t take Al Capone to understand that you can easily cheat about your location, and that when money is involved things can get weird.

4) It was lovely to see Nokia and Google on the same stage. Will it translate into some cooperation, especially with respect to point 2)?

5) I can’t but express my awe at what CASA are working on. Ollie’s maps should make it into the manual for every public authority manager: they are not just beautiful, they make concepts and problem analysis evident and easy to appreciate for people who are not geo-experts. And by the way, Steven’s got my dream job, dealing with maps, data, and RepRap 😀

6) Mark ran a brainstorming session about his PhD topic: how to evaluate trust in citizen-reported human crisis reports. This is a very interesting topic, and he reports on it extensively on his blog. However, I’m not sure this question can have a single answer. What I feel is that different situations might require different models of trust evaluation, to the point that each incident could be so peculiar that even creating categories of crisis would prove impossible. Mark’s statistical approach to starting his work might return an interesting analysis. I’m looking forward to seeing how things develop.

7) Martijn’s talk about representing history in OpenStreetMap exposed a big problem: how to deal with the evolution of a map. This is important from two points of view: tracking down errors, and representing history. This problem requires a good brainstorming session, too 🙂

8) I can’t help but praise Chris Osborne for his big data visualisation exposing mcknut’s personal life 🙂 And also for being the best supplier of quotes of the day, and a great organiser of this event, as much as Gary.

What didn’t quite work

Just a couple of things, actually:
1 – live code presentations are doomed, as Gary suggested. They need better preparation and testing.
2 – no talk should start with “I’ve just put these things together”. Even though this is an unconference, that doesn’t mean you should show something of poor quality, or give that impression.

Me wantz

Next time I wish to have:
1 – PechaKucha-style lightning presentations on day 1, to help people understand which sessions they want to attend
2 – similarly to point 1, a wiki with session descriptions and, upon completion, comments, code, slides, etc…
3 – a hacking/hands on/workshop session, on the model of those run at #dev8d.

Categories: recommender systems, Web 2.0

Does the world want recommendations?

NewScientist reported on April 30th that Futureful, a Finnish start-up, is building a predictive, iPad-based search engine that will use a recommender system. By harvesting information from social feeds on Facebook, Twitter, etc…, its algorithm takes the topics that are trending, analyses the user’s interests and behaviour, and recommends new topics that might interest them.

Eric Schmidt is also quoted as having said “The ability to tell me things I didn’t know but am probably very interested in is the next great stage of search“.

I am possibly cynical about this topic, having blogged extensively (Who wants to be recommended?, May 2009) about the problem of appropriate recommendations and the ability of such systems to surprise.

The problems I see relate to how you are supposed to evaluate a system whose task is to generate surprising recommendations. Especially in academic research, the success of a recommendation engine is traditionally evaluated using a very simple metric: take a list of users’ choices in the given domain, hide a number of entries, and check whether the recommender system returns them upon analysing the remaining ones (a minimal sketch of this protocol follows). Straightforward, although several other metrics have been proposed.
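
To make the hidden-entries protocol concrete, here is a minimal sketch in Python; the data structures and the toy popularity-based recommender are my own illustrative assumptions, not any specific engine:

```python
import random
from collections import Counter

def hit_rate(user_histories, recommend, n_hidden=1, top_k=10):
    """Leave-n-out evaluation: hide entries from each user's history,
    recommend from the rest, and count how often the hidden ones return."""
    hits = total = 0
    for user, items in user_histories.items():
        items = sorted(items)
        if len(items) <= n_hidden:
            continue  # not enough history to hide anything
        hidden = set(random.sample(items, n_hidden))
        observed = [i for i in items if i not in hidden]
        recommended = set(recommend(user, observed, top_k))
        hits += len(hidden & recommended)
        total += len(hidden)
    return hits / total if total else 0.0

# A toy popularity-based recommender, purely to make the sketch runnable:
def make_popularity_recommender(user_histories):
    counts = Counter(i for h in user_histories.values() for i in h)
    def recommend(user, observed, k):
        return [i for i, _ in counts.most_common() if i not in observed][:k]
    return recommend

histories = {"alice": {"a", "b", "c"}, "bob": {"a", "c", "d"}, "carol": {"b", "d"}}
print(hit_rate(histories, make_popularity_recommender(histories)))
```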

Now, how are you supposed to evaluate a system that doesn’t have a reference list? We can surely think of many metrics, some of them quantitative, some of them qualitative (or even social-based):

  • the probability a user follows the suggested link
  • the strength of the trust feeling towards the recommender
  • the fact that a user suggests the recommender system to other users …

However, a metric needs to be meaningful, and qualitative metrics often lack this meaningfulness. If I’m a user and I want to be surprised, I will probably follow any random link. I often do that in what I call my serendipitous Wikipedia crawls. My favourite recommender system is, above all, Twitter: I only follow people who make me learn something interesting. Not one of the people that Twitter’s “Who to follow” system recommended was relevant to me.

So I am a bit confused: what exactly is a predictive search engine really trying to achieve?

Categories: geo, geomob, mobile, Web 2.0

GeoMob, 12 May 2011

A good level of participation at last night’s GeoMob. Despite two speakers’ defections, we had a well-balanced schedule (one big company, one researcher, one startup) and a rich Q&A session. Here’s my usual summary, with some thoughts embedded.

Microsoft Bing Maps, by Vikas Arora (@vikasar), Solution Sales Specialist
A general show-case talk, as we often have from big companies. However, some interesting products seem to be coming out of the Microsoft pipeline, especially StreetSlide and the partially related Photosynth. There were some awesome novelties (although not immediately usable), like the amazing live Augmented Reality video stream over a static image view. I’m not totally sure the GeoMob crowd is the right one to show AR to 😉

There was some good debate about updating StreetSlide imagery, thanks to a question by Ollie. This is a well-known problem in Google StreetView, especially on busy London high streets where shops sometimes change hands multiple times in a year. As a result, by the time StreetView imagery has reached Google’s servers, it displays a vintage version of reality. Vikas claims that by partnering with Navteq they will be able to update images every 4-6 months.

Vikas earns the best quote of the night award: “I can’t say much about Nokia except that it’s good for us”.

Mapping Surnames Geographically, by James Cheshire (@spatialanalysis), UCL Geography
I was absolutely fascinated by James’ work when I discovered it in National Geographic Magazine some months ago. The general subject of this talk was how surname origins and popularity can be displayed on a map. Two works were presented, about surnames in the US and in London.

The talk and the Q&A session highlighted both the power of a map to show surnames and its limitations. There are obvious visualisation problems: short and long surnames being displayed at different sizes, the choice of colours, positioning, density, granularity.

Although the map itself is a beautiful item, I think that its dynamic version, able to show the nth most popular surname, is more useful, but only if used… dynamically. What I mean is that in places that are true melting pots, like London, what is interesting is not which surname or surnames are the most popular, but rather the distribution of names of a certain origin in a given place. In other words, given the assumption that certain surnames can be related to certain communities, it’s interesting to see that the five most popular surnames in a given area are sometimes from five different origins (a toy sketch of this kind of computation follows).
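
Purely to illustrate the kind of computation I mean, here is a toy sketch in Python; the records and the surname-to-origin lookup are invented stand-ins for the much richer data a real study would use:

```python
from collections import Counter

# Hypothetical inputs, for illustration only: (area, surname) records and
# a surname-to-origin lookup.
records = [
    ("Hackney", "Smith"), ("Hackney", "Begum"), ("Hackney", "Nguyen"),
    ("Hackney", "Kowalski"), ("Hackney", "Adeyemi"), ("Hackney", "Smith"),
]
origins = {"Smith": "English", "Begum": "Bengali", "Nguyen": "Vietnamese",
           "Kowalski": "Polish", "Adeyemi": "Yoruba"}

def top_surname_origins(records, area, n=5):
    """The n most common surnames in an area, each with its likely origin."""
    counts = Counter(s for a, s in records if a == area)
    return [(s, origins.get(s, "unknown")) for s, _ in counts.most_common(n)]

print(top_surname_origins(records, "Hackney"))
```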

James was open about the issues of visualising surnames this way, especially about how to treat granularity (e.g. the Irish community in New York is not as big as it would appear). There is a lot of work to do in this area, and a map is only the tip of an iceberg of research, development, coding, and imagination.

Introducing Eeve, by Jan Senderek (@jansenderek)
An impressive UI analysis from this young start-up, whose goal is to let people have fun creating and sharing events. Jan, their CEO, delivered a very interesting talk about how UI can lead to a great mobile application. Their strategy of “mobile first, then web” is interestingly different from that of many other startups around. Event creation and sharing seems to have a mind-boggling peculiarity: initially, events will need to be created in the place where they are held and shared immediately. No forward planning allowed, which sounds strange but might capture the fantasy of party-goers. They plan to extend the service to let event organisers create entries.

The (long) Q&A session seemed critical but was genuinely interested. First of all, turning myself into the bad guy, I asked what makes them different from their competitors. I’ve attended GeoMob since 2009, and this is at least the third company introducing a similar service, and their unique selling point is not extremely clear. Surely their app’s UI seems really good, but is that enough to reach the critical mass of users needed to succeed?

Secondly, the business model seemed not very well defined. Although, like any stealth startup, Eeve probably wouldn’t disclose too much about it, the general perception was that they need to think it through a bit more carefully, and Jan admitted that.

However, I also have the general impression that small companies presenting at GeoMob (not just Eeve) tend to come just with their shiny iPhone application rather than with the backstage work, which might be of great interest. This also gives the wrong impression that most of them are trying to monetise nothing more than a mobile app. As it turns out, one of the other LBS companies that introduced a similar event-based app at GeoMob was also selling a CRM system to event organisers, which is where their main revenue stream comes from. None of this was mentioned at their presentation, and we were left wondering about the same questions.

I won’t mention all the discussions about stalking and privacy: we’ve done that for all companies providing LBS, so nothing new from that perspective. But it’s always good to have our @StevenFeldman pointing that problem out.

To be honest, I’m curious about Eeve and will probably try it out (paying attention to privacy, of course :P). It would be nice to have a report on how many users join the system, and especially on their B2B strategy.
Maybe for a next GeoMob?

Categories: art, Web 2.0

This blog in a cloud

Thanks to Wordle, this is my blog’s word cloud. Unsurprisingly, and forgetting for a minute interesting appearances such as get and way, the winners are users, data, and recommendations. (For the curious, a sketch of how to build a similar cloud programmatically follows.)
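
Wordle is a web tool, but a similar cloud can be reproduced in code. A minimal sketch, assuming the third-party Python wordcloud package (pip install wordcloud) as a stand-in for Wordle, with a placeholder file of blog text:

```python
from wordcloud import WordCloud, STOPWORDS

# "blog_posts.txt" is a placeholder for an export of the blog's text.
text = open("blog_posts.txt", encoding="utf-8").read()
stopwords = STOPWORDS | {"get", "way"}  # drop the filler words noted above
cloud = WordCloud(width=800, height=400, stopwords=stopwords).generate(text)
cloud.to_file("blog_cloud.png")
```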