Thursday, May 19, 2011
There is a widely held belief in the field of HCI (Human-Computer Interaction) that context awareness is a key technology for better interfaces and applications. To achieve more satisfactory interaction, ubicomp systems should have a better understanding of the situations of the people they are dealing with. This seems quite logical, as people are very context-aware. We apply context information in a number of ways in our daily lives: in our conversations with other people, in our search for new interesting things, in our navigation through busy streets. It appears that to fully integrate with human society, machines have to develop at least rudimentary context awareness.

Mobile computing has made this need more urgent. With desktop computing, the physical context could be more or less assumed. Mobile computers, in contrast, move with people into a variety of places and situations, and some ubicomp functions may be quite inappropriate in those situations. Consider the infamous example of cellphones ringing in libraries, concerts, or churches. The function – alerting mobile users to incoming calls and messages – is useful in many cases, but without context awareness, the resulting blunders cause major disturbance. The phones should know when it is appropriate to ring. Yet this function is extremely hard to implement successfully. As a result, people are adapting their lives instead.
The Context of Location: Location Awareness
As another example, consider location awareness. Location has been the most popular form of context in ubicomp systems, and numerous applications have been proposed that use location as a key source of information. Location-based services are appearing everywhere in the marketplace. Finding businesses through Google maps is today routine, and finding the nearest coffee shop via a text message is now possible through mobile networks. Personal content sites such as Flickr feature location information as metadata in user-contributed images. GPS navigators are becoming standard fare for drivers, and soon for pedestrians. The ability to track parcels, buses, flights, and other moving objects is making our life easier – or more up-to-date. Location is the first form of context data that is really taking off. Why is it, then, that mobile operators have been silently dropping their early trials for locating people through their mobile devices? Why is it that the various prototype systems for finding people in offices through their active badges have invariably fallen out of use after the initial novelty has worn off? Apparently in our present society many people do not feel ready to publicly announce their position, particularly when such tracking happens through an obscure monitoring system.
Widespread use of positioning will ultimately increase acceptance, but I feel that the human need for privacy will remain a fundamental barrier to location awareness well into the future. Put bluntly, tracking people is wrong, while tracking things, services, and information is fine. Put another way, users are comfortable pulling location data but not pushing it. I do not expect this to change anytime soon, at least until the current generations of Internet users fade away.
Presence and Activity Recognition
What kinds of context information will be the next to reach public use? Presence information, at least, is already out there. Users of instant messaging and IP telephony can announce their availability through presence attributes such as "busy", "unreachable", or "online". This makes it easier for people to judge whether it is appropriate to approach the other party. Similar features are available in mobile phones, though the applications have not yet caught on. Note that presence is usually information that people input explicitly. Automated sensing of presence data is still generally impractical. Researchers around the planet are busy collecting sensor data related to humans and decoding it into user activities.
At the moment, there is much promise in the area of motion analysis. In labs we can now, with acceleration sensors and gyroscopes, detect such primitive motion states as walking, standing, sitting, or climbing stairs. Others are able to abstract ambient noise levels or video feeds into guesses about the surrounding places and situations. Another promising research track is sensing presence via short-range networks, such as Bluetooth or Wi-Fi. If two phones are within range of each other, it is possible (though not certain) that their owners are in each other's physical vicinity. Adding knowledge obtained from social network analysis, we can today guess when someone is among friends or colleagues. Presence can also become a new source of information overload. Unless there are sufficient filters, we could be flooded by situation updates from hundreds of people. More research and prototyping are needed before we can say what kinds of context data should be transmitted, to whom, under what circumstances, and how such information should be made visible to the receiver.
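Motion-state detection of this kind often boils down to simple statistics over accelerometer readings. The sketch below is a minimal illustration: it guesses a primitive motion state from the variance of acceleration magnitude. The thresholds are purely illustrative; real systems learn them from labeled sensor data.

```python
import math

def classify_motion(samples):
    """Guess a primitive motion state from 3-axis accelerometer samples.

    samples: list of (x, y, z) accelerations in m/s^2.
    Returns one of "still", "walking", "running".
    """
    magnitudes = [math.sqrt(x * x + y * y + z * z) for x, y, z in samples]
    mean = sum(magnitudes) / len(magnitudes)
    variance = sum((m - mean) ** 2 for m in magnitudes) / len(magnitudes)

    # Illustrative cut-offs: low variance means the device is at rest,
    # moderate variance suggests walking, high variance suggests running.
    if variance < 0.5:
        return "still"
    elif variance < 6.0:
        return "walking"
    return "running"
```

A device lying on a table reports a near-constant magnitude around gravity (9.81 m/s^2), so its variance is close to zero; rhythmic walking and running produce progressively larger swings.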
And What's the Right Domain?
No single source of context information – location, motion sensing, proximity, audio, video – yields 100% recognition of the human context. On the contrary, context awareness is likely to always remain approximate. Through sensor fusion, or combining information from various channels, we hope that the inherent redundancy will help us reduce the noise in our results, but there is little hope that foolproof, generic context-awareness methods will arise. There is more hope in limited domains. As with AI (Artificial Intelligence) in general, limiting the scope of application invariably brings better performance. Choosing the right application area brings even better performance. So, we feel that while the goal of generic context awareness will remain beyond reach, it is important to try out context information in all kinds of applications and, slowly, introduce the successful solutions into generic ubicomp systems.
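One simple way to sketch this kind of sensor fusion is a naive-Bayes-style combination of independent per-channel estimates: each channel contributes a probability for each candidate context label, and the products are renormalized. The channels, labels, and probabilities below are hypothetical, chosen only to show the mechanics.

```python
def fuse(channel_probs):
    """Naive-Bayes-style fusion of independent channel estimates.

    channel_probs: list of dicts, each mapping a context label to the
    probability one sensor channel assigns to it.
    Returns a normalized fused probability distribution over all labels.
    """
    labels = set().union(*channel_probs)
    fused = {}
    for label in labels:
        p = 1.0
        for probs in channel_probs:
            # Small floor so a label missing from one channel is not
            # zeroed out entirely.
            p *= probs.get(label, 1e-6)
        fused[label] = p
    total = sum(fused.values())
    return {label: p / total for label, p in fused.items()}
```

If, say, an audio channel estimates {"meeting": 0.7, "street": 0.3} and a motion channel estimates {"meeting": 0.6, "street": 0.4}, the fused result favors "meeting" more strongly than either channel alone – the redundancy reinforces the agreeing hypothesis.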
Imaging is one application area that can benefit from context awareness. Digital image and video files can carry metadata about the context at the time the file was created, transferred, edited, or displayed. The metadata can contain features such as optical parameters, location and orientation, the devices that were present (and hence their users), and the messaging history for files that were sent. Such contextual metadata allows reconstructing the scene long after the fact, which can be used for autobiographical purposes, as an automated field report or diary, or for automated presentations. Importantly, the metadata allows more human-oriented searching: instead of tags or directories, we can search for people and places. New ways of sharing visual material are emerging, as online image galleries begin to embrace location data (and, later, other contextual metadata).
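The human-oriented search described above could look like the following toy in-memory index. The field names and sample data are illustrative only; real files would carry this information in EXIF- or XMP-style metadata rather than a Python dict.

```python
from datetime import datetime

# A toy photo index; "place" and "people" stand in for contextual
# metadata that a context-aware camera could record automatically.
photos = [
    {"file": "img001.jpg", "taken": datetime(2011, 5, 1, 14, 30),
     "place": "Helsinki", "people": ["Anna", "Ben"]},
    {"file": "img002.jpg", "taken": datetime(2011, 5, 2, 9, 0),
     "place": "Espoo", "people": ["Ben"]},
]

def search(place=None, person=None):
    """Find photo files by place and/or person instead of tags."""
    results = []
    for photo in photos:
        if place and photo["place"] != place:
            continue
        if person and person not in photo["people"]:
            continue
        results.append(photo["file"])
    return results
```

A query like `search(person="Ben")` returns every photo Ben appears in, regardless of folder or filename – exactly the shift from directories to people and places that the paragraph describes.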
Ultimately, context-aware systems would need to understand human life to offer meaningful help to humans. This goal is presently out of reach, and probably will remain so for quite some time. Machines lack direct access to our minds, and therefore have to resort to externally observable signals of our behavior. Even we humans, with a lifetime of training, have difficulty making sense of other humans. In some limited domains it is already possible to guess what users of computer systems may be thinking (in help systems, for instance), and the number and scope of such systems will keep increasing.
This post was written by: Alex Wanda