Posts Tagged ‘mobile’

Enhancing Mobile Recommender Systems with Activity Inference

Thursday, July 2nd, 2009

Daniele had briefly blogged here about this interesting paper by Kurt Partridge and Bob Price, for which I will give a longer review. Some of the techniques used in this paper could be useful for further research, and even its limitations are an interesting subject of analysis.

Given that today’s Mobile Leisure Guide Systems require a large amount of user interaction (for configuration and preferences), this paper proposes to integrate current sensor data, models built from historical sensor data, and user studies into a framework able to infer high-level user activities, in order to improve recommendations and reduce the number of user tasks.

The authors claim to address the lack of situational user preferences by interpreting multidimensional contextual data through a categorical variable that represents high-level user activities such as “EAT”, “SHOP”, “SEE”, “DO”, and “READ”. A prediction is, of course, a probability distribution over the possible activity types.

Recommendations are provided through a thin client supported by a backend server. The following techniques are employed to produce a prediction:

  • Static prior models
    • PopulationPriorModel: based on time of day, day of the week, and current weather, according to typical-activity studies from the Japan Statistics Bureau.
    • PlaceTimeModel: based on time and location, using hand-constructed data collected from a user study.
    • UserCalendarModel: provides a likely activity based on the user’s appointment calendar.
  • Learning models
    • LearnedVisitModel: tries to predict the user’s intended activities from time of day, learning from observations of his/her contextual data history. A Bayesian network is employed to calculate the activity probability given location and time (a minimal sketch of this idea follows the list).
    • LearnedInteractionModel: constructs a model of the user’s typical activities at specific times, by looking for patterns in the user’s interaction with his/her mobile device.
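
The paper does not spell out the structure of the LearnedVisitModel’s Bayesian network; the following is only a minimal naive-Bayes-style sketch of the idea of computing P(activity | location, time) from a history of observations. The class name, smoothing constants, and counts are my own illustration, not the authors’ implementation.

```python
from collections import Counter, defaultdict

ACTIVITIES = ["EAT", "SHOP", "SEE", "DO", "READ"]

class LearnedVisitModelSketch:
    """Toy reading of P(activity | location, hour) as
    P(activity) * P(location | activity) * P(hour | activity), normalized."""

    def __init__(self):
        self.activity_counts = Counter()
        self.location_counts = defaultdict(Counter)  # activity -> location counts
        self.hour_counts = defaultdict(Counter)      # activity -> hour-of-day counts

    def observe(self, activity, location, hour):
        # One labelled observation from the user's contextual data history.
        self.activity_counts[activity] += 1
        self.location_counts[activity][location] += 1
        self.hour_counts[activity][hour] += 1

    def predict(self, location, hour, alpha=1.0):
        # Laplace-smoothed scores; the smoothing constants are illustrative only.
        total = sum(self.activity_counts.values())
        scores = {}
        for a in ACTIVITIES:
            n_a = self.activity_counts[a]
            prior = (n_a + alpha) / (total + alpha * len(ACTIVITIES))
            p_loc = (self.location_counts[a][location] + alpha) / (n_a + alpha * 10)
            p_hour = (self.hour_counts[a][hour] + alpha) / (n_a + alpha * 24)
            scores[a] = prior * p_loc * p_hour
        norm = sum(scores.values())
        return {a: s / norm for a, s in scores.items()}

# Toy usage:
m = LearnedVisitModelSketch()
m.observe("EAT", "downtown", 12)
m.observe("SHOP", "mall", 15)
print(m.predict("downtown", 12))
```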

Activity inferences are made by combining the predictions from all five models, using a geometric combination of the probability distributions.
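
The paper does not give the exact combination rule beyond calling it geometric; a minimal sketch, assuming equal model weights and a simple normalized geometric mean (the function name combine_geometric is mine), looks like this:

```python
import math

def combine_geometric(distributions, eps=1e-12):
    """Normalized geometric mean of several probability distributions
    over the same set of activity types."""
    activities = distributions[0].keys()
    combined = {}
    for a in activities:
        log_sum = sum(math.log(max(d[a], eps)) for d in distributions)
        combined[a] = math.exp(log_sum / len(distributions))
    norm = sum(combined.values())
    return {a: p / norm for a, p in combined.items()}

# Toy blend of two model outputs (the real system combines five):
print(combine_geometric([
    {"EAT": 0.5, "SHOP": 0.2, "SEE": 0.1, "DO": 0.1, "READ": 0.1},
    {"EAT": 0.3, "SHOP": 0.4, "SEE": 0.1, "DO": 0.1, "READ": 0.1},
]))
```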

A query context module supplies the activity prediction module with a prediction of the context in which the user may actually be interested. For example, the user could be at work when searching for a restaurant, but his/her actual context could be the downtown area where he/she plans to go for dinner.

The authors carried out a user study, evaluating the capability of each model to provide accurate predictions. Eleven participants carried the device for two days and were rewarded with cash discounts for leisure activities they engaged in while using the device. The Query Context Prediction module was not enabled because of the study’s short duration. Results show high accuracy (62% for the baseline “always predict EAT”, 77% for the PlaceTimeModel).

Some good points and problematic issues with this paper

  • the prediction techniques used are interesting and could be applied to other domains; moreover, I think it is useful to combine data from user studies with learning techniques, as user profiling helps developers (and apps) to understand users in general - before applying this knowledge to a specific user
  • the sample size makes the user study flawed: 11 participants carrying devices for 2 days approaches statistical insignificance; the weekday/weekend imbalance is the first issue that comes to mind, just to mention one
  • offering cash discounts for leisure activities is presumably not the right reward for this kind of study, as it makes users more willing to engage in activities that require spending money over the free ones (e.g. EAT vs. SEE)
  • the authors admit that their RS base contains mostly restaurants, which I think is not taken into enough account when claiming high accuracy. Given that the baseline predictor reaches 62% accuracy by always predicting EAT, a deeper analysis would have made the paper more rigorous
  • one of the most interesting contributions of the paper is the definition of the query context module, which is unfortunately not employed in the study for usability reasons related to its duration. Again, a better-designed user study would have solved this problem. I question whether it is worth carrying out user studies when resources are so limited that statistical significance becomes objectionable. However, there is some attempt to discuss expected context vs. actual context, which is potentially very interesting: e.g., a user wants to SHOP but the shops are closed, so he/she EATs. It would be interesting to discuss how a RS should react to such situations
  • user-interaction issues: the goal of the presented system is to reduce user tasks on the mobile device, yet interaction is needed to tune the system and correct its mistakes, and one of the predictors uses exactly the user’s interaction with the device as a parameter. There seems to be some confusion about the role of user interaction in this kind of system (imho, an HCI approach could improve RS usability and, consequently, accuracy)
  • the system is not well suited to multi-purpose trips (e.g. one EATs whilst DOing, or alternates between SHOPping and EATing), and in such cases predictions are mostly incorrect.

mobile projects

Wednesday, December 17th, 2008

ixPocket, a mobile technology consultancy, has numerous interesting mobile projects.


For example, over a period of two years, ixPocket designed a data collection system to monitor the effectiveness of a number of location-based technologies (pdf).