Ubiquitous Computing And In Particular Context Aware Applications


Introduction

Ubiquitous computing, and in particular context-aware applications, have the potential to solve a wide variety of issues across several areas. According to Gartner, increased usage of operating systems, browsers, and a wider range of applications and the devices that run them, coupled with a larger user base due to the increased availability of such devices, creates new opportunities for both businesses and consumers. Gartner anticipated that by 2015 context would become as important in mobile consumer services and relationships as search engines have become to the Web (Gartner, 2011). By utilizing knowledge of a user's or device's operational environment, including location, presence/proximity, activity, social interactions, and attributes, along with other environmental inputs, we can implement more responsive applications and services capable of recognizing users' situations and anticipating their needs. Technological advances in several areas, including communication, mobile devices, and development frameworks, have made it simpler to obtain contextual information.

Given their form-factor and computing capabilities, mobile devices are today considered one of the most obvious embodiments of the ubiquitous computing devices described by Weiser (Weiser, 1999). They continue to contribute to the progress of context-awareness: advances in their development and accessibility, combined with their general portability, provide a platform for both sensing and responding to changes in context. The general permeation of mobile devices into almost every aspect of our daily lives, from smartphones and portable music players to PDAs incorporating advanced communication technologies and increased computing and storage capacity, has resulted in almost instantaneous access to knowledge. This knowledge can be of the operating environment or of the user; in general, of the context. It can be used to better understand human behavior and interactions, and to tailor applications and services to respond more quickly and accurately to observed situations.

While context-aware services on mobile devices have become increasingly interesting to researchers as well as the average consumer, the field is still in its relatively early stages. Currently envisioned applications include activity recognition, location-based recommender systems, and social-networking applications, amongst others. There are many areas in which context-aware applications can provide improved services and quality of life, including health-care services (both in medical facilities and for home care), monitoring of the physical environment, and educational initiatives. In a personal setting, for example, when a user carrying a mobile device approaches or enters their smart car, the car, after establishing the user's identity, can start the engine automatically and calculate the route for the trip to work, factoring in current traffic and road conditions.

Within the workspace, assuming that it is also a smart space with the requisite infrastructure in place to support context-aware interaction, the user's device can act as a security token, providing access to the office as well as to a computing terminal and the relevant profile, including temperature and lighting preferences, appointments, etc. Inter-office communication and business calls can provide automatic translation services based on the geographic location or a specified primary language of the participants. Lunch can be ordered or scheduled based on the user's preferences and/or schedule. A mobile device is not strictly necessary for establishing user identity, as this can be done by various methods including RFID tags. However, mobile devices enable a level of continuity in the determination of context: they can utilize the power and resources of different sensing environments, such as smart spaces and smart vehicles, when these are available, while remaining capable of establishing a partial view of the user's situational context independently.

Context-awareness, by its very nature, provides a basis for more efficient services and mechanisms, and enables the tailoring of application behavior to suit end-users' requirements. The importance of context-awareness can be seen in the number of applications that currently use context information captured by both software and hardware mechanisms. These applications use, amongst other information, knowledge of time, temperature, location, and social environment. Having this understanding of the user and their surroundings is a significant component in realizing the goals of ubiquitous computing. Beyond its application in ubiquitous computing, context has also been used in several areas of computer science, including machine learning, computer vision, information retrieval and filtering, and computer security. The application of context-awareness to computer security is a relatively recent prospect and is aimed primarily at securing context-aware applications.

Ubiquitous Computing

As computers have advanced, so has the push to create more seamless forms of human-computer interaction (HCI). Ubiquitous computing refers to a model of HCI that transcends typical desktop computing and sees information processing more widely integrated into everyday items. The increased computing capabilities of these items mean that individuals interact with computers frequently, often without being aware of the interaction. The general objective in this model is to have computers and information processing devices more adequately tailored to the needs of users and more seamlessly integrated into their existing environments. There are several models for ubiquitous computing; however, they share broadly consistent objectives: a set of relatively inexpensive, distributed, and networked processing devices included at various stages of daily activities, attuned to resolving, streamlining, or simplifying some common goal while remaining unobtrusive and effectively invisible to the user. However, to bring computers to this point while retaining their power requires radically new kinds of computers of all shapes and sizes (Weiser, 1999).

There are many ubiquitous computing applications, both existing and proposed, including smart spaces, wearable computing, and applications of context-awareness. Early work in ubiquitous computing proposed three form-factors for these devices: (1) a wall-sized interactive surface similar to a white-board or the magnet panel of a refrigerator; (2) a notepad/tablet device envisioned to be less like the typical form-factor of a desktop or laptop computer and more like a scrap of paper; and (3) the tiny computer, analogous to tiny individual notes or tiny displays (Weiser, 1999). The ideal scenario for ubiquitous computing is not merely for devices to perform a specific task autonomously, without human instruction or interaction, but for them to recognize scenarios and respond automatically and unobtrusively, allowing a user to accomplish a goal dynamically designated by the identified scenario. Work in the area of context-awareness is aimed at accomplishing just this objective, as context-awareness is viewed as a bridge to ubiquitous computing.

Context Awareness

Context-awareness relates to the determination of situational context. More specifically, as it relates to computer science, context-awareness examines the viewpoint that computers/computing devices are able to sense their physical environment, react to its observed state, and potentially effect changes in it as part of that reaction, normally through some indirect means. There are, however, multiple views and interpretations of the meaning of context, particularly as it applies to a specific application. One such interpretation treats context-awareness as the classification of sensor data by a given algorithm, trained by example (van Laerhoven, Schmidt, & Gellersen, 2002). A context-aware system should be aware of the specific context that falls within its management responsibility, including storage, dissemination, adaptation, provisioning, and reasoning. Presently there are two main approaches to implementing context-aware computing: improving vision and audio recognition in an attempt to more closely model human perception, and the fusion of information from different sensors (Lukowicz, Junker, Stäger, Büren, & Tröster, 2002). For context-aware applications, contextual information retrieval is partly accomplished through the use of sensor networks (Anagnostopoulos, Tsounis, & Hadjiefthymiades, 2007).

What is context?

One of the major factors in studies related to context-awareness is identifying what exactly constitutes context. A well-known method for establishing context is the use of proximity selection, i.e., acting based on user location (Anagnostopoulos, Tsounis, & Hadjiefthymiades, 2007). As such, location has been used as one of the main attributes in the determination of context in many context-aware research projects. The location information is often augmented with a time-stamp and an identity or identifier. However, this is a somewhat limited set of attributes given the wealth of information that may be available about a user, the physical environment, or a combination of both that more completely describes a particular situation. With a more descriptive set of inputs, and ultimately a more detailed description of the physical/operational environment, a more thorough description of the situation, and by extension higher-level contexts, can be derived.

As context-aware research and applications have progressed, it has been observed that the factors used to determine context depend on the intended application. A context-aware system should be able to respond to changes in context, such as changes in user direction, sensor unavailability, or the presence of new or additional sensors. There are additional context models, each imposing corresponding requirements on applications if they are to be categorized as context-aware with respect to that model. For example, a context-aware application should have mechanisms in place to obtain context data from diverse sources, whether hardware sensors or software mechanisms, depending on the context being observed (Anagnostopoulos, Tsounis, & Hadjiefthymiades, 2007). In a context-aware system, a practical and reliable context inference method is indispensable (Kawahara, Karasawa, & Morikawa, 2007).

Given that the current action/activity of a user is often a major component in determining the required response for an application, activity recognition is another major component of context-awareness. As previously stated, location or activity information is often coupled with some form of user, device, or other identification, as this information is required to respond based on the individual's preferences or behavioral patterns for a given situation. Time is also a significant factor in context determination, and most applications include some indication of the current time, as the use of contextual information requires some means of determining the temporal proximity of the data being processed. It may also prove beneficial to be aware of the user's social context, including social interactions, scheduled activities, and user presence. Context determination can also be based upon additional physical factors such as motion, infrastructure, lighting conditions, and ambient noise, amongst others. In fact, most of the work that has sought to more accurately detect user state requires accelerometers installed at specific positions on or near the body.
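To make this attribute set concrete, the following is a minimal sketch of a context record in Python; the field names and structure are illustrative assumptions rather than any standard representation:

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional, Tuple

@dataclass
class ContextRecord:
    """One observation of situational context (illustrative field set)."""
    identity: str                               # user or device identifier
    timestamp: datetime                         # temporal reference for the data
    location: Optional[Tuple[float, float]] = None  # e.g., (latitude, longitude)
    activity: Optional[str] = None              # e.g., "walking", "sitting"
    extras: dict = field(default_factory=dict)  # lighting, ambient noise, motion, ...

reading = ContextRecord("alice-phone", datetime.now(),
                        location=(33.7756, -84.3963), activity="walking",
                        extras={"ambient_noise_db": 62})
```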

Presentation of Context

Applications that utilize context-awareness may use the information in several ways, including displaying it to the user or storing it for later retrieval. For those applications that display context information, a level of abstraction is often required before the data is shown to the user. Raw sensor data is likely to be of little or no value to the user and needs to be represented in some user-consumable form. For example, a GPS sensor may return positional coordinates (longitude and latitude), but this information would probably be more useful to the user if presented as an address, or as the distance from the given position to a known reference location, as is the case with most navigation systems.
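As a concrete illustration of this abstraction step, the sketch below converts a raw GPS fix into a user-consumable phrase, the distance to the nearest known landmark. The haversine formula is standard; the landmark table is invented for illustration:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two (lat, lon) points."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical known landmarks the application can name for the user.
LANDMARKS = {"Campus Library": (33.7756, -84.3963),
             "Main Station": (33.7490, -84.3880)}

def describe_position(lat, lon):
    """Abstract a raw GPS fix into a human-readable phrase."""
    name, (llat, llon) = min(LANDMARKS.items(),
                             key=lambda kv: haversine_km(lat, lon, *kv[1]))
    return f"{haversine_km(lat, lon, llat, llon):.2f} km from {name}"

print(describe_position(33.7700, -84.3900))
```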

On the other hand, for applications where the contextual information is collected and stored, the benefit of storing this information, either as metadata linked to other data entries or as the actual data entries, is that it provides a framework for contextual searches. Searches can be based not only on the content of entries but also on the context of the search. These techniques can be applied to databases as well as to files and file systems, e.g., (Gyllstrom, Soules, & Veitch, 2007), (Chen, Guo, Wu, & Wang, 2011), and (Soules & Ganger, 2005). There are also applications to general information retrieval and information filtering systems (Brown & Jones, 2001). Applications may also use the context information to dynamically modify their behavior without the need for user intervention. Context-aware systems also commonly provide a mechanism for contexts to be stored as a form of context aggregation, allowing for the merging of correlated contextual information and also serving as a repository for historical context (Anagnostopoulos, Tsounis, & Hadjiefthymiades, 2007).
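A minimal sketch of the metadata idea, assuming each stored entry is tagged with simple context attributes at creation time (the schema is invented for illustration):

```python
# Each entry pairs content with the context captured when it was created.
entries = [
    {"content": "meeting notes", "context": {"place": "office", "hour": 10}},
    {"content": "trail photo",   "context": {"place": "park",   "hour": 15}},
    {"content": "budget sheet",  "context": {"place": "office", "hour": 16}},
]

def contextual_search(entries, **wanted):
    """Return entries whose stored context matches all requested attributes."""
    return [e["content"] for e in entries
            if all(e["context"].get(k) == v for k, v in wanted.items())]

print(contextual_search(entries, place="office"))  # ['meeting notes', 'budget sheet']
```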

Context-Awareness in Mobile Devices

Mobile devices have advanced at a rapid pace, with ever increasing storage capacity and processing power, a growing number of embedded sensors, and improvements in data communication technologies. With more than one billion smart-phones in use globally and a projection that this number will double by 2015 (Five Star Equities, 2012), smart-phones are widely accessible tools that have come to form an integral part of daily life, satisfying communication needs as well as serving as information and entertainment hubs. Given the advancements in mobile technologies, their level of permeation and accessibility, and their fit with the generally accepted form factors of ubiquitous computing devices, it is only natural that context-aware research has used these devices to more efficiently determine user context. These advances have given mobile devices the capability to use both embedded and external sensors to obtain contextual information, and to conduct reasoning over the observed sensor data and context. This data can be used on-demand and in an ad-hoc manner to satisfy user or application needs (Loke, 2012). However, even with the progress that has been made with mobile devices, there are still certain limitations that affect their use for context classification and the development of context-aware applications.

Ambient sensing and context awareness have become the primary inputs for a new class of mobile applications and corresponding interfaces. Researchers have observed a trend of applications that employ some kind of sensor-based input moving away from specialized applications towards those targeted at the general consumer. These applications are often framed as cooperative services, such as real-time traffic monitoring or social networking, or as crowd-sourcing applications (Wang, et al., 2009). Several approaches have been taken to implement context-awareness using mobile devices. Early research saw mobile phones used as a communication tool responsible for aggregating data from external sensor modules and relaying this information to a desktop or laptop computer for further processing. Additionally, some information could be accessed directly from the mobile device, such as electronic calendars, and used in part to determine user context (Siewiorek, et al., 2003). As the number and technological sophistication of sensors included in the devices increased, so did the attempts to utilize their processing power and the larger information sets available for determining user context.

Due to the synergy that exists between the technological push and the corresponding demands on technology (pull), context-aware applications have increasingly used the data available from existing embedded sensors (Wang, et al., 2009). These range from acceleration or simple movement data available through on-board accelerometers, and orientation data available through digital compasses, to location information obtained through cellular positioning (a rough triangulation of user location using nearby cell-tower information), Wi-Fi, or the device's GPS module, and Bluetooth to approximate device or user proximity. By using the available sensor data, we are able to obtain a more holistic view of the user's characteristics and physical surroundings, allowing these context-aware applications to be more adaptive to environmental changes and user preferences (Wang, et al., 2009).

Common Contexts

Context-aware applications and concepts have been explored for many years, with researchers evaluating various methods of determining the overall situational context of a user or computing device. Efforts have ranged from work on classification algorithms to increase the efficiency and accuracy of context inference using raw sensor data, to sensor fusion techniques wherein multiple heterogeneous environmental interfaces (sensors) are coupled to provide more detailed information about the environment. There is, however, a set of situational contexts that has been explored consistently in context-aware research due to its relevance in describing user context: location and activity, combined with identity and a time reference, whether as elapsed time from some reference point or as the current time.

Location

The current location of a user can be seen as having a large impact on their current actions and possible future actions. For instance, it may be observed from a user's schedule that it is time for lunch. The user may, however, be restricted in their choice to establishments within relatively close proximity to their current location. Knowledge of the current location provides a means of refining the options available to the user in fulfilling the current goal, i.e., getting lunch. One approach could be to simply identify the restaurant closest to the user; another could use knowledge of the user's preferences, possibly for a particular type of food, to identify nearby establishments that specialize in that cuisine.
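A short sketch of this refinement; the restaurant list, distance approximation, and radius are invented for illustration:

```python
import math

def approx_km(p, q):
    """Equirectangular distance approximation, adequate at city scale."""
    klat, klon = 110.57, 111.32 * math.cos(math.radians(p[0]))
    return math.hypot((p[0] - q[0]) * klat, (p[1] - q[1]) * klon)

restaurants = [
    {"name": "Pasta Place", "cuisine": "italian",  "pos": (33.776, -84.396)},
    {"name": "Sushi Stop",  "cuisine": "japanese", "pos": (33.772, -84.392)},
    {"name": "Roma Corner", "cuisine": "italian",  "pos": (33.751, -84.388)},
]

def suggest_lunch(user_pos, preferred_cuisine, max_km=2.0):
    """Nearby establishments matching the user's preference, nearest first."""
    nearby = [r for r in restaurants
              if r["cuisine"] == preferred_cuisine
              and approx_km(user_pos, r["pos"]) <= max_km]
    return sorted(nearby, key=lambda r: approx_km(user_pos, r["pos"]))

print([r["name"] for r in suggest_lunch((33.7745, -84.3950), "italian")])
```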

As previously stated, location and proximity are probably the most widely used contextual attributes in context-aware research. Several methods have been employed to determine location and proximity, including the use of RFID badges, presence sensors (typically embedded in the floor), infrared sensors, and video image processing. Some of these technologies can be readily seen in aspects of our daily life. For example, passive infrared sensors are often used to control light fixtures, turning on the lights when movement is detected and turning them back off if no movement is detected over a period of time. Particularly given the technological advancement of mobile devices, researchers have sought to use the available resources to determine location without the need for additional external components. Most modern mobile phones have embedded GPS sensors that are able to determine location to a fairly high degree of accuracy. As such, GPS is still one of the most widely used methods for determining location.

GPS is widely recognized as the most popular mechanism for determining location, particularly in outdoor settings. GPS is currently used in many popular applications requiring location services, such as navigation systems, and in transportation services for fleet management. These are just basic representations of services available to the general consumer, and not representative of the original intended use of GPS in military applications. There are, however, some drawbacks to GPS systems. Even in its primary domain of outdoor positioning, physical structures including buildings and tunnels can negatively impact the system's accuracy and even prevent its functioning. Moreover, while GPS has an accuracy of approximately 10 meters in outdoor applications, it typically does not perform well indoors and is practically unusable as an indoor positioning system.

Other methods often used for determining mobile phone location include cell-tower triangulation and Wi-Fi location data. While these methods provide relatively reliable positional information, they are not as fine-grained as GPS readings and are therefore normally used in the absence of, or to supplement, GPS data. Proximity is another area of interest, and several efforts have been made to use Wi-Fi and Bluetooth to determine it. This is achieved by observing the Bluetooth or Wi-Fi MAC addresses in range of the device and, provided the corresponding devices have been associated with some known location, using these in-range addresses to infer the device's location. These techniques have also been applied in an effort to overcome the shortcomings of GPS as an indoor positioning technology.
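A minimal sketch of this beacon-based inference, assuming a previously surveyed mapping from known MAC addresses to places; a simple vote over in-range beacons yields the inferred location:

```python
from collections import Counter

# Hypothetical survey data: access points / Bluetooth beacons with known placement.
KNOWN_BEACONS = {
    "aa:bb:cc:00:00:01": "office",
    "aa:bb:cc:00:00:02": "office",
    "aa:bb:cc:00:00:03": "cafeteria",
}

def infer_place(in_range_macs):
    """Infer location from the places associated with in-range beacons."""
    votes = Counter(KNOWN_BEACONS[m] for m in in_range_macs if m in KNOWN_BEACONS)
    return votes.most_common(1)[0][0] if votes else None

# Unknown addresses are simply ignored; the office beacon count wins here.
print(infer_place(["aa:bb:cc:00:00:01", "aa:bb:cc:00:00:02", "ff:ff:ff:00:00:09"]))
```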

Outdoor Positioning

To allow the Cyberguide system to be used over a wider area and to overcome its limitation to a single building, an extension of the initial system introduced an outdoor positioning system using GPS. The positional coordinates from the GPS unit were used to dynamically update the user's location on the map component, keeping the user aware of items of interest in close proximity. The system implemented by Welbourne et al., while it does make use of Place Lab, extends the functionality by using both cellular triangulation and Wi-Fi-based location determination to perform outdoor positioning (Welbourne, Lester, LaMarca, & Borriello, 2005). There are three components to the system: a GSM mobile phone, a Wi-Fi enabled device worn around the waist, and the multimodal sensor board. The cell phone provides GSM data for location, and the Wi-Fi enabled device provides wireless data for the same purpose, in addition to being the central processing and data storage unit. Classification is performed using time, location, and accelerometer data in a simple classification scheme to distinguish different modes of transit. Using an averaged location history of between 25 and 120 seconds, a rough estimate of the user's speed can be calculated, and further classification as to the mode of transportation can be done using the accelerometer data.
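A rough sketch of the speed-from-history step; the window length and speed thresholds are invented for illustration, and the actual classifier also draws on accelerometer features:

```python
import math
from collections import deque

history = deque()  # (timestamp_s, lat, lon) fixes within the averaging window

def _km(p, q):
    """Flat-earth distance approximation, adequate over short windows."""
    klat, klon = 110.57, 111.32 * math.cos(math.radians(p[0]))
    return math.hypot((p[0] - q[0]) * klat, (p[1] - q[1]) * klon)

def update_and_classify(t, lat, lon, window_s=60):
    """Keep a sliding window of fixes, estimate speed, guess a transit mode."""
    history.append((t, lat, lon))
    while t - history[0][0] > window_s:
        history.popleft()
    t0, lat0, lon0 = history[0]
    if t == t0:
        return None                      # need at least two fixes
    speed_kmh = _km((lat0, lon0), (lat, lon)) / ((t - t0) / 3600.0)
    if speed_kmh < 1:
        return "stationary"
    return "walking" if speed_kmh < 7 else "vehicle"

print(update_and_classify(0, 33.7750, -84.3960))    # None: first fix
print(update_and_classify(30, 33.7755, -84.3960))   # ~55 m in 30 s -> walking
```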

Indoor Positioning

The Cyberguide project (Abowd, Atkeson, Hong, Long, Kooper, & Pinkerton, 1997) is a series of prototypes for a mobile context-aware tour guide. The system uses knowledge of the user's current location, past locations (location history), and orientation to autonomously provide the services one would expect from a tour guide. The primary focus of the initial work was on the user's location and orientation as contextual attributes. The system was implemented as several independent components, which ultimately allows for greater extensibility. Among the components implemented in the initial project were a map component, providing a view of the physical environment being visited, and an information component, a structured repository of the interesting details surrounding the physical environment. There is also a messaging component that facilitates communication among users and between end-users and representatives of points of interest.

Cyberguide uses a positioning component to determine the location of the user, and this is at the heart of the system, since a visitor is most concerned with areas of interest in their immediate surroundings. The first implementation of the positioning component used an infrared-based location system. TV remote control units were used as active beacons, while the mobile device, with an affixed infrared receiver tuned to the carrier frequency of the control units, detected these beacons to establish the user's location. This tracking system proved too costly for large-scale use due to the price of the infrared receivers. Welbourne et al. sought to fuse location-based and non-location sensors to provide high-level mobile context inference (Welbourne, Lester, LaMarca, & Borriello, 2005). While the system incorporates other context attributes, the primary focus was on classifying location-based situational contexts and augmenting them with the additional context information available, to allow for higher-level inferences.

Welbourne et al. utilized Place Lab, an indoor tracking system that utilizes Wi-Fi and is capable of providing location tracking with a resolution of 20-30 meters (LaMarca, et al., 2005), in addition to a custom multimodal sensor board containing various sensors such as accelerometers and barometric pressure sensors (Welbourne, Lester, LaMarca, & Borriello, 2005). The system is capable of classifying modes of transportation and extracting significant points in users' daily activities, features that are available in previously developed systems. What is introduced here is the ability to classify modes of transportation in the absence of GPS data and without prior knowledge of the transit routes.

Activity Recognition

Activity is another major contextual component, and as such, great emphasis has been placed on activity recognition. Most implementations of activity recognition require the placement of motion- or orientation-based sensors, such as accelerometers, gyroscopes, magnetometers, and digital compasses, at pre-specified points on the user's body. Much attention has been given to classifying user activities more accurately and efficiently. These efforts range from the use of a greater number of motion-based sensors to variations in the feature extraction and classification algorithms used. While high rates of classification accuracy have been obtained for some standard human postures such as standing and sitting, and for experiments conducted using small activity sets in supervised laboratory settings, there is still a lot of work to be done on improving these classification techniques for real-world applications.

Several approaches have been used to perform activity classification. Among the more prominent are threshold-based classification techniques (Sposaro & Tyson, 2009), decision trees (Miluzzo, et al., 2008), and pattern matching (Wang, Yang, Chen, Chen, & Zhang, 2005). Bao and Intille first investigated the performance of recognition algorithms with multiple wire-free accelerometers using user-annotated data sets (Bao & Intille, 2004). The objective was to develop and evaluate algorithms for detecting physical activities from accelerometer readings. The system used five small biaxial accelerometers placed on different parts of the body. Acceleration data from the wrist and arm are known to improve recognition rates for activities that involve predominantly upper-body movements. To ensure that these movements were adequately represented, in addition to full- and lower-body movements, accelerometers were placed on the right hip, the dominant wrist, the non-dominant upper arm, the dominant ankle, and the non-dominant thigh. No wires were used to connect the boards to any other device, and data was collected and stored on memory cards integrated into the sensor boards.

Users were asked to perform and label a set of 20 activities without researcher supervision, and the resulting data was used for both training and testing. Activity labels were chosen to reflect the content of the action but not the style; e.g., the data obtained from a user walking would be labeled as "walking" with no attempt to indicate pace. The features examined were the mean, energy, frequency-domain entropy, and correlation of acceleration data. Activity recognition was performed using Decision Table, Instance-Based Learning (IBL/nearest neighbor), C4.5 decision tree, and naïve Bayes classifiers, all run using the WEKA Machine Learning Algorithms toolkit (Witten & Frank, 2005). Two protocols were used for training: user-specific, where classifiers are trained on each user's activity; and leave-one-subject-out, where training is performed using data from all subjects except one.
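A sketch of these per-window features, assuming a window of raw accelerometer samples as a NumPy array with one column per axis; the formulas follow the usual definitions rather than the paper's exact implementation:

```python
import numpy as np

def window_features(window):
    """Compute mean, energy, frequency-domain entropy, and axis correlation
    for one window of accelerometer data, shape (n_samples, n_axes)."""
    feats = {"mean": window.mean(axis=0)}
    # Energy: normalized sum of squared FFT magnitudes (DC component excluded).
    spectrum = np.abs(np.fft.rfft(window, axis=0))[1:]
    feats["energy"] = (spectrum ** 2).sum(axis=0) / len(window)
    # Frequency-domain entropy of the normalized spectral distribution.
    p = spectrum / spectrum.sum(axis=0)
    feats["entropy"] = -(p * np.log2(p + 1e-12)).sum(axis=0)
    # Pairwise correlation between axes helps separate similar activities.
    feats["correlation"] = np.corrcoef(window.T)
    return feats

rng = np.random.default_rng(0)
print(window_features(rng.normal(size=(128, 3)))["entropy"])
```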

Decision tree classifiers were observed to perform best, with a classification accuracy rate of 84%, and nearest neighbor was found to be the second most accurate algorithm. Decision trees are slow to train but quick to run, and pre-trained trees can provide real-time classification on mobile devices. Results showed that recognition was higher in all methods using the leave-one-subject-out protocol. It was also observed that while some activities could be classified with a high level of accuracy without user-specific training, some activities showed higher accuracy using user-specific training data, for instance for a user who stretches during the "rest" activity. The results also indicated that multiple accelerometers aid recognition, due to conjunctions in acceleration feature values which can help to discriminate between many activities. In addition to analyzing the performance of the various classification techniques, the researchers wanted to determine the discriminatory power of each accelerometer location. The experiment indicated that the accelerometer placed on the subject's thigh was most effective for recognizing the set of activities, followed by the accelerometer placed on the hip. The best combination of sensors for recognizing activities requiring both upper- and lower-body movements was the thigh and dominant wrist locations.

Activity Recognition on Mobile-Devices with External Sensors

Győrbíró et al. introduce a mobile context recognition system that recognizes and records the activities of a user on a mobile phone (Győrbíró, Fábián, & Hományi, 2009). The system is comprised of three main components: wireless body sensors, a smartphone, and a desktop workstation. It utilizes wireless sensors (triple-axis accelerometer, magnetometer, and gyroscope), in the form of a custom device referred to as the MotionBand (Laurila, Pylvanainen, Silanto, & Virolainen, 2005), placed at different locations on the user's body. These sensors record and relay the intensity of user motions to the mobile phone. The use of these three sensor types allows for the tracking of both the orientation and motion of the corresponding body position. However, of the sensors available, the accelerometer provides perhaps the most valuable information: that concerning the forces describing the motion. In addition to the accelerometer data, other events considered interesting to the user are recorded, for example when an image is captured or a phone call is made or received. These captured events are augmented with metadata describing their context.

The activity classification system was developed as part of a larger research project concentrated on "life logging", the goal of the project being to capture personal memories through the use of extensive recording. The focus of the project has been on information that can be acquired primarily through mobile phones such as videos, photos, telephone calls, messages, etc. A primary focus of this work is to enhance the recoverability of recorded events; thus for each event, additional metadata including time, location, and activity is stored. The system uses feed-forward back propagation neural networks to perform feature classification and activity recognition. This method is employed to take advantage of the fact that such neural networks are able to perform classification quickly after initial training, allowing for a near real-time classification of activities on the smartphone.

Having decided to use neural networks, the choice remained as to whether a single large neural network or multiple small (targeted) neural networks should be used. The small neural networks were ultimately chosen, as each is specifically trained to classify a single activity and as such is believed to perform better than a network required to perform multiple tasks. The networks were trained using ten-fold cross-validation with the Levenberg-Marquardt method as the back propagation algorithm. The algorithm, while effective, is computationally expensive in both memory and speed, limiting its applicability to relatively small networks. Given that this system is expected to provide contextual information in near real-time, the calculation of intensity values is performed continuously. Using this classification method, the average observed activity recognition rate was 80% for the six motion patterns tested: resting, typing, gesticulating, walking, running, and cycling.
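A rough sketch of the one-network-per-activity design using scikit-learn; note that scikit-learn does not provide Levenberg-Marquardt training, so the lbfgs solver stands in here, and the data is synthetic:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

ACTIVITIES = ["resting", "typing", "gesticulating", "walking", "running", "cycling"]

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 9))          # synthetic intensity features per sample
y = rng.choice(ACTIVITIES, size=300)   # synthetic activity labels

# One small binary network per activity, rather than one large multi-class net.
detectors = {}
for activity in ACTIVITIES:
    target = (y == activity).astype(int)
    net = MLPClassifier(hidden_layer_sizes=(8,), solver="lbfgs", max_iter=500)
    scores = cross_val_score(net, X, target, cv=10)  # ten-fold cross-validation
    detectors[activity] = net.fit(X, target)
    print(f"{activity}: mean CV accuracy {scores.mean():.2f}")
```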

Another system that uses external sensors for mobile activity recognition was proposed by Kawahara et al., performing context recognition and posture/activity classification using only a single sensor attached to a mobile handset (Kawahara, Karasawa, & Morikawa, 2007). While most implementations of activity recognition require the placement of motion-based sensors at specific points on the user's body, the proposed system automatically determines the placement of the sensor on the user's body and dynamically selects the most relevant inference method based on this position. A module containing a triple-axis accelerometer was attached to the mobile device, and the sensor data was transmitted to a mobile PC via Bluetooth. The module performs sampling at a rate of 20 Hz; it should be noted, however, that processing of the data was done on the mobile PC and not the handset.

The inference method in this project is divided into three separate steps: pre-processing, sensor position inference, and user posture inference. Pre-processing involves extracting feature values from the accelerometer data, namely the variance of the last 12 samples, the average of each axis over the last 4 samples, the change of angle of the sensor device, and so on. Sensor position inference, as the name suggests, seeks to determine the position of the device relative to the user's body. This is done using the previously calculated features and makes certain assumptions towards a final inference. For example, it is assumed that when a user is not wearing the sensor device, the variance is nearly zero.

After determining the orientation and position of the sensor, the system then selects the appropriate algorithm for inferring user posture. This inference follows four rules: two general rules applied regardless of the device position, and two rules applied based on the sensor position. The two general rules are: (1) use the variance value to determine whether the user is moving or not; (2) use the maximum value of the FFT power spectrum to determine the state of walking or running, and the pace. The two position-specific rules are: (1) when the sensor is determined to be in the pants pocket, a change of the sensor angle can be used to estimate a sitting motion; (2) when the device is in the chest pocket, the sensor angle is helpful for estimating forward, backward, or side leaning. In experiments, the proposed classification methods correctly inferred posture with accuracy greater than 96%.
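A condensed sketch of this rule structure; the thresholds and angle computations are invented for illustration, and the paper's actual values differ:

```python
import numpy as np

def infer_posture(samples, position, still_var=0.02, run_hz=2.5):
    """Rule-based posture inference from a short window of accelerometer
    samples, shape (n, 3), given the inferred device position."""
    if samples.var(axis=0).max() >= still_var:
        # General rule 2: the peak of the FFT power spectrum gives the pace,
        # separating walking from running.
        mag = np.linalg.norm(samples, axis=1)
        power = np.abs(np.fft.rfft(mag - mag.mean())) ** 2
        freqs = np.fft.rfftfreq(len(mag), d=1 / 20.0)  # 20 Hz sampling rate
        return "running" if freqs[power.argmax()] > run_hz else "walking"
    # General rule 1 says the user is still; apply position-specific angle rules.
    g = samples.mean(axis=0)                 # gravity direction when still
    angle = np.degrees(np.arccos(g[2] / np.linalg.norm(g)))
    if position == "pants_pocket":           # thigh angle hints at sitting
        return "sitting" if angle > 45 else "standing"
    if position == "chest_pocket":           # torso angle hints at leaning
        return "leaning" if angle > 20 else "upright"
    return "still"

still = np.tile([0.0, 0.0, 1.0], (40, 1))    # device upright in chest pocket
print(infer_posture(still, "chest_pocket"))  # upright
```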

Another notable activity classification system, with a primary focus on mobile health monitoring, is proposed in (Hong, Kim, Ahn, & Kim, 2010). The researchers propose expanding basic activity recognition into a more holistic health monitoring system by measuring expended calories without the use of complex or expensive equipment. In this system, a gas analyzer is used in combination with accelerometers. Passive RFID tags are also used to recognize human-object interaction. In addition to recognizing the current state of motion of the user's body, identified object interactions can often be used to infer a more detailed description of the user's intent. For instance, an action that may simply have been classified as a hand gesture can be recognized as the user interacting with a coffeemaker, a cup, a hairbrush, or other everyday items. The system employs three accelerometers placed on the thigh, hip, and waist of the subject; these sensors use Bluetooth modules to wirelessly relay the collected data. The gas analyzer monitors inspired oxygen and expired carbon dioxide, and this information can be used to determine the amount of energy expended based on oxygen consumption during activities. The gas analyzer used in the experiments was impractical for daily use due to its rather intrusive and bulky design and the requirement that the user continuously wear the mask.

Activity Recognition Using Mobile Device Accelerometer Readings

iLearn is an activity classification system developed by researchers at the University of Washington that utilizes the three-axis accelerometer in the iPhone, supplemented with information from the Nike+iPod Sport Kit (Saponas, Lester, Froehlich, Fogarty, & Landay, 2008). The system attempts to classify several regular daily activities, including running, walking, sitting, and cycling, among others. Training data was gathered from a small set of test subjects selected to perform the desired activities while wearing the requisite sensing devices. The collected data was then used to determine the machine learning algorithm to be used for classification, in addition to identifying the features on which classification would be performed. The researchers developed iLog, an iPhone application used for gathering the training data. The data collected using the application was provided as input to a secondary application, iModel, a desktop application used to learn a model or test an existing model.

The models generated by the iModel application are used in the iClassify application, another iPhone-based application, which performs the actual activity classification based on the learned models. The iClassify application reports activity classifications approximately once per second. This work identifies one potential problem with using external sensors: although the researchers were able to successfully use the data captured from the Nike+iPod sensor in feature classification, they were ignorant of the packet contents and simply used the individual bytes as they were received. This was done in light of the fact that these packets were only transmitted when the user was performing some activity such as walking or running. Features were created over one-second intervals on data from each axis of the triple-axis accelerometer. A naïve Bayesian network, as implemented in the WEKA machine-learning toolkit (Witten & Frank, 2005), was used to learn models and classify activities. The results of this work suggest the observed activities can be correctly classified with accuracies of up to 97% without end-user training.

Multi-Sensor Heterogeneous Sensor Fusion for Context Determination

Given the requirement of context-aware applications to observe both the physical and operational environments, there are often hardware components, namely sensors, which facilitate such observations. A device containing such sensors can be viewed as a sensor node, or as a sensor network containing a single node. By facilitating communication between sensing devices, we allow for the expansion of the sensor network and enable a potentially more detailed view of the sensed environment. Wireless communication technologies such as Wi-Fi and Bluetooth facilitate communication between these devices, and the associated improvements in these technologies allow higher communication speeds, lower latencies, and increased bandwidth, resulting in the ability to both sample and transmit larger volumes of sensor data. For example, smart spaces, having fewer operational restrictions than mobile devices, may be able to integrate a greater number of sensors and collect data over larger timeframes; they may thus pre-process the raw sensor data and provide contextual input to a mobile device operating within the environment. The capability of nodes to perform processing on their own sensed data, or on data aggregated from other nodes, greatly increases the ability to provide more specific situational inferences.

Smart Artifacts

Smart artifacts refer to everyday objects augmented with information technology. "Smart-Its" are small embedded devices for the augmentation and interconnection of artifacts which, in general, integrate sensing, processing, and communication, with variations in perceptual and computational capabilities. The integration of sensors and perception techniques facilitates the autonomous awareness of an artifact's context, independent of infrastructure (Holmquist, Mattern, Schiele, Alahuhta, Beigl, & Gellersen, 2001). The Smart-Its devices have data acquisition allocated on the sensor unit, with a dedicated processor for sensor control and the extraction of generic features.

Each Smart-Its device is aware of its sensing capabilities and can report them to its neighbors (other Smart-Its devices) if necessary. This knowledge of the sensing capabilities of neighboring sensors allows a sensor to make assumptions about the current context and forward the requisite communication packets (Smart Context-Aware Packets (sCAPs)) to the appropriate neighboring sensor for further processing. One area of experimentation using the Smart-Its sensors is establishing connections based on context proximity. Context proximity, as a paradigm in Smart-Its, refers to the determination of closeness of artifacts that experience similar situations or conditions, whereby connection initiation can be implicit, occurring automatically when two sensor nodes are determined to be close, or through explicit connection requests.
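A sketch of context-proximity matching, assuming each node summarizes its recent sensor readings as a small feature vector; the similarity measure and threshold are illustrative choices:

```python
import numpy as np

def context_similarity(a, b):
    """Cosine similarity between two nodes' context feature vectors."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def should_connect(node_a, node_b, threshold=0.95):
    """Implicitly initiate a connection when two artifacts experience
    similar conditions (e.g., similar motion, light, and sound)."""
    return context_similarity(node_a["features"], node_b["features"]) >= threshold

table = {"features": [0.90, 0.10, 0.40]}  # hypothetical light/motion/sound summary
cup   = {"features": [0.88, 0.12, 0.38]}
print(should_connect(table, cup))          # True: likely in the same situation
```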

Applications of Sensor Fusion in Context Inference

Huadong et al. have proposed a solution using sensor fusion, and an associated framework, towards achieving more efficient context-sensing and overcoming the current shortcomings of sensor-fusion technologies and their applications in context-aware systems (Huadong, Siegel, & Ablay, 2002). Due to the distributed nature of sensors in a mobile environment, sensor cost, the nature of the sensed context (numerical versus semantic description), and the desire to achieve near-human perception capabilities, traditional sensor fusion technologies are not able to satisfy current context-sensing requirements. Based on prior work, the researchers chose to combine knowledge of the user's environment with human factors and historical knowledge to perform context classification. The system was developed using a context component architecture built on the Context Toolkit System (Salber, Dey, & Abowd, 1999), with the sensor fusion technology implemented by the researchers. A hierarchical model was chosen to represent the contextual information, in an effort to facilitate scalability with regard to the modification or addition of contextual inputs.

Loke in (Loke, 2012) proposes the concept of sensor and context cloudlets in a system developed using the Context Toolkit. Sensor clouds refer to ad-hoc sensor groupings and the associated reasoning modules used in the analysis of the data sets available within the groupings. These sensor clouds can be used to provide contextual input to applications or services requiring this information. If sensor clouds also have the requisite mechanisms to fuse sensor data and to perform context inference, they are then referred to as context clouds. A further classification refers to cloudlets: those having sensors as their primary components are called sensor-cloudlets, and those able to perform context inference, context-cloudlets. Cloudlets are well-connected, resource-rich computers or clusters of computers available for use by nearby mobile devices (Satyanarayanan, Bahl, Caceres, & Davies, 2009).

There are, however, challenges with using sensor cloudlets, including the requirement to pool resources in an ad-hoc, on-demand fashion to satisfy constantly changing end-user or application requirements, as well as accessibility, scalability, and security, to name a few. One of the main complications, in addition to identifying sensor resources, is determining what subset of components an individual node contains, i.e., sensors, analysis mechanisms, and context inference components. This requires the establishment of a framework that provides a consistent platform for: (1) the representation of sensors, sensor data, context information, etc.; (2) the representation of the components that perform context aggregation, analysis, and inference; (3) mechanisms to compose the necessary sensor components to satisfy the context requirements of a particular end-user or application; (4) a mechanism to facilitate searching and discovery of sensor components and context providers; and lastly, (5) a system of governance for the services required.

Hong et al. examine a theoretical system in which smart homes, and potentially other smart spaces, equipped with centrally managed networked devices capable of adequately identifying a user's situational context, can provide a means for independent but safe living, remote but effective care, and constant professional health monitoring (Hong, Nugent, Mulvenna, McClean, Scotney, & Steven, 2009). Such a system would allow users requiring medical attention or observation to undergo rehabilitation in the comfort of their own home, while allowing healthcare professionals to observe their progress and identify potential problems. One of the main concerns in such an implementation is the privacy and comfort of the user. Simple sensors such as motion detectors and switch-on pressure sensors are viewed as less invasive than devices such as cameras and microphones that produce a more holistic view of the user's environment. In this work, context is seen as any information that can be used to characterize the user's activity; this may include the room currently occupied, object interactions, motion, and the current time.

Mobile Context Inference Using Sensor Fusion

SenSay (Sensing & Saying) is a context-aware mobile phone system developed by researchers at Carnegie Mellon University that seeks to use non-traditional sources for context inference (Siewiorek, et al., 2003). The system modifies its behavior based on the user's current context, i.e., the user's current state and surroundings. The system utilizes several sensors to determine context, including a light sensor, single-axis accelerometers, a temperature sensor, and microphones. Two types of microphone are used: an ambient-noise microphone (omnidirectional) and a voice microphone (a binary sensor that only needs to identify whether or not the user is speaking). In addition to the sensor readings, the system also uses information available about a user's schedule, by way of an electronic calendar, and the current state of the device, to further refine the inferred context and associated actions.

SenSay is not self-contained, i.e., the system is not implemented on a single (mobile) device. Rather, it uses a PCB module housing several of the sensors, while the microphones are mounted directly on the user's body (the ambient microphone on the chest and the voice microphone on the throat) and the light sensor unit is attached to the mobile phone. In addition, the decision and action modules both run on a laptop connected to both the mobile phone and the PCB module. The sensors are queried once per second and the associated data is provided to the decision module for classification. The overall focus of the system design is to use knowledge of a user's current activity to adjust mobile phone settings, particularly the ringer configuration; for example, recognizing the user's unavailability and turning the ringer off, or increasing the ringer volume if the user is involved in some high-intensity activity. Additionally, the system was capable of automatically handling incoming calls based on the user's current context, providing the caller with information about the user's current context, especially in circumstances where the user's state is uninterruptible. It also recommends contacts during recognized periods of inactivity or 'idle' periods based on contact history.

To help determine the user state, and to identify transitions in state, the system stores ten minutes of sensor value history. Sensor history is of particular importance for the latter, as it prevents frequent inadvertent transitions between states due to variations in sensor values over negligible timeframes. If the phone state were updated based solely on the sensor data sampled at each interval, the state transitions would be susceptible to small changes in environmental data. Within the decision module there is a circular buffer capable of storing ten minutes of data, meaning the decision module can process at most the last ten minutes of sensor data. The decision module inspects the gathered data and produces a small number of outputs, the most important of which is the state that the phone should enter (the current state).
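A minimal sketch of this buffering, assuming sensors are sampled once per second as described, so 600 samples cover the ten-minute window:

```python
from collections import deque

SAMPLES_PER_SECOND = 1
HISTORY = deque(maxlen=10 * 60 * SAMPLES_PER_SECOND)  # ten-minute circular buffer

def on_sensor_tick(sample):
    """Store the newest sample; the deque silently drops the oldest one."""
    HISTORY.append(sample)

def smoothed(key, seconds):
    """Average a sensor value over a trailing window to damp out momentary
    spikes that would otherwise cause spurious state transitions."""
    recent = list(HISTORY)[-seconds * SAMPLES_PER_SECOND:]
    values = [s[key] for s in recent]
    return sum(values) / len(values) if values else None

on_sensor_tick({"light": 210, "accel": 0.02})
on_sensor_tick({"light": 20, "accel": 0.01})  # brief shadow over the sensor
print(smoothed("light", 2))                   # 115.0: the spike is damped
```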

The action module is responsible for issuing changes in settings and operations on the mobile phone. It is implemented as an interface on the mobile device that provides access to the smartphone OS (Palm OS), is controlled by the decision module, and was designed to expose the simple controls required by the decision system, including ringer volume control, vibration control, the ability to send SMS messages, access to the electronic calendar, etc. The sensor module is responsible for querying the sensor box, which is mounted on the user's waist, and returning the data to the decision module.

The system consists of a set of predefined states: normal, active, uninterruptible, and idle, with each state associated with a set of phone actions. The uninterruptible state is one of the more notable; when the device is in this state, for example, the ringer is turned off. A user is considered uninterruptible if they are engaged in a meeting or conversation. The high-priority attribute when checking for this state is a simple check of the electronic calendar to determine whether or not the user has a meeting scheduled. A major assumption required for this function to operate as expected is that the user's calendar represents an accurate schedule and most, if not all, activities are documented. The second-priority parameter is whether the user is speaking or engaged in a conversation. In this state the phone, as far as possible, will not interrupt the user, thereby eliminating potentially undesired inconveniences. To ensure this, only the past five seconds of voice and ambient noise data (the two sensors required to define a conversation) are inspected when determining whether the user is uninterruptible. Conversely, in accordance with the design objective of not interrupting the user, a large amount of sensor history is considered before transitioning from the uninterruptible state back to normal.
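A sketch of this asymmetric decision logic; the function names, calendar interface, and thresholds are illustrative assumptions rather than SenSay's actual code:

```python
CONVERSATION_NOISE = 0.6   # illustrative normalized ambient-noise threshold

def is_uninterruptible(calendar, voice_history, noise_history):
    """Enter the uninterruptible state quickly, on short recent evidence."""
    if calendar.has_meeting_now():        # first priority: the schedule
        return True
    # Second priority: conversation, judged on only the last 5 seconds of
    # voice and ambient-noise samples so interruptions are avoided promptly.
    return any(voice_history[-5:]) and max(noise_history[-5:]) > CONVERSATION_NOISE

def can_return_to_normal(voice_history, noise_history, quiet_s=120):
    """Leave the uninterruptible state only after a long quiet stretch,
    reflecting the design bias against interrupting the user prematurely."""
    return (not any(voice_history[-quiet_s:])
            and max(noise_history[-quiet_s:]) <= CONVERSATION_NOISE)

class StubCalendar:                       # stand-in for the electronic calendar
    def has_meeting_now(self):
        return False

print(is_uninterruptible(StubCalendar(),
                         [0, 0, 1, 1, 1],
                         [0.2, 0.7, 0.8, 0.7, 0.9]))   # True: conversation
```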

Santos et al. propose UPCASE (User-Programmable Context-Aware Services), a project funded by a national telecom operator, intended to provide context inference based on a smartphone augmented with an array of sensors connected via Bluetooth (Santos, Cardoso, & Ferreira, 2010). The system uses a decision tree model for context inference, with the decision tree built from a common set of context rules, expressed in XML, that serves as a base representation for user profiles; it is updated in real-time as the user trains the system with new contexts. Identified contexts are buffered over a finite number of readings, which are used for a final context inference based on the most commonly identified context. This prevents erratic changes in context due to momentary changes or noise in the sensor data, and facilitates the assignment of a confidence value to the inferred context. The system is integrated with social-networking sites such as Facebook and Twitter, using their available APIs to allow users to publish their activity and situational context information. The researchers also attempted to cluster context-change sequences from all users, in an effort to determine whether it would be possible to rediscover the individual user profiles. This analysis was performed using the sequence clustering algorithm of Microsoft SQL Server Analysis Services.
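A sketch of this buffering and confidence assignment; the buffer length is an assumption:

```python
from collections import Counter, deque

BUFFER = deque(maxlen=8)   # last N per-reading context identifications

def stable_context(new_reading_context):
    """Buffer per-reading inferences; report the most common one along with
    the fraction of the buffer that agrees, used as a confidence value."""
    BUFFER.append(new_reading_context)
    context, hits = Counter(BUFFER).most_common(1)[0]
    return context, hits / len(BUFFER)

for c in ["walking", "walking", "running", "walking"]:
    context, confidence = stable_context(c)
print(context, f"{confidence:.2f}")   # walking 0.75: one noisy reading absorbed
```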

Miluzzo et al., having acknowledged individuals as both carriers of sensing devices and consumers of sensed events, sought to explore a new application domain that extends beyond traditional sensor networks, which focus on environmental and infrastructural monitoring, by integrating the individual as a core component of the system. In light of these observations, the CenceMe application, a people-centric sensing application, is introduced and evaluated (Miluzzo, et al., 2008). CenceMe utilizes existing off-the-shelf sensor-enabled mobile devices to automatically infer a user's sensing presence and share this information through existing social networks such as Facebook. The system uses a split-level design in which some, or potentially all, of the data classification can be done on the phone, while the classification of the data in a social context is performed on a server. Details of the implementation are discussed below. The impact of phone placement on the quality of context inference was also analyzed.

Data classification on the mobile phone produces outputs referred to as primitives. These primitives are sent to the backend server, where they are stored in a relational database from which they can later be retrieved for more complex classification. Primitives are often transmitted in batches, resulting in a delay in updating backend information, where the delay varies according to the type of presence inferred. The delay is introduced to make the system both energy efficient and reliable, and it fits the design objectives: since the system is intended to share user activity with members of a social circle, a reasonable delay is acceptable. The CenceMe mobile application handles the collection of sensor data, classification of raw sensor data to produce primitives, relaying of primitives to the backend servers, and display of other users' presence. Primitives are calculated from: the classification of sound samples from the microphone, using the DFT and a machine learning algorithm based on discriminant analysis; the classification of accelerometer data to determine activity, using the mean, standard deviation, and number of peaks along each axis; Bluetooth and wireless MAC addresses in range of the device; GPS readings; and photos captured whenever a keypad key is pressed or calls are received.

The decision trees are constructed using the J48 decision tree algorithm from the WEKA workbench (Witten & Frank, 2005). The resulting tree is lightweight, allowing classification on the mobile device to complete in less than one second on average. As noted earlier, the backend classifiers perform higher-level classification such as social context, significant places, and statistics over large amounts of data. The backend also determines whether a user is engaged in a conversation, using a rolling window of N audio primitives, where N = 5 in this implementation. The classification is binary, voice versus non-conversation (silence): if at least two of the five audio primitives in the window are classified as voice, a conversation is inferred.
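
The 2-of-5 rule is simple enough to state directly in code. This Python sketch assumes each audio primitive has already been reduced to a "voice" or "silent" label; the function name and streaming interface are illustrative, not CenceMe's actual API.

```python
from collections import deque

def conversation_statuses(audio_primitives, window=5, voice_threshold=2):
    """Slide a window over the audio primitives ('voice' or 'silent') and
    flag a conversation when at least `voice_threshold` of them are voice."""
    recent = deque(maxlen=window)
    statuses = []
    for primitive in audio_primitives:
        recent.append(primitive)
        in_conversation = (len(recent) == window and
                           sum(p == "voice" for p in recent) >= voice_threshold)
        statuses.append(in_conversation)
    return statuses

# A lone voice sample never triggers a conversation; two within the
# same five-primitive window do.
print(conversation_statuses(
    ["silent", "voice", "silent", "silent", "silent", "voice", "voice"]))
# [False, False, False, False, False, True, True]
```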

Social context is determined by aggregating several primitives together with backend classifiers such as proximity to social contacts, and social status, which reflects the output of other primitives such as activity and conversation status. The location classifier estimates user locations for use by other backend classifiers such as proximity to social contacts. Social context can therefore be used to represent social gatherings and interaction patterns among social contacts. In the experiments, data collection was performed with the device in one of three positions: clipped to the belt, in the front pants pocket, or on a lanyard around the user's neck. For activity determination, placing the phone in the pocket or clipping it to the belt produced similar results, while placing it on a lanyard yielded poor results. There was, however, much less placement-dependent variance in conversation classification: accuracy was only 1% higher with the lanyard for non-conversation scenarios and 6% higher with the lanyard for conversations. The environment was also shown to be a major factor in accurately classifying conversations, as noisy indoor and outdoor environments had different effects on the number of false positives.

Breslin et al. propose a theoretical framework in which users and social networks play a more integral role in the determination of context (Breslin, et al., 2009). The framework is based on establishing a social network between communication-capable sensing devices, including mobile phones, in which users can obtain a larger picture of the situational context in an environment by querying the contextual input of other social contacts. For example, a user might analyze the microphone readings of a social contact's mobile device to determine ambient noise levels at the contact's location. This sensor information can be augmented, or in the absence of sensors replaced, with human input, whereby people are treated as sensors and queried for the desired information. Such a system raises privacy concerns and may require users to indicate their level of participation in context queries: what information they are willing to supply and who can access it.

The system can use a version of expanding ring search in which direct contacts are queried first and, if no satisfying response is found, the query expands to users linked by one further degree of indirection. This may come at the cost of less accurate information, possibly due to access limitations, and acts as an incentive for users to expand their social and sensor networks. In this scenario sensors themselves are also viewed as members of the network, representing just another set of paths in the graph. A trust relationship is established whereby if person A owns sensor A, person B owns sensor B, and person A knows person B, it can be implied that sensor A "knows" sensor B. Potential applications of this framework include supporting independent living and health care for the elderly: finding company for regular activities such as shopping, exercise, or other recreational or rehabilitation-based activities, and promoting mobility. From the standpoint of health monitoring, abnormal activity patterns can be flagged, and rule-based alerts issued to caregivers, clinicians, family members, or friends in the user's social network.
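
A minimal sketch of such an expanding ring search over a social/sensor graph follows. The graph representation, the `query` callback, and the example data are all hypothetical, since Breslin et al. describe the framework only at a conceptual level.

```python
def expanding_ring_query(graph, start, query, max_depth=2):
    """Query direct contacts first; only if no satisfying response is
    found, widen the ring by one degree of indirection. `query(node)`
    returns a response, or None if the contact cannot or will not answer."""
    visited = {start}
    frontier = [start]
    for depth in range(1, max_depth + 1):
        next_ring = []
        for node in frontier:
            for neighbor in graph.get(node, []):
                if neighbor not in visited:
                    visited.add(neighbor)
                    next_ring.append(neighbor)
        for contact in next_ring:  # exhaust this ring before expanding
            response = query(contact)
            if response is not None:
                return contact, response, depth
        frontier = next_ring
    return None  # no contact within max_depth could answer

# Hypothetical usage: ask contacts' phones for an ambient noise level (dB).
social_graph = {"alice": ["bob", "carol"], "bob": ["dave"], "carol": []}
noise_db = {"dave": 62.0}  # only dave's device reports a reading
print(expanding_ring_query(social_graph, "alice", noise_db.get))
# ('dave', 62.0, 2)
```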

With the general progression of embedded devices toward smaller sizes and increased flexibility, these devices have begun to be embedded in fabrics, including articles of clothing and general accessories, resulting in increased availability and functionality of wearable computers and sensors. Wearable system design must resolve the communication and computation trade-offs that arise from the ability to equip sensors with processing capability, as well as networking and transmission technology issues; the primary considerations for both are power consumption and user comfort. Lukowicz et al. proposed a wearable sensor system known as WearNET (Lukowicz, Junker, Stäger, Büren, & Tröster, 2002). WearNET extends the Smart-Its sensor board by integrating additional sensors, primarily multiple motion sensors, and distributing them appropriately on the user's body in an effort to accurately detect user activity. The overall focus of the system design is to balance efficiency and versatility in a distributed sensor network, and more specifically to use a wearable sensor network to effectively determine user context.

Van Laerhoven et al., recognizing the importance of sensor data in context classification, applied this methodology to a set of wearable sensor networks (van Laerhoven, Schmidt, & Gellersen, 2002). The sensor networks consist of varying sensor types including, but not limited to, ambient light, temperature, accelerometers, and sound sensors. The general goal of the research is sensor fusion and the efficient use of sensor data in determining context, together with an examination of the potential gains and feasibility of using an increasing number of sensors. The design connects multiple sensors to single microprocessors, with the microprocessors further daisy-chained to the desired length, providing a somewhat centralized processing architecture for sensor data. Each microprocessor collects and pre-processes sensor data, which gives the implementation desirable properties including scalability, flexibility, and robustness; flexibility and robustness are of particular importance since the sensors are expected to be embedded in some form of clothing. Analysis considered several factors: the number of sensors used (ranking the best discriminating sensor and each subsequent sensor up to the 30-sensor limit), the best distinguished context and each subsequent context up to the 10 contexts examined, and the degree of dispersion for each set of contexts. A major implementation challenge was the limited knowledge base of recognition methods and typical sensor values for multi-sensor wearable systems, largely because such systems are not easily realizable.
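
The daisy-chained architecture can be pictured as each microprocessor reducing its own sensors' readings locally and appending whatever arrives from further down the chain. The sketch below is a structural illustration only; the class names, the choice of the mean as the pre-processing step, and the example sensors are assumptions, not details from the paper.

```python
import statistics

def preprocess(window):
    """Collapse a window of raw readings to one feature (the mean here;
    each microprocessor could compute richer local statistics)."""
    return statistics.fmean(window)

class SensorNode:
    """One microprocessor with several attached sensors, daisy-chained
    to the next node in the network."""

    def __init__(self, sensors, next_node=None):
        self.sensors = sensors      # callables returning a reading window
        self.next_node = next_node  # downstream node, None at chain's end

    def collect(self):
        local = [preprocess(read()) for read in self.sensors]
        downstream = self.next_node.collect() if self.next_node else []
        return local + downstream   # feature vector for the whole chain

# Two chained nodes: light and temperature on one, an accelerometer
# axis on the other.
tail = SensorNode([lambda: [0.25, 0.75]])
head = SensorNode([lambda: [300, 310], lambda: [21.0, 22.0]], next_node=tail)
print(head.collect())  # [305.0, 21.5, 0.5]
```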

The implementation of context-aware applications on mobile devices requires addressing certain inherent problems related to the nature of both context-aware and mobile applications. The two main concerns in developing mobile applications are resource constraints and energy requirements. Although mobile devices continue to advance at a rapid pace, they are not yet capable of efficiently and quickly performing complex operations on sizable amounts of data. Additionally, although devices are often advertised and used as entertainment and social hubs, users still depend on them primarily as communication devices, so continuous background sensing and inference must be managed carefully to avoid exhausting the battery.
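
Energy constraints are typically attacked with duty cycling: sensing and classifying in short bursts rather than continuously. The following Python sketch shows the general pattern only; it is not drawn from any one of the surveyed systems, and the period and burst lengths are arbitrary.

```python
import time

def duty_cycled_contexts(read_sensor, classify, period_s=60.0, burst_s=5.0):
    """Yield one context label per period: sample for a short burst,
    classify, then idle for the remainder instead of sensing continuously."""
    while True:
        start = time.monotonic()
        samples = []
        while time.monotonic() - start < burst_s:
            samples.append(read_sensor())
        yield classify(samples)  # one inferred context per period
        time.sleep(max(0.0, period_s - (time.monotonic() - start)))
```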


