Leveraging Sensor Data And Context Computer Science Essay


Abhinav Mehrotra

University of Birmingham, UK

[email protected]

Abstract

Desktop user interface design has evolved on the assumption that users are stationary (i.e., sitting at a desk) and can normally devote most of their visual and physical resources to the application with which they are interacting. In contrast, users of mobile and wearable devices are typically in motion while using their device, which means that they cannot devote all, or even any, of their visual resources to interaction with the mobile application; their attention must remain with the primary task, often for safety reasons. Additionally, such devices have limited screen real estate, and traditional input and output capabilities are generally restricted.

Consequently, collaborative mobile applications must support users who are on the move while performing a collaborative task. One of the challenges when designing such applications is to consider the context in which they will execute. Contextualized applications are easier for users to adopt; unfortunately, designing such tools is not straightforward. However, mobile phones now come with built-in sensors capable of sensing many factors around the device. Sensing techniques can enhance human-computer interaction with such handheld devices in unique ways. Special features of mobile interaction include changing orientation and position, changing venues, the use of computing as an auxiliary to ongoing real-world activities such as talking to a colleague, and the general intimacy of use of such devices.

This paper introduces a set of sensors into a mobile device and demonstrates several new functionalities enabled by the sensors, for instance switching between portrait and landscape display modes by holding the device in the desired orientation, automatically powering up the device when the user picks it up to start using it, and scrolling the display using tilt. Additionally, we discuss the issues in mobile interaction and give some empirical examples of the potential to achieve context-aware interaction with mobile devices. Finally, we present an idea of how some of the existing issues could be solved in future with the help of sensor data and context-awareness.

Keywords— Input devices, interaction techniques, context-awareness, mobile devices, mobile interaction, mobile sensors.

Introduction

Desktop interface design has evolved on the basis that users are stationary and can devote most of their visual and physical resources to the interface. The interfaces of desktop-based applications are typically very graphical, often extremely detailed, and rely on the standard mouse and keyboard as interaction mechanisms [4]. But the typical user is no longer facing a desktop machine in a relatively predictable office environment. Rather, users deal with mobile devices (e.g., smartphones and tablets) sporting diverse interfaces and used in diverse environments. However, many important pieces necessary to achieve the ubiquitous computing vision are not yet in place.

Compared to desktop computers, the use of mobile devices is more intimate because users often carry their mobile device throughout their daily routine. Most remarkably, interaction paradigms with today's devices fail to account for a major difference from the static desktop interaction model, and so they present HCI design opportunities for a more intimate user experience [46]. People also use mobile devices in many different and changing environments, so designers do not have the luxury of forcing the user to "assume the specific position" to work with a device, as is the case with desktop computers [53]. For example, the user must accept qualities of the environment such as light levels [22], sounds and conversations, and the proximity of people or other objects, all of which taken collectively comprise attributes of the context of interaction [23]. But if mobile devices remain unaware of important aspects of the user's context, then they cannot adapt the interaction to the current task or situation [57]. Thus an inability to detect these crucial events and properties of the physical world can be viewed as missed opportunities, rather than the basis for leveraging a deeper shared understanding between human and computer [10]. Furthermore, the set of natural and effective gestures forms the tokens that are the building blocks of the interaction design, and these may be very different for mobile devices than for desktop computers. Over the course of a day, users may pick up, put down, look at, walk around with, and put away (pocket/case) their mobile device many times [27]; these are naturally occurring "gestures" that can, and perhaps should, become an integral part of interaction with the device [58].

 

Because the user may be simultaneously engaged in real-world activities such as walking along a busy street, talking to a colleague, or driving a car, and because typical engagement with the device may last seconds or minutes rather than hours [62], interactions also need to be minimally disruptive and minimally demanding of cognitive and visual attention. One hypothesis shared by a number of ubiquitous computing researchers is that enabling devices and applications to automatically adapt to changes in their surrounding physical and electronic environments will lead to an enhanced user experience [31]. This is only possible with the help of the sensors available in mobile devices. The information in the physical and electronic environments creates a context for the interaction between humans and mobile devices [28]. Here, context is defined as any information that characterizes a situation related to the interaction between users, applications, and the surrounding environment [28]. The growing research activity within ubiquitous computing deals with the challenges related to context-aware computing. Though the notion of context can entail very subtle and high-level interpretations of a situation, much of the effort within the ubiquitous computing community takes a bottom-up approach to context. The focus is mainly on understanding and handling context that can be sensed automatically in a physical environment and treated as implicit input to positively affect the behavior of an application. Since there is a plethora of inexpensive but very capable sensors [23][59], before going further let us discuss the sensors available in today's mobile devices and what components of context they can sense.

Sensors available in today's mobile devices

Figure 1. Sensors in a standard mobile device.

Accelerometer – An accelerometer is a sensor that measures proper acceleration. The proper acceleration measured by an accelerometer is not necessarily the coordinate acceleration (rate of change of velocity). For example, an accelerometer at rest on the surface of the Earth will measure an acceleration of g ≈ 9.81 m/s² straight upwards, due to its weight. In other words, an accelerometer measures acceleration relative to a free-falling frame of reference. It can measure the magnitude and direction of acceleration in three-dimensional space and can be used to sense the orientation of the device.
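
As a rough illustration of how raw accelerometer values can drive the portrait/landscape switching mentioned in the abstract, the following minimal Kotlin sketch classifies a single 3-axis reading by which axis carries most of gravity. The function name and threshold logic are illustrative assumptions, not the implementation of any particular handset.

```kotlin
import kotlin.math.abs

// Minimal sketch: decide display orientation from one 3-axis accelerometer
// reading (values in m/s^2). Held upright, gravity appears mostly on the
// y axis; held sideways, mostly on the x axis; lying flat, on the z axis.
enum class Orientation { PORTRAIT, LANDSCAPE, FLAT }

fun classifyOrientation(ax: Float, ay: Float, az: Float): Orientation = when {
    abs(az) > abs(ax) && abs(az) > abs(ay) -> Orientation.FLAT      // lying on a table
    abs(ay) >= abs(ax)                     -> Orientation.PORTRAIT  // gravity along y
    else                                   -> Orientation.LANDSCAPE // gravity along x
}

fun main() {
    println(classifyOrientation(0.2f, 9.7f, 0.5f))  // PORTRAIT
    println(classifyOrientation(9.6f, 0.3f, 0.8f))  // LANDSCAPE
}
```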

Ambient light sensor – Ambient light is the light already present in a scene before any additional lighting is added. It usually refers to natural light, either outdoors or coming through windows, but it can also include artificial light such as normal room lighting. Ambient light sensors detect light or brightness in a way similar to the human eye. They are used wherever the settings of a system have to be adjusted to the ambient light conditions as perceived by humans.
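
A minimal sketch of how an ambient light reading might be mapped to a display brightness level is shown below; the lux breakpoints are illustrative assumptions rather than values used by any real device.

```kotlin
// Minimal sketch: map an ambient-light reading (in lux) to a screen
// brightness fraction, dimming the display in the dark and brightening
// it in sunlight. Breakpoints are illustrative assumptions.
fun brightnessForLux(lux: Float): Float = when {
    lux < 10f   -> 0.10f   // dark room
    lux < 100f  -> 0.30f   // typical indoor lighting
    lux < 1000f -> 0.60f   // bright indoor / overcast outdoor
    else        -> 1.00f   // direct sunlight
}
```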

Bluetooth device – Bluetooth is used to exchange data over short distances using short-wavelength radio transmissions. The Bluetooth radio can also act as a sensor, since it can discover and connect to other Bluetooth devices within a radius of up to 100 m, giving a coarse indication of which devices are nearby.

Cameras – A camera is a device that records images; these images may be still photographs or moving images such as video. With suitable software it can even classify human expressions, different objects, and so on. Nowadays mobile phones typically have two cameras, one on the front and one on the back.

GPS – The Global Positioning System (GPS) is a space-based satellite navigation system that provides location and time information in all weather conditions. It is maintained by the United States government and is freely accessible to anyone with a GPS receiver.

Gyroscope – A gyroscope is a device for measuring or maintaining orientation, based on the principles of angular momentum. In a mobile device it measures the rate of rotation around the device's axes and can therefore be used to track the tilt and orientation of the device. Unlike an accelerometer, which measures the linear acceleration of the device, a gyroscope measures rotation directly.
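
The following sketch illustrates the tilt-to-scroll idea mentioned in the abstract: angular velocity reported by the gyroscope is integrated over time and mapped to a scroll offset. The class name and gain constant are assumptions for illustration only.

```kotlin
// Minimal sketch: tilt-to-scroll. A gyroscope reports angular velocity
// (rad/s); integrating it over time gives the change in tilt, which is
// mapped to a scroll offset via an illustrative gain constant.
class TiltScroller(private val pixelsPerRadian: Float = 800f) {
    var scrollOffsetPx = 0f
        private set

    // omegaX: angular velocity around the x axis (rad/s);
    // dtSeconds: time elapsed since the previous gyroscope sample.
    fun onGyroSample(omegaX: Float, dtSeconds: Float) {
        val deltaTilt = omegaX * dtSeconds          // integrate rotation
        scrollOffsetPx += deltaTilt * pixelsPerRadian
    }
}
```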

Humidity sensor – A humidity sensor, also called a hygrometer, is a device that measures and regularly reports the relative humidity of the air. Because relative humidity depends on temperature, it typically measures both air temperature and moisture. It consists of a special plastic material whose electrical characteristics change according to the amount of humidity in the air.

Microphone – A microphone is an instrument for converting sound waves into electrical energy variations, which may then be amplified.

Proximity sensor – A proximity sensor is a sensor that can detect the presence of nearby objects without any physical contact. It emits an electromagnetic field or a beam of electromagnetic radiation (infrared, for instance), and looks for changes in the field or return signal. A typical capacitive proximity sensor has a 10-mm sensing range and is 30 mm in diameter.
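
On Android-like platforms the proximity sensor is read through the generic sensor framework. The hedged sketch below (standard Android sensor APIs assumed; the onNear/onFar callbacks are illustrative placeholders) shows how an application might react when an object such as the user's cheek comes close to the screen during a call.

```kotlin
import android.content.Context
import android.hardware.Sensor
import android.hardware.SensorEvent
import android.hardware.SensorEventListener
import android.hardware.SensorManager

// Hedged sketch (Android API assumed): listen to the proximity sensor and
// report whether an object is near the screen.
class ProximityWatcher(
    context: Context,
    private val onNear: () -> Unit,
    private val onFar: () -> Unit
) : SensorEventListener {

    private val sensorManager =
        context.getSystemService(Context.SENSOR_SERVICE) as SensorManager
    private val proximity: Sensor? =
        sensorManager.getDefaultSensor(Sensor.TYPE_PROXIMITY)

    fun start() {
        proximity?.let {
            sensorManager.registerListener(this, it, SensorManager.SENSOR_DELAY_NORMAL)
        }
    }

    fun stop() = sensorManager.unregisterListener(this)

    override fun onSensorChanged(event: SensorEvent) {
        // Devices report either a distance in cm or a binary near/far value.
        val near = event.values[0] < (proximity?.maximumRange ?: 5f)
        if (near) onNear() else onFar()
    }

    override fun onAccuracyChanged(sensor: Sensor, accuracy: Int) { /* not needed here */ }
}
```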

Context-awareness

Context-awareness puts the emphasis on users who are carrying portable devices, such as smartphones or tablets, that have been augmented with hardware sensors such as an accelerometer, ambient light sensor, microphone, camera, and GPS receiver. These sensors can detect location, activity, network connectivity, the user's state, and so on. Since users deal with mobile devices sporting diverse interfaces and used in diverse environments, context-aware mobile devices should react and adapt to changes in the user's environment and provide a more relevant way of interacting with them. But for this, the mobile device must be able to detect changes in the user's environment; in other words, it must be made context aware. This is possible with the help of the sensors available in mobile devices, which can detect many aspects of the user's context.

Context awareness is interdisciplinary work associated with several fields. Context-aware computing was first studied by Schilit and Theimer in 1994 [60]. The adoption of context awareness is not a new idea; several classical studies have built context-aware systems [1, 20, 51]. Context awareness, of course, is not our goal in itself; a context-adaptation strategy is crucial in order to improve mobile interaction performance.

To elaborate on leveraging sensor data and context information to improve the user's interaction with mobile devices, context and interaction through context adaptation are described in the following subsections.

Context

Understanding context is in itself a complicated task, and its basic definition is still not settled. Some researchers simply define context in terms of concepts like "situation", "location", or "environment", which is similar to dictionary definitions. Some have tried to explain it by means of examples of daily-life situations, applications, and gadgets [14, 61]. Others have given very formal definitions [1, 14, 62]. Some equate context with location alone [60], others with location, time, season, and so on [9, 58]. For better interaction performance, however, the following main factors might be enough.

Figure 2. Context of a mobile user.

Location: Having physical location awareness means that it is possible to deduce distance, travel speed, and, to some extent, the location's personal meaning to the user based on experience. This can be done on a mobile phone on a rudimentary scale using cell tower triangulation, although this is information that consumers and, ironically, most developers do not usually have ready access to. The best technique so far for location awareness is the Global Positioning System (GPS). While plug-in GPS modules for PCs have existed for years, GPS functionality is now commonly built into mobile phones. But since GPS relies on satellite signals, it only works reliably outdoors. For indoor location awareness, researchers and commercial companies are starting to look at 802.11b signal strength triangulation [3]. Just by combining indoor and outdoor location awareness and correlating it with calendar information, a device could learn the locations of people and place names, and deduce whether the user is where they said they would be or is on their way.
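
As a hedged illustration of outdoor location awareness (standard Android location APIs assumed; parameters and the class name are illustrative), the sketch below requests periodic GPS updates that could feed a higher-level context model. It assumes the location permission has already been granted and omits error handling.

```kotlin
import android.content.Context
import android.location.Location
import android.location.LocationListener
import android.location.LocationManager
import android.os.Bundle

// Hedged sketch (Android API assumed): coarse outdoor location updates from
// the GPS provider. Requires ACCESS_FINE_LOCATION to be granted at runtime.
class OutdoorLocationTracker(context: Context) : LocationListener {

    private val locationManager =
        context.getSystemService(Context.LOCATION_SERVICE) as LocationManager

    fun start() {
        // At most one update every 5 s or every 10 m of movement.
        locationManager.requestLocationUpdates(
            LocationManager.GPS_PROVIDER, 5000L, 10f, this
        )
    }

    fun stop() = locationManager.removeUpdates(this)

    override fun onLocationChanged(location: Location) {
        // Latitude/longitude (and speed) can feed a context model,
        // e.g. "at home", "commuting", "at the office".
        println("lat=${location.latitude} lon=${location.longitude} speed=${location.speed}")
    }

    // Empty implementations for SDK levels where these are abstract.
    override fun onStatusChanged(provider: String?, status: Int, extras: Bundle?) {}
    override fun onProviderEnabled(provider: String) {}
    override fun onProviderDisabled(provider: String) {}
}
```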

Ongoing task: The ongoing task refers to what a user is doing, what task they are engaged in or trying to achieve. Various pattern analysis techniques or case-based reasoning could apply to this area of context awareness. Specifically in the domain of user interfaces, Cypher, Lieberman and others have written extensively on programming by example or programming by demonstration (PBD), which works like a task-context recorder/operator [16, 45]. This can benefit from data produced by sensors such as the proximity sensor, accelerometer, and gyroscope. A device with the ability to recognize user task patterns and then perform those tasks semi-autonomously would be a very valuable personal device. Coordinating that task awareness with location awareness means that a device could automatically perform simple tasks in the background whenever it is in a user-specified (or service-provider-specified) location. Coordinating task awareness and location awareness with temporal awareness means that a device could know when it will need to perform certain tasks and either begin them ahead of time or automatically coordinate its efforts with the user. If required, even the user's emotion while performing the task can be integrated. Researchers have already introduced basic emotion-sensing capabilities in smartphones [51].
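
A very coarse form of task or activity awareness can already be derived from accelerometer data alone. The minimal sketch below distinguishes "walking" from "stationary" by the variance of the acceleration magnitude over a short window; the threshold is an illustrative assumption, not a calibrated value.

```kotlin
import kotlin.math.sqrt

// Minimal sketch: infer a coarse activity from a short window of 3-axis
// accelerometer samples by how much the acceleration magnitude fluctuates
// around gravity. Threshold is an illustrative assumption.
data class Sample(val x: Float, val y: Float, val z: Float)

fun inferActivity(window: List<Sample>): String {
    if (window.isEmpty()) return "unknown"
    val magnitudes = window.map { sqrt(it.x * it.x + it.y * it.y + it.z * it.z) }
    val mean = magnitudes.average()
    val variance = magnitudes.map { (it - mean) * (it - mean) }.average()
    return if (variance > 1.0) "walking" else "stationary"
}
```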

Environment: Determining and adapting to the environment is deceptively difficult. This is not simply the sensing of environmental factors like temperature or humidity; it refers to determining the surroundings of the user. It can be achieved with the help of sensors such as the Bluetooth radio, cameras, microphone, and ambient light sensor. With present technology, speech recognition systems are capable of sensing the user's environment by analyzing background sounds, and researchers have made use of such technologies to gain context awareness in smartphones [5]. Integrating such technologies with data from the sensors mentioned above would further enhance environment sensing.
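
The sketch below illustrates, under stated assumptions, how two simple signals (ambient light in lux and a normalized microphone loudness estimate) might be combined into a coarse guess about the user's surroundings. The categories and thresholds are purely illustrative.

```kotlin
// Minimal sketch: combine an ambient-light reading (lux) with a rough
// microphone loudness estimate (0..1) to guess the user's surroundings.
// Categories and thresholds are illustrative assumptions.
fun guessEnvironment(lux: Float, loudness: Float): String = when {
    lux > 2000f && loudness > 0.5f -> "busy street / outdoors"
    lux > 2000f                    -> "quiet outdoors"
    loudness > 0.5f                -> "noisy indoor space (cafe, meeting)"
    lux < 20f && loudness < 0.2f   -> "dark, quiet room (cinema, bedroom)"
    else                           -> "ordinary indoor setting"
}
```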

Interaction through situated or reactive adaptation can improve performance. Adaptation of interaction strategies can be made in many ways, but it depends on the interaction issue and the context in which it occurs. Before discussing the interaction issues, let us look at some achievements in this area by researchers.

Related work

When it comes to suitability for a specific user, it pays to consider the available sensor data and context-awareness when modifying a technology. This can be framed in terms of the degree of its adaptivity and/or adaptability [18, 64]. "Adaptivity" means the degree to which software can change itself in response to user behavior; "adaptability" means the degree to which software can be customized by a user, therapist, or caregiver. Prior approaches to improving mobile interaction are numerous, unavoidably and appropriately overlap with each other considerably, and accordingly embody different principles to different extents. To illustrate how the principles of improved mobile interaction design may be upheld in specific technologies, below is a selection of projects that informed and inspired the idea of leveraging sensor data and context information to improve mobile interaction; it serves as a guide to the remainder of the section.

True Keys

True Keys was an approach to making keyboard typing more accessible which, instead of relying on keyboard settings and filters, simply corrects typing errors as they occur [39]. It was an adaptive online typing corrector that combined a lexicon of 25,000 English words, string distance algorithms [44], and models of keyboard geometry, enabling users with poor dexterity to produce accurate text from inaccurate typing. True Keys triggered its correction mechanism after a user finished a word and hit SPACEBAR. Later, the same technique was used in mobile phones to assist users typing on small mobile screens. On mobile phones this technique reduced the error rate somewhat, but users still struggled to hit the right keys quickly.
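
To make the mechanism concrete, the sketch below shows a keyboard-aware string distance in the spirit of True Keys, though not its actual algorithm: a standard edit distance whose substitution cost is reduced when two keys are neighbours on the same QWERTY row, so that "cst" scores closer to "cat" than to "cut".

```kotlin
// Minimal sketch (not True Keys' actual algorithm): Levenshtein distance
// with a halved substitution cost for keys adjacent on the same QWERTY row.
val qwertyRows = listOf("qwertyuiop", "asdfghjkl", "zxcvbnm")

fun areAdjacent(a: Char, b: Char): Boolean {
    for (row in qwertyRows) {
        val i = row.indexOf(a)
        val j = row.indexOf(b)
        if (i >= 0 && j >= 0 && kotlin.math.abs(i - j) == 1) return true
    }
    return false
}

fun keyboardAwareDistance(typed: String, word: String): Double {
    val d = Array(typed.length + 1) { DoubleArray(word.length + 1) }
    for (i in 0..typed.length) d[i][0] = i.toDouble()
    for (j in 0..word.length) d[0][j] = j.toDouble()
    for (i in 1..typed.length) for (j in 1..word.length) {
        val subCost = when {
            typed[i - 1] == word[j - 1] -> 0.0
            areAdjacent(typed[i - 1], word[j - 1]) -> 0.5  // likely slip to a neighbouring key
            else -> 1.0
        }
        d[i][j] = minOf(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + subCost)
    }
    return d[typed.length][word.length]
}
```

In a corrector, each candidate word from the lexicon would be ranked by this distance once the user hits SPACEBAR.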

Gesture Keyboard

The concept was designed by researchers at the University of St Andrews [72]. A gesture keyboard is a keyboard that takes input by sensing the user's gesture, where gestures are the movement trajectories of a user's finger or stylus contact points on a touch-sensitive surface. The user can slide across adjacent letters to enter a string of letters. Because the on-screen keyboard of almost every mobile phone is very small, entering text is at times difficult and slow. This text entry method improves the speed of text entry because the user does not have to tap the exact key but can simply pass over the set of keys. The technique gave mobile users a much faster way of entering text than True Keys.
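
The following is a drastically simplified sketch of the shape matching behind gesture keyboards (published systems such as SHARK2 are far more sophisticated): the finger trace and each candidate word's ideal path through its key centres are resampled to the same number of points and compared by mean point-to-point distance. All names are illustrative, and non-empty paths are assumed.

```kotlin
import kotlin.math.hypot

// Minimal sketch of gesture-keyboard shape matching (a simplification).
data class Pt(val x: Float, val y: Float)

// Resample a path to n points by index (assumes a non-empty path).
fun resampleByIndex(path: List<Pt>, n: Int = 32): List<Pt> =
    List(n) { i -> path[(i * (path.size - 1)) / (n - 1)] }

fun shapeDistance(trace: List<Pt>, template: List<Pt>): Double {
    val a = resampleByIndex(trace)
    val b = resampleByIndex(template)
    return a.zip(b)
        .map { (p, q) -> hypot((p.x - q.x).toDouble(), (p.y - q.y).toDouble()) }
        .average()
}

// Pick the word whose key-centre template best matches the finger trace.
fun bestWord(trace: List<Pt>, templates: Map<String, List<Pt>>): String? =
    templates.minByOrNull { shapeDistance(trace, it.value) }?.key
```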

Speech recognition

Speech recognition is a text entry method in which the user enters text by dictating it to the device [32]. However, speech is an input technology quick to grab headlines but perennially unable to enter the mainstream of computing. Speech is a deserving technology, but it is unlikely to supplant traditional interaction techniques for mobile computing. Some mobile phones do support limited speech recognition, where the interaction consists of selecting from a small list of preprogrammed entries, such as names in an address book, but speech recognition is not generally used for general-purpose text input on mobile devices. A notable example is Siri, an intelligent personal assistant and knowledge navigator that ships as an application for Apple's iOS. The application uses a natural language user interface to answer questions, make recommendations, and perform actions by delegating requests to a set of web services. Apple claims that the software adapts to the user's individual preferences over time and personalizes results, performing tasks such as finding recommendations for nearby restaurants or getting directions. In this way, the application improves the performance of the user interacting with the mobile device.

Merge model for multimodal text input

The approach of combining the speech and gesture modalities was named a merge model [41]. This model is capable of combining output from several recognizers asynchronously and was originally developed for combining multiple speech signals in a speech-only correction interface [36]. It was later extended to fuse speech and gesture keyboard recognition results, so that a merge model can now combine recognition results in an asynchronous and flexible manner. This model could prove a better solution for text entry on mobile phones, but it is not applicable in every user context; there may be cases where the user cannot make proper use of it, for instance when there is high background noise. Such techniques could therefore be adapted automatically to the contexts in which they perform best.
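
As a simple stand-in for the merge model's fusion step (not the actual model of [41]), the sketch below combines n-best lists from a speech recognizer and a gesture keyboard recognizer by a weighted sum of their scores; the weights are illustrative assumptions and could themselves be adapted to context, e.g. down-weighting speech in a noisy environment.

```kotlin
// Minimal sketch of late fusion between two recognisers: each returns an
// n-best list of (hypothesis -> probability); the lists are combined by a
// weighted sum so evidence from speech and gesture can reinforce each other.
fun fuse(
    speechNBest: Map<String, Double>,
    gestureNBest: Map<String, Double>,
    speechWeight: Double = 0.5
): String? {
    val gestureWeight = 1.0 - speechWeight
    val candidates = speechNBest.keys + gestureNBest.keys
    return candidates.maxByOrNull { word ->
        speechWeight * (speechNBest[word] ?: 0.0) +
        gestureWeight * (gestureNBest[word] ?: 0.0)
    }
}
```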

Dwell-free Eye typing

Dwell-free eye typing is a gaze interaction technique which enables users with certain motor disabilities to communicate via an eye-tracker [42]. For users with certain motor disabilities, gaze interaction may be the only communication channel available, and given their importance to such users, gaze communication systems have been actively researched for over 30 years [49]. Unfortunately, record speeds in gaze communication have been relatively slow, ranging from 7–26 wpm [48, 49, 68, 72], compared with 46 wpm for this model. The primary technique used for gaze communication in earlier systems is eye typing: the user looks at a letter on an on-screen keyboard, and if the user's gaze remains fixed on the same letter for a set time period (the dwell timeout), the system assumes the user intended to write that letter.
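
For comparison, the sketch below implements the classic dwell-timeout selection that dwell-free techniques set out to eliminate; the timeout value and class name are illustrative assumptions.

```kotlin
// Minimal sketch of dwell-based eye typing (the baseline that dwell-free
// techniques improve on): a letter is committed once the gaze stays on the
// same key for longer than the dwell timeout.
class DwellSelector(private val dwellTimeoutMs: Long = 800L) {
    private var currentKey: Char? = null
    private var fixationStartMs: Long = 0L

    // Called for every gaze sample; returns a key once it has been dwelt on.
    fun onGazeSample(keyUnderGaze: Char?, nowMs: Long): Char? {
        if (keyUnderGaze != currentKey) {           // gaze moved to a new key
            currentKey = keyUnderGaze
            fixationStartMs = nowMs
            return null
        }
        if (keyUnderGaze != null && nowMs - fixationStartMs >= dwellTimeoutMs) {
            fixationStartMs = nowMs                 // reset so the key is not repeated immediately
            return keyUnderGaze
        }
        return null
    }
}
```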

User defined Motion gesture

User-defined motion gestures are a technique in which mobile phones perform tasks in response to the user's gestures. The phone is trained with the gestures that users commonly make while performing daily tasks with their mobiles [55]. Modern smartphones contain sophisticated sensors to monitor three-dimensional movement of the device, and these sensors allow devices to recognize motion gestures: deliberate movements of the device by end users to invoke commands, such as answering a call, rejecting a call, or navigating gallery images. The approach works on the trajectory formed by the user while performing these tasks; for instance, when a user answers a call, the trajectory they form is remembered, and the next time a call comes in and the same trajectory is formed, the call is answered automatically. This technique can genuinely improve the performance of a user's interaction with the mobile, but there may be cases where users perform the same task in a different way, so can these mobile devices be made capable of learning the user's motion gestures automatically? And can the device be made intelligent enough that, if it is used by another person, it starts learning that person's gestures instead of acting on the previous user's?
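
The sketch below shows one hard-coded motion gesture rather than a learned, user-defined one: flipping the phone face down (the z-axis acceleration swinging from roughly +g to roughly −g) rejects an incoming call. A user-defined system would instead match the whole movement trajectory against recorded examples of that user's own gestures.

```kotlin
// Minimal sketch of a single hard-coded motion gesture: "flip the phone
// face down to reject a call", detected from the sign of the z-axis
// acceleration. Threshold values are illustrative assumptions.
class FlipToRejectDetector(private val onReject: () -> Unit) {
    private var wasFaceUp = false

    fun onAccelSample(az: Float) {
        if (az > 8f) wasFaceUp = true              // roughly +g: screen facing up
        if (wasFaceUp && az < -8f) {               // roughly -g: screen facing down
            wasFaceUp = false
            onReject()
        }
    }
}
```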

Barrier Pointing

Mobile interfaces are full of tiny targets, which may pose accessibility challenges. Whereas the customary approach of "flying in" to tap a small target with a stylus or finger is difficult for many people with motor impairments, utilizing physical screen edges provides an opportunity for increasing stability during pointing. According to Fitts' law, a target placed against a screen edge is effectively larger, because the pointer cannot overshoot it, so it can be acquired faster and with fewer errors than other targets [21]. Barrier Pointing is a technique that moves targets towards the screen border to make them easier to hit [24]. This model might be very useful when the user is not stable (e.g., walking or running), but it has the constraint that only a limited number of icons can be displayed, since the screen of a mobile device is very small.
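
For reference, the sketch below computes predicted movement time under Fitts' law in its common Shannon formulation, MT = a + b·log2(D/W + 1); the constants a and b are device-specific, and the values used here are illustrative assumptions. An edge target behaves as if its width W were much larger, which is the effect Barrier Pointing exploits.

```kotlin
import kotlin.math.log2

// Minimal sketch: Fitts' law (Shannon formulation). Movement time grows with
// distance D to the target and shrinks with target width W. The constants
// a and b below are illustrative, not measured values.
fun fittsMovementTimeMs(d: Double, w: Double, a: Double = 100.0, b: Double = 150.0): Double =
    a + b * log2(d / w + 1.0)

fun main() {
    println(fittsMovementTimeMs(d = 300.0, w = 8.0))   // small mid-screen target
    println(fittsMovementTimeMs(d = 300.0, w = 40.0))  // edge target, larger effective width
}
```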

Slide Rule

Slide Rule is a prototype utilizing accuracy-relaxed multi-touch gestures, "finger reading," and screen layout schemes to enable blind people to use unmodified touch screens [38]. Slide Rule offered three applications: Phonebook, Mail, and Music, and was prototyped on an Apple iPod Touch. In contrast to a key- or button-based screen reader, audio output was controlled by moving a "reading finger" across the screen, and the spoken audio read the screen at a level of detail proportional to the speed of the moving finger. For example, if the finger moved quickly down an alphabetical list of Phonebook contacts, the spoken audio would say only the first letter of the names: "A," "E," "G," "L," and so on. If the finger moved more slowly, last names would be read; if it moved even more slowly, both last and first names would be read. This prototype is useful mainly in situations where users cannot devote their visual resources to interacting with the mobile device.
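
The speed-proportional reading behaviour can be captured in a few lines. The sketch below maps an estimated finger speed to a verbosity level; the thresholds are illustrative assumptions, not those of the Slide Rule prototype.

```kotlin
// Minimal sketch of reading detail proportional to finger speed: the faster
// the "reading finger" moves down a contact list, the less is spoken.
// Thresholds (in list items per second) are illustrative assumptions.
enum class Verbosity { FIRST_LETTER_ONLY, LAST_NAME, FULL_NAME }

fun verbosityForSpeed(itemsPerSecond: Float): Verbosity = when {
    itemsPerSecond > 8f -> Verbosity.FIRST_LETTER_ONLY
    itemsPerSecond > 3f -> Verbosity.LAST_NAME
    else                -> Verbosity.FULL_NAME
}
```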

Tactile Feedback

Tactile feedback is a mechanism for providing physical (haptic) feedback on a mobile device's touchscreen [8]. Touch screen devices often feature audio feedback in response to user actions, but because these small-screened devices are used in dynamic mobile contexts, audio feedback alone is often not sufficient. Adding tactile feedback to touch screens has been argued to improve usability by providing additional feedback to the user [8], e.g. confirming that a button has been pressed. This may be particularly beneficial in noisy environments, where audio feedback would not be heard. In addition, tactile feedback can improve the user experience by making virtual widgets feel more like real physical widgets.

Numerous studies on tactile feedback for touch screens have been carried out. Fukumoto [47] used a voice coil actuator mounted in the body of a device to provide single-pulse tactile feedback to the finger when buttons on a numerical keypad are pressed. Nashel and Razzaque [52] used a vibration motor from a tactile mouse to provide information to users about the buttons on a PDA screen.

Snap-Crackle-Pop is a virtual ITU-T number keypad which provides tactile feedback on key press [37]. The buttons are designed to be pressed with a thumb or a stylus. When a button is pressed, the color of the button changes and a tactile click is presented; when it is released, the color changes back to the original and a second tactile click is presented to the user. This tactile feedback is designed to simulate the real tactile feedback experienced when pressing a physical button.
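
On current Android-like platforms a comparable tactile click can be requested through the standard haptic feedback API. The hedged sketch below (Android view APIs assumed) triggers a haptic pulse on both press and release of a virtual key, roughly in the spirit of Snap-Crackle-Pop; the visual colour change is omitted.

```kotlin
import android.view.HapticFeedbackConstants
import android.view.MotionEvent
import android.view.View

// Hedged sketch (Android API assumed): a tactile "click" when a virtual key
// is pressed and again when it is released.
fun attachTactileClicks(keyView: View) {
    keyView.setOnTouchListener { v, event ->
        when (event.action) {
            MotionEvent.ACTION_DOWN -> v.performHapticFeedback(HapticFeedbackConstants.VIRTUAL_KEY)
            MotionEvent.ACTION_UP   -> v.performHapticFeedback(HapticFeedbackConstants.VIRTUAL_KEY)
        }
        false  // let normal click handling continue
    }
}
```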

But is this technique enough to improve the performance rate? If the user could get feedback before touching a button, this might improve performance by reducing the error rate. Touch screen technologies like "ZeroTouch" are capable of sensing objects near the screen without any contact [50].

Discussion

Mobile phones have been a remarkable addition to human life, providing ease of access that has become a necessity of modern life. They allow users to work regardless of location and even while they are in motion. But users cannot always devote all, or any, of their visual or physical resources to interaction with the mobile application. Additionally, such devices have limited screen size, and traditional input and output capabilities are generally restricted. Consequently, we want to develop effective techniques for mobile technology that improve interaction with such devices. Since today's mobile phones come with a plethora of inexpensive but very capable sensors [23][59], developing these techniques may be possible by integrating interactive sensing techniques; one example would be offering different ways of text input according to the user's context, adapted on the basis of sensor and contextual information.

Prior approaches to improving mobile interaction are numerous. Several researchers have found that present interaction techniques need to be adapted to prevent undesired interactions (e.g. accidental tilt-scrolling or tilt-based display mode selection), which is possible by aggregating data from multiple sensors with contextual information. This will help make mobile devices adaptable to various contexts without hampering interaction performance. Additionally, new techniques can be developed for contexts where the existing techniques do not work. Mobile sensing itself is not particularly intrusive for users, but designers need to be careful to design for graceful failure in the event of incorrect inferences. The remainder of this section discusses some common issues with mobile interaction.

Current developments in mobile computing and the emergence of application stores make it easy to develop and distribute mobile applications, which leads to an ever-increasing number of available applications. Consider, for example, a user who wants to find an icon in the list of icons on his or her mobile. Due to the small screen size and the large number of application icons present, the user has difficulty with content discovery. Could this type of interaction be made faster if the icons were arranged according to the user's context? It is not an easy task to understand how people arrange icons on mobile devices. To evaluate the impact of context, Bohmer and Bauer investigated the impact of context on the menus of mobile phones [7] and found that the user's context plays an important role when selecting application icons. They developed a prototype that infers the context a mobile user is currently in through a sensor data clustering approach [35] and presents in the main menu a list of applications ordered by estimated contextual relevance. But this might have implications for users, so why not simply highlight the relevant icons instead of changing the user's own arrangement, as sketched below? Is the reordering technique valid for all types of user, given that it does not consider the user's habits in arranging icons? For instance, some users might always want to see specific icons on their mobile's main menu, so this rearrangement may not be appropriate for them.
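
The sketch below illustrates the highlighting alternative raised above: each application is scored by how often it has been launched in the current context relative to its overall launches, and only the top few are highlighted, leaving the user's own arrangement untouched. The data structure and scoring are illustrative assumptions, not Bohmer and Bauer's method.

```kotlin
// Minimal sketch: score each app for the current context and highlight the
// most relevant ones instead of reordering the user's icons.
data class AppUsage(val name: String, val launchesInThisContext: Int, val launchesTotal: Int)

fun highlightSet(apps: List<AppUsage>, topK: Int = 4): Set<String> =
    apps.sortedByDescending {
        if (it.launchesTotal == 0) 0.0
        else it.launchesInThisContext.toDouble() / it.launchesTotal
    }
    .take(topK)
    .map { it.name }
    .toSet()
```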

Now let us discuss how the placement of a human finger on the mobile screen can hinder proper use of mobile applications. A potential example is adapting the interaction according to the user's grip on the mobile. Another example is inputting text through multiple modalities such as gaze tracking and voice recognition [2], which would be highly useful for people who cannot write. Yet another example is collecting sample words from the user's speech in various contexts and using them to refine text input results [11]; this can be done by giving these words higher priority than others according to the context. Consider the first example: when a user is walking while also holding something in one hand, he or she can devote only one hand to holding the mobile device and interacting with it. At such a moment the mobile should automatically sense the user's single-handed interaction and adapt the interface, for example with techniques like Barrier Pointing. To sense single-handed interaction, techniques like GripSense can be used. GripSense is a system that leverages mobile device touchscreens and their built-in inertial sensors and vibration motor to infer hand postures, including one- or two-handed interaction, use of thumb or index finger, or use on a table [27]. GripSense also senses the amount of pressure a user exerts on the touchscreen, despite the lack of a direct pressure sensor, by observing diminished gyroscope readings when the vibration motor is "pulsed" (see the sketch below). This could be extended to provide a better way of interacting during one-handed use. The outcomes of such techniques could also be applied in future application development, and it is hoped that this research will establish an innovative model for studying the implications of interacting with mobile devices in different contexts.
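
A minimal, hedged sketch of the idea behind GripSense's pressure inference (not its actual algorithm) is given below: while the vibration motor is pulsed, a firmer grip damps the vibration, so the gyroscope's signal energy drops relative to a no-touch baseline.

```kotlin
// Hedged sketch of GripSense-style pressure inference (illustrative only):
// compare gyroscope signal energy during a vibration pulse against a
// no-touch baseline; stronger damping suggests a firmer press.
// Assumes both sample windows are non-empty.
fun gyroEnergy(samples: List<Float>): Double = samples.map { (it * it).toDouble() }.average()

fun estimatePressureLevel(baselineSamples: List<Float>, pulseSamples: List<Float>): String {
    val damping = 1.0 - gyroEnergy(pulseSamples) / gyroEnergy(baselineSamples)
    return when {
        damping > 0.6 -> "hard press"
        damping > 0.3 -> "medium press"
        else          -> "light touch"
    }
}
```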

Research Challenges

This paper attempts to establish the nature and purpose of mobile interaction design and put forth principles for its enhancement by leveraging sensor and contextual data. As a concept, leveraging sensor data and context in mobile interaction design may go far beyond any of the reviewed projects, to realize a day when every mobile device is perfectly tailored to its user and his or her context. Strikingly, this is in some ways the opposite of universal design; rather, it is the universal application of "design-for-one" [29]. So what is required to make the "universal application of design-for-one" a reality? Of central importance is work on automatically detecting, assessing, and understanding people's abilities in specific contexts [26, 33, 40, 43, 54]. Of special importance is the need for quick, low-effort, accurate, and reliable performance tests that can be administered once for most users, or periodically for people with conditions that change over time. A more challenging but more useful solution would be to accurately measure users' abilities from performance "in the wild," that is, outside an artificial test battery [13]. However, accurately measuring tasks in the wild requires inferring the intention of tasks, which in turn requires, among other things, the ability to know what a user is looking at, trying to point to, attempting to write, and so on. Algorithms for segmenting real-world mouse movement into discrete aimed pointing movements are necessary, but they are only a start. We also need to know where targets in the interface lie [19] and what mouse behavior constitutes an error. The same is true for understanding text entry without the benefits of a laboratory study where phrases are presented to participants for transcription; prior work has made some progress here [71].

Related to ability measurement is the challenge of sensing context. Future work should consider how to make devices more aware of, and responsive to, environmental factors like lighting, temperature, and ambient noise [70]. While "on the go," mobile device users may experience multiple modes of transportation, from walking to riding a bus to driving a car. They may be stationary or moving up or down stairs. They may be outdoors, exposed to cold temperatures (and the need for gloves), rain, glare, wind, ambient noise, and so on. Mobile device users' social contexts also change, from movie theaters to business meetings to personal conversations. Current mobile devices are mostly ignorant of these factors and require users to adapt their behavior to the environment. Recent advances in mobile activity recognition hold promise for enabling devices to become more aware of their users' contexts [15, 30].

Once performance has been accurately measured or context has been sensed, there remains the considerable challenge of modeling a user's abilities. Having systems create, modify, and assess a predictive model of a user's abilities is still an open challenge [6], especially for users with impairments, whose abilities vary greatly, even for the same medical condition. For this reason, conventional user models do not seem to hold well for many people with impairments [40]. Because variability among such users is high, techniques for accommodating this variability in user models are necessary for advancing ability-based design. Work on perceptive user interfaces [34] is still in its early stages, but could help systems understand the state, context, and abilities of users. Recent advances in automatically detecting the impairments of users also hold promise [33].

Assuming we can accurately measure and model users’ abilities and context, the next question is one of design: how best can we employ this knowledge? More research is needed in user interface adaptation. Although SUPPLE makes important advances [25], it leaves much unanswered, like how to incorporate errors, visual search time, input device characteristics, and aesthetic concerns.

Another design approach is to allow end-user adaptation in a more flexible manner. Although most applications contain some configuration options, applications must offer a much wider range of possibilities to fully support ability-based design. Of particular importance is providing feedback in the form of previews so that users know the results of their changes [66]. Showing previews can be difficult for changes that alter complex motor-oriented parameters or other non-visual interface aspects. However, previews could improve the trial-and-error process users currently endure when adapting software.

Finally, we need to further investigate how commodity input devices can be repurposed in novel ways for people with disabilities. Hardware researchers need to devise more flexible input devices that can be used in various ways. Reconfigurable devices may hold promise for greater adaptability [63], although they have yet to be realized. Software that can remap device inputs to required outputs may also make devices more versatile [12, 69].

In the end, the progress necessary for ability-based design to flourish will leave few areas of computing untouched. It is our hope that ability based design can serve as a unifying grand challenge for fruitful collaborations in computing, human factors, psychology, design, and human-computer interaction.

Summarised research challenges

The research questions will be refined after the literature review, starting from the following:

Q1: What are the challenges while interacting with mobile in different contexts?

Q2: What interaction techniques can benefit from contextual and sensor data?

Q3: What activities of users can be sensed while they interact with their mobile devices?

Q4: Will adaptation of interaction techniques according to context, have same impact on all users?

Q5: What are the implications for end-users?

Q6: What further consequences could arise?

Q7: What is the impact of one solution on other cases?

Conclusion

Improving the user's interaction with mobile devices is a big issue, and many researchers are contributing to improving the performance of that interaction. This article presented leveraging sensor and contextual data as a refinement of, and refocusing of, prior approaches to mobile interaction design. Just as user-centered design shifted the focus of interactive system design from systems to users, design that leverages sensor and contextual data shifts the focus of accessible design from users in general to the user's abilities in specific contexts. This paper has attempted to establish the nature and purpose of mobile interaction design and to put forth principles for its enhancement by leveraging sensor and contextual data. Future research directions that touch the area of mobile computing were also provided. Leveraging sensor and contextual data to make the user's interaction with mobile devices easier may act as a vantage point from which to envision a future of intelligent mobile devices able to change their interaction behavior according to the user's abilities in different contexts.


