Multi Touch Tabletop Display

ABSTRACT

Modern interactive displays are not aware of the context of the location in which they are placed, nor can they intelligently detect the presence of users within interaction range. One way to make content contextual before and during interaction is to make the display aware of users' presence even before they touch the screen. This research describes a simple hardware setup and accompanying algorithms that use a minimal number of strategically placed sensors on a horizontal multitouch display to support group interaction around tabletop computers. Using a heuristic approach that associates touch gestures with individual users at their positions around the table, combined with adaptive distance sensing, we demonstrate that the method is robust enough to be applied in a wide variety of locations where interactive displays are installed. Sensing the presence and movement of users around a tabletop display is important because their location information can be used to help them use the display more efficiently and to provide enhanced interactions.

INTRODUCTION

In traditional interactive computing, users are usually the ones who provide input to computers. Computers are not enabled to take full advantage of the context of the human-computer dialogue (Dey 2001). In modern interactive computing, computers can communicate with many kinds of sensors and thus become intelligent in ways that depend on the context for which they are intended and programmed. By improving the computer's access to context, we increase the richness of communication in human-computer interaction and make it possible to produce more useful computational services (Dey 2001).

Multitouch, multiuser surface computing has seen widespread adoption recently, and with the many promising application areas it offers it is likely to become a pervasive technology in the near future (Ch 2012). Multitouch tabletops offer many potential benefits, such as supporting a more "natural" user experience (Dietz & Leigh 2001) and encouraging casual and collaborative interactions (Esenther & Ryall 2006). Collaborative surface computing devices such as multitouch tabletops and interactive surfaces have recently been adopted in museums, galleries, exhibition spaces and business settings.

Tse et al. (2007) observed the typical behaviour of people dealing with direct-touch tables in uncontrolled environments. Touch interactions: "at first, some people are hesitant to touch the table at the same time", "accidental input is common, especially when pointing at something on the table", "GUI elements designed for a mouse need modification for finger-based input", and "some people preferred to use a stylus (or other input device) to interact with the table rather than their hands". Organisation of content: "users appreciate their elbowroom", "bare fingers are insufficient for text input", and "for some types of documents, orientation is not a problem". Occlusion: "the actions of multiple people often conflict with one another, both intentionally and accidentally". Physical setup: "concerns about shadowing caused by top-projected displays are not a problem in practice", "the design of the table's edge and its height impacted its use", and "users do view the interactive table as a 'computer'".

Personalised functionality, such as tracking achievements in multi-player games or enforcing social protocol (Richter et al. 2012), can be achieved if we know which touch belongs to whom.

Larger displays are preferred because they provide ample space for collaborative work: multiple users can interact with the same or different content simultaneously. Users tend to move around the table to interact with the array of digital information presented on the display. Some icons, objects or menus are out of reach, so users move along the table's sides to reach the object of interest. When a user moves to a new position, the content they own, such as pictures, videos or opened windows, can be automatically moved to follow their current position. Tracking users' positions smoothly and accurately gives museum and gallery owners the opportunity to offer a range of new interaction methods to their visitors. Additional information can also be inferred from the system, for example deciding which touches belong to which user and which hand using simple heuristics: the leftmost touch belongs to the left hand and the rightmost touch to the right hand of the user whose position is closest to the touch.

In this paper, we describe a setup of IR distance sensors around a multitouch tabletop that senses users' presence and tracks their positions around the display. Sensing the presence and movement of users around a tabletop display is important because their location information along any side of the table allows the system to assist them in using the display more efficiently by providing proactive interaction. For example, a user could touch an icon or a menu item and the resulting window would open automatically in front of him or her. To achieve this, the table needs to know who initiated the touch. To discriminate touches, we implemented two reliable methods that match touches to the user who initiated them, which we describe in the methodology section of this paper. Touch discrimination is useful, for example, in an image gallery application: by drawing an imaginary line between an object and the user's body position, we can bring the object closer to the user by snapping it to his or her position and re-orienting it.

The number of users working at the table and their locations at any particular time are continuously recorded; these data can later be inspected and analysed to see how users interact with the content on the table.

In this paper we present a practical, easy to assemble, reliable and low-cost user tracking system for a multitouch tabletop display. We use 12 infrared (IR) distance sensors attached to the corners of the table display. We use a minimal number of sensors to keep the installation simple and the cost low, while still tracking users accurately, quickly and reliably with little processing power. The user's location is computed at a reasonably high resolution of about 14 pixels between adjacent positions (1920 pixels divided by 138 cm, i.e. roughly 1 cm of physical movement per step). The resolution could be doubled by measuring sensor distance down to 0.5 cm precision, but this is not needed, and the spared computing power can be used for more important tasks such as object rendering and fluid user interaction. The prototype system was built around a 65-inch multitouch table in our prototyping hall at the European Research Institute, University of Birmingham, UK.

RELATED WORK (LITERATURE STUDY)

There have been a number of attempts at making tabletops aware of users standing around them, tracking users' positions as they stand or walk around the table, and identifying touches and matching them to the users who initiated them. DiamondSpin (Shen 2004) is reported to be one of the earliest systems to track users' locations around a multitouch tabletop. The static positions of receiver pads directly indicate where the users are located, but users are required to remain in contact with their pads at all times. Another attempt was made by Tănase et al. (2008), who used 12 IR sensors in total, with 3 sensors on each side of the table facing outward perpendicular to the table's edge. This setup uses a small number of sensors to sense users' presence and track their positions, but at low resolution (5 user positions on each side of the table). A more costly setup named Medusa (Annett et al. 2011) was constructed to sense users' presence and track their positions around a tabletop display using 138 IR sensors in total; Medusa also tracks hands and arms above the display. Of the 138 sensors, 38 are used to sense and track users' positions around the table. Both of these setups require additional sensors and I/O boards for larger tables, and Medusa in particular is difficult to port to new tables because of its crowded arrangement of sensors and cables.

Klinkhammer et al. (2011) implemented a rather costly and customised setup capable of tracking users' positions around a multitouch table by placing 96 IR sensors around the table, an arrangement similar to Medusa.

The task of identifying users has been researched by many. One of the earliest papers, dating back to 1997 (Addlesee et al. 1997), implemented a smart floor system using a Hidden Markov Model to perform user recognition; a similar system identified users by measuring a ground reaction force (GRF) profile for each user. As technology has developed, new approaches have been explored, for example identifying users with unique tokens by combining optical recognition with RFID tags (Olwal & Wilson 2008), using a ring that flashes a unique ID sequence to the table (Roth et al. 2010), combining an electronic wristband with an analysis of hand orientation (Meyer & D. Schmidt 2010), recognising tabletop users by their shoes (Richter et al. 2012), and many more.

Researchers have been studying how users interact with tabletop displays, and a project by Wang et al. (2009) introduced an FO algorithm to try to identify the owner of a touch on a vision-based multitouch tabletop display. It is, however, not very practical, as it requires the touch gesture to follow a specific rule called 'oblique landing'. Zhang et al. (2012) agree that the unintuitive nature of this oblique landing constraint makes it unreliable without extensive user training.

METHODOLOGY

Our prototype was built on a large 65-inch multitouch table supplied by Mechdyne Corporation, located in the prototyping hall at the IBM Visual and Spatial Technology Hub of the University of Birmingham, United Kingdom. The table measures 172 cm (width) x 108 cm (height), with a screen width of 138 cm and a screen height of 76 cm. We use a total of 12 Sharp IR distance sensors, 8 with a 10-80 cm effective detection range (2Y0A21) and the remaining 4 with a 20-150 cm detection range (2Y0A02). These sensors were chosen because their performance is largely unaffected by ambient light and by the colour of the reflective object. All the sensors are connected to two PhidgetInterfaceKit 8/8/8 I/O processing boards. Refer to the diagram below for the sensor setup.

Fig. 1. Tracking IR sensor setup for the multitouch tabletop display.

All eight 10-80 cm sensors are placed at the corners of the table, with each sensor aligned vertically to the table's edge as shown below.

Fig. 2. IR sensor placed at the end of the table's edge, facing slightly outward at about 35 degrees from the table.

The sensitivity of the 10-80 cm IR sensor drops drastically once the detection distance reaches about 30 cm. To overcome the slow response when the object of interest is 30 cm or more from the sensor, we added the longer range 20-150 cm IR sensor as a complement on the longer sides of the table. The 10-80 cm IR sensor (S4 in Fig. 1) tracks objects in the range of 10 cm to 30 cm, and the 20-150 cm IR sensor (S3 in Fig. 1) takes over the tracking when the object of interest moves beyond 30 cm from the sensor. This results in smooth tracking of an object moving along the side of the table.

The same setup is applied at the other end of the table (S5 & S6 in Fig. 1). This is because when the object moves beyond 80 cm, the sensors S3 & S4 can no longer track it smoothly, since the 20-150 cm sensor's performance deteriorates when the distance between object and sensor exceeds 80 cm (see Fig. 3b below); hence the use of a symmetric setup (S5 & S6 mirror S3 & S4 in Fig. 1).

Fig. 3a. Analog output vs. distance to a reflective object for the 10-80 cm IR sensor. The usable range for fast feedback is shaded.

Fig. 3b. Analog output vs. distance to a reflective object for the 20-150 cm IR sensor. The usable range for fast feedback is shaded.

Fig. 3c. The optimum performance range of each IR sensor type is indicated by a dotted rectangle for the 10-80 cm sensor and a dashed rectangle for the 20-150 cm sensor. Both ranges are overlaid to give a clear view of the combined distance tracking range used in the tracking algorithm.

Figures 3a and 3b above show how the sensor's output voltage change becomes insignificant as the object moves farther from the sensor, making the sensor insensitive to distance changes at long range. We observed this in a test in which the value returned by the sensor refreshed at longer intervals the farther the object was moved from the sensor, and refreshed faster when the object was nearer. Given this limitation, we chose the 10-80 cm sensor to cover the 10-30 cm distance range and the 20-150 cm sensor to track objects in the 30-70 cm range, as sketched below.
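
As an illustration of this handover, the following minimal C# sketch combines the readings of one short-range/long-range sensor pair into a single distance estimate. It assumes the raw analog values have already been converted to centimetres; the class, method and parameter names are ours, not those of the prototype code.

public static class SensorFusion
{
    // Combine one short-range (10-80 cm) and one long-range (20-150 cm)
    // reading into a single distance estimate, in centimetres.
    // Returns null when neither sensor is inside its trusted band.
    public static double? CombineDistances(double shortRangeCm, double longRangeCm)
    {
        // Trust the 10-80 cm sensor only inside its fast-feedback band.
        if (shortRangeCm >= 10.0 && shortRangeCm <= 30.0)
            return shortRangeCm;

        // Beyond 30 cm the 20-150 cm sensor takes over, up to about 70 cm.
        if (longRangeCm > 30.0 && longRangeCm <= 70.0)
            return longRangeCm;

        return null;
    }
}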

The sensor processing is programmed in C# using the Phidgets library for Windows. We used two Phidget I/O boards (PhidgetInterfaceKit 8/8/8) to connect all the sensors: the eight short-range IR sensors are connected to the first interface kit board and the remaining four sensors to the second board. The basic algorithm of the implementation is presented in the pseudocode below, based on a full-HD screen of 1920 x 1080 pixels with a physical width and height of 138 cm x 74 cm.

(1.A)

(1.B)

where the inputs are the distance values of Sensor no. 1 and Sensor no. 2, half of the screen's height in cm, half of the screen's height in pixels, and C, an average person's body width in cm, taken to be 50.

(1.C)

(1.D)

(2.A)

(2.B)

where the inputs are the distance values of Sensor no. 3 and Sensor no. 4, half of the screen's width in cm, and half of the screen's width in pixels.

(2.C)

(2.D)

where the screen's height in pixels is also used; the 10-80 cm sensor of this pair covers the effective range of 10-30 cm, and the 20-150 cm sensor takes over the tracking when the distance goes beyond 30 cm.

(3.A)

(3.B)

where the inputs are the distance values of Sensor no. 5 and Sensor no. 6.

(3.C)

(3.D)

(4.A)

(4.B)

where the inputs are the distance values of Sensor no. 7 and Sensor no. 8.

(4.C)

(4.D)

where the screen's width in pixels is also used.

(5.A)

(5.B)

where the inputs are the distance values of Sensor no. 9 and Sensor no. 10.

(5.C)

(5.D)

(6.A)

(6.B)

where the inputs are the distance values of Sensor no. 11 and Sensor no. 12.

(6.C)

(6.D)
Fig. 4. The algorithm for the tracking of users P1, P2, P3, P4, P5 and P6 (refer to Fig.1).
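
To give a concrete sense of what these equations compute, the following C# sketch shows one plausible way of mapping a corner-sensor reading on a short side of the table to a vertical on-screen position, using the quantities listed above (the sensor distances, the body-width constant C = 50 cm, and the centimetre-to-pixel scale). It is a simplified illustration under our own assumptions, not a transcription of the equations in Fig. 4.

public static class ShortSidePosition
{
    const double BodyWidthCm = 50.0;      // C in the text
    const double ScreenHeightCm = 76.0;   // physical screen height, as stated earlier
    const double ScreenHeightPx = 1080.0;
    const double PxPerCm = ScreenHeightPx / ScreenHeightCm;

    // d1 and d2 are the distances (cm) reported by the two corner sensors
    // on the same short side; null means "nothing in range".
    // Returns the estimated Y position of the user's body centre in pixels.
    public static double? EstimateY(double? d1, double? d2)
    {
        if (d1.HasValue)   // body seen from the first corner
            return (d1.Value + BodyWidthCm / 2.0) * PxPerCm;
        if (d2.HasValue)   // body seen from the opposite corner
            return ScreenHeightPx - (d2.Value + BodyWidthCm / 2.0) * PxPerCm;
        return null;
    }
}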

If a previously detected body has not been updated for 500 ms, that body is no longer considered present; a sketch of this timeout rule is given below. Using this technique, we can robustly track two people along each long side of the table and one person along each short side.
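
The following C# sketch illustrates the presence timeout. The TrackedBody type, the slot indices (0-5 for positions P1-P6) and the method names are illustrative rather than the prototype's actual classes.

using System;
using System.Collections.Generic;
using System.Linq;

public class TrackedBody
{
    public int SlotIndex;       // 0-5, corresponding to positions P1-P6
    public double ScreenX;      // estimated position in pixels
    public double ScreenY;
    public DateTime LastSeen;   // time of the last sensor update
}

public class BodyTracker
{
    static readonly TimeSpan Timeout = TimeSpan.FromMilliseconds(500);
    readonly List<TrackedBody> bodies = new List<TrackedBody>();

    // Called whenever a sensor pair produces a new position estimate.
    public void Update(int slotIndex, double screenX, double screenY)
    {
        var body = bodies.FirstOrDefault(b => b.SlotIndex == slotIndex);
        if (body == null)
        {
            body = new TrackedBody { SlotIndex = slotIndex };
            bodies.Add(body);
        }
        body.ScreenX = screenX;
        body.ScreenY = screenY;
        body.LastSeen = DateTime.UtcNow;
    }

    // Called periodically: bodies not refreshed within 500 ms are dropped.
    public void PruneStale()
    {
        bodies.RemoveAll(b => DateTime.UtcNow - b.LastSeen > Timeout);
    }

    public IReadOnlyList<TrackedBody> Bodies { get { return bodies; } }
}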

For the discrimination of touches on the display, we implemented two reliable methods to match touches to the user who initiated them. The first uses a heuristic approach: we assume that a user will touch and gesture only on objects within reach, so we define a radius of about 40 cm around the user's body position and assign any touch within that radius to the user. The value of 40 cm is based on the distance between the body and the farthest reach of the hand on the table's surface without much bending of the body. For the second method, our algorithm maps touch points outside that radius to users. A user will normally touch a far-away object and drag it to a position near the body before performing any other interaction gestures on it. We anticipate this dragging action: the algorithm draws a vector from the object towards the table's edge, and if the vector intersects a user's position, the touch is assigned to that user. The confidence of the identification can be further increased by measuring the change in the user's position: if the object is on the right side of the user, he will lean his body (not his feet) a little to the right to reach for it. A sketch of both rules follows the figures below.

Fig. 5. A touch within the 40 cm radius of a user is assumed to belong to that user.

Fig. 6. A touch outside the 40 cm radius of a user. Our method computes the touch dragging vector to identify the owner of the touch.
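
As an illustration of the two rules, the following C# sketch assigns a touch to a user either by the 40 cm reach radius or by the direction of the drag. The types, the coordinate convention (centimetres in the table plane) and the perpendicular-distance test standing in for the vector-intersection check are our own simplifications.

using System;
using System.Collections.Generic;

public struct PointCm { public double X, Y; }

public static class TouchOwnership
{
    const double ReachRadiusCm = 40.0;

    // Rule 1: a touch within 40 cm of a user's body position belongs to that user.
    public static int? OwnerByRadius(PointCm touch, IList<PointCm> users)
    {
        for (int i = 0; i < users.Count; i++)
        {
            double dx = touch.X - users[i].X, dy = touch.Y - users[i].Y;
            if (Math.Sqrt(dx * dx + dy * dy) <= ReachRadiusCm)
                return i;
        }
        return null;
    }

    // Rule 2: for a distant object being dragged, extend the drag direction
    // and assign the touch to the user whose body position lies closest to
    // that line, ahead of the drag, within the reach radius.
    public static int? OwnerByDragVector(PointCm dragStart, PointCm dragNow, IList<PointCm> users)
    {
        double vx = dragNow.X - dragStart.X, vy = dragNow.Y - dragStart.Y;
        double len = Math.Sqrt(vx * vx + vy * vy);
        if (len < 1.0) return null;                  // not enough movement yet
        vx /= len; vy /= len;

        int? best = null;
        double bestDist = double.MaxValue;
        for (int i = 0; i < users.Count; i++)
        {
            double wx = users[i].X - dragNow.X, wy = users[i].Y - dragNow.Y;
            double along = wx * vx + wy * vy;        // distance ahead along the drag
            if (along <= 0) continue;                // user is behind the drag direction
            double perp = Math.Abs(wx * vy - wy * vx); // distance off the drag line
            if (perp <= ReachRadiusCm && perp < bestDist) { bestDist = perp; best = i; }
        }
        return best;
    }
}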

ANALYSIS OF SIMULATION RESULTS

Our tracking system can currently track up to 6 people at once, as illustrated in Fig. 1 where users are labelled P1, P2, P3, P4, P5 and P6: a maximum of 2 on each long side of the table and a maximum of 1 on each short side. The sensors start tracking a new user when he or she approaches to within about 10 cm of the table. The tracking is very smooth and accurate as long as users stay within about 10 cm of the table's edges. The system continuously tracks users' positions and is aware when users walk away from the table. Because a small number of co-operating sensors is used, the tracking is fast and smooth and consumes only about 5-7% of the CPU.

Our tracking system also acts as a streaming server that broadcasts the actual position (X and Y coordinates) of up to six people on the screen as UDP packets at a 50-millisecond interval. To demonstrate the robustness and smoothness of the tracking we programmed two simple applications: 1) an application that displays an object that follows a person standing and walking around the table, and 2) a client application (separate from the tracking application), a Pong game in which paddles appear at the exact positions of users standing at the table's sides and then follow the users' body positions as they move. Up to 6 users can play simultaneously. Screen captures of these applications are shown below, and a sketch of the broadcast loop follows the figures.

Fig. 7. A user's position being tracked, indicated by a circle. This application also acts as the UDP streamer.

Fig. 8. A client application: a Pong game receiving users' positions around the table display.
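
A minimal sketch of such a broadcast loop using the standard .NET UdpClient is shown below. The port number, the plain-text payload format and the class name are assumptions; the paper does not specify the packet layout.

using System;
using System.Net;
using System.Net.Sockets;
using System.Text;
using System.Threading;

public class PositionStreamer
{
    const int Port = 9050;                       // assumed port
    readonly UdpClient udp = new UdpClient();
    readonly IPEndPoint target = new IPEndPoint(IPAddress.Broadcast, Port);

    // snapshotProvider returns the current positions as text,
    // e.g. "1,950,1060;2,1400,20" for two tracked users.
    public void Run(Func<string> snapshotProvider)
    {
        udp.EnableBroadcast = true;
        while (true)
        {
            byte[] payload = Encoding.ASCII.GetBytes(snapshotProvider());
            udp.Send(payload, payload.Length, target);
            Thread.Sleep(50);                    // 50 ms broadcast interval
        }
    }
}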

Our prototype has a small limitation when users move beyond the effective detectable distance (10 cm) from the table's edge: as a user moves more than 10 cm away from the table, the system calculates a false position for that user. This case is illustrated in the figure below. The limitation disappears, however, once the user moves beyond the sensors' line of sight, at which point the user's position is simply no longer reported.

Fig. 9. The system might interpret the user as moving to the right when in fact the user is moving back, away from the table. This happens only while the user is moving away from the table, and the issue disappears once the user goes beyond the maximum detectable distance from the table's edge. As shown in the figure above, the user is initially 12 cm from the left edge of the display; as he moves back away from the table, the apparent distance from the table's left edge increases to 16 cm, and shortly afterwards he is no longer trackable.

CONCLUSION AND FUTURE WORK

In this paper we have presented a setup for tracking users around a tabletop and methods to associate touches with the users who initiated them during interaction with the tabletop display. Our setup is very cost effective and simple to construct, making it portable and easy to duplicate or move to another tabletop display. Using our setup, the tracking system can be made to work on any tabletop display in less than an hour.

Potential applications include personalisation of the user experience, where a user could have a personal space on the tabletop with personalised menus. Content owned by the user, such as pictures, videos or opened windows, could automatically move to follow the user's current position. Owned objects would be locked to the current user's personal space so that other people cannot interact with them. The proximity system could also be used for greeting and auto-hibernation: when no users are present, an attract application is displayed to invite users to come closer and use the tabletop. When a new user approaches the table, he or she is greeted and can be shown a quick tutorial on the available interaction styles, such as pinch to zoom or two-finger rotate. Other users will learn these gestures from earlier users by visual observation.

Because the system continuously tracks users' presence, it can convey to users that it recognises their presence and behaviour, for example by displaying a persistent visual representation (a symbol) for each identified user.

Two-hand gestures could also be introduced, since the system can disambiguate between touches made with the left or right hand of a particular user. For example, when the left hand is touching down, swiping the right hand left and right would move the selected 3D object left and right; when the left hand is not touching down, the same swipe would rotate the 3D object about its Y-axis.

Tracking the user's position could also benefit a 3D VR navigation interface: using the body's position relative to the middle of the table's long side, moving the body to the left or right would walk the avatar or camera to the left or right in the VR world. Tracking the user's position also allows for physical game UIs, where the user's body position along the table's long side is used to avoid crashing into objects that are running, moving or rolling towards the user in the 3D world. Another example is a rotating object at the centre of the table firing bullets, missiles or poisonous objects towards characters that follow users' positions around the table.

In future research we plan to improve the experience of visitors in museums and gallery spaces by combining user identification with the proximity-sensing tabletop to provide personalisation.


