Generating Visual Text Messages Computer Science Essay


02 Nov 2017


Neelam Jain, Aishwarya Kambli, Nikita Jhunjhunwala

Students, KJSIEIT

[email protected], [email protected], [email protected]

Abstract: Augmented reality is a type of virtual reality that aims to duplicate the world's environment in a computer. Augmented reality has now entered our everyday lives, including social networking websites. While communicating on these websites, a user is unable to clearly express his or her feelings when there is a language barrier between the sender and the receiver. To avoid this, we use the technique of generating images from text. This technique increases the effect of visual communication and gives the user the freedom to select the kind of content to be displayed. In this technique, we first preprocess the text message; a Parts of Speech (POS) Tagger then identifies the keywords, from which the user's emotion is determined. Relevant thumbnail images for the detected keywords are also retrieved from the database. According to his preference, the user may select an avatar, a background and thumbnail images from the given set of classified images and reposition them to generate an intermediate image. The Interactive Genetic Algorithm is then applied to this intermediate image to generate a set of re-colored images, from which the user selects the final image to be used for communication. Our augmented visual communication method is implemented on desktop computers.

Key Words: Visual text messages, POS Tagger, Interactive Genetic Algorithm (IGA)

1. INTRODUCTION

A social networking service is an online service, platform or site that focuses on facilitating the building of social networks or social relations among people who share interests, activities, backgrounds or real-life connections. While communicating on social networking websites, one of the main problems faced by the user is the limit on the number of characters in a message. The user is also unable to clearly express his feelings if there is a language barrier between the sender and the receiver. To avoid this, we use the technique of generating images from text.

In order to generate images from text, it is first necessary to develop a method for connecting the text to images. Using a Parts of Speech Tagger, we detect the keywords within the text. However, text messages sometimes do not obey grammar rules, and special characters such as emoticons are also used in the text. It is therefore also necessary to handle these abnormalities within the text. This task is performed by a pre-processor: the pre-processing module deals with abbreviations, word spacing and emoticons before sending the results to the POS Tagger.

After detecting the keywords an intermediate image is generated by the system. In order to verify that the intermediate image correctly reflects the user’s feelings it is desirable to detect the emotional language used in the text. Also, to reflect the feelings and emotions of the author it is necessary to change the colors in the image. For this purpose a re-coloring algorithm is used. An interactive genetic algorithm is also used to learn about the user’s own color perception to perform optimization based on human evaluation.

In this paper, we propose a new augmented visual messaging approach in the field of social networking. We develop an author-friendly image generation method to reflect the author’s feelings and emotions present within the text. The rest of this paper is organized as follows. Section 2 describes existing system. Section 3 explains the need for this approach. Section 4 explains how to generate visual images for messaging. Section 5 presents various components required to develop the image from text. Section 6 discusses future scope and application of our approach to generate image from input text.

2. EXISTING SYSTEM

Some of the existing software systems for text-to-scene conversion are: the CLOWNS [1] system, which generated spatial descriptions and simple animations of motion verbs from texts, and NALIG, which processed simple fragments of texts regarding spatial relations between objects. CLOWNS and NALIG are both applicable only to micro-worlds [2].

One important system is WordsEye, in which 3D models are generated from simple written descriptions [3]. This system is powerful but not intended to deal with realistic texts. CogVisys is another text-to-scene and scene-to-text converter. The SWAN system converted small fairytales into animated cartoons. These systems cannot handle real-life texts but are restricted to simple narratives.

Another important system is the text-to-scene conversion system Carsim [4], which converts texts into animated images. This system automatically illustrates newspaper reports of traffic accidents. Although real texts are handled, emotions are not dealt with.

3. PROBLEM STATEMENT

Social networking websites have exploded in recent years, and can be used to connect people in both a personal and professional context. Across the world there are now a huge number of public and private online social networks, with the best-known including Facebook, Twitter, Orkut, etc.

While communicating on social networking websites, one of the main problems faced by the user is the limit on the number of characters in a message. Generally, users are not allowed to post content that exceeds the specified limit; they have to either remove portions of the text or split it over multiple messages.

Also these websites mainly allow only English language for communication. This creates a language barrier between the sender and receiver. The user may not be able to express his feelings clearly if he is not well versed with the language.

Our system tries to overcome these drawbacks by simply converting the text messages into an image. Thus long text messages can be compressed into a single image. It also facilitates communication among people from different nationalities since there is no linguistic barrier in visual communication. One of the important features of this system is that it also considers the emotion in the text. Thus the image rightly depicts the user’s feelings and mood.

4. PROPOSED SYSTEM

We aim to implement the system of generating visual text messages in the English language on desktop computers. The algorithm to generate the image is as follows:

Algorithm-

Input the text from the user.

Process the text using the pre-processing module to remove abnormalities in the grammar and to handle emoticons used in the text message.

Parse the text using POS Tagger to extract the keywords and retrieve the thumbnail images related to the keywords from the database.

Detect emotions in the text.

Generate an image consisting of avatars, objects and a background.

The user selects the image of his choice. If the user dislikes the generated images, the Interactive Genetic Algorithm (IGA) is used to learn his color preferences.

Send the selected image to the receiver.

Thus, the two most important stages in this process are extracting keywords from the text and matching them against the database, and analyzing the emotions in the text. The IGA also plays an important role in learning the color preferences of the user. Each of the components required to generate the image is explained in the following section.
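The end-to-end flow above can be sketched in a few lines of Python. Everything here is illustrative: the emoticon list, thumbnail database and emotion dictionary are toy stand-ins for the system's real data, and the function names are our own, not part of any existing library.

```python
import re

# End-to-end sketch of the pipeline: preprocess -> extract keywords ->
# detect emotion -> gather thumbnails for the intermediate image.

EMOTICONS = {":)", ":(", ":D", ";)"}
THUMBNAILS = {"dog": "dog.png", "park": "park.png", "run": "run_pose.png"}
EMOTION_WORDS = {"happy": "joy", "sad": "sadness", "angry": "anger"}

def preprocess(text):
    """Strip emoticons before POS tagging."""
    return " ".join(w for w in text.split() if w not in EMOTICONS)

def extract_keywords(text):
    """Stand-in for POS tagging: keep words found in the thumbnail database."""
    return [t for t in re.findall(r"[a-z]+", text.lower()) if t in THUMBNAILS]

def detect_emotion(text):
    """Dictionary lookup; 'neutral' when no emotion word is present."""
    for token in re.findall(r"[a-z]+", text.lower()):
        if token in EMOTION_WORDS:
            return EMOTION_WORDS[token]
    return "neutral"

def generate_intermediate(message):
    """Keywords -> thumbnails, plus the detected emotion, ready for composition."""
    text = preprocess(message)
    return {
        "thumbnails": [THUMBNAILS[k] for k in extract_keywords(text)],
        "emotion": detect_emotion(text),
    }

print(generate_intermediate("I am happy :) my dog can run in the park"))
# {'thumbnails': ['dog.png', 'run_pose.png', 'park.png'], 'emotion': 'joy'}
```

In the real system the composed intermediate image would then pass to the IGA re-coloring loop described in section 5.4.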

5. METHODOLOGY

Figure 1 shows the image generation procedure, which is performed using the following components.

Fig.1 Image Generation Procedure

5.1 Processing of Emoticons

Emoticons are frequently used in text messages. Since text messages containing emoticons cannot be directly processed by the POS Tagger, the text is first passed to a pre-processing module, which strips the emoticons from the text.

Stripping out the emoticons lets the POS Tagger work from the remaining features of the text.
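A minimal pre-processor along these lines might look as follows. The abbreviation table and the emoticon pattern are illustrative assumptions, not the system's actual rules.

```python
import re

# Illustrative pre-processor: strips emoticons and expands common
# abbreviations before POS tagging. Splitting and rejoining on
# whitespace also normalizes word spacing.

ABBREVIATIONS = {"u": "you", "r": "are", "gr8": "great", "pls": "please"}
EMOTICON_RE = re.compile(r"[:;=][-']?[)(DPpO/\\|]")

def preprocess(message):
    message = EMOTICON_RE.sub("", message)               # strip emoticons
    words = message.split()                              # normalize spacing
    words = [ABBREVIATIONS.get(w.lower(), w) for w in words]
    return " ".join(words)

print(preprocess("r  u ok :)  gr8 news"))
# are you ok great news
```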

5.2 Parts Of Speech Tagger

Parts of Speech tagging is mainly done to identify the nouns, verbs and adjectives in the input text. Since nouns are most likely to correspond to objects in the database, we need to extract noun phrases from the text. Verbs are required to identify the actions performed by these objects, and adjectives help in identifying the user's emotion.
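The tagger's role can be illustrated with a toy lexicon-based filter. A real system would use a trained POS tagger; the lexicon here, and the crude suffix fallback for unknown words, are stand-ins for illustration only.

```python
# Toy POS filter separating the three word classes the system uses:
# nouns -> database objects, verbs -> actions, adjectives -> emotion cues.

LEXICON = {
    "dog": "NN", "park": "NN", "ball": "NN",
    "runs": "VB", "throws": "VB",
    "happy": "JJ", "big": "JJ",
}

def tag(word):
    """Lexicon lookup with a crude suffix fallback for unknown words."""
    if word in LEXICON:
        return LEXICON[word]
    if word.endswith("ing") or word.endswith("ed"):
        return "VB"
    if word.endswith("y"):
        return "JJ"
    return None  # untaggable words are dropped in this toy version

def extract(text):
    tagged = [(w, tag(w)) for w in text.lower().split()]
    return {pos: [w for w, t in tagged if t == pos] for pos in ("NN", "VB", "JJ")}

print(extract("the happy dog runs in the park"))
# {'NN': ['dog', 'park'], 'VB': ['runs'], 'JJ': ['happy']}
```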

5.3 Emotion Detection Engine

After POS tagging, the next step is to construct a dictionary to identify the emotions. The emotion detection engine requires every word to be properly tagged in order to keep the response time to a minimum. The engine also requires each possible emotional word to be marked with its related intensity. We therefore construct a database containing words used in our daily communication. The database includes three fields: a word field, a word category field and an emotional tag field. The word field contains all the words, and the word category field contains the corresponding tag. The emotional tag field was added so that emotional words can be extracted as quickly as possible. The emotion detection engine is shown in figure 2 [5].

Algorithm-

Split the input sentence into words.

Check the tag dictionary to find the tag category of each word.

If a word is not found, perform suffix and prefix analysis to derive its tag.

If it is an emotional word store its emotion category and the intensity of the emotion.

If the emotion is in negative form discard the word.

If the emotional word follows an adjective increase its intensity.

If the same sentence contains more than one emotional word and they are connected by a conjunction then combine the two emotion states.

If the sentence contains no emotion word then identify the emotion as neutral.

Fig.2 Emotion Detection Engine
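A simplified sketch of the steps above is given below. The word lists are placeholders for the paper's dictionary, adverbial intensifiers such as "very" stand in for the intensity-raising rule, and returning every detected (category, intensity) pair stands in for combining emotion states across a conjunction.

```python
# Dictionary-based emotion detection: lookup, negation discard,
# intensifier boost, and a neutral default when nothing matches.

EMOTION_DICT = {"happy": ("joy", 2), "thrilled": ("joy", 3), "sad": ("sadness", 2)}
INTENSIFIERS = {"very", "really", "so"}
NEGATIONS = {"not", "never", "no"}

def detect_emotions(sentence):
    words = sentence.lower().split()
    emotions = []
    for i, w in enumerate(words):
        if w not in EMOTION_DICT:
            continue
        category, intensity = EMOTION_DICT[w]
        if i > 0 and words[i - 1] in NEGATIONS:
            continue                      # negated emotion word: discard
        if i > 0 and words[i - 1] in INTENSIFIERS:
            intensity += 1                # preceding intensifier raises intensity
        emotions.append((category, intensity))
    return emotions or [("neutral", 0)]   # no emotion word -> neutral

print(detect_emotions("I am very happy but not sad"))
# [('joy', 3)]
```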

5.4 Interactive Genetic Algorithm (IGA)

The interactive genetic algorithm is a genetic algorithm in which the evaluation step is handled interactively by the user. Initially, the first generation of offspring is generated from the basic color templates. The author then evaluates the individuals, and the evaluation scores are used as fitness values within the interactive genetic algorithm. From the evaluation results, the second generation of offspring is produced by the genetic operations of selection, crossover, mutation and copy. This procedure is iterated and is finally terminated by the author's decision when he finds a desirable offspring. After the termination of the interactive genetic algorithm, we can extract the chromosome information for H, S and V. Finally, the author's preferred color templates for each emotion are generated. Once the author's preference has been learned in the training phase, the author can generate images in his own color style [1]. The image re-coloring process is shown in figure 3 [6].

Fig 3. Flow chart for re-coloring image
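The genetic loop can be sketched as follows, with (H, S, V) triples as chromosomes. The `rate` function is only a placeholder for the author's interactive scoring; in the real system a human assigns the fitness values on screen, and the loop terminates at the author's decision rather than after a fixed number of generations.

```python
import random

# IGA sketch: evolve (H, S, V) color chromosomes toward high fitness.
# `rate` simulates the author's evaluation (here: prefer warm, vivid colors).

def rate(hsv):
    h, s, v = hsv
    return (1 - min(h, 1 - h)) + s + v  # hue near red, high saturation/value

def crossover(a, b):
    cut = random.randint(1, 2)
    return a[:cut] + b[cut:]

def mutate(hsv, p=0.2):
    return tuple(min(1.0, max(0.0, c + random.uniform(-0.1, 0.1)))
                 if random.random() < p else c for c in hsv)

def evolve(generations=20, pop_size=8):
    pop = [tuple(random.random() for _ in range(3)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=rate, reverse=True)
        parents = pop[: pop_size // 2]                    # selection + copy
        children = [mutate(crossover(*random.sample(parents, 2)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=rate)

random.seed(0)
print(evolve())  # the (H, S, V) triple the simulated author scores highest
```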

6. SCOPE AND APPLICATIONS

Since their introduction, social network sites (SNSs) such as MySpace, Facebook, and Twitter have attracted millions of users, many of whom have integrated these sites into their daily practices.

Using images instead of text messages for communication greatly increases the effect of visual communication. It allows the user to select from a wide variety of thumbnail images, avatars and backgrounds. The emotion detection procedure is used to predict the user's mood, and the Interactive Genetic Algorithm recolors the image according to his preferences.

This augmented communication method can be widely used in chatting environments or on social networking websites to allow the user to communicate with a large number of other users effectively and precisely.

7. CONCLUSION

In this paper, we propose a new approach to generating visual text messages. To generate an image from the text, we preprocess the text to handle abnormalities in the grammar and special characters such as emoticons. After that, we use the POS Tagger to detect the keywords. These keywords are passed to the emotion detection engine to detect the emotions. We then retrieve the thumbnail images related to the keywords. By selecting the images for the background, avatar and objects, the intermediate visual image is generated. This image is then re-colored using the interactive genetic algorithm.

In the future, our approach can be extended to other languages. It can also be implemented on smartphones.


