Data Mining Using Intelligent Agents


02 Nov 2017


Information retrieval (IR) draws on techniques for the exploration of information storage, classification, extraction, indexing and browsing. The main challenge of IR systems originates from the semantic gap problem, i.e. the semantic difference between a user's query representation and the internal representation of an information item in a collection. Data mining techniques find their applicability in such scenarios. Data mining concepts and techniques applied to the WWW and its existing technologies are termed web mining. Web mining can change the way results are currently provided for user queries, i.e. as a ranked list of keyword-based results. This work focuses on providing an agent-based framework for mining semantic web contents employing clustering techniques. Clustering will provide the user with a query-relevant cluster of web contents, which will better satisfy user requirements and make optimal use of web surfing time. In recent years, agents have become a very popular paradigm in computing because of their flexibility, modularity and general applicability to a wide range of problems.

Technological developments in distributed computing, robotics and the emergence of object-orientation have given rise to technologies for modeling distributed problem solving. The inherent parallelism and complexity of classification and of discovering patterns in large amounts of data can, in this context, be delegated to intelligent software agents. In this paper, the agent paradigm, its main applications and the use of this technology in data mining are briefly discussed.

INTRODUCTION

There is an increasing need to retrieve large numbers of documents quickly and intelligently through worldwide information networks. These document databases are non-structured; mainly textual documents are used for retrieval. With the explosive growth of information sources available on the Internet and in business, government, and scientific databases, it has become increasingly necessary for users to utilize automated and intelligent tools to find the desired information resources, and to track, analyze, summarize, and extract "knowledge" from them. These factors have given rise to the necessity of creating server-side and client-side intelligent systems that can effectively mine for knowledge. Therefore, the inherent parallelism and complexity of classification and of discovering patterns in large amounts of data can be delegated to intelligent software agents. Agents, special types of software applications, have become a very popular paradigm in computing in recent years. Some of the reasons for this popularity are their flexibility, modularity and general applicability to a wide range of problems. The recent increase in agent-based applications is also due to technological developments in distributed computing, robotics and the emergence of object-oriented programming paradigms. Advances in distributed computing technologies have given rise to agents that can model distributed problem solving. Besides, the object-oriented programming paradigm introduced important concepts into the software development process which are used in structuring agent-based approaches.

In this paper, the agent paradigm is briefly discussed with its definition and classifications. The communication framework is also presented. An agent model for learning and filtering information is examined. The concept of Web Mining and the place of existing agent-based applications in this scene is presented. Finally, possible utilization of agent paradigm in data mining process is analyzed and concluding remarks are also given.

WHAT IS AN AGENT?

Agents are defined as software or hardware entities that perform some set of tasks on behalf of users with some degree of autonomy [23]. In order to work for somebody as an assistant, an agent has to include a certain amount of intelligence, which is the ability to choose among various courses of action, plan, communicate, adapt to changes in the environment, and learn from experience. In general, an intelligent agent can be described as consisting of a sensing element that can receive events, a recognizer or classifier that determines which event occurred, a set of logic ranging from hard-coded programs to rule-based inference, and a mechanism for taking action [1] [4]. Other attributes that are important for the agent paradigm include mobility and learning. An agent is mobile if it can navigate through a network and perform tasks on remote machines. A learning agent adapts to the requirements of its user and automatically changes its behavior in the face of environmental changes. For learning or intelligent agents, an event-condition-action paradigm can be defined [2].

In the context of intelligent agents, an event is defined as anything that happens to change the environment or anything of which the agent should be aware. For example, an event could be the arrival of new mail, or a change to a Web page. When an event occurs, the agent has to recognize and evaluate what the event means and then respond to it. This second step, determining the condition or state of the world, can be simple or extremely complex depending on the situation. If mail has arrived, the event is self-describing; the agent may then have to query the mail system to find out who sent the mail and what the subject is, or even scan the mail text for keywords. All of this is part of the recognition component of the cycle. The initial event may wake up the agent, but the agent then has to figure out what the significance of the event is in terms of its duties. If the mail is from the user's boss, the message can be classified as urgent. This leads to the most useful aspect of intelligent agents: actions. The main issue in the use of intelligent agents is the concept of autonomy [16].
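The event-condition-action cycle described above can be sketched as a short Python loop. The event fields and the "mail from the boss is urgent" rule are illustrative assumptions, not part of any cited system:

```python
# Minimal sketch of an event-condition-action agent for mail events.
# The Event fields and the urgency rule are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Event:
    kind: str          # e.g. "mail_arrived", "page_changed"
    sender: str = ""   # filled in by the recognition step
    subject: str = ""

def recognize(event: Event) -> str:
    """Condition step: determine what the event means for the agent's duties."""
    if event.kind == "mail_arrived":
        if event.sender == "boss@example.com":   # hypothetical address
            return "urgent_mail"
        return "routine_mail"
    return "ignore"

def act(condition: str) -> str:
    """Action step: respond according to the recognized condition."""
    actions = {
        "urgent_mail": "notify user immediately",
        "routine_mail": "file into inbox",
        "ignore": "do nothing",
    }
    return actions[condition]

# Event -> condition -> action
e = Event(kind="mail_arrived", sender="boss@example.com", subject="Q3 report")
print(act(recognize(e)))   # -> notify user immediately
```

A real agent would run this cycle continuously, with the recognition step querying the mail system for sender and subject as described above.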

The user can give the responsibility of performing some time-consuming computer operations to this "smart" software. By this means, the user becomes free to move on to other tasks and can even disconnect from the computer while the software agent is running. Besides, the user does not have to learn how to perform the computer operation. In fact, intelligent agents can act as a layer of software providing the usability that many inexperienced users want from computer professionals. On the other hand, although autonomous to some degree as assistants, agents are not totally independent of other agents with which they share an environment [12]. When agents communicate and cooperate, it becomes possible to build organizations of agents where the net effect is greater than the sum of the parts. The type of interaction between agents can be described as cooperative when they share goals, and self-interested when their goals are totally independent. Agents having mutually exclusive goals may be competitive, viewing other agents as opposing parties. Indeed, the main source of the interest, richness and complexity of agent systems is the interaction between agents.

Agent-based systems can be designed by considering the inherent interdependence between the agents. It is important to manage cooperative agents through communication, negotiation, coordination and organizational division of tasks and responsibilities (CONCOR). When the agents are competitive, the challenge becomes to manage and minimize the negative effects of the agents' inherent independence. Besides, the capabilities and intelligence level of each agent, the communication structures, and the user interface should be considered. Nevertheless, the notion of an agent is meant to be a tool for analyzing systems, not an absolute characterization that divides the world into agents and non-agents [23].

A number of agent descriptions are examined in [9], and the attributes often found in agents are listed as: autonomous, goal-oriented, collaborative, flexible, self-starting, temporal continuity, character, communicative, adaptive, mobile. On the other hand, the main characteristics of tasks for which agent technology is found suitable include complexity, distribution and delivery, dynamic nature, information retrieval, high volume of data handling, routineness, repetitiveness, time criticality, etc. Some examples in which agent paradigms are frequently used include [12]:

taking advantage of distributed computing resources, as in multiprocessor applications and distributed artificial intelligence problems,

coordinating teams of interacting robots where each robot necessarily has a physically separate processor and is capable of acting independently and autonomously,

increasing system robustness and reliability in situations when an agent is destroyed, others can still carry out the task,

assisting users by reducing their work and information loads,

modeling groups of interacting experts, as in concurrent engineering and other joint decision-making processes,

simplifying the modeling of very complex processes as a set of interacting agents,

modeling processes that are normally performed by multiple agents, such as economic processes involving groups of buying and selling agents.

Types of Agents

The nature of intelligent agents is such that they are optimized to perform certain functions and tasks on behalf of a user or a computer system. A mapping devised by IBM, which plots agents on a graph by considering intelligence and agency, is given in [1]. The graph, shown in Figure 1, is used to compare different intelligent agents. On the intelligence axis, agents go from simply specifying user interfaces, to active reasoning through rule-based expert systems, all the way up to agents that can learn as they go. Here, intelligence is defined as the ability of the agent to capture and apply application domain-specific knowledge and processing to solve problems.

Agency is defined as the degree of autonomy the software agent has in representing the user to other agents, applications and computer systems. At a minimum, agents must run independently or asynchronously on their systems. In this respect, old PC-DOS terminate-and-stay-resident (TSR) programs that check keystrokes are simple-minded agents. At the next level of agency, an intelligent agent represents the user and interacts with the operating system. More advanced agents communicate with applications running on the system and, ultimately, interact with other intelligent agents. Several categories or types of agents have been defined, based on their abilities and, more often, on the task they are designed to perform. What differentiates them is their capability of doing that task. It seems that with intelligent agents, as with people, knowledge and adaptability differentiate the successful ones from the less effective ones. The major categories recognized are filtering agents, information agents, user interface agents, office or workflow agents, system agents, and brokering or commercial agents.

Filtering Agents. With the explosive growth of information generated each day, no one has the time to read through all of it. Filtering agents, as their name implies, act as a filter that allows information of particular interest or relevance to users to get through, while eliminating the flow of useless or irrelevant information. Filtering agents work in several ways, of which the most widely used is one where the user provides a template or profile of the topics or subjects that are of interest. When presented with a list of documents in a database, a filtering agent scans the documents and ranks them based on how well their content matches the user's area of interest.
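The profile-based ranking just described can be sketched in a few lines of Python. The keyword-overlap score is a deliberate simplification; real filtering agents use weighted term vectors, and the profile and documents here are invented for illustration:

```python
# Sketch of a profile-based filtering agent: rank documents by how many
# of the user's profile terms they contain.

def score(document: str, profile: set[str]) -> int:
    # How well does the document's content match the user's interests?
    words = set(document.lower().split())
    return len(words & profile)

def rank(documents: list[str], profile: set[str]) -> list[str]:
    # Most relevant (highest-scoring) documents first.
    return sorted(documents, key=lambda d: score(d, profile), reverse=True)

profile = {"agents", "mining", "clustering"}
docs = [
    "stock prices fell sharply today",
    "intelligent agents for data mining and clustering",
    "agents in distributed systems",
]
print(rank(docs, profile)[0])  # -> intelligent agents for data mining and clustering
```

A learning filter, discussed below, would adjust the profile automatically instead of relying on a fixed user-supplied template.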

On the other hand, a filtering agent can serve as an e-mail filter, automatically filing and disposing of messages based on their sender or on their information content. Filtering agents can also interact with other agents if necessary. For example, a filtering agent could send an e-mail marked "Urgent!" to a Notifier or Alarm agent, which would inform the user that an urgent message has arrived. IBM's IntelliAgent is an example of an e-mail filtering agent for use with Lotus Notes [1]. It provides a graphical rule editor and a simple inference engine for automating routine e-mail handling.

A filtering agent with learning ability could automatically adapt the user's interest profile, refining or broadening it, based on explicit feedback from the user, or by watching which articles or documents get saved and which get deleted. An agent architecture model used in the implementation of such a filtering application is presented later in this paper.

Information Agents. The counterpart of the filtering agent, which cuts down the information received, is the information agent, which goes out and actively finds information for the user. Information agents, which are used primarily on the Internet and World Wide Web, can scan through online databases, document libraries, or directories in search of documents that might be of interest to the user [1]. As a research or intelligence-gathering tool, information agents can provide an invaluable service, keeping the user informed of any developments in a field or of new web sites that contain information related to their area of interest.

User Interface Agents. A user's skill may change from novice to expert while interacting with a desktop application. User interface agents monitor the user's interactions with the application and can control various aspects of that interaction, such as the level of prompting or the number of options available. For example, a new user typically needs lots of help and few choices. More experienced users, however, find that verbose help gets in the way, and they want to be able to easily access all features of a product. Coach is said to be a user interface agent that monitors the user's interaction with a product and creates personalized help based on that interaction [24].

A user interface agent can learn in four different ways, as defined by [16]. First, it can observe and monitor the user's actions, find regularities and recurrent patterns, and offer to automate these. A second source for learning is direct or indirect user feedback, whereby the user "grades" the agent on how well it performed the action. Third, the agent can be trained by examples explicitly given by the user. Finally, in the fourth method, the agent can learn through communication with other agents. All of these approaches imply supervised learning from a data mining perspective.

Office or Workflow Agents. An office management agent automates the kinds of routine, daily tasks that take up so much time at the office. These tasks include scheduling meetings, sending faxes, holding meeting review information, and updating process documents. Some of these tasks can be considered under "workgroup" or "workflow" software because they deal with documents and calendars. Whatever name ultimately gets attached to these agents, their role in automating common business functions will most likely produce some of the biggest efficiency gains of any intelligent agent application.

System agents are software agents whose main job is to manage the operations of a computing system or a data communications network. These agents monitor for device failures or system overloads and redirect work to other parts in order to maintain a level of performance and/or reliability. As computer installations become more distributed, the importance of system agents rises. Network management agents have existed for years. For example, using the Simple Network Management Protocol (SNMP), these agents reside on devices connected to the network and collect and report status information to the managing computer. An intelligent agent that processes information collected by SNMP agents and uses it to detect anomalies that typically precede a fault is proposed in [14]. The SNMP agents collect information about the network node through their management information base (MIB), which holds a set of variables pertinent to a particular node. The intelligent agents learn the normal behaviour of each measurement variable and combine the information in the probabilistic framework of a Bayesian network. This gives a picture of the network's health from the perspective of the network node, which can be used to trigger local corrective action or a message to a centralized network manager.

On the other hand, intelligent system agents are involved not only in monitoring the status of resources on the computer network; they are also active managers of those resources. System agents must be proactive, responding not only to specific events in the environment, but also taking the initiative to recognize situations that call for preemptive actions [1]. Intelligent agents on a computer system could handle job scheduling to meet performance goals. They could also be used to automatically adapt the allocation of system resources to various classes of jobs. In this case, neural networks are used to model the relationships between the computer workload, the available resources, and the resulting performance. Acting as an intelligent resource manager, a neural network controller could respond to changes in the workload by reallocating the computer system resources to balance the impact on the response times of various job classes. Similar approaches have been used to balance workload across distributed computer systems, and to satisfy quality of service levels in data communication networks.
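The monitoring idea behind the SNMP-based approach above can be sketched very simply. The system in [14] uses a Bayesian network; the per-variable z-score check below is only a stand-in for "learning the normal behaviour of each measurement variable", and the MIB variable names and readings are invented:

```python
# Simplified sketch of MIB-variable monitoring: learn the normal range of
# each variable from history, then flag readings that deviate sharply.
# A z-score threshold replaces the Bayesian network of the real system.
from statistics import mean, stdev

def train_baseline(history: dict[str, list[float]]) -> dict[str, tuple[float, float]]:
    # Per-variable (mean, standard deviation) learned from past readings.
    return {var: (mean(vals), stdev(vals)) for var, vals in history.items()}

def anomalies(reading: dict[str, float], baseline, threshold: float = 3.0) -> list[str]:
    flagged = []
    for var, value in reading.items():
        mu, sigma = baseline[var]
        if sigma > 0 and abs(value - mu) / sigma > threshold:
            flagged.append(var)   # deviates too far from learned behaviour
    return flagged

history = {"ifInOctets": [100.0, 110.0, 95.0, 105.0, 98.0],
           "ifErrors":   [1.0, 0.0, 2.0, 1.0, 1.0]}
baseline = train_baseline(history)
print(anomalies({"ifInOctets": 500.0, "ifErrors": 1.0}, baseline))  # -> ['ifInOctets']
```

A flagged variable would then trigger local corrective action or a message to the network manager, as described above.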

Brokering or Commercial Agents. An agent that acts as a broker is a software program that takes a request from a buyer and searches for a set of possible sellers using the buyer's criteria for the item of interest. When potential sellers satisfying the request are found, the broker agent can return the results to the user, who chooses a seller and manually executes the transaction. The agent can also automatically execute the transaction on behalf of the user. In such a trade, both parties must trust their agent's ability to protect their interests and to meet their criteria. The agent's internal knowledge must be opaque, since it must not be seen by the other agent. On the other hand, the agent's identity must be verifiable, to make sure a legitimate seller agent is met and no invalid transaction is charged to the user's credit card. In this kind of agent architecture, multi-agent system issues become very important, since many exciting applications involve the interaction of multiple agents. The communication framework, knowledge representation and belief systems in a multi-agent interaction environment raise many research issues. The efforts toward standardization of agent communication are presented in the following paragraphs.

Communication Framework for Agents. Like an object, an agent provides a message-based interface independent of its internal data structures and algorithms. The primary concern is how to reach the agent through this interface by using a language that it can understand. As an exploration of communication, the ARPA Knowledge Sharing Effort(1) has defined the components of an agent communication language (ACL) that satisfies this need. ACL is defined as having three parts: its vocabulary, an inner language called the Knowledge Interchange Format (KIF), and an outer language called the Knowledge Query and Manipulation Language (KQML) [10]. An ACL message is a KQML expression in which the "arguments" are terms or sentences in KIF formed from words in the ACL vocabulary. The vocabulary of ACL is listed in a large and open-ended dictionary of words appropriate to common application areas. Each word in the dictionary has an English description as well as formal annotations written in KIF for use by programs. KIF is a language that was designed for the interchange of knowledge between agents. Based on predicate logic, KIF is a flexible knowledge representation language that supports the definition of objects, functions, relations, rules and meta-knowledge. While it is possible to design an entire communication framework in which all messages take the form of KIF sentences, this would be inefficient. The efficiency of communication can be enhanced by providing a linguistic layer in which context is taken into account. This function is accomplished by KQML. KQML is an evolving standard ACL and is defined to be both a message format and a message-handling protocol to support run-time knowledge sharing among agents [8] [20]. While this effort may be premature, it is certain that some sort of common dialect is needed for intelligent agents to communicate effectively.

Agent architectures can fit into a much more general effort to support interactions among various software entities.
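The shape of such a message can be illustrated with a small Python helper. The performative, agent names and KIF query below are invented for illustration, and the output is a simplification of KQML's syntax rather than a conformant encoding:

```python
# Illustrative sketch of a KQML-style message: a performative with keyword
# arguments whose :content is a KIF expression. Names and query are made up.

def kqml(performative: str, **args: str) -> str:
    body = " ".join(f":{key} {value}" for key, value in args.items())
    return f"({performative} {body})"

msg = kqml(
    "ask-one",
    sender="agent-a",
    receiver="agent-b",
    language="KIF",
    content='"(price widget ?p)"',   # KIF query: what is the price of widget?
)
print(msg)
# -> (ask-one :sender agent-a :receiver agent-b :language KIF :content "(price widget ?p)")
```

This separation is the point of the layered design: KQML carries the conversational context (who asks whom, and how), while the :content slot carries the KIF sentence itself.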

Numerous approaches and technologies exist to support inter-process and inter-application communication over computer networks (e.g. DCE, TCP/IP, OMG/CORBA, OLE, ODBC, OpenDoc). Agent interactions, i.e. communications in ACL, can be layered on top of many of these protocols. Other implementations allow agents to interact with non-agent applications or data sources. Recent advances in Java-based application development technology also have important impacts on the agent paradigm. A Java program can communicate with other programs using sockets, and within an application it can run as a separate thread of control. Java supports threaded applications and provides support for autonomy using both techniques. An agent can be invoked by sending it an event, which is a message or method call that defines what happened or what action the agent must perform, as well as the data required to process the event. There has been much research in this area, and some of the Java-based agent development environments include Aglets from IBM, FTP Software Agent Technology, Voyager, Odyssey, JATLite, InfoSleuth, Jess, and ABE (Agent Building Environment) from IBM. Some of them provide automation, portability and mobility, but not intelligence.

(1) The ARPA Knowledge Sharing Effort is a consortium to develop conventions facilitating the sharing and reuse of knowledge bases and knowledge-based systems [Neches].

2. LEARNING AND INFORMATION FILTERING

A learning agent can adapt to its user's likes and dislikes. It can learn which agents to trust and cooperate with, and which ones to avoid. A learning agent can recognize situations it has been in before and improve its performance based on prior experience. While more difficult to implement, a learning agent is obviously much more valuable than a fixed-function agent. Learning provides the mechanism for an initially generic filtering agent to adapt and become a truly "personal" filtering agent. The ultimate goal for intelligent agents is to have them learn as they perform tasks for the user. Depending on the technology used, learning can be done in a number of ways [2]:

rote learning (mechanical): copy the example and exactly reproduce the behavior,

parameter or weight adjustment: adjust the weighting factors over time to improve the likelihood of a correct decision (neural network learning),

induction: the process of learning by example, where the important characteristics of the problem are extracted to allow generalization over inputs (decision trees and neural networks both perform induction, which can be used for classification or prediction problems),

clustering, chunking or abstraction of knowledge: detect common patterns and generalize to new situations,

clustering: look at high-dimensional data and score samples for similarity based on some criterion (similarity is used as a way of assigning meaning to a group of samples).
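The last item above can be sketched concretely: score samples for similarity and group those that exceed a threshold. Cosine similarity against a fixed threshold is just one simple criterion, chosen for illustration; real miners use k-means, hierarchical methods and the like:

```python
# Sketch of similarity-based clustering: each sample joins the first
# cluster whose seed it resembles closely enough, else starts a new one.
import math

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def cluster(samples: list[list[float]], threshold: float = 0.95) -> list[list[int]]:
    clusters: list[list[int]] = []   # each cluster holds sample indices
    for i, s in enumerate(samples):
        for c in clusters:
            if cosine(s, samples[c[0]]) >= threshold:  # compare to cluster seed
                c.append(i)
                break
        else:
            clusters.append([i])
    return clusters

samples = [[1.0, 0.1], [2.0, 0.2], [0.1, 1.0]]
print(cluster(samples))  # -> [[0, 1], [2]]
```

The grouping itself is what "assigns meaning": samples in one cluster point in nearly the same direction in the feature space.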

The use of learning techniques to develop a profile of a user’s preferences not only eliminates the need for programming rules, but also allows the agent to adapt to changes.

Therefore, learning, as applied to data mining, can be thought of as a way for intelligent agents to automatically discover knowledge rather than having it predefined using predicate logic, rules or some other representation.

An interface agent architecture that learns from observations has been developed and applied to two different information filtering domains: classifying incoming mail messages (Magi) and identifying interesting USENet news articles (UNA) [21]. A graphical user interface (GUI) is used to interact with the underlying application. As it is used, observations are made from which the agent can produce a user profile. The learning interface agent architecture is given in Figure 2. These observations, consisting of articles and the actions performed on them, are used to generate training examples by passing them to the Feature Extraction module. The features extracted from new articles are passed to the Classification stage, where the user profile is used to classify them. The results are then evaluated by the Prediction stage and a prediction is made. A classification in this context is an action that the agent believes should be performed on the message. The Prediction stage evaluates the strength of the classification for each new message and generates a confidence rating, which is a measure of the certainty of the classification [22]. The classification and confidence rating together form the agent's prediction. For the interest rating, the Feature Extraction module identifies fields in the news articles and extracts words based on their frequency within the text. The term values are used to generate the user profile and subsequently make predictions about the articles. Two different learning algorithms have been used within this architecture: a rule induction algorithm, CN2, and a k-nearest neighbor (k-NN) algorithm called IBPL [21]. CN2 generates human-comprehensible rules by performing induction over training data containing specific features. IBPL, on the other hand, was chosen to contrast with the symbolic approach and to overcome some of the problems encountered by CN2 when learning from textual data. In CN2, a large number of examples containing single values for each attribute must be generated; in IBPL, each attribute contains a set of one or more symbolic values.
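The observe, extract features, classify, predict pipeline above can be sketched as a nearest-neighbor classifier over bag-of-words features. Magi and UNA use richer features and the CN2 and IBPL learners; only the shape of the pipeline is reproduced here, and the training examples are invented:

```python
# Sketch of the learning interface agent pipeline: extract features from
# an article, find the most similar past article, and predict its action
# together with a confidence rating. A 1-NN stand-in for CN2/IBPL.

def features(text: str) -> set[str]:
    # Feature extraction: words from the article (stopword removal omitted).
    return set(text.lower().split())

def predict(article: str, training: list[tuple[str, str]]) -> tuple[str, float]:
    feats = features(article)
    best_action, best_overlap = "ignore", 0
    for past_article, action in training:
        overlap = len(feats & features(past_article))
        if overlap > best_overlap:
            best_action, best_overlap = action, overlap
    # Confidence rating: fraction of the article's features that matched.
    confidence = best_overlap / len(feats) if feats else 0.0
    return best_action, confidence

training = [
    ("quarterly budget meeting agenda", "read"),
    ("win a free prize now", "delete"),
]
action, conf = predict("budget meeting moved to friday", training)
print(action)  # -> read
```

As in the architecture above, the classification names an action the agent believes should be performed, and the confidence rating measures how certain that classification is.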

3. WEB MINING

With the explosive growth of information available on the Internet, the use of automated tools for finding the needed information and the development of server-side and client-side intelligent systems for mining knowledge have become very important. The term Web mining is defined as the discovery and analysis of useful information from the World Wide Web [3]. This includes the automatic search of information resources available on-line, i.e., Web content mining, and the discovery of user access patterns, i.e., Web usage mining. The resulting classification of Web mining is given in Figure 2.

Web Content Mining. The unstructured character of the information sources on the World Wide Web makes automated discovery of Web-based information difficult. Traditional search engines such as Lycos, AltaVista and WebCrawler provide some information to users but do not provide structural information or categorization, filtering or interpretation of the documents. These factors have led many researchers to build more intelligent tools for information retrieval, such as intelligent Web agents, and to extend data mining techniques to provide a higher level of organization for the semi-structured data available on the Web [7]. Agent-based approaches in Web mining include intelligent search agents, information filtering/categorization agents and personalized Web agents. Several intelligent agents have been developed that search for relevant information using domain characteristics and user profiles to organize and interpret the discovered information; Harvest, FAQFinder, Information Manifold, OCCAM and ParaSite are products of this kind. Many Web agents using various information retrieval techniques have also been developed to automatically retrieve, filter and categorize Web documents. HyPursuit and BO (Bookmark Organizer) use hierarchical clustering techniques to organize the collection of retrieved Web documents. New clustering techniques, based on generalizations of graph partitioning and capable of automatically discovering document similarities or associations, have been implemented for Web page categorization and feature selection [17]. Personalized Web agents, such as WebWatcher, PAINT, Syskill & Webert, GroupLens and Firefly, are based on learning user preferences and on collaborative filtering. Database approaches to Web mining have focused on techniques for organizing the semi-structured data on the Web into more structured collections of resources, such as relational databases consisting of levels of Web repositories or metadata hierarchies. The gathered information can then be analyzed using standard querying mechanisms and data mining techniques. For example, TSIMMIS extracts data from heterogeneous and semi-structured information sources and correlates them to generate an integrated database representation of the extracted information [3].

Figure 2. Taxonomy of Web Mining

Web Usage Mining. Web usage mining is the automatic discovery of user access patterns from Web servers. Organizations collect large volumes of data in their daily operations, generated automatically by Web servers and collected in server access logs. Other sources of user information include referrer logs, which contain information about the referring pages for each page reference, and user registration or survey data gathered via CGI scripts. Analyzing such data can help organizations determine the lifetime value of customers, cross-marketing strategies across products, and the effectiveness of promotional campaigns. It can also provide information on how to restructure a Web site so that it serves users more effectively. User access patterns also help in targeting advertisements to specific groups of users. More sophisticated systems and techniques for the discovery and analysis of patterns are being developed. For pattern discovery, techniques from artificial intelligence, data mining, psychology, and information theory are used to mine knowledge from the collected data.

For example, the WEBMINER system automatically discovers association rules and sequential patterns from server access logs, and it also proposes an SQL-like query mechanism for querying the discovered knowledge. A general Web usage mining architecture has been developed by [3] and partly implemented in the WEBMINER system. Since our major perspective on Web mining is the agent paradigm, the reader should refer to the related study for details of this architecture.
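The flavor of rule WEBMINER discovers can be illustrated by mining sessionized access logs for pages frequently visited together. The session data, page names and support threshold below are invented assumptions, and real association-rule miners (Apriori and its successors) handle itemsets of any size, not just pairs:

```python
# Sketch of usage mining on server access logs: find page pairs that
# co-occur in at least min_support of the user sessions.
from collections import Counter
from itertools import combinations

sessions = [
    ["/home", "/products", "/cart"],
    ["/home", "/products"],
    ["/home", "/about"],
    ["/products", "/cart"],
]

def frequent_pairs(sessions, min_support=0.5):
    pair_counts = Counter()
    for pages in sessions:
        # Count each distinct pair of pages once per session.
        for pair in combinations(sorted(set(pages)), 2):
            pair_counts[pair] += 1
    n = len(sessions)
    return {pair: count / n for pair, count in pair_counts.items()
            if count / n >= min_support}

print(sorted(frequent_pairs(sessions)))
# -> [('/cart', '/products'), ('/home', '/products')]
```

Such co-occurrence patterns feed directly into the applications named above: restructuring a site or targeting content at groups of users who follow the same paths.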

4. DATA MINING PROCESS AND AGENTS

The intelligent agent paradigm can be used to automate individual tasks in several steps of knowledge discovery, including data preparation, mining model selection and application, and output analysis. In data preparation, agents can be used especially for sensitivity analysis of learning parameters, for triggers on database updates, and for handling missing or invalid data. In the mining model itself, agent-based studies have been implemented for classification, clustering, summarization and generalization, which have a learning nature and involve rule generation, since current learning methods are able to find regularities in large data sets. An intelligent agent can use domain knowledge with embedded simple rules and, using the training data, can learn and reduce the need for domain experts. In the interpretation of what is learned, a scanning agent can go through the rules and facts generated and identify items that may contain valuable information. Data preparation in data mining involves data selection, data cleansing, data preprocessing, and data representation [1]. With the use of intelligent agents, several of these steps can be automated. One possibility for automating the data selection step is to perform automatic sensitivity analysis to determine which parameters should be used in learning. This would reduce the dependency on having a domain expert available to examine the problem every time something changes in the environment.

Data cleansing could be automated through the use of an intelligent agent with a rule base. When a record is added or updated in a relational database, a trigger could call the intelligent agent to examine the transaction data. The rules in its rule base would specify how to handle missing or invalid data.
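A minimal sketch of such a rule-based cleansing agent follows. A database trigger would invoke `clean_record` on each insert or update; the rule base here is a hypothetical dictionary of per-field validity checks paired with repair actions (here, simply nulling out invalid values).

```python
# Rule base: field -> (validity check, repair action). Both the fields
# and the rules are illustrative assumptions, not a prescribed schema.
RULES = {
    "age":   (lambda v: isinstance(v, int) and 0 < v < 130, lambda v: None),
    "email": (lambda v: isinstance(v, str) and "@" in v,    lambda v: None),
}

def clean_record(record):
    """Apply each rule: keep valid values, repair invalid ones."""
    cleaned = dict(record)
    for field, (is_valid, repair) in RULES.items():
        if field in cleaned and not is_valid(cleaned[field]):
            cleaned[field] = repair(cleaned[field])
    return cleaned

row = clean_record({"age": -5, "email": "user@example.com"})
```

In a real deployment, the repair actions could impute defaults or look up reference data rather than nulling the field.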

Data preprocessing also requires domain knowledge, since there is no way to know the semantics of attributes and relationships such as computed or derived fields. However, more standard preprocessing and data representation steps, such as scaling, dimensionality reduction, symbol mapping, and normalization, which are usually specified by the data mining expert, could be automated using rules and basic statistical information about the variables. Searching for patterns of interest, using learning and intelligence in classification, clustering, summarization and generalization, can also be accomplished by intelligent agents. An agent can learn from a profile or from examples, and feedback from the user can be used to refine confidence in the agent's predictions. Data mining using neural networks and the possible use of intelligent agents in the data mining process are discussed in [1]. In understanding what is learned, agents have so far been used only as fixed agents, or simply as programs, in visualization. The major advantage of using intelligent agents in the automation of data mining is their possible support for mining online transaction data. When new data is added to the database, an alarm or triggering agent can send events to the main mining application and to the learning task within it, so that the new data can be evaluated against the already mined data. This automated decision support using triggers is termed "active data mining" by Agrawal and Psaila. Since the main mining functions can be performed by learning methods, implementing these methods with intelligent agents provides a flexible, modular and delegable solution. Additionally, the paradigm lends itself to the parallelization of data mining algorithms, given its usability in distributed environments.
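The alarm-agent pattern behind active data mining can be sketched as a simple publish/subscribe mechanism: a database trigger fires the agent on insert, and the agent forwards the event to subscribed mining tasks so that fresh data is evaluated against already-mined results. All class and method names here are illustrative, not taken from any cited system.

```python
class AlarmAgent:
    """Hypothetical triggering agent: relays insert events to mining tasks."""

    def __init__(self):
        self.subscribers = []

    def subscribe(self, callback):
        """Register a mining or learning task to be notified of new data."""
        self.subscribers.append(callback)

    def on_insert(self, record):
        """Fired by a database trigger when a record is added."""
        for callback in self.subscribers:
            callback(record)

seen = []
agent = AlarmAgent()
agent.subscribe(lambda rec: seen.append(rec))   # stand-in mining task
agent.on_insert({"page": "/cart", "user": 7})   # simulated trigger firing
```

The same decoupling is what makes the paradigm suitable for distribution: subscribers could equally be remote mining tasks reached over a network.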

SUMMARY AND FUTURE DIRECTIONS

The exponential growth of available information requires the development of useful, efficient tools and software to assist users in locating the valuable items. Special, flexible software programs, software agents, can be used to automate the discovery of this information. With some degree of autonomy, agents can include a certain amount of intelligence, applying domain-specific knowledge to retrieve, filter and classify information, find patterns and make predictions. This paper included a short survey of the agent paradigm in the context of information retrieval, filtering, classification and learning, and its possible use in data mining tasks. Agent-based approaches are becoming increasingly important because of their generality, flexibility, modularity and ability to take advantage of distributed resources. Agents are used for information retrieval, entertainment, coordinating systems of multiple robots, and modeling economic systems.

They are useful in reducing work and information overload, and in complex tasks such as medical monitoring and battlefield reasoning. Agents provide an efficient framework for distributed computation, where retrieving only the relevant documents minimizes the duration of an expensive network connection. There has been a great deal of work in the area of artificial intelligence and software agents, but we have looked at it mainly from a data mining perspective and concentrated on classification, information retrieval, and learning. Current research directions in artificial intelligence involving the agent paradigm, as indicated by [5], include: autonomous agents and robots, which integrate other areas such as knowledge representation, learning, decision-making, speech and language processing, and image analysis, to create robust, active entities capable of independent, intelligent, real-time interaction with an environment over an extended period; and multiagent systems, which identify the knowledge, representations, and procedures needed by agents to work together or around each other. Continuing research issues include agent architectures, communication and coordination protocols, control negotiation, and reuse of agents [12]. Agents used on the Web are already changing the way we gather information and conduct business, and they have a great impact on our lives. As suggested in [13], because of the dynamic nature of the Internet, the growth of data and the heterogeneity of services, extracting valuable information from the huge amount of stored data is becoming a task that cannot be performed by users alone. Therefore, the intelligent agent paradigm can be used in many applications with a distributed nature and learning mechanisms.


