
Chapter 2

TECHNICAL KEYWORDS

2.1 DATA MINING

Data are being collected and accumulated at a dramatic pace across a wide variety of fields. As the price of hard disks continues to drop, storing data poses no difficulty. Today we have overwhelming amounts of data: from business transactions and scientific data to satellite pictures, text reports and military intelligence. This data has potentially useful information hidden in it that can be used for decision making.

The process of extracting potentially useful hidden predictive information from data is called data mining [1]. In other words, data mining is the discovery of patterns or regularities in the data. These patterns indicate how the physical world behaves and can be used to predict what will happen in the future. The patterns should be meaningful, novel and understandable.

2.1.1 Data mining functionalities

Data mining functionalities are used to specify the kind of patterns to be found in data mining tasks. In general, data mining tasks can be classified into two categories: descriptive and predictive. Descriptive mining tasks characterize the general properties of the data in the database, while predictive mining tasks perform inference on the current data in order to make predictions.

Data mining functionalities, and the kinds of patterns they can discover, are described below.

2.1.1.1 Concept/class description: characterization and discrimination

Data can be associated with classes or concepts. It can be useful to describe individual classes and concepts in summarized, concise, and yet precise terms. Such descriptions of a class or a concept are called class/concept descriptions. These descriptions can be derived via (1) data characterization, by summarizing the data of the class under study (often called the target class) in general terms, or (2) data discrimination, by comparison of the target class with one or a set of comparative classes (often called the contrasting classes), or (3) both data characterization and discrimination.

2.1.1.2 Association analysis

Association analysis is the discovery of association rules showing attribute-value conditions that occur frequently together in a given set of data. Association analysis is widely used for market basket or transaction data analysis.

2.1.1.3 Classification and prediction

Classification is the process of finding a set of models (or functions) that describe and distinguish data classes or concepts, for the purpose of using the model to predict the class of objects whose class label is unknown. The derived model is based on the analysis of a set of training data (i.e., data objects whose class label is known). It is the construction and use of a model to assess the class of an unlabeled object, or to assess the value or value ranges of an attribute that a given object is likely to have.

2.1.1.4 Clustering analysis

Unlike classification and prediction, which analyze class-labeled data objects, clustering analyzes data objects without consulting a known class label. In general, the class labels are not present in the training data simply because they are not known to begin with. Clustering can be used to generate such labels. The objects are clustered or grouped based on the principle of maximizing the intraclass similarity and minimizing the interclass similarity. That is, clusters of objects are formed so that objects within a cluster have high similarity in comparison to one another, but are very dissimilar to objects in other clusters. Each cluster that is formed can be viewed as a class of objects, from which rules can be derived. Clustering can also facilitate taxonomy formation, that is, the organization of observations into a hierarchy of classes that group similar events together.

2.1.1.5 Evolution and deviation analysis

Data evolution analysis describes and models regularities or trends for objects whose behavior changes over time. Although this may include characterization, discrimination, association, classification, or clustering of time-related data, distinct features of such an analysis include time-series data analysis, sequence or periodicity pattern matching, and similarity-based data analysis.

2.2 DECISION TREE:

A decision tree is a flow-chart-like tree structure, where internal nodes are denoted by rectangles and leaf nodes are denoted by ovals. All internal nodes have two or more child nodes. All internal nodes contain splits, which test the value of an expression of the attributes. Arcs from an internal node to its children are labeled with distinct outcomes of the test. Each leaf node has a class label associated with it. A decision tree is constructed from a training set, which consists of data tuples. Each tuple is completely described by a set of attributes and a class label. Attributes can have discrete or continuous values. Decision trees are used to classify data tuples whose class label is unknown. Based on the attribute values of the tuple, the path from the root to a leaf is followed; the class of that leaf is the class predicted by the decision tree for the tuple.
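As a minimal illustration (an assumption of this text, not taken from the original), the following Python sketch represents such a tree as nested dictionaries and classifies a tuple by following the path from the root to a leaf. The tree, attribute names and values are hand-built and anticipate the weather data used in section 2.2.2.1.

# A hand-built tree for illustration; attribute names and values are assumed.
tree = {
    "attribute": "Outlook",                      # internal node: test on Outlook
    "branches": {
        "Sunny":    {"attribute": "Humidity",
                     "branches": {"High":   {"label": "no"},
                                  "Normal": {"label": "yes"}}},
        "Overcast": {"label": "yes"},            # leaf node carrying a class label
        "Rainy":    {"attribute": "Windy",
                     "branches": {True:  {"label": "no"},
                                  False: {"label": "yes"}}},
    },
}

def classify(node, tuple_):
    """Follow the path from the root to a leaf based on attribute values."""
    while "label" not in node:               # stop when a leaf is reached
        value = tuple_[node["attribute"]]    # outcome of the test at this node
        node = node["branches"][value]       # follow the matching arc
    return node["label"]

print(classify(tree, {"Outlook": "Sunny", "Humidity": "High", "Windy": False}))  # -> no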

2.2.1 Decision tree induction:

The task of constructing a tree from the training set is called tree induction or tree building. Most existing tree induction systems adopt a greedy (i.e. non-backtracking) top-down divide-and-conquer approach [1]. Starting with an empty tree and the entire training set, the following algorithm is applied to the training data (where each tuple is associated with a class label) until no more splits are possible; a Python sketch of the procedure follows the numbered steps.

Algorithm:

1) Create a node N.

2) If all the tuples in the partition are of the same class then return N as a leaf node labeled with that class.

3) If the attribute list is empty then return N as a leaf node labeled with the most common class in the partition.

4) Identify the splitting attribute so that resulting partitions at each branch are as pure as possible.

5) Label node N with the splitting criterion, which serves as the test at that node.

6) If the splitting attribute is discrete valued then remove the splitting attribute from the attribute list.

7) Let Pi be the partitions created, one for each outcome i of the splitting criterion.

8) If any Pi is empty then attach a leaf labeled with the majority class of the partition to node N.

9) Else recursively apply the complete process on each partition.

10) Return N.
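The steps above can be sketched in Python as follows. This is an illustrative sketch, not an implementation from the text: the representation of tuples as (attribute-dictionary, class-label) pairs and the pluggable select_split measure (e.g. the information gain of section 2.2.2.1) are assumptions of the sketch.

from collections import Counter

def build_tree(tuples, attributes, select_split):
    """Greedy top-down induction following steps 1-10 above.
    `tuples` is a list of (attribute_dict, class_label) pairs and
    `select_split` is any attribute selection measure."""
    labels = [label for _, label in tuples]
    majority = Counter(labels).most_common(1)[0][0]
    if len(set(labels)) == 1:                    # step 2: pure partition -> leaf
        return {"label": labels[0]}
    if not attributes:                           # step 3: no attributes left -> majority leaf
        return {"label": majority}
    best = select_split(tuples, attributes)      # step 4: choose the splitting attribute
    node = {"attribute": best, "branches": {}}   # step 5: label node with the test
    remaining = [a for a in attributes if a != best]   # step 6: drop discrete split attribute
    # Outcomes are taken from the values present in this partition, so the
    # empty-partition case of step 8 cannot arise here; with a fixed global
    # domain per attribute it could, and the majority leaf would be attached.
    for value in {row[best] for row, _ in tuples}:     # step 7: one branch per outcome
        part = [(row, lab) for row, lab in tuples if row[best] == value]
        node["branches"][value] = build_tree(part, remaining, select_split)  # step 9
    return node                                  # step 10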

2.2.2 Decision Tree Algorithms:

2.2.2.1 ID3:

The ID3 algorithm introduced by J. R. Quinlan [2] is a greedy algorithm that selects the next attribute based on the information gain associated with the attributes. The attribute with the highest information gain, or greatest entropy reduction, is chosen as the test attribute for the current node.

The information gain measure is used to select the test attribute at each node in the tree. Such a measure is referred to as an attribute selection measure or a measure of the goodness of split. Entropy is a measure of the uncertainty associated with a random variable. If the class value of the data in a node is equally divided among all possible values of the class, the entropy of the node is maximum. If the class value is the same for all data in a node, the entropy is minimum. The attribute thus selected minimizes the information needed to classify the samples in the resulting partitions and reflects the least randomness or "impurity" in these partitions. Such an information-theoretic approach minimizes the expected number of tests needed to classify an object and guarantees that a simple (but not necessarily the simplest) tree is found.

Let S be a set consisting of s data samples. Assume there are two classes, P and N, and let S contain p elements of class P and n elements of class N. The amount of information needed to decide whether an arbitrary example in S belongs to P or N is defined as

I (p, n) = - (p / (p + n)) log2 (p / (p + n)) - (n / (p + n)) log2 (n / (p + n))

Assume that using attribute A the set S is partitioned into sets {S1, S2, ..., Sv}. If Si contains pi examples of P and ni examples of N, the entropy, or the expected information needed to classify objects in all subtrees Si, is

E (A) = Σ (i = 1 to v) ((pi + ni) / (p + n)) × I (pi, ni)

The encoding information that would be gained by branching on A is

Gain (A) = I (p, n) - E (A)

For example, consider the sample weather database shown in Table 2.1. The class label attribute, Play, has two distinct values (namely yes and no); therefore there are two distinct classes (m = 2). Let s1 correspond to the class yes and s2 to the class no. There are 9 samples of class yes and 5 samples of class no. To compute the information gain of each attribute, first the expected information needed to classify a given sample is computed. This is:

I (s1, s2) = I (9, 5) = - (9/14) log2 (9/14) - (5/14) log2 (5/14) = 0.940

Next, we need to compute the entropy of each attribute. Let's start with the attribute Outlook. We need to look at the distribution of yes and no samples for each value of Outlook. We compute the expected information for each of these distributions.

For Outlook = Sunny: s11 = 2, s21 = 3, I (s11, s21) = 0.971

For Outlook = Overcast: s12 = 4, s22 = 0, I (s12, s22) = 0

For Outlook = Rainy: s13 = 3, s23 = 2, I (s13, s23) = 0.971

Using the equation for entropy, the expected information needed to classify a given sample if the samples are partitioned according to Outlook is:

E (Outlook) = (5/14) I(s11,s21) + (4/14) I(s12,s22) + (5/14) I(s13,s23)

E (Outlook) = (5/14) × (-(2/5) log2 (2/5) – (3/5) log2 (3/5))

+ (4/14) × (-(4/4) log2 (4/4) – (0/4) log2 (0/4))

+ (5/14) × (-(3/5) log2 (3/5) – (2/5) log2 (2/5)) = 0.694

Hence, the gain in information from such a partitioning would be:

Gain (Outlook) = I (s1, s2) – E (Outlook) = 0.247

Similarly, we can compute Gain (Temperature) = 0.029, Gain (Humidity) = 0.151, and Gain (Windy) = 0.048. Since Outlook has the highest information gain among the attributes, it is selected as the test attribute. A node is created and labeled with Outlook, and branches are grown for each of the attribute's values. The samples are then partitioned accordingly, as shown in Figure 2.2. Notice that the samples falling into the partition for Outlook = Overcast all belong to the same class. Since they all belong to class yes, a leaf should therefore be created at the end of this branch and labeled with yes.
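The computation above can be reproduced with a short Python fragment. This is an illustrative sketch; the function names info and gain are our own, and the (yes, no) counts are those of the Outlook partitions above.

from math import log2

def info(counts):
    """I(p, n): expected information for a class distribution."""
    total = sum(counts)
    return -sum(c / total * log2(c / total) for c in counts if c)

def gain(partitions, overall):
    """Gain(A) = I(p, n) - E(A), with E(A) the weighted entropy of the partitions."""
    total = sum(sum(p) for p in partitions)
    e = sum(sum(p) / total * info(p) for p in partitions)
    return info(overall) - e

# (yes, no) counts for Outlook = Sunny, Overcast, Rainy among the 14 samples
print(round(gain([(2, 3), (4, 0), (3, 2)], (9, 5)), 3))   # -> 0.247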

2.2.2.2 C4.5:

C4.5 is a successor of ID3. The C4.5 algorithm constructs the decision tree with the divide-and-conquer strategy described in section 2.2.1. In C4.5, each node in a tree is associated with a set of cases. At the beginning, only the root is present, and the whole training set is associated with it. At each node the divide-and-conquer algorithm is executed, trying to exploit the locally best choice, using the gain ratio to select the splitting attribute.

C4.5 made a number of improvements to ID3. Some of these are:

1) Handling both continuous and discrete attributes - in order to handle continuous attributes, C4.5 creates a threshold and then splits the list into those whose attribute value is above the threshold and those that are less than or equal to it.

2) Handling training data with missing attribute values - C4.5 allows attribute values to be marked as missing. The class probability distribution resulting from classifying a particular case with the decision tree is determined, and the class with the highest probability is chosen as the predicted class.

3) Handling attributes with differing costs.

4) Pruning trees after creation - C4.5 goes back through the tree once it has been created and attempts to remove branches that do not help by replacing them with leaf nodes. The pessimistic error pruning algorithm [2] is implemented in C4.5.
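As an illustrative sketch of the first improvement (assumed interface, not C4.5's actual code), the following Python fragment evaluates each midpoint between consecutive sorted values of a continuous attribute as a candidate binary threshold; the score function standing in for the splitting criterion (e.g. information gain or the gain ratio defined below) is an assumption of the sketch.

def best_threshold(values, labels, score):
    """Try each midpoint between consecutive sorted values as a binary
    split and keep the one with the best score. `score` is assumed to
    evaluate the two label lists of the resulting binary partition."""
    pairs = sorted(zip(values, labels))
    best = (None, float("-inf"))
    for i in range(len(pairs) - 1):
        t = (pairs[i][0] + pairs[i + 1][0]) / 2    # candidate threshold
        left = [lab for v, lab in pairs if v <= t]
        right = [lab for v, lab in pairs if v > t]
        s = score(left, right)
        if s > best[1]:
            best = (t, s)
    return best   # (threshold, score)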

The information gain measure discussed in 2.2.2.1 is biased towards tests with many outcomes [3]. That is, it prefers to select attributes having a large number of values. In some cases such partitioning is useless (e.g., partitioning on a unique identifier). Gain ratio [4] attempts to overcome this bias. First, a split information value is defined as follows:

SplitA (T) = - Σ (i = 1 to s) (|Ti| / |T|) × log2 (|Ti| / |T|)

This value represents the potential information generated by splitting the training data set T into s partitions, corresponding to the s outcomes of a test on attribute A. For each outcome, it considers the number of tuples having that outcome with respect to the total number of tuples in T.

The gain ratio is defined as

GainRatio (A) = Gain (A) / SplitA (T)

The attribute with the maximum gain ratio is selected for splitting.

Example:

A test on Temperature splits the data shown in Table 2.1 into three partitions, namely Cool, Mild and Hot, containing four, six and four tuples, respectively. To compute the gain ratio of Temperature we first compute the split information as shown below:

SplitTemperature (T) = - (4/14) × log2 (4/14) - (6/14) × log2 (6/14) - (4/14) × log2 (4/14)

= 1.557

As Gain (Temperature) = 0.029,

GainRatio (Temperature) = 0.029 / 1.557 = 0.019
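The same numbers can be checked with a short Python fragment (illustrative; the function name is our own):

from math import log2

def split_info(partition_sizes):
    """Split(T): the potential information generated by the partition itself."""
    total = sum(partition_sizes)
    return -sum(n / total * log2(n / total) for n in partition_sizes if n)

s = split_info([4, 6, 4])            # Temperature partitions: Cool, Mild, Hot
print(round(s, 3))                   # -> 1.557
print(round(0.029 / s, 3))           # GainRatio(Temperature) -> 0.019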

2.3 GENETIC ALGORITHM:

Genetic Algorithms are a family of computational models inspired by evolution. These algorithms encode a potential solution to a specific problem on a simple chromosome-like data structure and apply recombination operators to these structures so as to preserve critical information [5]. Genetic algorithms are often viewed as function optimizers, although the range of problems to which genetic algorithms have been applied is quite broad.

An implementation of a genetic algorithm begins with a population of (typically random) chromosomes. One then evaluates these structures and allocates reproductive opportunities in such a way that chromosomes which represent a better solution to the target problem are given more chances to 'reproduce' than chromosomes which are poorer solutions. The 'goodness' of a solution is typically defined with respect to the current population.

Genetic Algorithms are search algorithms that are based on concepts of natural selection and natural genetics as explained in [6]. The genetic algorithm was developed to simulate some of the processes observed in natural evolution, a process that operates on chromosomes (organic devices for encoding the structure of living beings). The genetic algorithm differs from other search methods in that it searches among a population of points, and works with a coding of the parameter set rather than the parameter values themselves. It also uses objective function information without any gradient information. The transition scheme of the genetic algorithm is probabilistic, whereas traditional methods use gradient information. Because of these features, genetic algorithms are used as general purpose optimization algorithms. They also provide a means to search irregular spaces and hence are applied to a variety of function optimization, parameter estimation and machine learning applications.


The framework of the genetic algorithm is as follows:

Formulate initial population
Randomly initialize population
repeat
    evaluate objective function
    find fitness function
    apply genetic operators:
        reproduction
        crossover
        mutation
until stopping criteria are satisfied
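A minimal Python sketch of this framework for bit-string chromosomes is given below; the population size, chromosome length, operator rates and toy objective function are illustrative assumptions, not values prescribed by the text.

import random

# Illustrative parameters (assumed): population size, chromosome length,
# crossover/mutation probabilities, generation count.
POP, BITS, PC, PM, GENS = 20, 10, 0.8, 0.01, 50

def objective(bits):
    """Toy objective to maximize: the number of 1s in the chromosome."""
    return sum(bits)

# formulate / randomly initialize population
population = [[random.randint(0, 1) for _ in range(BITS)] for _ in range(POP)]

for _ in range(GENS):                                  # until stopping criterion
    fitness = [objective(ind) for ind in population]   # evaluate / fitness
    weights = [f + 1e-9 for f in fitness]              # avoid an all-zero wheel
    mating_pool = random.choices(population, weights=weights, k=POP)  # reproduction
    nxt = []
    for a, b in zip(mating_pool[::2], mating_pool[1::2]):
        if random.random() < PC:                       # crossover with probability PC
            point = random.randint(1, BITS - 1)
            a, b = a[:point] + b[point:], b[:point] + a[point:]
        nxt += [a[:], b[:]]
    for ind in nxt:                                    # mutation: flip bits with prob PM
        for i in range(BITS):
            if random.random() < PM:
                ind[i] = 1 - ind[i]
    population = nxt

print(max(population, key=objective))                  # best individual found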

2.3.1 Fitness Function:

As mentioned earlier, GAs mimic the survival-of-the-fittest principle of nature to drive the search process. Therefore, GAs are naturally suitable for solving maximization problems. Minimization problems are usually transformed into maximization problems by a suitable transformation. In general, a fitness function F(i) is first derived from the objective function and used in successive genetic operations. Fitness in the biological sense is a quality value which is a measure of the reproductive efficiency of chromosomes. In a genetic algorithm, fitness is used to allocate reproductive traits to the individuals in the population and thus acts as a measure of goodness to be maximized. This means that individuals with a higher fitness value will have a higher probability of being selected as candidates for further examination. Certain genetic operators require that the fitness function be non-negative, although certain operators do not have this requirement. For maximization problems, the fitness function can be considered to be the same as the objective function, F(i) = O(i); for minimization problems, a commonly used transformation is F(i) = 1 / (1 + O(i)).
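As a small illustrative sketch of this transformation (the function name is our own; the 1/(1 + O) form for minimization matches the common transformation noted above):

def fitness(objective_value, maximize=True):
    """F(i) = O(i) for maximization; F(i) = 1 / (1 + O(i)) for
    minimization (assuming a non-negative objective value)."""
    return objective_value if maximize else 1.0 / (1.0 + objective_value)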

2.3.2 GA operators:

The operation of GAs begins with a population of random individuals representing design or decision variables. The population is then operated on by three main operators, reproduction, crossover and mutation, to create a new population of points. The new population is evaluated and tested for termination. If the termination criterion is not met, the population is iteratively operated on by the above three operators and evaluated.

2.3.2.1 Reproduction:

Reproduction (or selection) is an operator that makes more copies of better individuals in a new population. Reproduction is usually the first operator applied on a population. Reproduction selects good individuals in a population and forms a mating pool. This is one of the reasons the reproduction operator is sometimes known as the selection operator. Thus, in the reproduction operation the process of natural selection causes those individuals that encode successful structures to produce copies more frequently. To sustain the generation of a new population, the reproduction of individuals in the current population is necessary, and for better results the mating pool should be formed from the fittest individuals of the previous population.

There exist a number of reproduction operators in the GA literature, but the essential idea in all of them is that above-average strings are picked from the current population and multiple copies of them are inserted in the mating pool in a probabilistic manner.

Roulette-Wheel Selection: The commonly used reproduction operator is the proportionate reproduction operator, where an individual is selected for the mating pool with a probability proportional to its fitness. Thus, the ith string in the population is selected with a probability proportional to Fi.
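A sketch of roulette-wheel selection in Python (illustrative; assumes non-negative fitness values):

import random

def roulette_wheel(population, fitness):
    """Select one individual with probability proportional to its fitness Fi."""
    total = sum(fitness)
    r = random.uniform(0, total)            # spin the wheel
    cumulative = 0.0
    for individual, f in zip(population, fitness):
        cumulative += f
        if cumulative >= r:
            return individual
    return population[-1]                   # guard against floating-point rounding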

Stochastic remainder selection: A better selection scheme is stochastic remainder selection. The basic idea is to remove or copy individuals depending on the values of their reproduction counts. This is achieved by computing a reproduction count for each string based on its fitness value. Stochastic remainder selection without replacement is used here, as it is superior to other schemes.
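The scheme can be sketched as follows; the pool size k and the top-up step for the rare case where the remainders are exhausted are assumptions of this illustration:

import random

def stochastic_remainder(population, fitness, k):
    """Sketch of stochastic remainder selection without replacement.
    Each string first receives floor(Fi / Favg) deterministic copies;
    the fractional remainders are then used as probabilities to fill
    the rest of the mating pool, each being usable at most once."""
    avg = sum(fitness) / len(fitness)
    expected = [f / avg for f in fitness]               # reproduction counts Fi / Favg
    pool = []
    for ind, e in zip(population, expected):
        pool.extend([ind] * int(e))                     # integer part: guaranteed copies
    remainders = [(ind, e - int(e)) for ind, e in zip(population, expected)]
    while len(pool) < k and remainders:
        i = random.randrange(len(remainders))
        ind, r = remainders[i]
        if random.random() < r:                         # fractional part: Bernoulli trial
            pool.append(ind)
            remainders.pop(i)                           # without replacement
    while len(pool) < k:                                # rare top-up if remainders run out
        pool.append(random.choice(population))
    return pool[:k]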

2.3.2.2 Crossover:

A crossover operator is used to recombine two individuals to get a better individual. In the crossover operation, the recombination process creates different individuals in the successive generations by combining material from two individuals of the previous generation. In reproduction, good individuals in a population are probabilistically assigned a larger number of copies and a mating pool is formed; it is important to note that no new individuals are formed in the reproduction phase. In the crossover operator, new individuals are created by exchanging information among strings of the mating pool. The two individuals participating in the crossover operation are known as parents, and the resulting individuals are known as children. It is intuitive from this construction that good sub-strings from the parent strings can be combined to form a better child string, if an appropriate crossover site is chosen. The effect of crossover may be detrimental or beneficial. Thus, in order to preserve some of the good strings that are already present in the mating pool, not all strings in the mating pool are used in crossover.
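A single-point crossover between two parent strings can be sketched as follows (illustrative; works for any sequence type such as lists of bits):

import random

def single_point_crossover(parent1, parent2):
    """Recombine two parent strings at a randomly chosen crossover site."""
    point = random.randint(1, len(parent1) - 1)   # site strictly inside the string
    child1 = parent1[:point] + parent2[point:]
    child2 = parent2[:point] + parent1[point:]
    return child1, child2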

2.3.2.3 Mutation:

Mutation adds new information in a random way to the genetic search process and ultimately helps to avoid getting trapped at local optima. It is an operator that introduces diversity in the population whenever the population tends to become homogeneous due to repeated use of reproduction and crossover operators. Mutation may cause the chromosomes of individuals to be different from those of their parent individuals.

Mutation in a way is the process of randomly disturbing genetic information. This random perturbation may lead to better optima, or may modify a part of the genetic code in a way that will be beneficial in later operations. On the other hand, it might produce a weak individual that will never be selected for further operations.
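A bit-flip mutation sketch (the mutation probability pm is an illustrative default):

import random

def mutate(chromosome, pm=0.01):
    """Flip each bit independently with a small mutation probability pm."""
    return [1 - gene if random.random() < pm else gene for gene in chromosome]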

2.4 Advantages of Genetic Algorithm:

GAs work with a coding of variables instead of the variables themselves. The advantage of working with a coding of variables is that the coding discretizes the search space, even though the underlying function may be continuous; since GAs require only function values at various discrete points, a discrete or discontinuous function can be handled with no extra burden. This allows GAs to be applied to a wide variety of problems. In GAs, previously found good information is emphasized using the reproduction operator and propagated adaptively through the crossover and mutation operators. Another advantage of a population-based search algorithm is that multiple optimal solutions can be captured in the population easily, thereby reducing the effort of running the same algorithm many times.

Chapter 3

INTRODUCTION

3.1 INTRODUCTION TO EDUCATIONAL DATA MINING:

Educational Data Mining (EDM) is used to explore data originating in the educational context. Educational systems collect and maintain large amounts of data, such as students' data, teachers' data, alumni data and educational resources' data. EDM uses computational techniques to extract knowledge from this data in order to answer educational questions. EDM is a relatively new research area which exploits statistical, machine-learning and data-mining (DM) algorithms over the different types of educational data [7].

The data may come from different types of education systems:

1) Traditional education system: In this system there is direct contact between the students and the teacher. Students' records, including information such as attendance and grades, may be kept manually or digitally. Students' performance is measured from this information.

2) Web based learning system: Also known as e-learning, here students learn online. In a web based system, various data about the students are collected automatically through logs.

3) Intelligent tutoring system: Intelligent tutoring systems (ITS) and adaptive educational hypermedia systems (AEHS) are alternatives to the just-put-it-on-the-web approach, trying to adapt teaching to the needs of each particular student.

Results of educational data mining can be used by different members of the education system [7], [8] with different objectives. Students can use them to improve the learning process. Teachers can use them to classify learners into groups, to identify the students who are likely to fail, and to find the most frequently made mistakes. Organizations or universities can use them to enhance the decision process in higher learning institutions, to find ways of improving retention and grades, and to help admit students who will do well in university. Course developers can use them to evaluate and maintain courseware and to evaluate course contents. Administrators can use them to decide which courses to offer and to evaluate teachers and curricula. Educational researchers can use them to compare data mining techniques and recommend one for a specific task.

Figure 3.1: Applying data mining to the educational system [7]

Educational data and problems have some special characteristics that require the issue of mining to be treated in a different way. Although most of the traditional DM techniques can be applied directly, others cannot and have to be adapted to the specific educational problem at hand. Furthermore, specific DM techniques can be used for specific educational problems. Different data mining techniques that can be applied in education systems are clustering, classification, outlier detection, association rule mining and sequential pattern mining.

3.2 OBJECTIVE

This study creates a model that predicts the academic performance of engineering students in the contact education system, so that students at risk can be identified in advance. Performance prediction is important because the number of engineering seats and colleges is increasing in India and weaker students are also enrolled in engineering courses, so the university results for engineering courses are going down. If we know in advance which students are likely to fail, the colleges or the teachers can take the necessary actions (like increasing tuition hours per week) to improve the results. This will finally help in improving placements; good placement is one of the key factors that helps a college attract students. Students, teachers and the institute can use the results to improve the overall learning process.

3.3 MOTIVATION:

Data mining is the process of extracting potentially useful hidden predictive information from data. Data mining has been found very useful in the business domain to increase profit, and it can likewise be used for decision making in the educational system. Decision trees are flowchart-like structures that are easy to interpret. Most decision tree algorithms are greedy and follow the divide-and-conquer technique. One major drawback of greedy search is that it usually leads to sub-optimal solutions. Hence, another approach that has been used is the induction of decision trees through genetic algorithms. Instead of a local search, GAs perform a robust global search in the space of candidate solutions. So decision trees are induced using both approaches, a genetic algorithm and C4.5 (a greedy algorithm), to predict engineering students' results.

3.4 LITERATURE SURVEY:

Z. J. Kovacic presented a case study on educational data mining in [9] to identify to what extent enrolment data can be used to predict students' success. The CHAID and CART algorithms were applied to the student enrolment data of information systems students of the Open Polytechnic of New Zealand to get two decision trees classifying successful and unsuccessful students. The accuracies obtained with CHAID and CART were 59.4% and 60.5% respectively.

M. Ramaswami and R. Bhaskaran [10] used the CHAID prediction model to analyze the interrelation between variables that are used to predict the outcome of performance at higher secondary school education in India. Features like the medium of instruction, marks obtained in secondary education, location of school, living area and type of secondary education were the strongest indicators of student performance in higher secondary education. This CHAID prediction model of student performance was constructed with seven class predictor variables with an accuracy of 44.69%.

Thai-Nghe, Drumond, Krohn-Grimberghe and Schmidt-Thieme [11] used recommender system techniques in educational data mining to predict student performance.

In India, after higher secondary education students have to take the crucial decision of which branch to choose so that there will be good chances of placement. Elayidom, Idikkula, J. Alexander and A. Ojha [12] created a decision tree which helps admission seekers to choose a branch with high industrial placement. The data was supplied by the National Technical Manpower Information System (NTMIS) via its nodal center. The data was compiled from feedback by graduates, postgraduates and diploma holders in engineering from various engineering colleges and polytechnics located within the state during the years 2000-2003. The standard database is processed to get a table in which, corresponding to each input combination, the percentage placement is computed.

Nghe, Janecek and Haddawy [13] compared the accuracy of decision tree and Bayesian network algorithms for predicting the academic performance of undergraduate and postgraduate students at two very different academic institutes: Can Tho University (CTU), a large national university in Viet Nam, and the Asian Institute of Technology (AIT), an international university in Thailand. It was found that decision trees are 3-12% more accurate than Bayesian networks.

V. P. Bresfelean, M. Bresfelean and N. Ghisoiu [14] found that students' success depends on the students' choice in continuing their education with post-university studies or another specialization, the students' admittance grade, and the fulfillment of their prior expectations regarding their present specialization.

A. Merceron and K. Yacef [15] presented how pedagogically relevant knowledge can be discovered from web-based educational systems. The authors built decision trees from the student data of the Logic-ITA web based tutoring tool used at Sydney University to generate if-then rules which predict the marks a student is likely to achieve.

Baradwaj and Pal [16] obtained university students' data such as attendance, class test, seminar and assignment marks from the students' previous database to predict their performance at the end of the semester.

A. Nandeshwar and S. Chaudhari [17] mined students' data stored in a data warehouse. The data was preprocessed, and feature selection was done with the help of a wrapper (the wrapper included the J48 tree learner and the Naïve Bayes learner as part of the attribute selection process). A ranking of attributes in order of importance was generated. The J48, Naïve Bayes and RIDOR classification techniques were used to create the model. It was found that financial aid was the most important factor that attracted students to enroll; therefore financial aid can be used as a controlling factor for increasing the quality of incoming students.

Romero et al. [18] tested genetic algorithms on a web-based hypermedia course and showed that the genetic algorithm is a good alternative for extracting a small set of comprehensible rules.

Kalles and Pierrakeas [19] analyzed students' academic performance throughout the academic year, as measured by the homework assignments, and attempted to derive short rules that explain and predict success or failure in the final exams using genetic algorithm based induction of decision trees.

Kalles and Xenos [20] used a combination of genetic algorithms and decision trees (GATREE) on students' data (at HOU) to suggest a quality control system in an educational context.

J. Bala et al. [21] used a GA to search the space of all possible subsets of a large set of features. For a given subset, a decision tree is generated using ID3. The classification performance of the decision tree on unseen data is used as the measure of fitness for the given feature set, which in turn is used by the GA to evolve better feature sets. The process is repeated until a feature subset with satisfactory classification performance is found.

B. Bhardwaj and Pal [22] used the Bayes classification technique to predict the performance of BCA students (UP, India). Most of the features selected focus on the socio-economic background of the student. It was found that factors like the student's grade in SSC, living location, medium of teaching, mother's qualification, the student's other habits, family annual income and family status were highly correlated with student academic performance.

Akinola, Akinkunmi and Alo [23] applied the ANN backpropagation algorithm to sample data of computer science students (University of Ibadan, Nigeria). Results show that candidates with a good background in physics and mathematics perform efficiently in computer programming, and that pre-higher-institution qualifications contribute immensely to the performance of students in their chosen course of studies.

Bresfelean worked on data collected through surveys of senior undergraduate students at the Faculty of Economics and Business Administration in Cluj-Napoca [24]. The decision tree algorithms ID3 and J48 in the WEKA tool were applied to predict which students are likely to continue their education with a postgraduate degree. The model was applied to the data of students from two different specializations, and accuracies of 88.68% and 71.74% were achieved with C4.5.

P. Cortez and A. Silva [25] worked on secondary students' data to predict their grades in the contact education system. Past performance as well as socio-economic information was collected, and results were obtained using different classification techniques. It was found that the tree based algorithms outperformed methods like neural networks and SVM.

S. Ghosh et al. [26] used a genetic algorithm to find all the frequent itemsets in given data sets.

R. Barros et al. [27] presented a survey of evolutionary algorithms such as genetic algorithms and genetic programming, and reviewed applications of evolutionary algorithms for decision tree induction in different domains, such as software estimation, software module protection and cardiac imaging data. Advantages and drawbacks of decision tree induction using evolutionary algorithms are also discussed, along with the choice of objective function, crossover and mutation operators, and parameter settings.


