An Evolutionary Hybrid Neural Network For Gambling


02 Nov 2017


The work completed in this project was inspired by a second-year project in artificial neural networks in which students chose their own application to model. After researching topics such as neural networks, artificial intelligence and programming in greater depth, the potential of artificial neural networks was evident. Scouring a multitude of sources produced a wide variety of applications that have previously been presented to solve various modelling and prediction problems. Particularly inspirational applications included prediction attempts for Oscar winners [1] and for NFL results [2]. This project, however, sought a more unique solution by taking an evolutionary approach to learning a complex prediction problem and combining it with a vision of applying it to a money-making application in the context of betting.

The application decided to offer the best chance of making money was football match result betting within the Barclays Premier League. A number of reasons led to this decision after careful consideration of other possible applications. Chief among them was the substantial quantity of highly detailed data available, stretching back many seasons. In addition, there are some obvious visual correlations between results and certain variables, such as opposition strength and league performance, that are certainly relevant. The Barclays Premier League was chosen as the league to focus prediction on because, as one of the leading leagues in world football, the data associated with it is accurate and plentiful. Narrowing both the data and the predictions to a single league also removed the chance of one league's patterns interfering with the prediction of another, which was crucial as every league has its own degree of competitiveness and its own inter-team relationships. All of this gave hope that, with the addition of some less obvious variables presented in the right way, it could become possible to predict the outcomes of such football matches more accurately than current methods. Current prediction methods include using betting website odds, football pundits, or a general personal analysis of the data at face value.

The reason this neural network provides a useful alternative to simply backing the favourites from betting websites is that the odds offered by those websites don't necessarily reflect a team's chances of winning, because different variables are used to generate them. One major difference is that the odds change based on the betting patterns of the bettors. Bettors generally place money on who they consider the favourite because of personal biases or support for a particular team. Odds are also affected when bettors hope for a big payout and bet on teams with attractively long odds despite knowing those teams are weaker. These and other betting patterns combine to greatly affect the odds given by betting websites, as the websites are not interested in who wins; they are constantly looking to balance the odds to make the most money. So the proposal of a neural network to eliminate human biases is a valid idea. Based on research and experimentation before the project, it was found that, due to the nature of gambling, the prediction accuracy required to consistently turn a profit on football games is lower than that needed in many other applications. This, alongside the previously mentioned reasons, was enough encouragement to take on the project with confidence of producing a successful outcome.

Previous Work Review

As briefly mentioned in the introduction, there were a few particular papers that brought forward interesting ideas and applications of artificial neural networks and artificial intelligence for prediction problems. Iain Pardoe [1] used artificial intelligence pattern recognition techniques to predict the winners of the Oscars throughout the awards' history. Great success came of this work, with successful prediction rates of 70% across a 30-year period achieved in three of the four categories and 90% in the other. Andrew D. Blaikie et al. [2] worked towards a solution for the prediction of NFL football results using a multi-layer perceptron neural network; a large part of that work was on comparing the effect of different inputs (various statistics from NFL games) on the outcome of the game, and less so on predicting winners in advance to make money. Additionally there are many systems kept private online on various websites, such as [4] and [5], that boast various success rates using what are described as neural networks or artificially intelligent systems. Dr Alan McCabe also used an MLP for the National Rugby League with a good success rate [6]. Overall these research papers and websites provided enthusiasm that prediction problems could be solved at a significant rate.

Hardware and Software Requirements

The project's hardware considerations were quite small due to the nature of the project. As such the only hardware requirement was a PC capable of running Microsoft Visual Studio 2012, the integrated development environment (IDE) chosen to code, test and debug the artificial neural network. The requirements in terms of RAM and processor speed are quite modest, so a particularly high-end machine is not required.

Social, Legal and Ethical Considerations

Regarding ethical concerns around the development of the system, this project doesn't involve work with any people outside of the project and so has no real ethical issues surrounding its development. However, due to the gambling aspect of the project, there are some legal concerns regarding the age of people who could be exposed to it. As the program can be viewed as a gambling tool, age restrictions are likely to apply to the software if it were to be commercialised. Territorial concerns also need to be considered, as some countries and territories prohibit gambling explicitly, so the promotion of gambling or the production of gambling tools could violate those territories' laws. Again, this only applies if the software were to be made publicly available.

Safety

The project had minimal safety considerations as there was very little in the way of dangerous hardware. All typical concerns regarding long working hours at a computer, such as posture, aches and headaches, were managed with regular breaks, stretches and so on. Nevertheless a risk form was completed and submitted that discussed any potential problems that were foreseen; these related to the unlikely dangers of working at and using a computer.

Data Collection

The first stage was to collect as much data as possible that could be used as potential inputs for the neural network. This data needed to be abundant to maximise the chance of finding patterns and minimise the effects of any anomalous results.

The majority of data was collected from two football statistics websites. The first website [6] detailed past results on a season-by-season basis and provided spreadsheets containing a huge range of relevant and irrelevant data, so it was necessary to process this. The spreadsheets were organised into factual data, such as the date and teams involved; statistical data regarding the events of the game; and gambling data listing the odds offered across bookmakers for each result.

Another website [7] provided similar data for the prediction, such as past scores and form, but crucially it also provided a succinct comparison of past match odds across the internet, giving only the best odds for each result as opposed to every website's odds. This was obviously crucial when trying to make money, as the smallest increase in odds could make the difference between a set of predictions being profitable or not. The margin of success (the amount of money made) was expected to be reasonably small before the project began, so factors such as this were incredibly important for maximising performance. Together these two websites formed the basis for all the data used in the neural network.

Data Processing

As mentioned previously, the original data for the training sets of the neural network needed to be processed to remove irrelevant data and to find useful variables that would have significance for the output of the neural network. The first stage was the selection and rejection of particular variables. Once the specific data deemed worth analysing had been chosen, the next step was to pre-process it so it was fit for the neural network to use effectively.

7.1. Data Rejection

In terms of the rejection of data, certain variables were excluded as unnecessary without any real testing. This arose because of the vast quantity of data provided in the spreadsheets that would have had no significant bearing on the result or would be difficult to incorporate numerically into the system. Excluded variables included the kick-off time, the presiding referee and the date of the game. These variables are at best loosely related to the outcome of a game and would serve only to slow the program and hinder the learning process; as such they were discounted and removed.

7.2. Data Selection

With regard to the data deemed at least worthy of testing, the remaining options were tested for correlation, redundancy and error using MATLAB. It was possible to compare two variables at a time and graph them against the match result, which showed whether certain variables had strong relationships with the outcomes of games.

Figure 1. Distribution of results when the percentage of points gained by the home team is plotted against the percentage of points gained by the away team.

For example, taking the points percentage of each team as the two variables, a scatter graph could be produced in which each point is coloured according to whether the match was a win, loss or draw for a particular team. Using a method such as this, it was clear to see whether coloured clusters formed, or whether planes could be drawn in the graph space that reasonably separated particular results. Figure 1 shows this example and indicates some correlation between these two variables and the result. Heavy clustering within the ovals marked on the graph shows input value regions that give strong indications of the output.

Figure 2. Another angle of the distribution of results when the percentage of points gained by the home team is plotted against the percentage of points gained by the away team.

Figure 2 shows a different view of the same data on a 2D graph, with the colour coding remaining the same, and it highlights some interesting points. The dividing line included is best examined when ignoring the green dots representing draws. The red dots, representing home wins, are heavily clustered towards one side, especially as the decimal percentage of home points gained approaches 1; a similar pattern is shown by the blue dots for away wins. The y-intercept of the added line is above zero, indicating that home advantage plays a role in the outcome, as a lower home points percentage still gives a greater chance of a win than the corresponding away points percentage. Home advantage is a fairly well-known truth throughout sport, so this was expected.

Upon the completion of many tests similar to those performed above, the six variables with the strongest correlations to the correct outputs were selected as the inputs for the system. The first two were the percentage of points gained from the total possible for the home and the away team. This was chosen ahead of simply using league position or points total, as those fail to account for considerations such as differences in games played, which can cause big discrepancies early in the season, or periods when teams are closely matched and a large difference in table position is separated by only a few points. Additionally, the form of the home and away teams, as a function of points gained in the last three games, was used, and finally the specific form in the past three games when the home team has played at home and the away team has played away. Together these six variables formed the inputs of the system.

In terms of the output, the only output required was the result of the match, numerically represented as a home win, a draw, or a home defeat (away win). These were set to values of 1, 0.5 and 0 respectively. The reason for this is explained in the following section.

Data Pre-processing

Before putting the data into the training sets the input and output data was scaled between 0 and 1 for all variables. This was achieved using a simple scaling formula as shown in (1).

This was to ensure no variable carried more weight in the network than it should. The form of the output had also been selected to lie between 0 and 1, which allowed a consistency to run through the network. Additionally, the output of the program is almost synonymous with a probability: an output of 1 indicates a certain win for the home team, and as the output tends towards 0 the prediction moves towards a home defeat. Continuing this premise, a middle value of 0.5, where the network is essentially undecided, logically predicts a draw, where neither team is determined to be stronger than, or probable to beat, the other.
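Since the scaling formula (1) is not reproduced here, the following is a minimal C++ sketch of the kind of min-max scaling described, mapping each variable onto the range 0 to 1; the function name and example values are illustrative rather than taken from the project code.

```cpp
#include <algorithm>
#include <iostream>
#include <vector>

// Scale every value of one input variable into the range [0, 1]
// using simple min-max scaling: x' = (x - min) / (max - min).
std::vector<double> scaleToUnitRange(const std::vector<double>& values)
{
    const double lo = *std::min_element(values.begin(), values.end());
    const double hi = *std::max_element(values.begin(), values.end());
    std::vector<double> scaled;
    scaled.reserve(values.size());
    for (double v : values)
        scaled.push_back((hi - lo) > 0.0 ? (v - lo) / (hi - lo) : 0.0);
    return scaled;
}

int main()
{
    // Example: points gained by a team so far in the season.
    std::vector<double> pointsGained = {0.0, 12.0, 18.0, 25.0, 30.0};
    for (double v : scaleToUnitRange(pointsGained))
        std::cout << v << '\n';
}
```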

Artificial Neural Network Implementation

Artificial neural networks are mathematical representations or models inspired by biological neural networks. Their great advantage is their ability to model complex systems where it is difficult to express the outputs as a simple function of the inputs. The pattern-finding ability of artificial neural networks allows a solution to be presented for the problem analysed here. A key part of a neural network is the way it continually learns the problem by experimenting with parameters within the network to find a good solution. There is a vast number of different neural networks, and many combinations of techniques within those networks, that could have been used. The network chosen as the foundation of this project was a multi-layer perceptron (MLP). This was tested and experimented on before being improved and expanded, firstly with a genetic algorithm, then with a particle swarm optimisation approach, and finally with a hybrid of the two techniques within the original MLP framework. This section explains how the artificial neural network was expanded as the project progressed.

8.1. Multilayer Perceptron (MLP)

This type of network consists of a series of layers of neurons connected to one another by connections that have associated weights [8]. The first layer of neurons is the input layer; in this application there were 6 inputs into the system. The middle layer is known as the hidden layer and can contain as many neurons as desired. The final layer is the output layer, which in this case consists of a single neuron. Figure 3 shows the original neural model of this project's system.

Figure 3. An overview of the multi-layer perceptron neural network used, showing the input layer, hidden layer, output layer and the connecting weights.

The output neuron has a single connection from each neuron in the hidden layer, so the output is a weighted sum over the hidden neurons. To connect the layers, connections with associated numerical weights are used. Each connection has its own individual weight that can be altered over time as the network tries to learn the best way to convert the inputs to the outputs. By changing all of the weights across the network to their optimal values, the network learns to map inputs to outputs with the best accuracy. The mathematics of finding the output is explained below.

Equation (2) shows the output of the system, where jmax is the number of hidden neurons, j is the current hidden neuron and w_jo is the corresponding weight between hidden neuron j and the output. The weighted hidden neuron values summed together form the output.

The hidden neuron values themselves are calculated in equation (3), where w_ij is the weight between input i and hidden neuron j; each hidden value is found by summing all the weighted connections going into that hidden neuron. This value is then passed through the sigmoidal activation function in (4), which keeps the output between 0 and 1.
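Equations (2)-(4) are not reproduced here, but for a standard MLP of this shape the forward pass can be sketched as below; the weight layout and function names are assumptions for illustration, not the project's exact code.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Sigmoidal activation keeping each hidden value between 0 and 1, as in (4).
double sigmoid(double x) { return 1.0 / (1.0 + std::exp(-x)); }

// Forward pass of a 6-input, single-hidden-layer, single-output MLP.
// wIH[i][j] is the weight from input i to hidden neuron j (equation (3));
// wHO[j]    is the weight from hidden neuron j to the output (equation (2)).
double forwardPass(const std::vector<double>& inputs,
                   const std::vector<std::vector<double>>& wIH,
                   const std::vector<double>& wHO,
                   std::vector<double>& hidden)
{
    const std::size_t nHidden = wHO.size();
    hidden.assign(nHidden, 0.0);
    for (std::size_t j = 0; j < nHidden; ++j) {
        double sum = 0.0;
        for (std::size_t i = 0; i < inputs.size(); ++i)
            sum += inputs[i] * wIH[i][j];
        hidden[j] = sigmoid(sum);          // equation (3), then (4)
    }
    double output = 0.0;
    for (std::size_t j = 0; j < nHidden; ++j)
        output += hidden[j] * wHO[j];      // equation (2)
    return output;
}
```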

The way in which the weights change is based on the error between the output produced for a given input and the expected output; this is the basis of back propagation. After the values of all the neurons in the system, including the outputs at the hidden level, are calculated, the error for each pattern is recorded. This is simply done in equation (5).

This process is repeated for every pattern, and then the total error of the system is found by summing the square of the error for each pattern and taking the square root, as shown in (6). This is the value the system aims to minimise.

The next step is to change the weights to a better set of values to reduce this root mean square (RMS) error. Firstly, the Hidden-Output (HO) layer is taken: the amount each weight needs to change is found using equation (7), where the learning rate (LR) is a pre-defined variable between 0 and 1. The Input-Hidden (IH) layer weight changes are then calculated in (8), where the learning rate is again a pre-defined variable between 0 and 1. The derivation of the weight-change formulae is standard for back propagation and is taken from [9].

These weight changes are then applied to the original weights for the respective connections, and the whole process is repeated until the maximum number of epochs has passed. There are some problems with this method of learning, which are discussed later in the report; these disadvantages led to the introduction of a modified learning method.
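Equations (5)-(8) are likewise not reproduced, so the sketch below shows, under the same assumed weight layout as before, how the per-pattern error, the total error and the standard back-propagation weight changes could be computed; the sign convention follows error = target - output, and the learning-rate arguments are illustrative.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Per-pattern error, equation (5): difference between target and actual output.
double patternError(double target, double output) { return target - output; }

// Total error, equation (6): square root of the summed squared pattern errors.
double totalError(const std::vector<double>& errors)
{
    double sumSq = 0.0;
    for (double e : errors) sumSq += e * e;
    return std::sqrt(sumSq);
}

// Standard back-propagation update for one pattern (equations (7) and (8)),
// assuming a linear output neuron and sigmoidal hidden neurons.
void updateWeights(const std::vector<double>& inputs,
                   const std::vector<double>& hidden,
                   double error,                      // target - output
                   double lrHO, double lrIH,          // learning rates in (0, 1)
                   std::vector<std::vector<double>>& wIH,
                   std::vector<double>& wHO)
{
    // Hidden-to-output changes, equation (7).
    std::vector<double> oldHO = wHO;
    for (std::size_t j = 0; j < wHO.size(); ++j)
        wHO[j] += lrHO * error * hidden[j];

    // Input-to-hidden changes, equation (8), using the sigmoid derivative h(1-h).
    for (std::size_t j = 0; j < oldHO.size(); ++j) {
        double delta = error * oldHO[j] * hidden[j] * (1.0 - hidden[j]);
        for (std::size_t i = 0; i < inputs.size(); ++i)
            wIH[i][j] += lrIH * delta * inputs[i];
    }
}
```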

8.2. Genetic Algorithm (GA)

Genetic algorithms are search heuristics that try to mimic biological evolution. As with neural networks, there is a wide range of different genetic algorithms within the family of evolutionary algorithms that try to find solutions to optimisation problems such as the one presented here. Genetic algorithms use a population of potential solutions to try to solve the problem optimally. Each member of this population contains different genes, which in this case represent the values of the weights within the artificial network.

The genetic algorithm worked by starting with a fixed number of population members whose genes contained the values of all the weights within the system. Every generation, the members were ranked in order of effectiveness at minimising the fitness function, this being the root mean square error (6). The best member (solution) was kept in the population using ideas from elitism [10], whereas the others were altered by mutations within the gene, that is, by changing the weights. The amount changed depended on a roulette wheel selection, in which the higher-ranking genes had a smaller chance of any mutation and the weaker genes had a much greater chance. This meant that the better genes were assigned a low probability of random fluctuation for each of their genotypes, whereas the least fit genes had a much higher probability of each of their genotypes being altered. This helped the neural network learn the problem more accurately and quickly, as well as exploring more of the error space, reducing the chance of becoming stuck in a local minimum and increasing the chance of finding the global minimum, though not guaranteeing it.

Figure 4. A simple overview of the gene in its constituent parts.

Figure 4 shows that all the weights are contained within this gene as genomes. There can be as many genes in the system as needed; more genes obviously correspond to a greater chance of finding a good solution, but slow down the learning process considerably in terms of computing time. If a suitable solution is found in a very early epoch, the learning loop could be ended and the prediction stage begun, but there is no guarantee of a solution being found early. Nevertheless this was a good improvement over the existing MLP in terms of learning accurately.
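A minimal sketch of the kind of elitist, rank-based mutation described above, assuming each population member stores its weights in a flat gene vector; the mutation size and the rank-to-probability mapping are illustrative stand-ins for the project's roulette wheel scheme.

```cpp
#include <algorithm>
#include <cstddef>
#include <random>
#include <vector>

struct Member {
    std::vector<double> genes;   // all network weights flattened into one gene
    double fitness = 0.0;        // RMS error for this set of weights (lower is better)
};

// One generation: keep the best member unchanged (elitism) and mutate the rest,
// with weaker members given a higher probability of each genome being perturbed.
void evolve(std::vector<Member>& population, std::mt19937& rng)
{
    std::sort(population.begin(), population.end(),
              [](const Member& a, const Member& b) { return a.fitness < b.fitness; });

    std::uniform_real_distribution<double> coin(0.0, 1.0);
    std::normal_distribution<double> perturb(0.0, 0.1);   // illustrative mutation size

    for (std::size_t rank = 1; rank < population.size(); ++rank) {
        // Mutation probability grows with rank: fit members change little,
        // unfit members are shaken up far more.
        double pMutate = static_cast<double>(rank) / (population.size() - 1);
        for (double& g : population[rank].genes)
            if (coin(rng) < pMutate)
                g += perturb(rng);
    }
    // The fitness of every member is then re-evaluated against the training patterns.
}
```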

8.3. Particle Swarm Optimisation (PSO)

The particle swarm approach takes similar ideas to those of the genetic algorithm approach but with a few distinct differences. PSO techniques use a large swarm of particles, or solutions, that occupy various points in the error space and are again randomly initialised. These particles have their own associated weights that correspond to their positions in the error space. The particles seek minima within the error landscape by being rewarded for travelling down negative gradients within the space, like back propagation's method of descending the gradient. If a particle is successful in travelling in a correct direction, an increase in the particle's velocity component is applied, which is similar to the learning rate in the MLP [11]. This velocity component is responsible for the direction in which the particle travels and is high when the gradient is high, but slows down when the gradient shallows out.

However, when learning slows down, i.e. the particles converge to a minimum, unlike in the MLP or GA some particles are randomly given a kick out of the minimum they occupy by applying a large random velocity component to them, even if they happen to be the best. This is done to escape the possibility of the swarm settling in a local minimum before finding the global one. The kicked particles then begin searching again in the hope of finding better minima. This is achieved with a random velocity component that is usually 0 but is activated when the normal velocity component gets low. After a set number of iterations, or when the mean squared error reaches an acceptable value, the learning can stop, as in the other techniques mentioned.
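The variant described here rewards particles for descending the error gradient and kicks stalled particles with a large random velocity, rather than the textbook personal-best/global-best attraction, so the sketch below follows that description; the velocity scaling, stall threshold and parameter names are assumptions.

```cpp
#include <cstddef>
#include <random>
#include <vector>

struct Particle {
    std::vector<double> position;   // the network weights this particle represents
    std::vector<double> velocity;   // per-weight velocity components
    double error = 0.0;             // RMS error at this position
};

// One illustrative update: velocity shrinks as the error landscape flattens out,
// and a particle whose velocity has collapsed is given a large random kick so the
// swarm can escape a local minimum.
void step(Particle& p, const std::vector<double>& gradient,
          double velocityScale, double kickThreshold, std::mt19937& rng)
{
    std::normal_distribution<double> kick(0.0, 1.0);
    double speed = 0.0;
    for (std::size_t i = 0; i < p.position.size(); ++i) {
        p.velocity[i] = velocityScale * p.velocity[i] - gradient[i];  // follow the downhill direction
        speed += p.velocity[i] * p.velocity[i];
    }
    if (speed < kickThreshold)                 // learning has stalled: apply a large
        for (double& v : p.velocity)           // random velocity component to restart
            v += kick(rng);                    // the search elsewhere in the error space
    for (std::size_t i = 0; i < p.position.size(); ++i)
        p.position[i] += p.velocity[i];
}
```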

Fusion of Techniques

After incorporating both of these systems, a final idea of fusing aspects of the two techniques into one was used to provide a unique solution to the problem. This was done by taking the genetic approach of mutating the weights, or genomes, of a solution and keeping the best gene; the others were subject to change through back propagation. The network kept the random velocity components of the PSO to avoid local-minimum traps and used controlled mutation with the velocity components acting as learning rates, weighted according to the amount of variation required. The amount of variation required relates to what was mentioned in the particle swarm optimisation section, where a high gradient meant high velocity and a shallow gradient meant reduced velocity. The following section is a run-through of the program to help further explain this fusion solution.

Program Structure

The program was coded in C++ and compiled in the Microsoft Visual Studio 2012 environment. It goes through a variety of stages, initially setting up the foundation of the network, then learning the problem before going on to make future predictions. Additionally it can display the profit/loss based on predictions made beforehand, depending on whether you are predicting or checking previous predictions.

9.1. Pre-Program Tasks

Before the program is run, a few steps are to be undertaken if the run is for the prediction of future games, which can be done as soon as the previous set of league results is known. A different set-up is required if you are calculating the profit or loss of a previous prediction. Both set-ups are explained below.

Prediction Set-Up:

Find the set of fixtures to be predicted

Work out the inputs based on the available data

Save the file as a text document within the program directory

Edit the program code to include each of the files made above

Find the best odds for each result, for betting once the program has run, using [7]

Profit/Loss Checking:

Find the best odds for each result

Save a file of the odds for each result in the program directory

9.2. Initialisation

The program starts by initialising all the starting weights for each particle in the network using a simple random number generator function. Settings such as the number of particles, the number of hidden neurons, the learning rates and the number of fixtures to be predicted are already determined at the initialisation stage.
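A minimal sketch of the kind of random initialisation described, using the standard <random> facilities; the dimensions and the weight range are illustrative assumptions rather than the project's actual settings.

```cpp
#include <cstddef>
#include <random>
#include <vector>

// Initialise every weight of every particle to a small random value.
// The [-0.5, 0.5] range here is an illustrative assumption.
std::vector<std::vector<double>> initialiseParticles(std::size_t numParticles,
                                                     std::size_t weightsPerParticle)
{
    std::mt19937 rng(std::random_device{}());
    std::uniform_real_distribution<double> dist(-0.5, 0.5);
    std::vector<std::vector<double>> particles(numParticles,
                                               std::vector<double>(weightsPerParticle));
    for (auto& particle : particles)
        for (double& w : particle)
            w = dist(rng);
    return particles;
}
```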

9.3. Learning Stage

The learning stage was the critical stage in the development of the neural network. Each particle had all its associated weights for every connection stored in an array. The program then calculated the root mean square error for each particle and stored these values. This selection of particles was scanned for the one providing the best solution, and that particle was marked for no mutation.

The other particles were subjected to mutation according to how close they were to the optimum solution, using the normal back propagation technique to find the error. The learning rates used to determine the size of the weight changes were not fixed as in the MLP: if the error was high, the learning-rate velocity component was high and the variation of the weights was large, and if the error was low the variation was low. If the variation across all particles became too low, random weight changes (caused by the addition of sudden large velocity components) were introduced to search other minima in case the current one was only local. This cycle of finding the best particle and then mutating the others continued until the epoch limit was reached.
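Pulling the pieces together, the hybrid learning loop described above could be sketched as follows; evaluate() and gradientStep() stand in for the error and back-propagation routines assumed to exist in the surrounding program, and the error-scaled step size and stall-detection rule are illustrative assumptions, not the project's exact scheme.

```cpp
#include <algorithm>
#include <cstddef>
#include <functional>
#include <random>
#include <vector>

struct Particle {
    std::vector<double> weights;
    std::vector<double> velocity;
    double error = 0.0;
};

// One hybrid training run: every epoch the best particle is kept untouched
// (elitism), the rest are moved by an error-scaled step in the back-propagation
// direction, and random kicks are applied if the swarm's movement collapses.
void train(std::vector<Particle>& swarm, int maxEpochs, double kickThreshold,
           const std::function<double(const std::vector<double>&)>& evaluate,
           const std::function<std::vector<double>(const std::vector<double>&)>& gradientStep,
           std::mt19937& rng)
{
    if (swarm.empty()) return;
    std::normal_distribution<double> kick(0.0, 1.0);
    for (int epoch = 0; epoch < maxEpochs; ++epoch) {
        for (auto& p : swarm) p.error = evaluate(p.weights);
        auto best = std::min_element(swarm.begin(), swarm.end(),
            [](const Particle& a, const Particle& b) { return a.error < b.error; });

        double spread = 0.0;
        for (auto& p : swarm) {
            if (&p == &*best) continue;                  // elitism: best particle unchanged
            std::vector<double> delta = gradientStep(p.weights);
            for (std::size_t i = 0; i < p.weights.size(); ++i) {
                p.velocity[i] = p.error * delta[i];      // error-scaled "learning rate"
                p.weights[i] += p.velocity[i];
                spread += p.velocity[i] * p.velocity[i];
            }
        }
        if (spread < kickThreshold)                      // variation too low: random kicks
            for (auto& p : swarm)                        // to escape a possible local minimum
                if (&p != &*best)
                    for (double& w : p.weights) w += kick(rng);
    }
}
```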

Post Processing

After the neural network had finished learning, the matches to be predicted were imported. The predicted results were calculated using the network's final settings from the learning process. When the predicted results were displayed there were some obvious discrepancies: the neural network gave an output as a value anywhere within, and sometimes just outside, the range 0 to 1, as opposed to the 0, 0.5 or 1 presented to it in the real training outputs. In effect the network gave its best estimation of the correct answer as a sort of probability. This meant decisions needed to be made as to where to cut off each distinct result, i.e. the value at which a predicted draw becomes a predicted win. After a considerable amount of testing and analysis, the best results were seen at the intervals shown in (9):

(9)

The program can be run multiple times to get a series of predictions and then be averaged to get a more stable prediction if required.
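The exact cut-off intervals in (9) are not reproduced above, so the boundaries in the sketch below are purely hypothetical; it only illustrates how a continuous output near 0, 0.5 or 1 could be mapped back onto a discrete result.

```cpp
#include <iostream>
#include <string>

// Map the network's continuous output onto a discrete prediction.
// The 0.35 / 0.65 boundaries are hypothetical placeholders for the
// intervals given in (9).
std::string classify(double output)
{
    if (output >= 0.65) return "Home win";
    if (output <= 0.35) return "Away win";
    return "Draw";
}

int main()
{
    for (double out : {0.92, 0.48, 0.11})
        std::cout << out << " -> " << classify(out) << '\n';
}
```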

Profit/Loss Calculation

To calculate the profit and loss of previous predictions, after the predictions have been made the network follows a series of steps. Firstly the best odds for each result are read into the program from a saved text document and stored in an array. Then a simple calculation is performed based on the desired bet amount multiplied by the odds of the predicted result. A running total is maintained as the program works through all the results; for predictions that were incorrect, a simple negation of the betting stake is applied to the running profit/loss total. The total profit/loss is then displayed at the end of the program along with each individual outcome.
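A minimal sketch of this bookkeeping, reading the best odds from a text file; the file name, the hard-coded outcome flags, the £5.00 stake and the assumption of decimal odds (so a winning bet returns stake × odds, with the stake deducted) are all illustrative rather than taken from the project code.

```cpp
#include <fstream>
#include <iostream>
#include <vector>

int main()
{
    const double stake = 5.00;                 // bet placed on every predicted result

    // Best available decimal odds for each predicted result, one value per line
    // in a text file saved in the program directory (illustrative file name).
    std::ifstream oddsFile("best_odds.txt");
    std::vector<double> odds;
    for (double o; oddsFile >> o; ) odds.push_back(o);

    // Whether each corresponding prediction turned out to be correct (example data).
    std::vector<bool> correct = {true, false, true, true, false};

    double runningTotal = 0.0;
    for (std::size_t i = 0; i < odds.size() && i < correct.size(); ++i) {
        // Winning bet: stake * odds minus the stake; losing bet: the stake is negated.
        double outcome = correct[i] ? stake * odds[i] - stake : -stake;
        runningTotal += outcome;
        std::cout << "Game " << i + 1 << ": " << outcome << '\n';
    }
    std::cout << "Total profit/loss: " << runningTotal << '\n';
}
```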

Results

The results of the project have been displayed in various formats showing particular outcomes. Figure 5 shows how the network developed as it was improved over the course of the project, comparing learning rates and prediction rates. It can be seen that as the network progresses to the final hybrid it continually improves in both prediction and learning.

Figure 5. A comparison of the success rates of learning and prediction for various enhancements to the neural network

The results in Figure 6 show the average profit per game, and demonstrate that relatively small improvements in prediction accuracy can produce marked improvements in the profit made per game.

The results in Figure 7 highlight the profit across eight weeks of prediction, with 10 games a week at £5.00 per bet, i.e. a total stake of £50.00 per week. The results are very positive and consistently produce a profit across the weeks, except for one week when a minor loss of -£0.75 was recorded. A total of £292.90 was recorded as profit across these eight weeks, giving an average profit of £36.61 per week, or about £3.66 profit per game on average. This equates to roughly a 73% return on top of the stake.

Figure 6. Average profit seen across tested neural networks

Figure 7. Profit/Loss of Neural Network across 8 weeks

10.1. Erroneous Result Classification

It was never expected that the prediction accuracy of the network would reach 100%, due to the unpredictable elements of sport that can occur regardless of prior knowledge about the teams and the game. As such it was important to classify errors into particular groups, to separate cases where the program would be expected to get the correct result from cases where a big upset occurred that the program could not be expected to pick up on. The main classification groups were:

Misclassification: the neural network was simply mistaken because the output was on the boundary of two particular results, so the game was 'too close to call' and the network couldn't make a strong enough decision.

Upset: the winning team produced a shock result that could not have been expected to be predicted. This could have occurred due to red cards, injuries or poor refereeing decisions that the neural network could not foresee.

Performance

The performance of the network can be measured with a few different metrics. The key measures are the ability to learn the problem based on the training data, and the accuracy of the predictions when compared to other solutions such as betting sites and football pundits.

11.1. Learning Performance

The neural network was experimented with using a variety of learning rates, numbers of hidden neurons and other program-specific variables during the final hybrid solution. Under optimum settings the hybrid program went on to learn consistently at around an 80%+ average, which provided a solid base for prediction. The earlier networks had slightly lower learning rates, as can be seen in Figure 5, showing the hybrid model to be a good performer.

11.2. Prediction Performance

The performance of the prediction was successful, though it did not always meet the learning percentage in terms of accuracy. Partly this was due to the limited number of predictions made per week (usually 10), meaning a few incorrect results drop the percentage dramatically. The prediction rate, however, was not directly synonymous with the amount of profit made, as the individual odds of each match play a huge role in profitability.

A more effective way to measure the performance of the prediction was therefore to compare it with existing methods such as football pundits and betting favourites on various websites, though the most meaningful test is whether the program's predictions would result in a profit, which was the ultimate aim of the project.

With regard to leading pundits from Sky and the BBC Sport sections, the program consistently outperformed them in the number of correct results per week. Against the betting websites it was more competitive, with the sites occasionally matching, and rarely bettering, the number of correct results; however, on the key issue of making money, the program successfully met its brief and consistently outperformed them. The odds of each game were the key difference, especially as the best odds were consistently picked from across all websites, giving a little extra edge.

Discussion

The relative merits of each system are worth discussing and comparing, despite the clear trend in the results showing the hybrid to be the superior system.

12.1. Multi-layer Perceptron (MLP)

The MLP with no enhancement still managed to learn the problem at a rate that was profitable, albeit at a low level of profit. Across longer testing with more weeks of results this may well have declined, as the weeks were unpredictable in terms of always making a profit. Learning was reasonably fast to its optimum level in terms of epochs and computing time; however, as mentioned, this was at the expense of learning accuracy.

12.2. Genetic Algorithm

The addition of a genetic algorithm helped the speed of learning the problem in terms of epochs taken, though it did increase computer memory usage and therefore time. However, the accuracy over the MLP was markedly improved, and the system was again profitable.

12.3. Particle Swarm Optimisation

Particle swarm optimisation, like the GA, improved on learning accuracy, time taken in epochs and the profitability of the system. However, the PSO was more memory-intensive, especially when a larger number of particles was used. In comparison to the GA there was not a marked difference in profitability.

12.4. Hybrid

The hybrid network was definitely a success in terms of nearly all of the metrics it was measured against. Though the actual time and memory taken by the program increased, the hybrid combination of the genetic algorithm and particle swarm optimisation produced very high learning rates for a complex problem such as this, regularly reaching around 80%. The profitability of the network also resulted in a profit in 7 out of 8 weeks, with the one losing week coming in at a small -£0.75.

Conclusion

Overall the program learns the problem presented to a good standard, as shown by the solid 80% learning success rate. An additional reason to feel confident is the succession of profits the program would have made had real money been staked on the predicted outcomes. It is clear that, although it is impossible to obtain a perfect model, it is possible to push towards a percentage of certainty that is high enough to provide a way to make money from football betting. The PSO and GA hybrid learns the problem quickly and accurately enough that the program has been used in real life to make money and will continue to be used and developed.

Future Work

There are many exciting possibilities for developing the project further in the future, and a few key areas stand out as particular places to focus. In terms of the program's structure and performance, there is much more scope for experimentation by incorporating other genetic algorithms or similar learning techniques. Continual additions to the database of past results to work from could also enhance learning and prediction. The parameters of the program itself, although they have been tested, could still be tested further due to the huge number of parameter combinations available. Different target functions, as opposed to mean squared error, could be explored as well.

In terms of the visuals and ease of use of the program, a lot of work could be done to make it more engaging by developing an attractive GUI. Additionally, being able to write predictions and the profit/loss to a spreadsheet would improve the overall system. As the focus of the project was to make a working neural network that successfully predicted results, this part of the project took a back seat, but it would have been an extra had there been more time available. Overall, looking towards the future, the project shows great promise and will definitely continue to be worked on.

Acknowledgements

I would like to acknowledge Richard Mitchell and Paul Minchinton for their supervision, encouragement and ideas throughout the project. Additionally I would like to thank William Harwin for his recommendation to look at the Oscar prediction work carried out in [1].


