What Is Machine Learning

02 Nov 2017

Standard machine learning algorithms are non-interactive: they input training data and output a model. Usually, their behavior is controlled by parameters that let the user tweak the algorithm to match the properties of the domain, such as the amount of noise in the data. In practice, even users familiar with how an algorithm works must resort to trial and error to find optimal parameter settings. Most users have limited understanding of the underlying techniques, which makes it even harder for them to apply learning schemes effectively.

Machine learning is the science of getting computers to act without being explicitly programmed. In the past decade, machine learning has given us self-driving cars, practical speech recognition, recommendation systems on websites, effective web search, and a vastly improved understanding of the human genome.

What is a neural network?

The workflow for the neural network design process has seven primary steps:

1. Collect data

2. Create the network

3. Configure the network

4. Initialize the weights and biases

5. Train the network

6. Validate the network

7. Use the network

This topic discusses the basic ideas behind steps 2, 3, 5, and 7. The details of these steps come in later topics, as do discussions of steps 4 and 6, since the fine points are specific to the type of network that you are using.

The Neural Network Toolbox software uses the network object to store all of the information that defines a neural network. This topic describes the basic components of a neural network and shows how they are created and stored in the network object.

After a neural network has been created, it needs to be configured and then trained. Configuration involves arranging the network so that it is compatible with the problem you want to solve, as defined by sample data. After the network has been configured, the adjustable network parameters (called weights and biases) need to be tuned so that the network performance is optimized. This tuning process is referred to as training the network.

Configuration and training require that the network be provided with example data. This topic shows how to format the data for presentation to the network. It also explains network configuration and the two forms of network training: incremental training and batch training.
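The Neural Network Toolbox itself is MATLAB software, but the difference between the two training modes can be illustrated with a small, language-agnostic sketch. The Python example below is an illustration under simplifying assumptions (a single linear neuron, squared-error loss), not the toolbox API: incremental training updates the weights and bias after every sample, while batch training accumulates the gradient over the whole data set and applies one update per pass.

```python
# Minimal sketch: incremental vs. batch training of one linear neuron.
# (Illustrative only; not the Neural Network Toolbox API.)

def predict(weights, bias, x):
    return sum(w * xi for w, xi in zip(weights, x)) + bias

def train_incremental(data, targets, lr=0.01, epochs=500):
    weights, bias = [0.0] * len(data[0]), 0.0        # initialize (step 4)
    for _ in range(epochs):                          # train (step 5)
        for x, t in zip(data, targets):
            err = t - predict(weights, bias, x)
            weights = [w + lr * err * xi for w, xi in zip(weights, x)]
            bias += lr * err                         # update after each sample
    return weights, bias

def train_batch(data, targets, lr=0.01, epochs=500):
    weights, bias = [0.0] * len(data[0]), 0.0
    for _ in range(epochs):
        grad_w, grad_b = [0.0] * len(weights), 0.0
        for x, t in zip(data, targets):
            err = t - predict(weights, bias, x)
            grad_w = [g + err * xi for g, xi in zip(grad_w, x)]
            grad_b += err
        weights = [w + lr * g for w, g in zip(weights, grad_w)]
        bias += lr * grad_b                          # one update per full pass
    return weights, bias

# Toy data: learn y = 2*x + 1 from four samples.
X, y = [[0.0], [1.0], [2.0], [3.0]], [1.0, 3.0, 5.0, 7.0]
print(train_incremental(X, y))   # weights near [2.0], bias near 1.0
print(train_batch(X, y))
```

Both modes fit the same parameters here; in practice incremental training suits streaming data, while batch training gives a more stable gradient estimate.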

An artificial intelligence technique that mimics the operation of the human brain (its nerves and neurons), consisting of densely interconnected computer processors working simultaneously (in parallel). A key feature of neural networks is that they are programmed to 'learn' by sifting data repeatedly, looking for relationships to build mathematical models, and automatically correcting these models to refine them continuously. Also referred to as a neural net.

A modeling technique based on the observed behavior of biological neurons and used to mimic the performance of a system. It consists of a set of elements that start out connected in a random pattern and, based upon operational feedback, are molded into the pattern required to generate the required results. It is used in applications such as robotics, diagnosis, forecasting, image processing, and pattern recognition.

The most intelligent device on earth, the human brain, is the driving force that has made us an ever-progressing species, diving deeper into technology and development as each day passes.

What is artificial intelligence?

It is an area of computer science that emphasizes the creation of intelligent machines that work and react like humans. Some of the activities computers with artificial intelligence are designed for include speech recognition, learning, planning, and problem solving.

It is a subfield of computer science concerned with the concepts and methods of symbolic inference by computer and symbolic knowledge representation for use in making inferences. AI can be seen as an attempt to model aspects of human thought on computers. It is also sometimes defined as trying to solve by computer any problem that a human can solve faster.

In simple terms, artificial intelligence is the branch of core computer science concerned with making computers behave like humans.

Motivation towards recommendation systems

The recommendation system is a hot topic of this generation for web marketing and online commerce. Online shopping is growing at a rapid pace, and most e-commerce websites are focusing on various recommendation systems to improve their sales. With the rapid growth of e-commerce, data mining has become a field of research within recommendation systems.

A recommendation system generally works on the user's past behavior patterns to bring better results while the user is on the website, and collaborative filtering techniques raise the quality of suggestions to the next level. The growth of e-commerce and the online industry is now focused more on recommendation systems to engage more users and interlink users on websites. Users can now easily connect with the right items and people with the help of these recommendation systems. In the last decade, various solutions have been successfully implemented in commercial environments. The key ideas and concepts used in recommendation systems come from machine learning, neural networks, and artificial intelligence.

Machine learning, as a branch of artificial intelligence, contributes concepts and algorithms that are widely used in recommendation systems to make better use of empirical data and sensor data. From neural networks come software-agent concepts, statistical estimation, and classification optimization. Adopting a recommendation system on a website surfaces more relevant user ratings and preferences, which helps engage users on the website and increases the revenue of the business. Different approaches are adopted to build recommendation systems, mainly content-based approaches and collaborative filtering approaches. New and improved techniques have made websites more advanced, more productive, and more profitable for both web users and business owners. Recommendation systems have changed the traditional view of finding relevant information on a website, and the use of traditional artificial intelligence ideas and algorithms has made them more efficient at bringing better results to users.

Examples of recommendation systems

Facebook.com – World's biggest social network

It was launched in 2004 and has 900 million daily active users. The website's aim is to connect friends, colleagues, school and college groups, business groups, events, pages, organizations, companies, and web groups.

Example - "Find More Friends / Suggest Friends / Suggest Members / Add More Friends"

This is an effective example for evaluating recommendation systems. All of the above options in Facebook bring friends and mutual friends together under one root map. With their help, a user can connect with more related friends from school and college, and with more mutual friends.

Amazon.com – World's biggest e-commerce website.

The website offers almost 2 lakh (200,000) products. The recommendation system plays a key role in growing sales and also helps the user make an appropriate selection.

The web store brings the most relevant products in front of the user under the selected category and makes it easy for the user to move to the appropriate item on the website.

Linkedin.com – World's biggest professional network.

The LinkedIn website believes in linking more professional people within one's own network. Data mining techniques and core artificial intelligence ideas are used effectively on this website to connect users with the right professional people.

Youtube.com – World's biggest video-sharing website.

YouTube.com started in 2005 and is now one of the top video-watching websites. When a user starts accessing videos one after another, YouTube's recommendation system becomes active: with each click it brings more relevant videos in front of the user, so the user can move on to more related videos, engage more with the website, watch more videos, and spend more time on the site.

Flipkart.com – India's biggest e-commerce website.

The website started in 2007 and adopted recommendation systems that already existed on e-commerce websites; it is a good example of categorized recommendation. Users can easily browse the most frequently viewed products and reach related products they are likely to prefer, based on their behavior on the website. Flipkart designed an algorithm that models user behavior on the website so that users are shown more niche products.

Content Optimization –

When a user visits the website and starts looking at some articles and images, content optimization lets the user view, on the same page, the images and new links to text articles most related to the selected topic. The idea behind this concept is taken from machine learning: we need an algorithm that learns from the information collected about the user and serves further information to the user in a more efficient way. A sketch of this idea follows.
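As a toy illustration of this idea, the following sketch (with invented article names and topics) logs the topics a user has viewed and ranks the unseen articles by how often the user viewed each article's topic:

```python
# Hypothetical sketch of content optimization: learn from viewed topics,
# then serve related unseen articles. Catalog and names are illustrative.
from collections import Counter

articles = {                       # article id -> topic
    "a1": "sports", "a2": "sports", "a3": "movies",
    "a4": "health", "a5": "movies", "a6": "sports",
}

def recommend(viewed_ids, k=3):
    topic_views = Counter(articles[a] for a in viewed_ids)  # learn from views
    unseen = [a for a in articles if a not in viewed_ids]
    # serve articles whose topic the user has viewed most often
    return sorted(unseen, key=lambda a: -topic_views[articles[a]])[:k]

print(recommend(["a1", "a6", "a3"]))   # sports viewed twice, so "a2" ranks first
```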

Different scenarios

The web has become the central distribution channel for content from traditional sources such as news outlets, as well as for rapidly growing user-generated content. Developing effective algorithmic approaches to delivering such content when users visit web portals is a fundamental problem that has not received much attention. Search engines use automated ranking algorithms to return the most relevant links in response to a user's keyword query; likewise, online ads are targeted using automated algorithms. In contrast, portals that cater to users browsing a website are usually programmed manually. This is because content is harder to assess for relevance, topicality, freshness, and personal preference; there is a wide range in quality; and there are no reliable quality or trust metrics, such as PageRank or hub/authority weights for URLs.

Manual programming of content ensures high quality and maintains the editorial "voice" that users associate with the site. On the other hand, it is expensive to scale as the number of articles and the number of site pages we wish to program grow. A data-driven machine learning approach can help with the scale issue, and we seek to combine the strengths of the editorial and algorithmic approaches by algorithmically optimizing content programming within high-level constraints set by editors. The system we describe is currently deployed on a major web portal and serves several hundred million user visits per day.

The usual approach of ranking articles shown to users relies on feature-based models. This does not work well in our scenario because of the dynamic nature of our application. In fact, our content pool is small but changes rapidly, article lifetimes are short, and there is wide variability in the performance of articles sharing a common set of features. This is in sharp contrast to the approach followed, for example, in web search and online advertising, where the content pool being matched to queries is large and real-time tracking is difficult. Such offline analysis might provide a decent initialization in our scenario, but online tracking thereafter is crucial for good performance. This online aspect opens up new modeling challenges in addition to classical feature-based prediction, as discussed in this paper.

At the basic level, the goal of a recommendation system is very simple. The more critical challenges arise when looking for better recommendations in the future. It remains a challenge in machine learning to construct a sequential design of user preferences that maximizes some utility function. For example, the US-based company Yahoo serves its web users articles according to their likes and dislikes; the priority of those articles changes automatically based on the user's past behavior and the related subjects they have viewed, while at the same time trying to increase click rates on the website.

Our approach relies on tracking per-article performance in near real time through online models. We describe the characteristics and constraints of our application setting, discuss our design choices, and show the importance and effectiveness of coupling online models with a simple randomization procedure. We discuss the challenges encountered in a production online content-publishing environment and highlight issues that deserve careful attention. Our analysis of this application also suggests a number of future research avenues.

Each of the models has its own strengths and weaknesses. Challenges and algorithmic concepts from machine learning, neural networks, and artificial intelligence help improve the overall recommendation system at different levels, and in the output we move toward excellent results.

Machine learning is used in collecting both explicit data and implicit data; a small event-logging sketch follows the two lists below.

Examples of explicit data collection using machine learning:

Asking the user to rate an item.

Asking the user to re-rank items.

Asking the user yes/no questions.

Asking the user to mark items as liked or disliked.

Examples of implicit data collection using machine learning:

Collecting information about items the user has viewed online.

Keeping track and record of the user's past purchases.

Re-serving relevant information in front of the user.

Analyzing the user's likes and dislikes.
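A minimal sketch of how both the explicit and implicit signals listed above could be captured as events is shown below; the function and field names are invented for illustration, and a real system would write to a database or log pipeline rather than an in-memory list.

```python
# Hypothetical event capture for explicit and implicit feedback.
from datetime import datetime, timezone

events = []   # stand-in for a database or logging pipeline

def log_event(user_id, item_id, kind, value=None):
    """kind: 'rating'/'like' (explicit) or 'view'/'purchase' (implicit)."""
    events.append({
        "user": user_id, "item": item_id, "kind": kind,
        "value": value, "at": datetime.now(timezone.utc),
    })

log_event("u1", "item42", "rating", value=4)    # explicit: rate an item
log_event("u1", "item42", "like", value=True)   # explicit: like / dislike
log_event("u1", "item99", "view")               # implicit: viewed item
log_event("u1", "item99", "purchase")           # implicit: past purchase

likes = [e for e in events if e["kind"] == "like" and e["value"]]
print(len(events), "events logged;", len(likes), "like(s)")
```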

An advanced recommendation system collects and compares users' similar and dissimilar data, and the collaborative filtering approach improves the strength of the recommendations and helps discover appropriate information. A serving scheme is an automatic or manual algorithm that decides which article to show at different positions of our module for a given user. Before our system, articles were chosen by human editors; we refer to this as the editorial serving scheme. A random sample of the user population is referred to as a bucket.

We now discuss the issues that make it difficult to build predictive models in this setting. We tried the usual approach of building offline models based on retrospective data collected while using the editorial serving scheme. User features included age, gender, geo-location, and inferred interests based on user visit patterns. For articles, we used features based on the URL, the article category (for example, Sports, Movies, Health, Entertainment), and title keywords. However, this approach performed poorly. The reasons include the dynamic nature of our setting and the fact that retrospective data collected from non-randomized serving schemes are confounded with factors that are hard to control for. Also, our initial studies revealed high variability in performance among articles sharing common features (for example, Sports articles, Entertainment articles). We achieved far better performance by seeking fast convergence, using online models, to the best article for a given user or user segment; a lost opportunity or failure to detect the best article quickly can be costly, and the cost increases with the margin, or difference, between the best and selected articles. We now discuss some of the challenges we had to handle.

As we discussed earlier regarding the sorting of articles on the website, the focus is on better sorting algorithms. The algorithm begins with a source entity S, the item to which similarity is sought (such as the initial entry point provided by the user), and a retrieval strategy R, which is an ordered list of similarity metrics M1..Mm. The task is to return a fixed-size ranked list of target entities of length n, ordered by their similarity to S. Our first task is to obtain an unranked set of candidates C1..j from the product database. This retrieval process is described in the next section.

Similarity assessment is a bucket sort, employing a list of buckets, where every bucket contains a group of target entities. The bucket list is initialized so that the first bucket contains all of the candidates C1..j. A sort is performed by applying the most important metric M1, corresponding to the most important goal in the retrieval strategy. The result is a new set of buckets B1..k, each containing items that were given the same integer score by M1.

Starting from B1, we count the contents of the buckets until we reach n, the number of items we will ultimately return, and discard all remaining buckets; their contents can never make it into the result list. This process is then repeated with the remaining metrics until there are n singleton buckets remaining (at which point any further sorting would have no effect) or until all metrics have been used.
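Under the assumption that each metric returns an integer score (higher meaning more similar to the source entity), the bucket-based ranking described above might be sketched as follows; the candidate items and metrics in the example are illustrative.

```python
# Sketch of ranked retrieval by repeated bucket refinement.

def rank(candidates, metrics, source, n):
    buckets = [list(candidates)]              # first bucket holds all candidates
    for metric in metrics:                    # most important metric first
        refined = []
        for bucket in buckets:
            if len(bucket) <= 1:              # singleton: nothing left to split
                refined.append(bucket)
                continue
            by_score = {}
            for item in bucket:               # group items with equal scores
                by_score.setdefault(metric(source, item), []).append(item)
            refined.extend(by_score[s] for s in sorted(by_score, reverse=True))
        kept, count = [], 0                   # keep only buckets needed for top n
        for b in refined:
            if count >= n:
                break                         # later buckets can never rank
            kept.append(b)
            count += len(b)
        buckets = kept
        if all(len(b) == 1 for b in buckets):
            break                             # fully ordered; stop early
    return [item for b in buckets for item in b][:n]

# Toy example: rank numbers by closeness to 10, ties broken by magnitude.
closeness = lambda s, x: -abs(s - x)
magnitude = lambda s, x: x
print(rank([3, 8, 9, 11, 14, 12], [closeness, magnitude], 10, 3))  # [11, 9, 12]
```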

Different approaches

Collaborative Filtering

Recommendation systems use variations of collaborative filtering (CF) to formulate suggestions of items relevant to users' different interests. However, CF requires expensive computations that grow polynomially with the number of users and items in the database. Methods proposed for handling this scalability problem and speeding up recommendation formulation are based mostly on approximation mechanisms and, even though they improve performance, most of the time result in accuracy degradation. We propose a method for addressing the scalability problem based on incremental updates of user-to-user similarities.

Collaborative filtering, the prevalent recommendation approach, has been successfully used to identify users that can be characterized as "similar" according to their logged history of previous transactions. However, the applicability of CF is limited by the sparsity problem, which refers to a situation where transactional data are lacking or insufficient. In an attempt to provide high-quality recommendations even when data are sparse, we propose a method for alleviating sparsity using trust inferences.
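As a concrete illustration of the basic collaborative filtering idea (not the incremental-update or trust-inference methods themselves), the sketch below predicts the active user's rating for an unseen item from Pearson-similar neighbors; the users and ratings are invented.

```python
# User-based collaborative filtering sketch with mean-centered prediction.
from math import sqrt

ratings = {                                   # user -> {item: rating}
    "alice": {"m1": 5, "m2": 3, "m3": 4},
    "bob":   {"m1": 4, "m2": 2, "m3": 5, "m4": 4},
    "carol": {"m1": 1, "m2": 5, "m4": 2},
}

def similarity(u, v):
    common = set(ratings[u]) & set(ratings[v])     # co-rated items only
    if not common:
        return 0.0
    du = [ratings[u][i] for i in common]
    dv = [ratings[v][i] for i in common]
    mu, mv = sum(du) / len(du), sum(dv) / len(dv)
    num = sum((a - mu) * (b - mv) for a, b in zip(du, dv))
    den = sqrt(sum((a - mu) ** 2 for a in du) * sum((b - mv) ** 2 for b in dv))
    return num / den if den else 0.0               # Pearson correlation

def predict(user, item):
    mean_u = sum(ratings[user].values()) / len(ratings[user])
    num = den = 0.0
    for v in ratings:
        if v == user or item not in ratings[v]:
            continue
        s = similarity(user, v)
        mean_v = sum(ratings[v].values()) / len(ratings[v])
        num += s * (ratings[v][item] - mean_v)     # weight neighbor deviation
        den += abs(s)
    return mean_u + (num / den if den else 0.0)

print(round(predict("alice", "m4"), 2))   # bob, who rates like alice, pulls it up
```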

Content-Based Filtering

Content-based filters consider a whole range of signals, from the actual text in the message, to the domains, to the IP addresses those domains and URLs point to. They look at the hidden structure of an email; they look at what is in the body of the message and what is in the headers. There is not one bit of a message that content filters ignore.

In content-based recommendation, every item is represented by a feature vector or an attribute profile. The features hold numeric or nominal values representing certain aspects of the item, such as color, price, and so on. A variety of distance measures between the feature vectors can be used to compute the similarity of two items. The similarity values are then used to produce a ranked list of suggested items. If one considers Euclidean or cosine similarity, equal importance is implicitly placed on all features. But human judgment of similarity between two items often gives different weights to different attributes.
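A small sketch of such weighted content-based matching follows; the items, feature meanings, and attribute weights are illustrative assumptions.

```python
# Weighted cosine similarity over item feature vectors.
from math import sqrt

items = {                                    # item -> [casual, price, warmth]
    "shirt_a": [0.9, 0.2, 0.1],
    "shirt_b": [0.8, 0.3, 0.2],
    "coat_c":  [0.2, 0.9, 0.9],
}
weights = [2.0, 0.5, 1.0]                    # unequal attribute importance

def weighted_cosine(a, b, w):
    dot = sum(wi * ai * bi for wi, ai, bi in zip(w, a, b))
    na = sqrt(sum(wi * ai * ai for wi, ai in zip(w, a)))
    nb = sqrt(sum(wi * bi * bi for wi, bi in zip(w, b)))
    return dot / (na * nb) if na and nb else 0.0

def most_similar(target, k=2):
    scores = {other: weighted_cosine(items[target], vec, weights)
              for other, vec in items.items() if other != target}
    return sorted(scores.items(), key=lambda kv: -kv[1])[:k]

print(most_similar("shirt_a"))               # "shirt_b" ranks first
```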

Hybrid Recommender Systems

Hybrid recommender systems combine individual systems to avoid certain aforementioned limitations of those systems. In this paper, we propose unique generalized switching hybrid recommendation algorithms that blend machine learning classifiers with collaborative filtering recommender systems. Experimental results on two different data sets show that the proposed algorithms are scalable and provide better performance.

Hybrid recommender systems mix recommendation components of various types to achieve improved performance. Many such hybrids have been designed, but recent studies show that hybrids using case-based recommendation are rare. This paper shows how a range of different hybrids can be made using a case-based recommender as one component, and describes a series of experiments in which a number of different hybrids are designed and evaluated. Cascade and feature-augmentation hybrids are shown to have the highest accuracy over a range of different profile sizes.
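One simple form of switching hybrid can be sketched as follows: pick whichever component recommender the available data best supports. The threshold and the stand-in scorers below are illustrative assumptions, not the algorithms from the experiments described above.

```python
# Hypothetical switching hybrid: CF when the user has enough history,
# content-based scoring otherwise.
MIN_RATINGS_FOR_CF = 5

def hybrid_score(user, item, cf_score, content_score, num_user_ratings):
    """Switch to the component best supported by the available data."""
    if num_user_ratings >= MIN_RATINGS_FOR_CF:
        return cf_score(user, item)          # enough history: collaborative
    return content_score(user, item)         # sparse user: content-based

cf = lambda u, i: 0.8                        # stand-ins for real components
cb = lambda u, i: 0.6
print(hybrid_score("u1", "item1", cf, cb, num_user_ratings=2))  # 0.6 (content)
print(hybrid_score("u1", "item1", cf, cb, num_user_ratings=9))  # 0.8 (CF)
```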

Different Algorithms

Various algorithms are used in recommendation systems to solve problems at different levels. Nowadays, more research is focused on building hybrid recommendation systems using a mix of the approaches described above.

K-Nearest Neighbor

The nearest neighbor algorithm, and the closely related repetitive nearest neighbor algorithm, is a greedy algorithm for finding candidate solutions to the traveling salesman problem. The nearest neighbor algorithm begins at the first city in your list of cities to visit. It then selects the closest city to visit next, and from the remaining unvisited cities it selects the city closest to city two, and so on.

The k-nearest-neighbor algorithm is one of the most basic and simple classification methods, and it can be one of the first choices for a classification study when there is little or no prior knowledge about the distribution of the data. K-nearest-neighbor classification was developed from the need to perform discriminant analysis when reliable parametric estimates of probability densities are unknown or difficult to determine.
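A minimal k-nearest-neighbor classification sketch, with illustrative training data: the class of a new point is the majority label among its k closest training examples, with no assumption about the underlying distribution.

```python
# k-NN classification by majority vote among the k nearest examples.
from collections import Counter
from math import dist

train = [                                    # (feature vector, label)
    ((1.0, 1.0), "A"), ((1.2, 0.8), "A"), ((0.9, 1.1), "A"),
    ((3.0, 3.0), "B"), ((3.2, 2.9), "B"), ((2.8, 3.1), "B"),
]

def knn_classify(x, k=3):
    neighbors = sorted(train, key=lambda p: dist(p[0], x))[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]        # majority label

print(knn_classify((1.1, 0.9)))              # "A"
print(knn_classify((3.1, 3.0)))              # "B"
```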

Pearson Correlation

A correlation is a number between -1 and +1 that measures the degree of association between a pair of variables (call them A and B). A positive value for the correlation implies a positive association (large values of A tend to be associated with large values of B, and small values of A tend to be associated with small values of B). A negative value for the correlation implies a negative or inverse association (large values of A tend to be associated with small values of B, and vice versa).

We can categorize correlations into three varieties; a small computational sketch follows the list.

Positive correlation – the other variable has a tendency to also increase;

Negative correlation – the other variable has a tendency to decrease;

No correlation – the other variable does not tend to either increase or decrease.
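The sketch below computes the Pearson correlation directly from its definition; the sample data are illustrative.

```python
# Pearson correlation between two equal-length lists of numbers.
from math import sqrt

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = sqrt(sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b))
    return num / den if den else 0.0

print(pearson([1, 2, 3, 4], [2, 4, 6, 8]))    # +1.0: positive correlation
print(pearson([1, 2, 3, 4], [8, 6, 4, 2]))    # -1.0: negative correlation
print(pearson([1, 2, 3, 4], [5, 3, 6, 4]))    #  0.0: no correlation
```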

Rocchio Relevance Filtering

The Rocchio algorithm is the classic algorithm for implementing relevance feedback. It models an ideal way of incorporating relevance feedback information.
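A minimal sketch of the classic Rocchio update follows, assuming common textbook default weights (alpha = 1.0, beta = 0.75, gamma = 0.15) and toy term-weight vectors: the query vector moves toward the centroid of relevant documents and away from the centroid of non-relevant ones.

```python
# Rocchio relevance feedback: q' = alpha*q + beta*mean(rel) - gamma*mean(nonrel)

def centroid(docs):
    return [sum(col) / len(docs) for col in zip(*docs)]

def rocchio(query, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.15):
    cr = centroid(relevant) if relevant else [0.0] * len(query)
    cn = centroid(nonrelevant) if nonrelevant else [0.0] * len(query)
    return [alpha * q + beta * r - gamma * s
            for q, r, s in zip(query, cr, cn)]

q = [1.0, 0.0, 0.5]                          # term weights for the query
rel, nonrel = [[1.0, 1.0, 0.0]], [[0.0, 0.0, 1.0]]
print(rocchio(q, rel, nonrel))               # [1.75, 0.75, 0.35]
```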

Correlation is a statistical technique that can show whether, and how strongly, pairs of variables are related. The use and purpose of the Pearson correlation statistic is to measure the degree of association between numerical variables.

Optimization Of The Recommendation System

Collaborative filtering (CF) is the most successful recommendation technique, and it has been used in a variety of applications such as recommending movies, articles, products, and web pages. Collaborative filtering is built on the assumption that a good way to predict the preference of the active consumer for a target product is to find other consumers who have similar preferences, and then use those similar consumers' preferences for that product to make a prediction.

Collaborative filtering can be optimized using a genetic algorithm (GA) in a hybrid GA-CF model.
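As an illustration of the GA side of such a hybrid (a sketch under stated assumptions, not a published GA-CF algorithm), the example below evolves a small population of attribute-weight vectors toward a stand-in optimum; in a real GA-CF model the fitness function would instead measure collaborative filtering prediction error on held-out ratings.

```python
# Tiny genetic algorithm evolving similarity-attribute weights.
import random

random.seed(0)
TARGET = [2.0, 0.5, 1.0]                  # stand-in for the "best" weights

def fitness(w):                           # lower error -> higher fitness
    return -sum((a - b) ** 2 for a, b in zip(w, TARGET))

def mutate(w, rate=0.3, scale=0.2):
    return [x + random.gauss(0, scale) if random.random() < rate else x
            for x in w]

def crossover(a, b):
    return [random.choice(pair) for pair in zip(a, b)]

population = [[random.uniform(0, 3) for _ in range(3)] for _ in range(20)]
for _ in range(100):                      # evolve for 100 generations
    population.sort(key=fitness, reverse=True)
    parents = population[:10]             # selection: keep the fittest half
    children = [mutate(crossover(random.choice(parents),
                                 random.choice(parents)))
                for _ in range(10)]
    population = parents + children

best = max(population, key=fitness)
print([round(x, 2) for x in best])        # close to TARGET
```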

New problems and challenges

Technology growth and large-scale data mining have made recommendation systems more advanced, but these systems also face their own problems and challenges.

Lack of input data

Perhaps the most important issue facing recommender systems is that they need a lot of data to make recommendations effectively. The more item and user data a recommender system has to work with, the stronger the chances of getting good recommendations. But it is a chicken-and-egg problem: to get good recommendations, you need a lot of users, so that you can get a lot of data for the recommendations.

Floating data

Changing trends – people's likes and dislikes.

Rapidly changing user preferences

User behavior changes with respect to the items on the website.

Unpredictable items

Unpredictable outputs that are invariably unique and new.

There are several other problems that can occur with recommender systems: some give too many 'lowest common denominator' recommendations, some do not support the long tail enough and simply suggest obvious items, outliers can be a problem, and so on.

Conclusion

Recommender systems are a very important artificial intelligence technology for helping analysts deal with information overload. A recommender can filter out much of the information that is not relevant for an analyst performing a given task and thus make the analyst's work easier. Collaborative and hybrid recommenders allow analysts to automatically make the most of the knowledge and experience of other analysts, making them especially attractive for use by novice analysts.


