Posts

Showing posts from March, 2021

Linear Regression with multiple variables

  Linear Regression with multiple variables: Multiple linear regression explains the relationship between one continuous dependent variable (y) and two or more independent variables (x1, x2, x3, etc.). Note that it says CONTINUOUS dependent variable. Since y is the sum of beta0 (the intercept), beta1·x1, beta2·x2, and so on, the resulting y is a number, a continuous variable, instead of a "yes"/"no" answer (categorical). For example, with linear regression I would be trying to find out how many decibels of noise are being produced, not whether it is noisy or not (Noisy | Not).
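As a rough sketch of the same idea, scikit-learn's LinearRegression can fit beta0, beta1, and beta2 from data. The two feature columns and the decibel targets below are made up purely for illustration.

import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical training data: each row is [machine_count, rpm] (made-up features).
X = np.array([[1, 1200], [2, 1500], [3, 2200], [4, 2600]])
# Continuous target: noise level in decibels, a number rather than a yes/no label.
y = np.array([62.0, 70.5, 78.0, 85.5])

model = LinearRegression()
model.fit(X, y)

# model.intercept_ plays the role of beta0; model.coef_ holds beta1 and beta2.
print(model.intercept_, model.coef_)
print(model.predict([[2, 1800]]))  # predicted decibel level for a new observation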

Linear Regression with single variable

[Image: Linear Regression Algorithm, from Andrew Ng's Machine Learning course]
The above diagram gives a brief overview of the linear regression algorithm. We feed our learning algorithm a data set (the training set), and it outputs a function called the hypothesis. The hypothesis approximates a target function that maps inputs to outputs. For linear regression in one variable, the hypothesis function is of the form h(x) = θ0 + θ1x, where θ0 and θ1 are the parameters.
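As a minimal sketch, that hypothesis can be fitted by batch gradient descent; the toy x/y values, learning rate, and iteration count below are only illustrative assumptions.

import numpy as np

def hypothesis(theta0, theta1, x):
    # h(x) = theta0 + theta1 * x
    return theta0 + theta1 * x

# Made-up toy training set.
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.1, 3.9, 6.2, 8.0])

theta0, theta1 = 0.0, 0.0
alpha = 0.01            # learning rate
for _ in range(5000):   # batch gradient descent on the squared-error cost
    error = hypothesis(theta0, theta1, x) - y
    theta0 -= alpha * error.mean()
    theta1 -= alpha * (error * x).mean()

print(theta0, theta1)   # fitted parameters of the line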

Concept Learning

  Concept Learning: We learn about our surroundings through five senses: eyes, ears, nose, tongue, and skin. We learn many things over a lifetime; some of that learning is based on experience and some on memorization. On that basis, learning methods can be divided into five types:
Rote Learning (memorization): memorizing things without knowing the concept or logic behind them.
Passive Learning (instructions): learning from a teacher or expert.
Analogy (experience): learning new things from our past experience.
Inductive Learning (experience): formulating a generalized concept on the basis of past experience.
Deductive Learning: deriving new facts from known facts.
Tom Mitchell defines concept learning as the "problem of searching through a predefined space of potential hypotheses for the hypothesis that best fits the training examples." For detailed concept learning:
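To make Mitchell's definition concrete, here is a minimal sketch of Find-S, one classic concept-learning algorithm that searches the hypothesis space for the most specific hypothesis consistent with the positive examples; the attribute values and labels are made up for illustration and are not from the original post.

def find_s(examples):
    # Search for the most specific hypothesis consistent with the positive examples.
    hypothesis = None
    for attributes, label in examples:
        if label != "yes":                 # Find-S ignores negative examples
            continue
        if hypothesis is None:
            hypothesis = list(attributes)  # start from the first positive example
        else:
            for i, value in enumerate(attributes):
                if hypothesis[i] != value:
                    hypothesis[i] = "?"    # generalize attributes that differ
    return hypothesis

# Made-up "enjoy sport"-style examples: attribute tuples with a yes/no label.
data = [
    (("sunny", "warm", "normal", "strong"), "yes"),
    (("sunny", "warm", "high",   "strong"), "yes"),
    (("rainy", "cold", "high",   "strong"), "no"),
]
print(find_s(data))  # ['sunny', 'warm', '?', 'strong']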

Data Preprocessing

  Data Preprocessing: In any machine learning process, data preprocessing is the step in which the data is transformed, or encoded, to bring it to a state that the machine can easily parse. In other words, the features of the data can then be easily interpreted by the algorithm. For detailed data preprocessing:
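As a minimal sketch of what such encoding looks like in practice, assuming pandas and scikit-learn and a made-up toy dataset: categorical strings are one-hot encoded and a numeric column is scaled, so the algorithm receives purely numeric, comparable features.

import pandas as pd
from sklearn.preprocessing import StandardScaler

# Hypothetical raw data with one categorical and one numeric feature.
df = pd.DataFrame({
    "city": ["Delhi", "Mumbai", "Delhi", "Chennai"],
    "age":  [25, 32, 47, 51],
})

# Encode the categorical feature as 0/1 indicator columns.
encoded = pd.get_dummies(df, columns=["city"])

# Scale the numeric feature to zero mean and unit variance.
encoded[["age"]] = StandardScaler().fit_transform(encoded[["age"]])

print(encoded)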

Decision Tree

Decision Tree Classification Algorithm: A decision tree is a supervised learning technique that can be used for both classification and regression problems, but it is mostly preferred for solving classification problems. It is a tree-structured classifier in which internal nodes represent the features of a dataset, branches represent the decision rules, and each leaf node represents the outcome. A decision tree has two types of nodes: decision nodes and leaf nodes. Decision nodes are used to make a decision and have multiple branches, whereas leaf nodes are the outputs of those decisions and do not contain any further branches. The decisions or tests are performed on the basis of the features of the given dataset. It is a graphical representation for getting all the possible solutions to a problem or decision based on the given conditions. It is called a decision tree because, like a tree, it starts with the root node and expands along further branches to construct a tree-like structure.
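A minimal sketch with scikit-learn's DecisionTreeClassifier, using made-up data, shows the idea: export_text prints the decision nodes (feature tests) and leaf nodes (class outcomes) of the fitted tree.

from sklearn.tree import DecisionTreeClassifier, export_text

# Tiny made-up dataset: [age, income] -> buys (0 = no, 1 = yes), purely illustrative.
X = [[25, 30000], [40, 80000], [35, 60000], [22, 20000], [50, 90000]]
y = [0, 1, 1, 0, 1]

tree = DecisionTreeClassifier(max_depth=2, random_state=0)
tree.fit(X, y)

# Decision nodes appear as feature tests; leaf nodes show the predicted class.
print(export_text(tree, feature_names=["age", "income"]))
print(tree.predict([[30, 50000]]))  # outcome for a new sample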