Logistic Regression is one of the machine learning algorithms used for solving classification problems. It is used to estimate the probability that an instance belongs to a class. If the estimated probability is greater than the threshold, the model predicts that the instance belongs to that class; otherwise, it predicts that it does not belong to the class, as shown in Fig 1. This makes it a binary classifier. Logistic regression is used where the value of the dependent variable is 0/1, true/false, or yes/no.
Suppose we want to know whether a candidate will pass the entrance exam. The candidate's result depends on their attendance in class, the teacher-student ratio, the knowledge of the teacher, and the student's interest in the subject, which are all independent variables, while the result is the dependent variable. The value of the result will be yes or no. Therefore, it is a binary classification problem.
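A binary classification problem like the one above can be sketched with scikit-learn. This is a minimal illustration on synthetic data, not a real exam dataset: the single feature (class attendance) and the rule generating the pass/fail labels are invented for the example.

```python
# Minimal sketch: logistic regression as a binary classifier,
# using synthetic (invented) data about class attendance.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# One illustrative feature: percentage of classes attended.
attendance = rng.uniform(0, 100, size=200).reshape(-1, 1)
# Synthetic yes/no labels: higher attendance -> more likely to pass.
passed = (attendance.ravel() + rng.normal(0, 15, size=200) > 50).astype(int)

model = LogisticRegression()
model.fit(attendance, passed)

# predict_proba gives [P(Y=0), P(Y=1)]; a 0.5 threshold turns the
# probability into a yes/no prediction, as described above.
proba = model.predict_proba([[80.0]])[0, 1]
prediction = int(proba > 0.5)
print(proba, prediction)
```

For a candidate with high attendance the estimated probability exceeds the 0.5 threshold, so the model predicts "pass" (class 1).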
Are Logistic Regression and Linear Regression the same?
Linear Regression models the relationship between the dependent variable and the independent variables by fitting a straight line, as shown in Fig 4.
In Linear Regression, the value of the predicted Y can fall outside the 0 to 1 range. As discussed earlier, Logistic Regression gives us a probability, and the value of a probability always lies between 0 and 1. Therefore, Logistic Regression uses the sigmoid function, or logistic function, to convert the output into the range [0, 1]. The logistic function is defined as:
1 / (1 + e^(-value))
where e is the base of the natural logarithms and value is the actual numerical value that you want to transform. The output of this function always lies between 0 and 1.
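The logistic function above can be written as a few lines of Python, confirming that any real-valued input is squashed into the (0, 1) range:

```python
# The logistic (sigmoid) function: 1 / (1 + e^(-value)).
import math

def sigmoid(value: float) -> float:
    """Map any real number into the open interval (0, 1)."""
    return 1.0 / (1.0 + math.exp(-value))

print(sigmoid(0))   # 0.5, the midpoint of the S-curve
print(sigmoid(6))   # close to 1 for large positive inputs
print(sigmoid(-6))  # close to 0 for large negative inputs
```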
The equation of linear regression is

Y = B0 + B1X1 + … + BpXp
The logistic function is applied to convert the output to the 0 to 1 range:

P(Y=1) = 1 / (1 + e^-(B0+B1X1+…+BpXp))
We want to reformulate the equation so that the linear term is on the right side of the equation:
log(P(Y=1) / (1 − P(Y=1))) = B0 + B1X1 + … + BpXp
where P(Y=1) / (1 − P(Y=1)) is called the odds ratio, and its logarithm, log(P(Y=1) / (1 − P(Y=1))), is the log-odds (or logit).