# Serence

## Logistic Regression - Code

I have written about logistic regression before, but that post only introduced some basic knowledge about it. In this article, I will share the code for the algorithm in Python: one version uses the scikit-learn library, and the other is written from scratch based on the gradient descent algorithm.

If you have only recently started learning logistic regression, I recommend reading Logistic Regression first. The following code is partly inspired by the Chinese article 【机器学习】求解逻辑回归参数（三种方法代码实现） ("[Machine Learning] Solving for Logistic Regression Parameters: Code Implementations of Three Methods").

# First Code

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

bc = load_breast_cancer()  # the `bc` dataset used below; assumed to be scikit-learn's breast cancer data
X = bc.data
y = bc.target
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)  # split training and test data

lr = LogisticRegression(max_iter=5000)  # raise max_iter so the solver converges on this dataset
lr.fit(X_train, y_train)  # train the model
print(lr.score(X_test, y_test))  # mean accuracy on the test set
```
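
Beyond the accuracy score, the fitted estimator can be queried directly. A minimal sketch, assuming the `lr`, `X_test`, and `y_test` objects created above (`predict` and `predict_proba` are standard scikit-learn methods):

```python
y_pred = lr.predict(X_test)        # hard class labels (0 or 1)
y_prob = lr.predict_proba(X_test)  # per-class probabilities, shape (n_samples, 2)

print(y_pred[:5])     # first five predicted labels
print(y_prob[:5, 1])  # predicted probability of the positive class
```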


# Second Code
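
Before the code, it helps to write down what the class below actually computes. A minimal sketch of the math, assuming the one-feature model theta1 * x + theta0 used in the code, where J is the cross-entropy loss over m samples and alpha is the learning rate:

```latex
h(x) = \sigma(\theta_1 x + \theta_0) = \frac{1}{1 + e^{-(\theta_1 x + \theta_0)}}

\frac{\partial J}{\partial \theta_1} = \frac{1}{m} \sum_{i=1}^{m} \left( h(x_i) - y_i \right) x_i ,
\qquad
\frac{\partial J}{\partial \theta_0} = \frac{1}{m} \sum_{i=1}^{m} \left( h(x_i) - y_i \right)

\theta_1 \leftarrow \theta_1 - \alpha \frac{\partial J}{\partial \theta_1} ,
\qquad
\theta_0 \leftarrow \theta_0 - \alpha \frac{\partial J}{\partial \theta_0}
```

The `gradient_descent` method below returns the two partial derivatives, and `fit` repeats the update `trials` times.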

```python
import numpy as np

def sigmoid(theta1, x, theta0):  # sigmoid of the linear model theta1 * x + theta0
    z = (theta1 * x + theta0).astype(np.float64)
    return 1.0 / (1.0 + np.exp(-z))  # array of probabilities predicted by the sigmoid function

class LogisticRegression:
    theta_1 = 0  # learned slope
    theta_0 = 0  # learned intercept
    flag = 0     # set to 1 once the model has been trained

    def gradient_descent(self, theta1, x, theta0, y):  # one gradient computation
        sp = sigmoid(theta1, x, theta0)
        # partial derivatives of the cross-entropy loss with respect to theta1 and theta0
        return 1 / len(y) * np.sum((sp - y) * x), 1 / len(y) * np.sum(sp - y)

    def fit(self, x, y, alpha=0.2, trials=20):  # alpha is the learning rate, trials is the number of iterations
        theta1 = 0.1  # initial guesses for the parameters
        theta0 = 0.2

        for i in range(trials):
            n1, n0 = self.gradient_descent(theta1, x, theta0, y)
            theta1 = theta1 - alpha * n1  # step against the gradient
            theta0 = theta0 - alpha * n0

        print("theta1 = " + str(theta1))
        print("theta0 = " + str(theta0))

        self.theta_1 = theta1
        self.theta_0 = theta0
        self.flag = 1
```
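
A small usage sketch, assuming the `sigmoid` function and the `LogisticRegression` class above are defined in the same file; the synthetic data here is made up purely for illustration:

```python
import numpy as np

np.random.seed(0)
x = np.random.uniform(-3, 3, 200)                          # one synthetic feature
y = (x + np.random.normal(0, 0.5, 200) > 0).astype(int)    # noisy binary labels

model = LogisticRegression()
model.fit(x, y, alpha=0.2, trials=2000)  # prints the learned theta1 and theta0

probs = sigmoid(model.theta_1, x, model.theta_0)
accuracy = np.mean((probs >= 0.5).astype(int) == y)
print("training accuracy:", accuracy)
```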

