
Regularized Logistic Regression

Previously we tried logistic regression without regularization on a simple training data set. But as we all know, things in real life are rarely that simple. There are many kinds of data that need to be classified: the number of features can grow to hundreds or thousands while the number of training examples stays limited, and in many cases we need to classify into more than two classes.

The first problem that can arise from a large number of features is over-fitting. This happens when the learned hypothesis hΘ(x) fits the training data too well (cost J(Θ) ≈ 0) but fails when classifying new data samples. In other words, the model tries to classify every training example correctly by drawing a very complicated decision boundary between the training data points. In the image above, over-fitting corresponds to the green decision boundary.

So how do we deal with over-fitting? There are several approaches:

- reduce the number of features;
- select a different classification model;
- use regularization.

We leave the first two aside, because selecting an optimal number of features is a separate optimization topic, and we are sticking with the logistic regression model for now, so changing the classifier is…
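To make the regularization idea concrete, here is a minimal sketch (not the post's own code) of the regularized logistic-regression cost and gradient in NumPy. The cost adds a penalty term (λ/2m)·ΣΘⱼ², summed over j ≥ 1, to the usual cross-entropy cost; by convention the intercept Θ₀ is not penalized. The function names and the toy data below are illustrative assumptions, not part of the original article.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cost_regularized(theta, X, y, lam):
    """Regularized logistic-regression cost J(theta).

    X is (m, n+1) with a leading column of ones; theta[0] is the
    intercept and is conventionally left out of the penalty term.
    """
    m = len(y)
    h = sigmoid(X @ theta)
    eps = 1e-12  # guard against log(0)
    unreg = -(y @ np.log(h + eps) + (1 - y) @ np.log(1 - h + eps)) / m
    reg = lam / (2 * m) * np.sum(theta[1:] ** 2)
    return unreg + reg

def gradient_regularized(theta, X, y, lam):
    """Gradient of the regularized cost; intercept term is not penalized."""
    m = len(y)
    h = sigmoid(X @ theta)
    grad = X.T @ (h - y) / m
    grad[1:] += lam / m * theta[1:]
    return grad

if __name__ == "__main__":
    # Hypothetical toy data: 20 examples, 2 features plus an intercept column.
    rng = np.random.default_rng(0)
    X = np.hstack([np.ones((20, 1)), rng.normal(size=(20, 2))])
    y = (X @ np.array([0.5, 2.0, -1.0]) > 0).astype(float)

    theta = np.zeros(3)
    lam = 1.0
    before = cost_regularized(theta, X, y, lam)
    for _ in range(200):  # plain gradient descent, fixed step size
        theta -= 0.5 * gradient_regularized(theta, X, y, lam)
    after = cost_regularized(theta, X, y, lam)
    print(before, after)  # cost should decrease
```

Larger values of λ shrink the weights toward zero, smoothing the decision boundary; λ = 0 recovers the unregularized cost from the previous post.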
