
Statistical Learning Notes (Series)


This is the pilot post of the blog-post series ‘Statistical Learning Notes’. It covers the context, the table of contents, and topic-wise links to the upcoming posts in this series.

  1. Statistical Learning

  2. Linear Regression

  3. Classification

  4. Resampling Methods

  5. Linear Model Selection and Regularization (coming soon…)

  6. Moving Beyond Linearity

  7. Tree-Based Methods

  8. Support Vector Machines

  9. Unsupervised Learning

Want to learn more? Visit www.ankitrathi.com

I have worked on various data science projects to date, but I feel I still need to cover a lot of theoretical ground, especially as the field itself keeps evolving. To do that, I am reading a couple of books on data science and making notes to refer back to in the future.

Recently, I published a similar series on Probability & Statistics for Data Science:

https://towardsdatascience.com/probability-statistics-for-data-science-series-83b94353ca48

This series covers my notes on ‘An Introduction to Statistical Learning (ISLR)’. Here I have tried to give an intuitive understanding of the key concepts and how they are connected to statistical learning.

This blog-post series will cover the following topics:

1. The Statistical Learning blog-post introduces the basic terminology and concepts behind statistical learning.

2 & 3. The Linear Regression and Classification blog-posts cover classical linear methods for regression and classification.

4. The Resampling Methods blog-post discusses methods for estimating the accuracy of a number of different models in order to choose the best one.

5. The Linear Model Selection and Regularization blog-post considers a host of linear methods that offer potential improvements over standard linear regression.

6. The Moving Beyond Linearity blog-post introduces the world of non-linear statistical learning.

7. The Tree-Based Methods blog-post covers tree-based methods, including bagging, boosting, and random forests.

8. The Support Vector Machines blog-post, as the name suggests, covers an intuitive understanding of SVMs.

9. The Unsupervised Learning blog-post considers a setting in which we have input variables but no output variable.

If you are on a similar learning curve, stay tuned.

Thank you for reading my post. I regularly write about Data & Technology on LinkedIn & Medium. If you would like to read my future posts, simply ‘Connect’ or ‘Follow’. Also, feel free to listen to me on SoundCloud.

#Statistics #StatisticalLearning #Probability #DataScience #MachineLearning

