IN CASE YOU MISSED IT!

BigData News, Sunday, April 1: Machine learning, introductory Python material, deep learning frameworks & more…

BigData News TLDR / Table of Contents

  • Top 10 IPython Notebook Tutorials for Data Science and Machine Learning
  • Comparing Deep Learning Frameworks: A Rosetta Stone Approach
  • Interpreting Machine Learning Models: An Overview

Top 10 IPython Notebook Tutorials for Data Science and Machine Learning

  • This post is a collection of 10 useful GitHub repositories consisting in part, or in whole, of IPython (Jupyter) notebooks, focused on teaching data science and machine learning concepts.
  • They go from introductory Python material to deep learning with TensorFlow and Theano, and hit a lot of stops in between.
  • So here they are, 10 useful IPython Notebook GitHub repositories in no particular order. The first is a warmup notebook from postdoctoral researcher Randal Olson, who uses the common Python ecosystem data analysis/machine learning/data science stack to work with the Iris dataset (a minimal sketch of this kind of workflow follows this list).
  • This is an eclectic mix, put together by John Wittenauer, with notebooks covering Python implementations of Andrew Ng’s Coursera course exercises, Udacity’s TensorFlow-oriented deep learning course exercises, and the Spark edX course exercises.

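To give a flavor of that warmup material, here is a minimal sketch of an Iris workflow using the standard scikit-learn stack. It is illustrative only, not taken from Olson’s notebook, whose exact steps and model choices may differ:

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# The classic Iris dataset: 150 samples, 4 features, 3 classes.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y
)

# A simple off-the-shelf classifier from the usual Python ML stack.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

print(f"Test accuracy: {accuracy_score(y_test, clf.predict(X_test)):.3f}")
```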
Tags: machine learning, introductory Python material, deep learning, IPython Notebook, IPython Notebook Github

Comparing Deep Learning Frameworks: A Rosetta Stone Approach

  • When we originally created the repo, there were many little tips and tricks we had to use to ensure we were using the same model between frameworks and that it was implemented in an optimal way.
  • Of course, while it is tempting to compare frameworks on metrics such as training speed and inference time, these aren’t meant to suggest anything about the overall performance of a framework, since they omit important comparisons such as: help and support, availability of pre-trained models, custom layers and architectures, data-loaders, debugging,…
  • There are many popular deep learning frameworks leveraged in the community, and this is one effort to help AI developers and data scientists use different deep learning frameworks as applicable.
  • A related effort is the Open Neural Network Exchange (ONNX), which is an open-source interoperability standard for transferring deep learning models between frameworks (see the export sketch after this list).
  • In contrast, the repo we are releasing as a full version 1.0 today is like a Rosetta Stone for deep learning frameworks, showing the model-building process end to end in the different frameworks.

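For context on what “transferring models between frameworks” looks like in practice, here is a minimal sketch of exporting a toy PyTorch model to the ONNX format. The network and the input/output names are hypothetical stand-ins; the Rosetta Stone repo itself may structure its examples differently:

```python
import torch
import torch.nn as nn

# A toy network standing in for whichever model you want to transfer.
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
model.eval()

# ONNX export traces the model with a dummy input of the right shape;
# the resulting model.onnx file can then be loaded by other
# ONNX-aware frameworks and runtimes.
dummy_input = torch.randn(1, 10)
torch.onnx.export(
    model, dummy_input, "model.onnx",
    input_names=["input"], output_names=["logits"]
)
```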
Tags: deep learning frameworks, different frameworks, deep-learning frameworks

Interpreting Machine Learning Models: An Overview

  • This post summarizes the contents of an article on machine learning interpretation that appeared on O’Reilly’s blog back in March, written by Patrick Hall, Wen Phan, and SriSatish Ambati, outlining a number of methods for interpreting machine learning models beyond the usual go-to measures.
  • I approach complex machine learning model interpretability as an advocate of automated machine learning, since I feel the two techniques are flip sides of the same coin: if we are going to be using automated techniques to generate models on the front end, then devising and employing appropriate ways to simplify and…
  • If the surrogate model is created by training, say, a simple linear regression or a decision tree on the original input data and the predictions of the more complex model, the characteristics of the simple model can then be assumed to be an accurately descriptive stand-in for the more complex model (a minimal sketch follows this list).
  • Sensitivity analysis: this technique helps to determine whether intentionally perturbed data, or similar data changes, modify model behavior and destabilize the outputs; it is also useful for investigating model behavior for particular scenarios of interest or corner cases (a perturbation sketch also follows this list).
  • Global variable importance measures: typically the domain of…

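As an illustration of the surrogate idea above, here is a minimal sketch assuming scikit-learn, with a gradient-boosted classifier as the “complex” model and a shallow decision tree as the surrogate. The dataset and both models are arbitrary choices for demonstration, not the article’s own setup:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X, y = data.data, data.target

# The "complex" model whose behavior we want a readable stand-in for.
complex_model = GradientBoostingClassifier(random_state=0).fit(X, y)

# Surrogate: a shallow tree trained on the complex model's predictions,
# not on the true labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, complex_model.predict(X))

# Fidelity: how often the surrogate agrees with the complex model.
fidelity = (surrogate.predict(X) == complex_model.predict(X)).mean()
print(f"Surrogate fidelity: {fidelity:.3f}")

# The surrogate's rules are human-readable.
print(export_text(surrogate, feature_names=list(data.feature_names)))
```

The fidelity check matters here: the surrogate’s explanation is only trustworthy to the extent that it actually reproduces the complex model’s decisions.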
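And a minimal sketch of the sensitivity-analysis idea, again assuming scikit-learn: perturb one feature at a time with small noise and measure how much the model’s predicted probabilities shift. This is a simple perturbation scheme of our own choosing, not the specific procedure from the O’Reilly article:

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier

data = load_breast_cancer()
X, y = data.data, data.target
model = GradientBoostingClassifier(random_state=0).fit(X, y)

rng = np.random.default_rng(0)
baseline = model.predict_proba(X)[:, 1]

# Add Gaussian noise (10% of each feature's std) to one column at a
# time and record the mean absolute shift in predicted probability;
# large shifts under small perturbations flag unstable behavior.
for j, name in enumerate(data.feature_names):
    X_pert = X.copy()
    X_pert[:, j] += rng.normal(0.0, 0.1 * X[:, j].std(), size=len(X))
    shift = np.abs(model.predict_proba(X_pert)[:, 1] - baseline).mean()
    print(f"{name:>25}: mean probability shift = {shift:.4f}")
```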
Tags: human domain knowledge, Surrogate models, variable importance measures, surrogate model, Wen Phan
