
Conversations on Analyzing Data with Udacity

Overview
Provider

Udacity

Dates

Open on-going enrollment

Duration

1 month

Location

Online

Price

Free

Type

Online course

Language

English

Requirements

We recommend you take [Machine Learning 1: Supervised Learning](https://www.udacity.com/ud675) prior to taking this course.

This class assumes that you have programming experience, as you will be expected to work with Python libraries such as NumPy and scikit-learn. A good grasp of probability and statistics is also required. Udacity's [Intro to Statistics](https://www.udacity.com/course/st101), especially [Lessons 8, 9 and 10](https://www.udacity.com/course/viewer#!/c-st101/l-48738235/m-48688822), may be a useful refresher.

An introductory course like Udacity's [Introduction to Artificial Intelligence](https://www.udacity.com/course/cs271) also provides a helpful background for this course.

Image Credits

Proftrack

You will learn about and practice a variety of Unsupervised Learning approaches, including: randomized optimization, clustering, feature selection and transformation, and information theory.

You will learn important machine learning methods, techniques, and best practices, and gain hands-on experience implementing them in a final project in which you design a movie recommendation system (just like Netflix!).
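
To give a flavour of what a recommender involves, here is a minimal user-based collaborative filtering sketch. The function, the toy ratings table, and the movie names are illustrative assumptions, not course materials or the project's actual specification:

```python
def recommend(ratings, user):
    """Score movies the user has not seen by other users' ratings,
    weighted by how similarly those users rated movies in common
    with `user` (simple dot-product similarity)."""
    def similarity(u, v):
        common = set(ratings[u]) & set(ratings[v])
        return sum(ratings[u][m] * ratings[v][m] for m in common)

    scores = {}
    for other in ratings:
        if other == user:
            continue
        w = similarity(user, other)
        for movie, r in ratings[other].items():
            if movie not in ratings[user]:
                scores[movie] = scores.get(movie, 0.0) + w * r
    return max(scores, key=scores.get)

# Tiny toy ratings matrix (1-5 stars); names are made up for illustration.
ratings = {
    "ann": {"Alien": 5, "Heat": 4},
    "bob": {"Alien": 5, "Heat": 4, "Up": 5},
    "cat": {"Rio": 2, "Up": 1},
}
print(recommend(ratings, "ann"))  # "Up" (bob rates like ann and loved it)
```

Real systems replace the dot product with a normalised similarity and handle sparsity, but the weighting idea is the same.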

Lesson 1: Randomized Optimization

– Optimization, randomized
– Hill climbing
– Random restart hill climbing
– Simulated annealing
– Annealing algorithm
– Properties of simulated annealing
– Genetic algorithms
– GA skeleton
– Crossover example
– What have we learned
– MIMIC
– MIMIC: A probability model
– MIMIC: Pseudo code
– MIMIC: Estimating distributions
– Finding dependency trees
– Probability distribution
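
As a rough illustration of the randomized optimization ideas listed above, here is a minimal simulated annealing sketch. The objective, neighbor function, and cooling schedule below are arbitrary choices for demonstration, not the course's:

```python
import math
import random

def simulated_annealing(f, x0, neighbor, t0=10.0, cooling=0.95,
                        steps=500, seed=0):
    """Minimize f starting from x0, accepting worse moves with
    probability exp(-delta / T) as the temperature T cools."""
    rng = random.Random(seed)
    x, t = x0, t0
    best = x
    for _ in range(steps):
        cand = neighbor(x, rng)
        delta = f(cand) - f(x)
        # Always accept improvements; accept worse moves with a
        # probability that shrinks as T drops (the annealing schedule).
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            x = cand
        if f(x) < f(best):
            best = x
        t = max(t * cooling, 1e-6)
    return best

# Toy objective: a 1-D quadratic with its minimum at x = 3.
f = lambda x: (x - 3.0) ** 2
step = lambda x, rng: x + rng.uniform(-1.0, 1.0)
print(round(simulated_annealing(f, x0=-10.0, neighbor=step), 1))
```

Random restart hill climbing differs only in that it never accepts a worse move and instead restarts from a fresh random point when it gets stuck.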

Lesson 2: Clustering
– Clustering and expectation maximization
– Basic clustering problem
– Single linkage clustering (SLC)
– Running time of SLC
– Issues with SLC
– K-means clustering
– K-means in Euclidean space
– K-means as optimization
– Soft clustering
– Maximum likelihood Gaussian
– Expectation Maximization (EM)
– Impossibility theorem
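
The k-means procedure from this lesson can be sketched in a few lines of pure Python (Lloyd's algorithm). The toy data and parameter choices are illustrative only:

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Lloyd's algorithm: alternately assign each point to its nearest
    centre, then move each centre to the mean of its cluster."""
    rng = random.Random(seed)
    centres = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: (p[0] - centres[c][0]) ** 2
                                          + (p[1] - centres[c][1]) ** 2)
            clusters[i].append(p)
        for i, cl in enumerate(clusters):
            if cl:  # keep the old centre if a cluster empties out
                centres[i] = (sum(p[0] for p in cl) / len(cl),
                              sum(p[1] for p in cl) / len(cl))
    return centres

# Two well-separated blobs; k-means should place one centre per blob.
pts = [(0.1, 0.0), (0.0, 0.2), (-0.1, 0.1),
       (5.0, 5.1), (5.2, 4.9), (4.9, 5.0)]
print(sorted(kmeans(pts, k=2)))
```

Soft clustering and EM replace the hard nearest-centre assignment with probabilistic cluster memberships.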

Lesson 3: Feature Selection
– Algorithms
– Filtering and Wrapping
– Speed
– Searching
– Relevance
– Relevance vs. Usefulness
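
The filtering approach above scores each feature independently of the learner. A minimal sketch, assuming absolute Pearson correlation with the label as the (arbitrary) scoring criterion; wrapping would instead re-train the learner on each candidate subset:

```python
def filter_features(X, y, m):
    """Filtering: score every feature on its own and keep the m best."""
    n, d = len(X), len(X[0])

    def corr(col):
        # Absolute Pearson correlation between feature `col` and y.
        xs = [row[col] for row in X]
        mx, my = sum(xs) / n, sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(xs, y))
        vx = sum((a - mx) ** 2 for a in xs) ** 0.5
        vy = sum((b - my) ** 2 for b in y) ** 0.5
        return abs(cov / (vx * vy)) if vx and vy else 0.0

    return sorted(range(d), key=corr, reverse=True)[:m]

# Feature 0 tracks the label, feature 1 is noise, feature 2 is constant.
X = [[1, 7, 5], [2, 3, 5], [3, 8, 5], [4, 2, 5]]
y = [1, 2, 3, 4]
print(filter_features(X, y, m=1))  # keeps feature 0
```

This is fast but blind to the learner, which is exactly the relevance-vs-usefulness tension the lesson discusses.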

Lesson 4: Feature Transformation
– Feature Transformation
– Words like Tesla
– Principal Components Analysis
– Independent Components Analysis
– Cocktail Party Problem
– Matrix
– Alternatives
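
Principal Components Analysis finds the directions of maximum variance in the data. A bare-bones 2-D sketch using power iteration on the covariance matrix — one simple way to get the leading eigenvector, not a method the course prescribes:

```python
def first_principal_component(data, iters=100):
    """Direction of maximum variance of 2-D points, via power
    iteration on the 2x2 covariance matrix of the centred data."""
    n = len(data)
    mx = sum(p[0] for p in data) / n
    my = sum(p[1] for p in data) / n
    c = [(p[0] - mx, p[1] - my) for p in data]  # centre the data
    sxx = sum(a * a for a, b in c) / n
    sxy = sum(a * b for a, b in c) / n
    syy = sum(b * b for a, b in c) / n
    v = (1.0, 0.0)
    for _ in range(iters):
        # Repeatedly apply the covariance matrix and renormalize;
        # v converges to the dominant eigenvector.
        w = (sxx * v[0] + sxy * v[1], sxy * v[0] + syy * v[1])
        norm = (w[0] ** 2 + w[1] ** 2) ** 0.5
        v = (w[0] / norm, w[1] / norm)
    return v

# Points spread along y = x: the first component is about (0.707, 0.707).
pts = [(-2.0, -2.1), (-1.0, -0.9), (0.0, 0.1), (1.0, 0.9), (2.0, 2.0)]
print(first_principal_component(pts))
```

ICA, by contrast, looks for statistically independent components rather than uncorrelated directions of high variance — the distinction behind the cocktail party problem.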

Lesson 5: Information Theory
– History
– Sending a Message
– Expected size of the message
– Information between two variables
– Mutual information
– Two Independent Coins
– Two Dependent Coins
– Kullback Leibler Divergence
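
The mutual information in the two-coin examples above can be computed directly from the joint distribution. A small sketch; the probability tables are the standard textbook cases for independent and fully dependent fair coins:

```python
import math

def mutual_information(joint):
    """I(X;Y) = sum over x,y of p(x,y) * log2( p(x,y) / (p(x) p(y)) )."""
    px = [sum(row) for row in joint]          # marginal of X
    py = [sum(col) for col in zip(*joint)]    # marginal of Y
    mi = 0.0
    for i, row in enumerate(joint):
        for j, p in enumerate(row):
            if p > 0:
                mi += p * math.log2(p / (px[i] * py[j]))
    return mi

# Two independent fair coins: every joint outcome has probability 1/4.
independent = [[0.25, 0.25], [0.25, 0.25]]
# Two fully dependent fair coins: the second always matches the first.
dependent = [[0.5, 0.0], [0.0, 0.5]]
print(mutual_information(independent))  # 0.0 bits
print(mutual_information(dependent))    # 1.0 bit
```

Knowing one of two dependent coins tells you the other exactly, so they share one full bit; independent coins share none.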

Unsupervised Learning Project

If you have followed this course, please share your review below.
