Course Hive

MIT 18.065 Matrix Methods in Data Analysis, Signal Processing, and Machine Learning, Spring 2018

4.0 (0 ratings) • 4 learners

What you'll learn

Matrix factorizations (LU, QR, SVD), eigenvalues and positive definite matrices, least squares, gradient descent and its stochastic and accelerated variants, and the linear algebra behind neural networks and deep learning, taught by Gilbert Strang in MIT's Spring 2018 course 18.065.

This course includes

  • 28 hours of video
  • Certificate of completion
  • Access on mobile and TV

Course content

1 module • 36 lessons • 28 hours of video

MIT 18.065 Matrix Methods in Data Analysis, Signal Processing, and Machine Learning, Spring 2018

36 lessons • 28 hours
  • Course Introduction of 18.065 by Professor Strang (07:04)
  • An Interview with Gilbert Strang on Teaching Matrix Methods in Data Analysis, Signal Processing, and Machine Learning (08:07)
  • Lecture 1: The Column Space of A Contains All Vectors Ax (52:15)
  • Lecture 2: Multiplying and Factoring Matrices (48:26)
  • Lecture 3: Orthonormal Columns in Q Give Q'Q = I (49:24)
  • Lecture 4: Eigenvalues and Eigenvectors (48:56)
  • Lecture 5: Positive Definite and Semidefinite Matrices (45:27)
  • Lecture 6: Singular Value Decomposition (SVD) (53:34)
  • Lecture 7: Eckart-Young: The Closest Rank k Matrix to A (47:16)
  • Lecture 8: Norms of Vectors and Matrices (49:21)
  • Lecture 9: Four Ways to Solve Least Squares Problems (49:51)
  • Lecture 10: Survey of Difficulties with Ax = b (49:36)
  • Lecture 11: Minimizing ‖x‖ Subject to Ax = b (50:22)
  • Lecture 12: Computing Eigenvalues and Singular Values (49:28)
  • Lecture 13: Randomized Matrix Multiplication (52:24)
  • Lecture 14: Low Rank Changes in A and Its Inverse (50:34)
  • Lecture 15: Matrices A(t) Depending on t, Derivative = dA/dt (50:52)
  • Lecture 16: Derivatives of Inverse and Singular Values (43:08)
  • Lecture 17: Rapidly Decreasing Singular Values (50:34)
  • Lecture 18: Counting Parameters in SVD, LU, QR, Saddle Points (49:00)
  • Lecture 19: Saddle Points Continued, Maxmin Principle (52:13)
  • Lecture 20: Definitions and Inequalities (55:01)
  • Lecture 21: Minimizing a Function Step by Step (53:45)
  • Lecture 22: Gradient Descent: Downhill to a Minimum (52:44)
  • Lecture 23: Accelerating Gradient Descent (Use Momentum) (49:02)
  • Lecture 24: Linear Programming and Two-Person Games (53:34)
  • Lecture 25: Stochastic Gradient Descent (53:03)
  • Lecture 26: Structure of Neural Nets for Deep Learning (53:17)
  • Lecture 27: Backpropagation: Find Partial Derivatives (52:38)
  • Lecture 30: Completing a Rank-One Matrix, Circulants! (49:53)
  • Lecture 31: Eigenvectors of Circulant Matrices: Fourier Matrix (52:37)
  • Lecture 32: ImageNet is a Convolutional Neural Network (CNN), The Convolution Rule (47:19)
  • Lecture 33: Neural Nets and the Learning Function (56:07)
  • Lecture 34: Distance Matrices, Procrustes Problem (29:17)
  • Lecture 35: Finding Clusters in Graphs (34:49)
  • Lecture 36: Alan Edelman and Julia Language (38:11)

Suggest a YouTube Course

Our catalog is built from the recommendations and interests of students like you.
