  • Lesson number: 135
  • 00:14:13
  • (ML 17.9) Smirnov transform (Inverse transform sampling) - general case

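This lesson covers the Smirnov transform (inverse transform sampling) in the general case, where the CDF F need not be invertible: draw U ~ Uniform(0, 1) and return the generalized inverse F⁻(U) = inf{x : F(x) ≥ U}. The sketch below is a minimal illustration of that idea for a discrete target; the specific PMF, the helper name `generalized_inverse_cdf`, and the sample count are illustrative assumptions, not material from the lesson itself.

```python
# Minimal sketch of the Smirnov transform (inverse transform sampling), general case:
# draw U ~ Uniform(0, 1) and return F^-(U) = inf { x : F(x) >= U }, the generalized
# inverse of the CDF. This works even when F is not invertible, e.g. for discrete targets.
# The discrete target and helper names below are illustrative assumptions.
import random

# Target PMF on {0, 1, 2}: P(X=0)=0.2, P(X=1)=0.5, P(X=2)=0.3
values = [0, 1, 2]
probs = [0.2, 0.5, 0.3]

def generalized_inverse_cdf(u):
    """Return inf { x in values : F(x) >= u } for the discrete CDF F."""
    cumulative = 0.0
    for x, p in zip(values, probs):
        cumulative += p
        if cumulative >= u:
            return x
    return values[-1]  # guard against floating-point round-off when u is close to 1

def smirnov_sample(n, seed=0):
    """Draw n samples from the target by pushing Uniform(0,1) draws through F^-."""
    rng = random.Random(seed)
    return [generalized_inverse_cdf(rng.random()) for _ in range(n)]

if __name__ == "__main__":
    samples = smirnov_sample(100_000)
    # Empirical frequencies should be close to the target PMF (0.2, 0.5, 0.3).
    for x in values:
        print(x, samples.count(x) / len(samples))
```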

Course Lessons

  1. 1- (ML 1.1) Machine learning - overview and applications
  2. 2- (ML 1.2) What is supervised learning?
  3. 3- (ML 1.3) What is unsupervised learning?
  4. 4- (ML 1.4) Variations on supervised and unsupervised
  5. 5- (ML 1.5) Generative vs discriminative models
  6. 6- (ML 1.6) k-Nearest Neighbor classification algorithm
  7. 7- (ML 2.1) Classification trees (CART)
  8. 8- (ML 2.2) Regression trees (CART)
  9. 9- (ML 2.3) Growing a regression tree (CART)
  10. 10- (ML 2.4) Growing a classification tree (CART)
  11. 11- (ML 2.5) Generalizations for trees (CART)
  12. 12- (ML 2.6) Bootstrap aggregation (Bagging)
  13. 13- (ML 2.7) Bagging for classification
  14. 14- (ML 2.8) Random forests
  15. 15- (ML 3.1) Decision theory (Basic Framework)
  16. 16- (ML 3.2) Minimizing conditional expected loss
  17. 17- (ML 3.3) Choosing f to minimize expected loss
  18. 18- (ML 3.4) Square loss
  19. 19- (ML 3.5) The Big Picture (part 1)
  20. 20- (ML 3.6) The Big Picture (part 2)
  21. 21- (ML 3.7) The Big Picture (part 3)
  22. 22- (ML 4.1) Maximum Likelihood Estimation (MLE) (part 1)
  23. 23- (ML 4.2) Maximum Likelihood Estimation (MLE) (part 2)
  24. 24- (ML 4.3) MLE for univariate Gaussian mean
  25. 25- (ML 4.4) MLE for a PMF on a finite set (part 1)
  26. 26- (ML 4.5) MLE for a PMF on a finite set (part 2)
  27. 27- (ML 5.1) Exponential families (part 1)
  28. 28- (ML 5.2) Exponential families (part 2)
  29. 29- (ML 5.3) MLE for an exponential family (part 1)
  30. 30- (ML 5.4) MLE for an exponential family (part 2)
  31. 31- (ML 6.1) Maximum a posteriori (MAP) estimation
  32. 32- (ML 6.2) MAP for univariate Gaussian mean
  33. 33- (ML 6.3) Interpretation of MAP as convex combination
  34. 34- (ML 7.1) Bayesian inference - A simple example
  35. 35- (ML 7.2) Aspects of Bayesian inference
  36. 36- (ML 7.3) Proportionality
  37. 37- (ML 7.4) Conjugate priors
  38. 38- (ML 7.5) Beta-Bernoulli model (part 1)
  39. 39- (ML 7.6) Beta-Bernoulli model (part 2)
  40. 40- (ML 7.7.A1) Dirichlet distribution
  41. 41- (ML 7.7.A2) Expectation of a Dirichlet random variable
  42. 42- (ML 7.7) Dirichlet-Categorical model (part 1)
  43. 43- (ML 7.8) Dirichlet-Categorical model (part 2)
  44. 44- (ML 7.9) Posterior distribution for univariate Gaussian (part 1)
  45. 45- (ML 7.10) Posterior distribution for univariate Gaussian (part 2)
  46. 46- (ML 8.1) Naive Bayes classification
  47. 47- (ML 8.2) More about Naive Bayes
  48. 48- (ML 8.3) Bayesian Naive Bayes (part 1)
  49. 49- (ML 8.4) Bayesian Naive Bayes (part 2)
  50. 50- (ML 8.5) Bayesian Naive Bayes (part 3)
  51. 51- (ML 8.6) Bayesian Naive Bayes (part 4)
  52. 52- (ML 9.1) Linear regression - Nonlinearity via basis functions
  53. 53- (ML 9.2) Linear regression - Definition & Motivation
  54. 54- (ML 9.3) Choosing f under linear regression
  55. 55- (ML 9.4) MLE for linear regression (part 1)
  56. 56- (ML 9.5) MLE for linear regression (part 2)
  57. 57- (ML 9.6) MLE for linear regression (part 3)
  58. 58- (ML 9.7) Basis functions MLE
  59. 59- (ML 10.1) Bayesian Linear Regression
  60. 60- (ML 10.2) Posterior for linear regression (part 1)
  61. 61- (ML 10.3) Posterior for linear regression (part 2)
  62. 62- (ML 10.4) Predictive distribution for linear regression (part 1)
  63. 63- (ML 10.5) Predictive distribution for linear regression (part 2)
  64. 64- (ML 10.6) Predictive distribution for linear regression (part 3)
  65. 65- (ML 10.7) Predictive distribution for linear regression (part 4)
  66. 66- (ML 11.1) Estimators
  67. 67- (ML 11.2) Decision theory terminology in different contexts
  68. 68- (ML 11.3) Frequentist risk, Bayesian expected loss, and Bayes risk
  69. 69- (ML 11.4) Choosing a decision rule - Bayesian and frequentist
  70. 70- (ML 11.5) Bias-Variance decomposition
  71. 71- (ML 11.6) Inadmissibility
  72. 72- (ML 11.7) A fun exercise on inadmissibility
  73. 73- (ML 11.8) Bayesian decision theory
  74. 74- (ML 12.1) Model selection - introduction and examples
  75. 75- (ML 12.2) Bias-variance in model selection
  76. 76- (ML 12.3) Model complexity parameters
  77. 77- (ML 12.4) Bayesian model selection
  78. 78- (ML 12.5) Cross-validation (part 1)
  79. 79- (ML 12.6) Cross-validation (part 2)
  80. 80- (ML 12.7) Cross-validation (part 3)
  81. 81- (ML 12.8) Other approaches to model selection
  82. 82- (ML 13.1) Directed graphical models - introductory examples (part 1)
  83. 83- (ML 13.2) Directed graphical models - introductory examples (part 2)
  84. 84- (ML 13.3) Directed graphical models - formalism (part 1)
  85. 85- (ML 13.4) Directed graphical models - formalism (part 2)
  86. 86- (ML 13.5) Generative process specification
  87. 87- (ML 13.6) Graphical model for Bayesian linear regression
  88. 88- (ML 13.7) Graphical model for Bayesian Naive Bayes
  89. 89- (ML 13.8) Conditional independence in graphical models - basic examples (part 1)
  90. 90- (ML 13.9) Conditional independence in graphical models - basic examples (part 2)
  91. 91- (ML 13.10) D-separation (part 1)
  92. 92- (ML 13.11) D-separation (part 2)
  93. 93- (ML 13.12) How to use D-separation - illustrative examples (part 1)
  94. 94- (ML 13.13) How to use D-separation - illustrative examples (part 2)
  95. 95- (ML 14.1) Markov models - motivating examples
  96. 96- (ML 14.2) Markov chains (discrete-time) (part 1)
  97. 97- (ML 14.3) Markov chains (discrete-time) (part 2)
  98. 98- (ML 14.4) Hidden Markov models (HMMs) (part 1)
  99. 99- (ML 14.5) Hidden Markov models (HMMs) (part 2)
  100. 100- (ML 14.6) Forward-Backward algorithm for HMMs
  101. 101- (ML 14.7) Forward algorithm (part 1)
  102. 102- (ML 14.8) Forward algorithm (part 2)
  103. 103- (ML 14.9) Backward algorithm
  104. 104- (ML 14.10) Underflow and the log-sum-exp trick
  105. 105- (ML 14.11) Viterbi algorithm (part 1)
  106. 106- (ML 14.12) Viterbi algorithm (part 2)
  107. 107- (ML 15.1) Newton's method (for optimization) - intuition
  108. 108- (ML 15.2) Newton's method (for optimization) in multiple dimensions
  109. 109- (ML 15.3) Logistic regression (binary) - intuition
  110. 110- (ML 15.4) Logistic regression (binary) - formalism
  111. 111- (ML 15.5) Logistic regression (binary) - computing the gradient
  112. 112- (ML 15.6) Logistic regression (binary) - computing the Hessian
  113. 113- (ML 15.7) Logistic regression (binary) - applying Newton's method
  114. 114- (ML 16.1) K-means clustering (part 1)
  115. 115- (ML 16.2) K-means clustering (part 2)
  116. 116- (ML 16.3) Expectation-Maximization (EM) algorithm
  117. 117- (ML 16.4) Why EM makes sense (part 1)
  118. 118- (ML 16.5) Why EM makes sense (part 2)
  119. 119- (ML 16.6) Gaussian mixture model (Mixture of Gaussians)
  120. 120- (ML 16.7) EM for the Gaussian mixture model (part 1)
  121. 121- (ML 16.8) EM for the Gaussian mixture model (part 2)
  122. 122- (ML 16.9) EM for the Gaussian mixture model (part 3)
  123. 123- (ML 16.10) EM for the Gaussian mixture model (part 4)
  124. 124- (ML 16.11) The likelihood is nondecreasing under EM (part 1)
  125. 125- (ML 16.12) The likelihood is nondecreasing under EM (part 2)
  126. 126- (ML 16.13) EM for MAP estimation
  127. 127- (ML 17.1) Sampling methods - why sampling, pros and cons
  128. 128- (ML 17.2) Monte Carlo methods - A little history
  129. 129- (ML 17.3) Monte Carlo approximation
  130. 130- (ML 17.4) Examples of Monte Carlo approximation
  131. 131- (ML 17.5) Importance sampling - introduction
  132. 132- (ML 17.6) Importance sampling - intuition
  133. 133- (ML 17.7) Importance sampling without normalization constants
  134. 134- (ML 17.8) Smirnov transform (Inverse transform sampling) - invertible case
  135. 135- (ML 17.9) Smirnov transform (Inverse transform sampling) - general case
  136. 136- (ML 17.10) Sampling an exponential using Smirnov
  137. 137- (ML 17.11) Rejection sampling - uniform case
  138. 138- (ML 17.12) Rejection sampling - non-uniform case
  139. 139- (ML 17.13) Proof of rejection sampling (part 1)
  140. 140- (ML 17.14) Proof of rejection sampling (part 2)
  141. 141- (ML 18.1) Markov chain Monte Carlo (MCMC) introduction
  142. 142- (ML 18.2) Ergodic theorem for Markov chains
  143. 143- (ML 18.3) Stationary distributions, Irreducibility, and Aperiodicity
  144. 144- (ML 18.4) Examples of Markov chains with various properties (part 1)
  145. 145- (ML 18.5) Examples of Markov chains with various properties (part 2)
  146. 146- (ML 18.6) Detailed balance (a.k.a. Reversibility)
  147. 147- (ML 18.7) Metropolis algorithm for MCMC
  148. 148- (ML 18.8) Correctness of the Metropolis algorithm
  149. 149- (ML 18.9) Example illustrating the Metropolis algorithm
  150. 150- (ML 19.1) Gaussian processes - definition and first examples
  151. 151- (ML 19.2) Existence of Gaussian processes
  152. 152- (ML 19.3) Examples of Gaussian processes (part 1)
  153. 153- (ML 19.4) Examples of Gaussian processes (part 2)
  154. 154- (ML 19.5) Positive semidefinite kernels (Covariance functions)
  155. 155- (ML 19.6) Inner products and PSD kernels
  156. 156- (ML 19.7) Operations preserving positive semidefinite kernels
  157. 157- (ML 19.8) Proof that a product of PSD kernels is a PSD kernel
  158. 158- (ML 19.9) GP regression - introduction
  159. 159- (ML 19.10) GP regression - the key step
  160. 160- (ML 19.11) GP regression - model and inference