[Machine Learning] 13 Sparse linear models

Published: 2025-08-31

Chapter contents

13 Sparse linear models
13.1 Introduction
13.2 Bayesian variable selection
13.2.1 The spike and slab model
13.2.2 From the Bernoulli-Gaussian model to l0 regularization
13.2.3 Algorithms
13.3 l1 regularization: basics
13.3.1 Why does l1 regularization yield sparse solutions?
13.3.2 Optimality conditions for lasso
13.3.3 Comparison of least squares, lasso, ridge and subset selection
13.3.4 Regularization path
13.3.5 Model selection
13.3.6 Bayesian inference for linear models with Laplace priors
13.4 l1 regularization: algorithms
13.4.1 Coordinate descent
13.4.2 LARS and other homotopy methods
13.4.3 Proximal and gradient projection methods
13.4.4 EM for lasso
13.5 l1 regularization: extensions
13.5.1 Group lasso
13.5.2 Fused lasso
13.5.3 Elastic net (ridge and lasso combined)
13.6 Non-convex regularizers
13.6.1 Bridge regression
13.6.2 Hierarchical adaptive lasso
13.6.3 Other hierarchical priors
13.7 Automatic relevance determination (ARD) / sparse Bayesian learning (SBL)
13.7.1 ARD for linear regression
13.7.2 Whence sparsity?
13.7.3 Connection to MAP estimation
13.7.4 Algorithms for ARD *
13.7.5 ARD for logistic regression
13.8 Sparse coding *
13.8.1 Learning a sparse coding dictionary
13.8.2 Results of dictionary learning from image patches
13.8.3 Compressed sensing
13.8.4 Image inpainting and denoising
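To make the chapter's core algorithm concrete, here is a minimal sketch of coordinate descent for the lasso (§13.3, §13.4.1), i.e. minimizing 0.5‖y − Xw‖² + λ‖w‖₁ via the soft-thresholding operator. This is an illustrative NumPy implementation only, not code from the book; the function names and toy data are my own.

```python
import numpy as np

def soft_threshold(x, lam):
    # Soft-thresholding: the proximal operator of the l1 norm
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def lasso_cd(X, y, lam, n_iter=100):
    """Coordinate descent for min_w 0.5*||y - Xw||^2 + lam*||w||_1."""
    n, d = X.shape
    w = np.zeros(d)
    col_sq = (X ** 2).sum(axis=0)  # precomputed squared column norms
    for _ in range(n_iter):
        for j in range(d):
            # Partial residual with feature j's contribution removed
            r = y - X @ w + X[:, j] * w[j]
            # Exact 1-d minimizer for coordinate j
            w[j] = soft_threshold(X[:, j] @ r, lam) / col_sq[j]
    return w

# Toy demo: only the first two of ten features are truly active
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + 0.1 * rng.standard_normal(200)
w = lasso_cd(X, y, lam=20.0, n_iter=200)
print(np.round(w, 2))  # irrelevant coefficients are driven exactly to zero
```

Note the hallmark of l1 regularization discussed in §13.3.1: coefficients of irrelevant features are set exactly to zero, not merely shrunk, because the soft-thresholding step zeroes any coordinate whose correlation with the residual falls below λ.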

GitHub download link: https://github.com/916718212/Machine-Learning-A-Probabilistic-Perspective-.git

