GLAM: Machine Learning Reading Group (Grupo de Lectura de Aprendizaje de Máquinas)
This reading group is aimed at undergraduate and graduate students who want to acquire advanced knowledge of machine learning (ML), whether to complement their research by incorporating ML techniques into their applications and experiments, or to study and develop new ML methods for general applications.
The schedule of talks for this semester is given below.
GLAM #1: The Gaussian Process Convolution Model (Felipe Tobar, 22/9)
a) F. Tobar, T. Bui and R. Turner, "Learning Stationary Time Series using Gaussian Processes with Nonparametric Kernels", Advances in Neural Information Processing Systems, 2015.
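All of the GP sessions below build on the standard GP regression predictive equations, which can be sketched in a few lines of NumPy. This is a generic illustration, not the model of the paper above; the squared-exponential kernel, its hyperparameters, and the sine toy data are illustrative assumptions:

```python
import numpy as np

def rbf(x1, x2, ell=1.0, sigma=1.0):
    """Squared-exponential (RBF) kernel matrix between two sets of 1-D inputs."""
    d = x1[:, None] - x2[None, :]
    return sigma**2 * np.exp(-0.5 * (d / ell)**2)

def gp_predict(x_train, y_train, x_test, noise=0.1):
    """Posterior mean and marginal variance of a zero-mean GP at the test inputs."""
    K = rbf(x_train, x_train) + noise**2 * np.eye(len(x_train))
    Ks = rbf(x_test, x_train)
    Kss = rbf(x_test, x_test)
    alpha = np.linalg.solve(K, y_train)          # K^{-1} y
    mean = Ks @ alpha
    cov = Kss - Ks @ np.linalg.solve(K, Ks.T)    # posterior covariance
    return mean, np.diag(cov)

# Noisy samples of a sine; the posterior mean should track the sine closely.
x = np.linspace(0, 2 * np.pi, 30)
y = np.sin(x) + 0.05 * np.random.default_rng(0).normal(size=30)
mu, var = gp_predict(x, y, x)
```

Note the O(N^3) linear solves against the full N x N kernel matrix; the sparse approximations of session #2 exist precisely to avoid this cost.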
GLAM #2: Sparse Gaussian Processes (Christopher Ley, 29/9) Slides, summary
a) M. Bauer, M. van der Wilk and C. E. Rasmussen, "Understanding Probabilistic Sparse Gaussian Process Approximations", arXiv:1606.04820v1.
b) E. Snelson and Z. Ghahramani, "Sparse Gaussian Processes using Pseudo-inputs", Advances in Neural Information Processing Systems, vol. 18, 2006.
c) J. Quiñonero-Candela and C. E. Rasmussen, "A Unifying View of Sparse Approximate Gaussian Process Regression", Journal of Machine Learning Research, 6:1939–1959, 2005.
d) M. K. Titsias, "Variational Learning of Inducing Variables in Sparse Gaussian Processes", in Proceedings of the Twelfth International Conference on Artificial Intelligence and Statistics, pp. 567–574, 2009.
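The pseudo-input idea of (b) can be sketched via the DTC/SoR predictive mean from the unifying view in (c): M inducing inputs summarize the N training points, reducing the cost from O(N^3) to O(N M^2). The kernel, inducing locations, and toy data here are illustrative assumptions:

```python
import numpy as np

def rbf(x1, x2, ell=1.0):
    d = x1[:, None] - x2[None, :]
    return np.exp(-0.5 * (d / ell)**2)

def sparse_gp_mean(x, y, x_test, z, noise=0.1):
    """DTC/SoR predictive mean with M inducing inputs z (M << N):
    mu* = noise^-2 K*u (Kuu + noise^-2 Kuf Kfu)^{-1} Kuf y."""
    Kuu = rbf(z, z) + 1e-8 * np.eye(len(z))   # jitter for numerical stability
    Kuf = rbf(z, x)
    Ksu = rbf(x_test, z)
    Sigma = np.linalg.inv(Kuu + Kuf @ Kuf.T / noise**2)
    return Ksu @ Sigma @ Kuf @ y / noise**2

rng = np.random.default_rng(1)
x = np.linspace(0, 2 * np.pi, 200)
y = np.sin(x) + 0.05 * rng.normal(size=200)
z = np.linspace(0, 2 * np.pi, 15)   # 15 pseudo-inputs summarize 200 observations
mu = sparse_gp_mean(x, y, x, z)
```

Selecting the pseudo-input locations z (by optimizing the marginal likelihood, or variationally as in (d)) is the core of the papers above; fixing them on a grid, as here, is only for illustration.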
GLAM #3: Warped Gaussian Processes (Gonzalo Ríos, 6/10) Slides, summary
a) E. Snelson, C. E. Rasmussen and Z. Ghahramani, "Warped Gaussian Processes", Advances in Neural Information Processing Systems, 16, pp. 337-344, 2004.
b) M. Lázaro-Gredilla, "Bayesian Warped Gaussian Processes", Advances in Neural Information Processing Systems, pp. 1619-1627, 2012.
GLAM #4: Multi-output Gaussian Processes (Gabriel Parra, 13/10) Slides, summary
a) M. A. Alvarez and N. D. Lawrence, "Sparse Convolved Gaussian Processes for Multi-output Regression", Advances in Neural Information Processing Systems 21, pp. 57–64, MIT Press, Cambridge, MA, 2009.
b) M. A. Alvarez, L. Rosasco and N. D. Lawrence, "Kernels for Vector-Valued Functions: A Review", Foundations and Trends in Machine Learning, vol. 4, no. 3, pp. 195-266, 2012.
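The simplest multi-output construction reviewed in (b), the intrinsic coregionalization model, builds the joint covariance over D outputs as the Kronecker product of a D x D coregionalization matrix B and a single-output kernel. A minimal sketch (the choice of B and kernel is an illustrative assumption):

```python
import numpy as np

def rbf(x, ell=1.0):
    """Squared-exponential kernel matrix of a set of 1-D inputs with itself."""
    d = x[:, None] - x[None, :]
    return np.exp(-0.5 * (d / ell)**2)

# Coregionalization matrix B must be positive semi-definite:
# here rank-1 (shared latent function) plus a diagonal (independent noise-like part).
w = np.array([[1.0], [0.5]])
B = w @ w.T + 0.1 * np.eye(2)

x = np.linspace(0, 1, 20)
K = np.kron(B, rbf(x))   # 40x40 joint covariance: 2 outputs x 20 inputs

# Sanity check: a Kronecker product of PSD matrices is itself a valid covariance.
eigvals = np.linalg.eigvalsh(K)
```

The off-diagonal blocks B[i, j] * rbf(x) are what couple the outputs, so observations of one output inform predictions of the others.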
GLAM #5: State-Space Model Identification with Gaussian Processes and Kernels (Alejandro Bernardín, 20/10) Slides, summary
a) F. Tobar, P. Djuric and D. Mandic, "Unsupervised State-Space Modeling Using Reproducing Kernels", IEEE Transactions on Signal Processing, 2015.
a') F. Tobar and D. Mandic, "A Particle Filtering Based Kernel HMM Predictor", in Proc. IEEE ICASSP, pp. 7969-7973, 2014.
b) R. D. Turner, M. P. Deisenroth and C. E. Rasmussen, "State-Space Inference and Learning with Gaussian Processes", in AISTATS, pp. 868-875, 2010.
GLAM #6: Monte Carlo Methods (Donato Vásquez, 27/10) Slides, summary
a) C. Andrieu, N. de Freitas, A. Doucet and M. I. Jordan, "An Introduction to MCMC for Machine Learning", Machine Learning, Kluwer Academic Publishers, 2003.
b) S. Brooks, A. Gelman, G. L. Jones and X.-L. Meng (eds.), Handbook of Markov Chain Monte Carlo, Chapman & Hall/CRC, 2011.
c) T. Chen, E. B. Fox and C. Guestrin, "Stochastic Gradient Hamiltonian Monte Carlo", in Proceedings of the International Conference on Machine Learning (ICML), 2014.
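The workhorse algorithm covered in the Andrieu et al. tutorial, random-walk Metropolis–Hastings, fits in a dozen lines. The target (a standard normal, up to a constant) and the step size are illustrative choices for this sketch:

```python
import numpy as np

def metropolis_hastings(log_target, n_samples, step=1.0, x0=0.0, seed=0):
    """Random-walk Metropolis-Hastings: propose x' ~ N(x, step^2) and
    accept with probability min(1, p(x') / p(x))."""
    rng = np.random.default_rng(seed)
    x = x0
    samples = np.empty(n_samples)
    for i in range(n_samples):
        proposal = x + step * rng.normal()
        # Accept/reject in log space; the normalizing constant cancels.
        if np.log(rng.uniform()) < log_target(proposal) - log_target(x):
            x = proposal
        samples[i] = x   # on rejection the chain repeats the current state
    return samples

# Target: standard normal, specified only up to its normalizing constant.
samples = metropolis_hastings(lambda x: -0.5 * x**2, 20000)
```

Because consecutive samples are correlated, the effective sample size is well below 20000; tuning the step size trades acceptance rate against mixing speed.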
GLAM #7: Sequential Monte Carlo Methods: Particle Filtering (Joaquín Rojas, 3/11) Presentación, resumen
a) N. Gordon, D. Salmond and A. Smith, "Novel Approach to Nonlinear/Non-Gaussian Bayesian State Estimation", IEE Proceedings F, 1993.
b) M. K. Pitt and N. Shephard, "Filtering via Simulation: Auxiliary Particle Filters", Journal of the American Statistical Association, 1999.
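The bootstrap filter of Gordon et al. alternates propagation through the state dynamics, likelihood weighting, and resampling. A sketch on an illustrative 1-D linear-Gaussian model (chosen so the filtered means can be sanity-checked against the simulated states):

```python
import numpy as np

def bootstrap_filter(ys, n_particles=2000, a=0.9, q=0.5, r=0.5, seed=0):
    """Bootstrap particle filter for x_t = a x_{t-1} + N(0, q^2),
    y_t = x_t + N(0, r^2). Returns the filtered means E[x_t | y_{1:t}]."""
    rng = np.random.default_rng(seed)
    particles = rng.normal(0.0, 1.0, n_particles)   # samples from the prior at t=0
    means = []
    for y in ys:
        particles = a * particles + q * rng.normal(size=n_particles)  # propagate
        logw = -0.5 * ((y - particles) / r)**2       # weight by the likelihood of y
        w = np.exp(logw - logw.max())                # stabilized in log space
        w /= w.sum()
        means.append(np.sum(w * particles))
        idx = rng.choice(n_particles, n_particles, p=w)  # multinomial resampling
        particles = particles[idx]
    return np.array(means)

# Simulate the model, then filter its observations.
rng = np.random.default_rng(42)
x, xs, ys = 0.0, [], []
for _ in range(100):
    x = 0.9 * x + 0.5 * rng.normal()
    xs.append(x)
    ys.append(x + 0.5 * rng.normal())
mu = bootstrap_filter(ys)
```

Resampling at every step, as here, is the original bootstrap scheme; the auxiliary particle filter of Pitt and Shephard (b) refines it by looking ahead at the next observation before resampling.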
GLAM #8: Deep Neural Networks (Matías Silva, 17/11) Slides
GLAM #9: Random Forests (Romain Gouron, 24/11) Slides, summary
GLAM #10: Probabilistic Graphical Models (Ignacio Reyes, 1/12) Slides