= Matthias Seeger
~~~
{}{img_left}{files/seeger_photo_small_portr.jpg}{Matthias Seeger}{347}{384}{}
*Principal Machine Learning Scientist at Amazon* \n
\n
Contact: /mseeger/ \[@\] gmail \[DOT\] com, /matthis/ \[@\] amazon \[DOT\] com \n
\n
[https://scholar.google.com/citations?user=V-lc8A8AAAAJ&hl=en \[Google Scholar\]] \n
[https://dblp.org/pers/hd/s/Seeger:Matthias_W= \[dblp\]] \n
[https://www.linkedin.com/in/matthias-seeger-3010b765/ \[LinkedIn\]] \n
[https://github.com/mseeger \[GitHub\]]
~~~
~~~
{Short Bio}
Matthias W. Seeger received a Ph.D. from the [https://www.ed.ac.uk/informatics/ School of Informatics], University of Edinburgh, UK, in 2003 (advisor: [https://homepages.inf.ed.ac.uk/ckiw/ Christopher Williams]). He was a research fellow with [https://people.eecs.berkeley.edu/~jordan/ Michael Jordan] and [https://www.stat.berkeley.edu/~bartlett/ Peter Bartlett] at the University of California, Berkeley, from 2003, and with [https://www.is.mpg.de/person/bs Bernhard Schoelkopf] at the Max Planck Institute for Intelligent Systems, Tuebingen, Germany, from 2005. He led a research group at Saarland University, Saarbruecken, Germany, from 2008, and was an assistant professor at the [https://www.epfl.ch/schools/ic/ Ecole Polytechnique Federale de Lausanne] from fall 2010. He joined Amazon as a machine learning scientist in 2014, and received the [https://www.amazon.science/blog/icml-test-of-time-paper-shows-how-times-have-changed ICML Test of Time Award] in 2020.
~~~
~~~
{Research Interests}
For a long while, my interests centered on Bayesian learning and decision making with probabilistic models, from gaining theoretical understanding to making these methods work at large scale in practice. I have worked on the theory and practice of Gaussian processes and Bayesian optimization, scalable variational approximate inference algorithms, Bayesian compressed sensing, and active learning for medical imaging, as well as on demand forecasting, hyperparameter tuning (Bayesian optimization) applied to deep learning (NLP), and AutoML.
More recently, I have become excited about large language models and the data creation and annotation challenges that come with them. I am one of the scientists behind [https://aws.amazon.com/q/aws/ Amazon Q], which transforms the way customers build, optimize, and operate applications and workloads on AWS.
~~~
~~~
{Publications}
*Conference:*
- (2023) D. Salinas, J. Golebiowski, A. Klein, M. Seeger, C. Archambeau. Optimizing Hyperparameters with Conformal Quantile Regression. /International Conference on Machine Learning: 29876-29893/. [http://proceedings.mlr.press/v202/salinas23a/salinas23a.pdf \[pdf\]]
- (2022) D. Salinas, M. Seeger, A. Klein, V. Perrone, M. Wistuba, C. Archambeau. Syne Tune: A Library for Large Scale Hyperparameter Tuning and Reproducible Research. /AutoML Conference/. [https://openreview.net/forum?id=BVeGJ-THIg9 \[openreview\]]
- (2022) A. Makarova, H. Shen, V. Perrone, A. Klein, J. B. Faddoul, A. Krause, M. Seeger, C. Archambeau. Automatic Termination for Hyperparameter Optimization. /AutoML Conference/. [https://openreview.net/forum?id=BNeNQWaBIgq \[openreview\]]
- (2021) E. Lee, D. Eriksson, V. Perrone, M. Seeger. A Nonmyopic Approach to Cost-Constrained Bayesian Optimization. /Uncertainty in Artificial Intelligence: 568-577/. [https://proceedings.mlr.press/v161/lee21a/lee21a.pdf \[pdf\]]
- (2021) L. Tiao, A. Klein, M. Seeger, E. Bonilla, C. Archambeau, F. Ramos. BORE: Bayesian Optimization by Density Ratio Estimation. /International Conference on Machine Learning: 10289-10300/. [http://proceedings.mlr.press/v139/tiao21a/tiao21a.pdf \[pdf\]]
- (2021) V. Perrone, H. Shen, A. Zolic, I. Shcherbatyi, A. Ahmed, T. Bansal, M. Donini, F. Winkelmolen, R. Jenatton, J. B. Faddoul, B. Pogorzelska, M. Miladinovic, K. Kenthapadi, M. Seeger, C. Archambeau. Amazon SageMaker Automatic Model Tuning: Scalable Black-box Optimization. /Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining: 3463-3471/. [https://dl.acm.org/doi/pdf/10.1145/3447548.3467098 \[pdf\]]
- (2020) L. Tiao, A. Klein, M. Seeger, C. Archambeau, E. Bonilla, F. Ramos. Bayesian Optimization by Density Ratio Estimation. /NeurIPS 2020 Workshop on Meta-learning/. [files/bayesian-optimization-by-density-ratio-estimation.pdf \[pdf\]]
- (2020) C. Nguyen, T. Hassner, M. Seeger, C. Archambeau. LEEP: A New Measure to Evaluate Transferability of Learned Representations. /International Conference on Machine Learning 37/. [http://proceedings.mlr.press/v119/nguyen20b/nguyen20b.pdf \[pdf\]]
- (2019) V. Perrone, H. Shen, M. Seeger, C. Archambeau, R. Jenatton. Learning Search Spaces for Bayesian Optimization: Another View of Hyperparameter Transfer Learning. /Neural Information Processing Systems 32: 12751-12761/. [http://papers.nips.cc/paper/9438-learning-search-spaces-for-bayesian-optimization-another-view-of-hyperparameter-transfer-learning.pdf \[pdf\]]
- (2018) V. Perrone, R. Jenatton, M. Seeger, C. Archambeau. Scalable Hyperparameter Transfer Learning. /Neural Information Processing Systems 31: 6846-6856/. [http://papers.nips.cc/paper/7917-scalable-hyperparameter-transfer-learning.pdf \[pdf\]]
- (2018) S. Rangapuram, M. Seeger, J. Gasthaus, L. Stella, Y. Wang, T. Januschowski. Deep State Space Models for Time Series Forecasting. /Neural Information Processing Systems 31: 7796-7805/. [http://papers.nips.cc/paper/8004-deep-state-space-models-for-time-series-forecasting.pdf \[pdf\]], [https://github.com/awslabs/gluon-ts \[code\]]
- (2017) J. Boese, V. Flunkert, J. Gasthaus, T. Januschowski, D. Lange, D. Salinas, S. Schelter, M. Seeger, Y. Wang. Probabilistic Demand Forecasting at Scale. /PVLDB 10(12): 1694-1705/. [http://www.vldb.org/pvldb/vol10/p1694-schelter.pdf \[pdf\]]
- (2017) R. Jenatton, C. Archambeau, J. Gonzalez, M. Seeger. Bayesian Optimization with Tree-structured Dependencies. /International Conference on Machine Learning 34: 1655-1664/. [http://proceedings.mlr.press/v70/jenatton17a/jenatton17a.pdf \[pdf\]]
- (2016) M. Seeger, D. Salinas, V. Flunkert. Bayesian Intermittent Demand Forecasting for Large Inventories. *Oral* at /Neural Information Processing Systems 29: 4646-4654/. [http://papers.nips.cc/paper/6313-bayesian-intermittent-demand-forecasting-for-large-inventories.pdf \[pdf\]]
- (2015) Y. J. Ko, M. Seeger. Expectation Propagation for Rectified Linear Poisson Regression. /Asian Conference on Machine Learning 7/. [https://infoscience.epfl.ch/record/214372/files/Ko49.pdf \[pdf\]]
- (2014) M. Khan, Y. J. Ko, M. Seeger. Scalable Collaborative Bayesian Preference Learning. /Artificial Intelligence and Statistics 17: 475-483/. [https://infoscience.epfl.ch/record/196605/files/khan14.pdf \[pdf\]]
- (2013) M. Khan, A. Aravkin, M. Friedlander, M. Seeger. Fast Dual Variational Inference for Non-Conjugate Latent Gaussian Models. /International Conference on Machine Learning 30/. [https://infoscience.epfl.ch/record/186800/files/final_paper_967.pdf \[pdf\]]
- (2012) M. Seeger, G. Bouchard. Fast Variational Bayesian Inference for Non-Conjugate Matrix Factorization Models. /Artificial Intelligence and Statistics 15/. [https://infoscience.epfl.ch/record/174931/files/aistats2012camera-ready_submitted.pdf \[pdf\]]
- (2012) Y. J. Ko, M. Seeger. Large Scale Variational Bayesian Inference for Structured Scale Mixture Models. /International Conference on Machine Learning 29/. [https://infoscience.epfl.ch/record/177211/files/icml12_struct_sparse.pdf \[pdf\]]
- (2011) M. Seeger, H. Nickisch. Fast Convergent Algorithms for Expectation Propagation Approximate Bayesian Inference. /Artificial Intelligence and Statistics 14/. [http://proceedings.mlr.press/v15/seeger11a/seeger11a.pdf \[pdf\]]
- (2010) M. Seeger. Speeding up Magnetic Resonance Image Acquisition by Bayesian Multi-Slice Adaptive Compressed Sensing. /Neural Information Processing Systems 23: 1633-1641/. [https://papers.nips.cc/paper/3712-speeding-up-magnetic-resonance-image-acquisition-by-bayesian-multi-slice-adaptive-compressed-sensing \[pdf\]]
- (2010) M. Seeger. Gaussian Covariance and Scalable Variational Inference. /International Conference on Machine Learning 27/. [https://infoscience.epfl.ch/record/161304/files/icml10_seeger.pdf \[pdf\]]
- (2010) N. Srinivas, A. Krause, S. Kakade, M. Seeger. Gaussian Process Optimization in the Bandit Setting: No Regret and Experimental Design. /International Conference on Machine Learning 27/. [https://icml.cc/Conferences/2010/papers/422.pdf \[pdf\]] *ICML 2020 Test of Time Award*
- (2009) M. Seeger. Sparse Linear Models: Variational Approximate Inference and Bayesian Experimental Design. /Journal of Physics: Conference Series, 197(012001)/. [https://infoscience.epfl.ch/record/161307/files/jpconf9_197_012001.pdf \[pdf\]]
- (2009) H. Nickisch, M. Seeger. Convex Variational Bayesian Inference for Large Scale Generalized Linear Models. /International Conference on Machine Learning 26: 761-768/. [https://infoscience.epfl.ch/record/161308/files/icml09_nickisch_seeger.pdf \[pdf\]]
- (2009) M. Seeger, H. Nickisch, R. Pohmann, B. Schoelkopf. Bayesian Experimental Design of Magnetic Resonance Imaging Sequences. /Neural Information Processing Systems 21: 1441-1448/. [https://papers.nips.cc/paper/3558-bayesian-experimental-design-of-magnetic-resonance-imaging-sequences.pdf \[pdf\]]
- (2009) D. Nguyen-Tuong, J. Peters, M. Seeger. Local Gaussian Process Regression for Real Time Online Model Learning. /Neural Information Processing Systems 21/. [https://papers.nips.cc/paper/3403-local-gaussian-process-regression-for-real-time-online-model-learning.pdf \[pdf\]]
- (2008) M. Seeger, H. Nickisch. Compressed Sensing and Bayesian Experimental Design. /International Conference on Machine Learning 25/. [https://infoscience.epfl.ch/record/161310/files/compr_sense_revised.pdf \[pdf\]]
- (2008) S. Gerwinn, J. Macke, M. Seeger, M. Bethge. Bayesian Inference for Spiking Neuron Models with a Sparsity Prior. /Neural Information Processing Systems 21: 529-536/. [https://papers.nips.cc/paper/3300-bayesian-inference-for-spiking-neuron-models-with-a-sparsity-prior.pdf \[pdf\]]
- (2007) M. Seeger. Cross-Validation Optimization for Large Scale Hierarchical Classification Kernel Methods. /Neural Information Processing Systems 20: 1233-1240/. [https://papers.nips.cc/paper/3044-cross-validation-optimization-for-large-scale-hierarchical-classification-kernel-methods.pdf \[pdf\]], [https://github.com/mseeger/klr \[code\]]
- (2007) M. Seeger, F. Steinke, K. Tsuda. Bayesian Inference and Optimal Design in the Sparse Linear Model. /Artificial Intelligence and Statistics 11/. [https://infoscience.epfl.ch/record/161313/files/aistats07.pdf \[pdf\]]
- (2007) M. Seeger, S. Gerwinn, M. Bethge. Bayesian Inference for Sparse Generalized Linear Models. /European Conference on Machine Learning 2007: 298-309/. [https://infoscience.epfl.ch/record/161313/files/aistats07.pdf \[pdf\]]
- (2006) S. Kakade, M. Seeger, D. Foster. Worst-Case Bounds for Gaussian Process Models. /Neural Information Processing Systems 19/. [https://papers.nips.cc/paper/2798-worst-case-bounds-for-gaussian-process-models.pdf \[pdf\]]
- (2006) Y. Shen, A. Ng, M. Seeger. Fast Gaussian Process Regression Using KD-Trees. /Neural Information Processing Systems 19/. [https://papers.nips.cc/paper/2835-fast-gaussian-process-regression-using-kd-trees.pdf \[pdf\]]
- (2005) Y.-W. Teh, M. Seeger, M. Jordan. Semiparametric Latent Factor Models. /Artificial Intelligence and Statistics 10/. [https://infoscience.epfl.ch/record/161317/files/aistats05.pdf \[pdf\]]
- (2003) N. Lawrence, M. Seeger, R. Herbrich. Fast Sparse Gaussian Process Methods: The Informative Vector Machine. /Neural Information Processing Systems 16: 609-616/. [https://papers.nips.cc/paper/2240-fast-sparse-gaussian-process-methods-the-informative-vector-machine.pdf \[pdf\]]
- (2003) M. Seeger, C. Williams, N. Lawrence. Fast Forward Selection to Speed Up Sparse Gaussian Process Regression. /Artificial Intelligence and Statistics 9/. [https://infoscience.epfl.ch/record/161318/files/aistats03-final.pdf \[pdf\]]
- (2002) M. Seeger. Covariance Kernels from Bayesian Generative Models. /Neural Information Processing Systems 15: 905-912/. [https://papers.nips.cc/paper/2133-covariance-kernels-from-bayesian-generative-models.pdf \[pdf\]]
- (2001) C. Williams, M. Seeger. Using the Nystroem Method to Speed Up Kernel Machines. /Neural Information Processing Systems 14: 682-688/. [https://papers.nips.cc/paper/1866-using-the-nystrom-method-to-speed-up-kernel-machines.pdf \[pdf\]]
- (2001) M. Seeger, J. Langford, N. Megiddo. An Improved Predictive Accuracy Bound for Averaging Classifiers. /International Conference on Machine Learning 18: 290-297/. [https://infoscience.epfl.ch/record/161321/files/averaging_icml.pdf \[pdf\]]
- (2000) C. Williams, M. Seeger. The Effect of the Input Density Distribution on Kernel-based Classifiers. /International Conference on Machine Learning 17: 1159-1166/. [https://infoscience.epfl.ch/record/161323?ln=en \[link\]]
- (2000) M. Seeger. Bayesian Model Selection for Support Vector Machines, Gaussian Processes and Other Kernel Classifiers. /Neural Information Processing Systems 13: 603-609/. [https://papers.nips.cc/paper/1722-bayesian-model-selection-for-support-vector-machines-gaussian-processes-and-other-kernel-classifiers.pdf \[pdf\]]
*Journal:*
- (2012) N. Srinivas, A. Krause, S. Kakade, M. Seeger. Information-Theoretic Regret Bounds for Gaussian Process Optimization in the Bandit Setting. /IEEE Transactions on Information Theory, 58: 3250-3265/. [https://infoscience.epfl.ch/record/177246/files/srinivas_ieeeit2012.pdf \[pdf\]]
- (2011) M. Seeger, H. Nickisch. Large Scale Bayesian Inference and Experimental Design for Sparse Linear Models. /SIAM Journal on Imaging Sciences, 4(1): 166-199/. [https://infoscience.epfl.ch/record/164038/files/siims_finalpaper.pdf \[pdf\]]
- (2010) M. Seeger, D. Wipf. Variational Bayesian Inference Techniques. /IEEE Signal Processing Magazine, 27(6): 81-91/. [https://infoscience.epfl.ch/record/161294/files/final_paper.pdf \[pdf\]]
- (2010) M. Seeger, H. Nickisch, R. Pohmann, B. Schoelkopf. Optimization of k-Space Trajectories for Compressed Sensing by Bayesian Experimental Design. /Magnetic Resonance in Medicine, 61(1): 116-126/. [https://www.ncbi.nlm.nih.gov/pubmed/19859957 \[PubMed\]]
- (2008) M. Seeger. Cross-Validation Optimization for Large Scale Structured Classification Kernel Methods. /Journal of Machine Learning Research, 9: 1147-1178/. [http://www.jmlr.org/papers/volume9/seeger08b/seeger08b.pdf \[pdf\]], [https://github.com/mseeger/klr \[code\]]
- (2008) M. Seeger. Bayesian Inference and Optimal Design in the Sparse Linear Model. /Journal of Machine Learning Research, 9: 759-813/. [http://www.jmlr.org/papers/volume9/seeger08a/seeger08a.pdf \[pdf\]]
- (2008) M. Seeger, S. Kakade, D. Foster. Information Consistency of Nonparametric Gaussian Process Methods. /IEEE Transactions on Information Theory, 54(5): 2376-2382/. [https://infoscience.epfl.ch/record/161300/files/infcons_ieeeit08.pdf \[pdf\]]
- (2007) F. Steinke, M. Seeger, K. Tsuda. Experimental Design for Efficient Identification of Gene Regulatory Networks using Sparse Bayesian Models. /BMC Systems Biology, 1(51)/ [https://infoscience.epfl.ch/record/161460/files/bmcpaper.pdf \[pdf\]]
- (2004) M. Seeger. Gaussian Processes for Machine Learning. /International Journal of Neural Systems, 14(2): 69-106/. [https://infoscience.epfl.ch/record/161301/files/bayesgp-tut.pdf \[pdf\]]
- (2002) M. Seeger. PAC-Bayesian Generalization Error Bounds for Gaussian Process Classification. /Journal of Machine Learning Research, 3: 233-269/. [http://www.jmlr.org/papers/volume3/seeger02a/seeger02a.pdf \[pdf\]]
*Book Chapters:*
- (2007) M. Seeger. Gaussian Process Belief Propagation. In /G. Bakir, T. Hofmann, B. Schoelkopf (eds.); Predicting Structured Data: 301-318/ [https://infoscience.epfl.ch/record/161325/files/gpbp.pdf \[pdf\]]
- (2006) M. Seeger. A Taxonomy for Semi-Supervised Learning Methods. In /O. Chapelle, B. Schoelkopf, A. Zien (eds.); Semi-Supervised Learning: 15-32/ [https://infoscience.epfl.ch/record/161326/files/chapter2.pdf \[pdf\]]
*Technical Reports:*
- (2021) R. Grazzi, V. Flunkert, D. Salinas, T. Januschowski, M. Seeger, C. Archambeau. Meta-Forecasting by combining Global Deep Representations with Local Adaptation. [https://arxiv.org/abs/2111.03418 \[arxiv\]]
- (2021) A. Makarova, H. Shen, V. Perrone, A. Klein, J. B. Faddoul, A. Krause, M. Seeger, C. Archambeau. Automatic Termination for Hyperparameter Optimization. [https://arxiv.org/abs/2104.08166 \[arxiv\]]
- (2020) A. Klein, L. Tiao, T. Lienart, C. Archambeau, M. Seeger. Model-based Asynchronous Hyperparameter and Neural Architecture Search. [https://arxiv.org/abs/2003.10865 \[arxiv\]], [https://github.com/awslabs/syne-tune \[code\]]
- (2020) E. Lee, V. Perrone, C. Archambeau, M. Seeger. Cost-aware Bayesian Optimization. [https://arxiv.org/abs/2003.10870 \[arxiv\]]
- (2019) V. Perrone, I. Shcherbatyi, R. Jenatton, C. Archambeau, M. Seeger. Constrained Bayesian Optimization with Max-Value Entropy Search. [https://arxiv.org/abs/1910.07003 \[arxiv\]]
- (2019) M. Seeger. Information-Theoretic Regret Bounds for Gaussian Process Optimization in the Bandit Setting. Addendum. [files/info_theo_regret_addendum.pdf \[pdf\]]
- (2017) M. Seeger, S. Rangapuram, Y. Wang, D. Salinas, J. Gasthaus, T. Januschowski, V. Flunkert. Approximate Bayesian Inference in Linear State Space Models for Intermittent Demand Forecasting at Scale. [https://arxiv.org/abs/1709.07638 \[arxiv\]]
- (2017) M. Seeger, A. Hetzel, Z. Dai, E. Meissner, N. Lawrence. Auto-Differentiating Linear Algebra. [https://arxiv.org/abs/1710.08717 \[arxiv\]], [https://github.com/apache/incubator-mxnet \[code\]]
- (2010) N. Srinivas, A. Krause, S. Kakade, M. Seeger. Gaussian Process Optimization in the Bandit Setting: No Regret and Experimental Design. [https://arxiv.org/abs/0912.3995 \[arxiv\]]
- (2010) M. Seeger, H. Nickisch. Large Scale Bayesian Inference and Experimental Design for Sparse Linear Models. [https://arxiv.org/abs/0810.0901 \[arxiv\]]
- (2008) M. Seeger, S. Kakade, D. Foster. Addendum to Information Consistency of Nonparametric Gaussian Process Methods. [files/ieeeinf_addendum.pdf \[pdf\]]
- (2005) M. Seeger, Y.-W. Teh, M. Jordan. Semiparametric Latent Factor Models. [https://infoscience.epfl.ch/record/161465/files/slfm-long.pdf \[pdf\]]
- (2005) M. Seeger. Expectation Propagation for Exponential Families. [https://infoscience.epfl.ch/record/161464/files/epexpfam.pdf \[pdf\]]
- (2004) M. Seeger. Low Rank Updates for the Cholesky Decomposition. [https://infoscience.epfl.ch/record/161468/files/cholupdate.pdf \[pdf\]], [https://github.com/mseeger/chollrup \[code\]]
- (2004) M. Seeger, M. Jordan. Sparse Gaussian Process Classification With Multiple Classes. [https://infoscience.epfl.ch/record/161467/files/ivmmulti.pdf \[pdf\]]
- (2000) M. Seeger. Learning with Labeled and Unlabeled Data. [https://infoscience.epfl.ch/record/161327/files/review.pdf \[pdf\]]
*PhD Theses:*
- (2017) Y. J. Ko. Applications of Approximate Learning and Inference for Probabilistic Models. /Ecole Polytechnique Federale, Lausanne (M. Grossglauser, M. Seeger, advisors)/. [https://infoscience.epfl.ch/record/227482?ln=en \[link\]]
- (2003) M. Seeger. Bayesian Gaussian Process Models: PAC-Bayesian Generalisation Error Bounds and Sparse Approximations. /University of Edinburgh, UK (C. Williams, advisor)/. [https://infoscience.epfl.ch/record/161461?ln=en \[link\]]
*Patents:*
- (2022) S. Rangapuram, J. Gasthaus, T. Januschowski, M. Seeger, L. Stella. Artificial intelligence system combining state space models and neural networks for time series forecasting. /US Patent 11,281,969/.
- (2020) M. Seeger, G. Duncan, J. Gasthaus. Intermittent demand forecasting for large inventories. /US Patent 10,748,072/.
~~~
~~~
{Lecture Notes}
- (2012) Pattern Classification and Machine Learning, taught at EPFL [files/pcml_notes.pdf \[pdf\]]
~~~
~~~
{Software}
Together with colleagues at Amazon (David Salinas, Aaron Klein, Martin Wistuba), I created and maintain [https://github.com/awslabs/syne-tune \[Syne Tune\]], a package for state-of-the-art distributed hyperparameter optimization. Together with Asmus Hetzel and Zhenwen Dai, I introduced linear algebra operators (Cholesky decomposition, LU decomposition, singular value decomposition) into [https://github.com/apache/incubator-mxnet \[MXNet\]].
~~~