Fred (Farbod) Roosta

  • ARC DECRA Fellow
  • School of Mathematics and Physics, University of Queensland, Australia
  • Email: fred.roosta@uq.edu.au
  • Office: Priestley Building (67) - Room 447
  • Phone: +61 7 3365 3259

News

2020

  • July, 2020: Our paper on Newton-ADMM, a distributed GPU-accelerated second-order optimization method, has been accepted to appear in the Proceedings of the ACM/IEEE Supercomputing Conference (SC20) - 18% acceptance rate.
  • June, 2020: Our paper on central limit theorems and concentration inequalities for out-of-sample extensions of the adjacency and Laplacian spectral embeddings has been accepted, with minor revisions, to the Journal of Machine Learning Research (JMLR).
  • June, 2020: Our paper on distributed non-convex Newton-type optimization methods has been accepted to ICML 2020! Congratulations to my PhD student Rixon Crane on his second paper in an A* conference!
  • February, 2020: Our recent paper on stochastic normalizing flows, an extension of continuous normalizing flows based on stochastic differential equations for maximum likelihood estimation and variational inference, is now available.
  • February, 2020: Our new paper studying Gaussian processes arising from infinitely wide neural networks with ELU and GELU activations, as well as analysing the fixed-point dynamics of iterated kernels, is now available.
  • January, 2020: Our paper on the theory and application of the reproducing Stein kernel for a posteriori correction of Monte-Carlo sampling algorithms is now available on arXiv.
  • January, 2020: Our paper on the theoretical and practical properties of Monte-Carlo sampling algorithms obtained by implicit discretizations of the Langevin SDE has been accepted, with minor revisions, to the Journal of Machine Learning Research (JMLR).

2019

  • December, 2019: Our paper on an empirical study of second-order optimization methods in deep learning has been accepted to the SIAM International Conference on Data Mining (SDM20) - 19% acceptance rate.
  • December, 2019: Our new paper, which extends the existing results on the Gaussian process interpretation of multi-layer perceptrons to richer families of priors, is now available.
  • November, 2019: Our new paper on the application of RandNLA and leverage score sampling within the context of big-data time series is now available.
  • November, 2019: I will be serving as a senior program committee member for the 29th International Joint Conference on Artificial Intelligence (IJCAI 2020).
  • October, 2019: Our paper on central limit theorems and concentration inequalities for out-of-sample extensions of the adjacency and Laplacian spectral embeddings is now available.
  • September, 2019: Our paper on stability analysis of Newton-MR with respect to Hessian approximations is now available.
  • September, 2019: DINGO Does Vancouver! Our paper on distributed Newton-type methods for optimization of invex objectives has been accepted to NeurIPS 2019!
  • June, 2019: Congratulations to my PhD student, Rixon Crane, for being awarded the Best Poster Award at AMSI Optimise this year.
  • June, 2019: I was selected as a top 5% reviewer for ICML 2019.
  • May, 2019: After 17 months and 21 days, our paper on non-convex Newton-type methods with inexact Hessian information has been accepted to Mathematical Programming.
  • May, 2019: Our paper on the invariance kernels of multilayer perceptrons during training has been accepted to the International Joint Conference on Artificial Intelligence (IJCAI 2019) - 18% acceptance rate.
  • April, 2019: Our paper on the theoretical and practical properties of Monte-Carlo sampling algorithms obtained by implicit discretizations of the Langevin SDE is now available on arXiv.