Grad Courses Taken
Aalto University
Special Course on Latent Variable Modeling and Bayesian Matrix Factorization, by Xiangju Qin and Paul Blomstedt: Latent variable models (LVMs) are powerful and flexible tools for learning the hidden structure underlying data objects in an unsupervised fashion; they provide a compact, meaningful representation of the inputs. Bayesian matrix factorization (BMF) is a general class of LVMs which factorizes a data matrix into a product of two low-rank matrices. LVMs (and BMF) serve many purposes in machine learning, such as clustering, pattern recognition, dimensionality reduction, feature extraction, and predicting missing values. Latent variable modeling is a very broad research topic; this seminar course provides a gateway to it through introductory lectures on several widely used LVMs (e.g., factor analysis and multi-view learning) and the basic principles of the methods, as well as discussion of more recent advances in the field.
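The low-rank decomposition behind BMF can be sketched in a few lines. This is a minimal non-Bayesian illustration (plain alternating least squares, no priors on the factors; all sizes and the ridge term are made up for the example), just to show the X ≈ UVᵀ idea:

```python
import numpy as np

# Factorize a data matrix X into two low-rank factors, X ~ U @ V.T.
# Alternating least squares: solve for U with V fixed, then V with U fixed.
rng = np.random.default_rng(0)
n, m, k = 20, 15, 3                                     # data size, latent rank
X = rng.normal(size=(n, k)) @ rng.normal(size=(k, m))   # exactly rank-k data

U = rng.normal(size=(n, k))
V = rng.normal(size=(m, k))
lam = 1e-3                                              # small ridge for stability
for _ in range(30):
    U = X @ V @ np.linalg.inv(V.T @ V + lam * np.eye(k))
    V = X.T @ U @ np.linalg.inv(U.T @ U + lam * np.eye(k))

err = np.linalg.norm(X - U @ V.T) / np.linalg.norm(X)
print(round(err, 4))
```

Since the data here is exactly rank k, the relative reconstruction error drops close to zero; a Bayesian treatment would instead place priors on U and V and infer their posterior.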
Seminar Course on Approximate Bayesian Computation, by Henri Vuollekoski: Approximate Bayesian Computation (a.k.a. ABC, likelihood-free inference) is a class of computational inference methods that can be used when the likelihood function is difficult to evaluate or unknown, but one has a simulator that (hopefully) generates data resembling the observations when run with the correct parameters. The underlying intuition is that similar model parameters are likely to generate similar data, though the practice is of course a bit more complex. ABC has applications from medicine to particle physics, and is expected to revolutionize computational sciences that cannot apply traditional statistical methods.
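The intuition above maps directly onto the simplest ABC algorithm, rejection sampling: draw a parameter from the prior, simulate data, and keep the parameter only if the simulated data is close to the observations. A toy sketch (my own illustration, not course material; the Gaussian model, tolerance, and summary statistic are made up):

```python
import numpy as np

# ABC rejection sampling: infer the mean of a Gaussian with known std,
# using only a simulator and a distance between summary statistics.
rng = np.random.default_rng(1)
observed = rng.normal(loc=2.0, scale=1.0, size=100)   # "real" data
obs_mean = observed.mean()                            # summary statistic

accepted = []
for _ in range(20000):
    theta = rng.uniform(-5, 5)                        # draw from the prior
    sim = rng.normal(loc=theta, scale=1.0, size=100)  # run the simulator
    if abs(sim.mean() - obs_mean) < 0.1:              # keep if it mimics the data
        accepted.append(theta)

posterior_mean = float(np.mean(accepted))
print(round(posterior_mean, 2))
```

The accepted parameters approximate draws from the posterior; shrinking the tolerance sharpens the approximation at the cost of more rejections.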
Gaussian Processes: Theory and Applications, by Michael Andersen: This seminar course gives an introduction to the field of Gaussian processes and provides the theoretical background, covering both modelling and inference aspects. The seminar includes Gaussian process regression and classification, as well as examples of how Gaussian processes can be used as building blocks in more complex models.
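GP regression, the first topic mentioned, reduces to a few linear-algebra steps once a kernel is chosen. A bare-bones sketch (illustrative only; the RBF kernel, lengthscale, and toy sine data are my own choices, not course code):

```python
import numpy as np

def rbf(a, b, ell=1.0):
    # Squared-exponential kernel k(a, b) = exp(-(a - b)^2 / (2 ell^2)).
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ell) ** 2)

x = np.linspace(0, 5, 20)            # training inputs
y = np.sin(x)                        # noise-free targets for clarity
sigma2 = 1e-4                        # observation noise variance
xs = np.array([2.5])                 # test input

K = rbf(x, x) + sigma2 * np.eye(len(x))
ks = rbf(xs, x)
mean = ks @ np.linalg.solve(K, y)    # GP posterior mean at xs
print(round(float(mean[0]), 3))
```

The posterior mean interpolates the sine curve closely at the test point; the posterior variance (not shown) would come from the same kernel matrices.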
Kernel Methods in Machine Learning, by Juho Rousu: Margin-based models and kernels. Classification and support vector machines. Ranking and preference learning. Unsupervised learning with kernels. Kernels for structured data. Multi-label classification. Semi-supervised learning. Predicting structured output. Convex optimization methods.
Bogazici University
Cognitive Science, by Albert Ali Salah: An introductory course on cognitive science. Each week we covered a different topic, including the brain, sensation & cognition, language, attention, learning, and memory.
Bayesian Statistics & Machine Learning, by Ali Taylan Cemgil: The main focus of this course was graphical models, in particular inference and parameter learning in Hidden Markov Models and the Kalman filter. We also had a poster session at the end of the semester, where I presented an application of Mixed Memory Markov Models. So, here is the outcome of this course, and my first poster.
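Inference in an HMM, the core of this course, boils down to the forward recursion. A small sketch (the transition and emission numbers are made up for illustration):

```python
import numpy as np

# Forward algorithm: compute p(observations) for a 2-state HMM by
# propagating alpha_t(i) = p(o_1..o_t, state_t = i).
A = np.array([[0.7, 0.3],            # state transition probabilities
              [0.4, 0.6]])
B = np.array([[0.9, 0.1],            # emission probabilities (state x symbol)
              [0.2, 0.8]])
pi = np.array([0.5, 0.5])            # initial state distribution
obs = [0, 1, 0]                      # observed symbol sequence

alpha = pi * B[:, obs[0]]            # alpha_1(i) = pi_i * b_i(o_1)
for o in obs[1:]:
    alpha = (alpha @ A) * B[:, o]    # alpha_t = (alpha_{t-1} A) .* b(o_t)

likelihood = float(alpha.sum())      # p(o_1, ..., o_T)
print(round(likelihood, 4))
```

The same recursion underlies the Kalman filter, with the discrete sums replaced by Gaussian integrals.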
Pattern Recognition, by Ethem Alpaydın: The course covered traditional machine learning: the first 9 chapters of Ethem Hoca's Introduction to Machine Learning book, spanning parametric, semiparametric, and nonparametric methods for classification, regression, and clustering, plus PCA, LDA, decision trees, and HMMs. We also worked on a term project, in which I compared three Markov models: the Hidden Markov Model, the Mixed Memory Markov Model, and the Factorial Markov Model. The resulting report is here.
Graphs & Network Flows, by Caner Taşkın: Shortest path, max-flow & min-cut, minimum-cost flow/circulation problems, and problems with side constraints were the main subjects. In my term project I compared the execution times of four different algorithms for the maximum flow problem on various types of networks; here is the report.
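As a flavour of the max-flow problem studied here, one of the classic algorithms (Edmonds-Karp, i.e. BFS augmenting paths; not necessarily one of the four I benchmarked) fits in a short sketch. The toy network and capacities are made up:

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp: repeatedly augment along shortest residual paths."""
    n, flow = len(cap), 0
    while True:
        # BFS for a shortest augmenting path in the residual graph.
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and cap[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:                      # no augmenting path left
            return flow
        # Bottleneck capacity along the path, then augment residuals.
        path_flow, v = float("inf"), t
        while v != s:
            path_flow = min(path_flow, cap[parent[v]][v])
            v = parent[v]
        v = t
        while v != s:
            cap[parent[v]][v] -= path_flow
            cap[v][parent[v]] += path_flow
            v = parent[v]
        flow += path_flow

cap = [[0, 10, 10, 0],                           # node 0 = source, 3 = sink
       [0, 0, 2, 8],
       [0, 0, 0, 9],
       [0, 0, 0, 0]]
flows = max_flow(cap, 0, 3)
print(flows)
```

By max-flow/min-cut duality the answer equals the capacity of the cut separating the sink, here 8 + 9 = 17.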
Monte Carlo Methods, by Ali Taylan Cemgil: A followup course to Bayesian Statistics. We investigated random number generation, rejection sampling, importance sampling, discrete space Markov chains, MetropolisHastings algorithm, Gibbs sampler and sequential Monte Carlo. That was I guess my favorite course among all in this list. As a term project, I built Mixed Memory Hidden Markov Model for time series analysis, which is briefly described in projects sections, and implemented parameter learning via Gibbs sampler, Metropolis algorithm and particle filter. Here is the my poster.
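The Metropolis algorithm mentioned above is simple enough to sketch in full. A minimal random-walk version targeting a standard normal (an illustrative toy, not my project code; the proposal scale and burn-in are arbitrary choices):

```python
import numpy as np

# Random-walk Metropolis: sample from a target known only up to a constant.
rng = np.random.default_rng(2)
log_target = lambda x: -0.5 * x * x        # log of unnormalized N(0, 1)

x, samples = 0.0, []
for _ in range(50000):
    prop = x + rng.normal(scale=1.0)       # symmetric random-walk proposal
    # Accept with probability min(1, target(prop) / target(x)).
    if np.log(rng.uniform()) < log_target(prop) - log_target(x):
        x = prop
    samples.append(x)

samples = np.array(samples[5000:])         # discard burn-in
print(round(float(samples.mean()), 2), round(float(samples.std()), 2))
```

The chain's empirical mean and standard deviation approach 0 and 1, the moments of the target; Metropolis-Hastings generalizes this to asymmetric proposals by correcting the acceptance ratio.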
Artificial Neural Networks, by Ethem Alpaydın: The rest of Ethem Hoca's Introduction to Machine Learning book was covered in this course. In particular, we studied multilayer perceptrons, recurrent & convolutional neural networks, kernel machines & support vector machines, mixtures of experts, and reinforcement learning. My term project was about clustering and prediction in time series: I implemented self-organizing maps and hierarchical clustering for the first part, and four types of neural networks for the second: a tapped-delay multilayer perceptron, an Elman RNN, a full-lag Elman RNN, and a tapped-delay Jordan RNN. Here is my report.
Principles of Artificial Intelligence, by Albert Ali Salah: Intelligent & problem-solving agents, problem types and search methods, heuristics, genetic algorithms, constraint satisfaction problems, and games were the main topics of the course. We designed agents that play Splendor in Prolog. My term paper was about Bayesian approaches in human cognition, and here is the report. Also, see my term project report for the Introduction to AI course, titled Philosophy of Artificial Intelligence.
