PhD student at the Department of Computer and Systems Sciences, Stockholm University. My main research interests include machine learning (with a particular focus on classification and time series) and its applications in health informatics.

Figure 1 illustrates a classification algorithm at work: a radial basis function network, with centers chosen by Lloyd's algorithm, separating red and blue crosses. The black line is the target function and the bold blue line is the hypothesis learned by the algorithm.

Figure 1: RBF learned for $K=2,3\ldots 36$ using $\theta = (\Phi^T\Phi)^{-1}\Phi^Ty$ with \[\Phi = \begin{bmatrix} e^{-\gamma \lVert x_1 - \mu_1 \rVert^2} & \ldots & e^{-\gamma \lVert x_1 - \mu_K \rVert^2} \\ e^{-\gamma \lVert x_2 - \mu_1 \rVert^2} & \ldots & e^{-\gamma \lVert x_2 - \mu_K \rVert^2} \\ & \vdots & \\ e^{-\gamma \lVert x_N - \mu_1 \rVert^2} & \ldots & e^{-\gamma \lVert x_N - \mu_K \rVert^2} \end{bmatrix}\]
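The procedure behind the figure can be sketched in a few lines: pick $K$ centers $\mu_k$ with Lloyd's algorithm, build the design matrix $\Phi$ from the caption, and solve the least-squares problem for $\theta$. This is a minimal numpy-only sketch under my own function names, not the actual code used to produce the figure:

```python
import numpy as np

def lloyd(X, K, iters=50, seed=0):
    # Lloyd's algorithm: alternate nearest-center assignment
    # and centroid update until the centers settle.
    rng = np.random.default_rng(seed)
    mu = X[rng.choice(len(X), size=K, replace=False)]
    for _ in range(iters):
        d = ((X[:, None, :] - mu[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for k in range(K):
            if (labels == k).any():
                mu[k] = X[labels == k].mean(0)
    return mu

def rbf_fit(X, y, K, gamma):
    # Phi[i, k] = exp(-gamma * ||x_i - mu_k||^2), then the
    # least-squares solution theta = (Phi^T Phi)^{-1} Phi^T y
    # (lstsq is the numerically safer way to compute it).
    mu = lloyd(X, K)
    Phi = np.exp(-gamma * ((X[:, None, :] - mu[None, :, :]) ** 2).sum(-1))
    theta, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return mu, theta

def rbf_predict(X, mu, gamma, theta):
    # Classify by the sign of the fitted RBF expansion.
    Phi = np.exp(-gamma * ((X[:, None, :] - mu[None, :, :]) ** 2).sum(-1))
    return np.sign(Phi @ theta)
```

Solving via `lstsq` instead of forming $(\Phi^T\Phi)^{-1}$ explicitly avoids trouble when two centers nearly coincide and $\Phi$ becomes ill-conditioned.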

Figure 2 shows a different algorithm, the random forest, producing a slightly different classification boundary for the same target function (but a different data set). The figure was produced using the Octave bindings for rr (which can be found below once released).

Figure 2: Random forest learned using bagging with $10, 20, \ldots, 100$ trees, inspecting $\log_2(m)+1$ features at each node
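The two ingredients named in the caption, bagging and the restricted feature search, can be sketched from scratch. This is a minimal illustrative implementation under assumed names, not the rr code behind the figure: each tree is grown on a bootstrap sample, and every node inspects only $\lfloor\log_2(m)\rfloor + 1$ randomly chosen features:

```python
import numpy as np

def gini(y):
    # Gini impurity of an integer label vector.
    p = np.bincount(y) / len(y)
    return 1.0 - (p ** 2).sum()

def grow_tree(X, y, n_feats, rng, depth=5, min_leaf=2):
    # Recursively grow a decision tree; at each node only a random
    # subset of n_feats features is inspected (the random-forest step).
    if depth == 0 or len(y) < min_leaf or len(np.unique(y)) == 1:
        return ("leaf", int(np.bincount(y).argmax()))
    feats = rng.choice(X.shape[1], size=min(n_feats, X.shape[1]), replace=False)
    best = None
    for f in feats:
        for t in np.unique(X[:, f]):
            left = X[:, f] <= t
            if left.all() or not left.any():
                continue
            # Size-weighted Gini impurity of the candidate split.
            g = sum(w.sum() / len(y) * gini(y[w]) for w in (left, ~left))
            if best is None or g < best[0]:
                best = (g, f, t, left)
    if best is None:
        return ("leaf", int(np.bincount(y).argmax()))
    _, f, t, left = best
    return ("node", f, t,
            grow_tree(X[left], y[left], n_feats, rng, depth - 1, min_leaf),
            grow_tree(X[~left], y[~left], n_feats, rng, depth - 1, min_leaf))

def predict_tree(tree, x):
    while tree[0] == "node":
        _, f, t, l, r = tree
        tree = l if x[f] <= t else r
    return tree[1]

def random_forest(X, y, n_trees, seed=0):
    # Bagging: each tree sees a bootstrap sample of the training data;
    # n_feats = floor(log2(m)) + 1 features are inspected per node.
    rng = np.random.default_rng(seed)
    n_feats = int(np.log2(X.shape[1])) + 1
    forest = []
    for _ in range(n_trees):
        idx = rng.choice(len(y), size=len(y), replace=True)  # bootstrap
        forest.append(grow_tree(X[idx], y[idx], n_feats, rng))
    return forest

def predict_forest(forest, x):
    # Majority vote over the trees.
    votes = [predict_tree(t, x) for t in forest]
    return int(np.bincount(votes).argmax())
```

Because each tree votes independently, sweeping the tree count ($10, 20, \ldots, 100$) mostly smooths the boundary rather than changing its overall shape, which is what Figure 2 is sweeping over.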


My code can be found on GitHub and Launchpad.