Jan Freyberg

Machine learning and cognitive neuroscience


Research

I currently work on applied machine learning. I am part of the AI for Good team at Element AI, where I implement existing or new algorithms to support humanitarian and environmental work. Our team is building platforms and tools to support NGOs in their work.

In the past, I worked on applying machine learning in a commercial setting at Faculty, a London-based startup that, amongst other things, carries out consulting projects for organisations.

Before that, I studied visual perception in clinical populations. Many psychiatric conditions feature unusual sensory processing, which may be caused by the same underlying processes that produce the core symptoms. I ran behavioural studies to tap into these underlying processes.

Blog

I blog mostly about neuroscience, statistics, and data visualisation, though I may get into other technical topics later on too. You can find all my posts here, and you can subscribe here. Here are my recent posts:

Why coverage doesn't cover pytorch backward calls.

Having recently switched to using pytorch for modelling, after primarily building neural networks in tensorflow / keras, I have been enjoying how easy it is to write new (automatically differentiable) functions and layers. I did this recently: I wanted to...
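
To give a flavour of what that looks like, here is a minimal sketch of a custom automatically differentiable function (not the function from the post). Note that backward() is invoked by the autograd engine rather than called directly from Python code, which is relevant to the coverage question in the post.

```python
import torch

class ClampToPositive(torch.autograd.Function):
    """A hand-written ReLU-like operation, purely for illustration."""

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)   # stash the input for the backward pass
        return x.clamp(min=0.0)

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        grad_input = grad_output.clone()
        grad_input[x < 0] = 0.0    # no gradient where the input was clamped
        return grad_input

x = torch.randn(5, requires_grad=True)
ClampToPositive.apply(x).sum().backward()  # backward() runs inside the engine
print(x.grad)
```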

The most momentous year in history?

I really like history, and one of the things I sometimes spend my time doing is reading historical articles on Wikipedia. The other day, this got me thinking about whether the amount of historical fact on Wikipedia can be analysed...

Project-specific cookiecutter templates for reproducible work

I recently read a really good blog post by Enrico Glerean titled Project management == Data management. In it, he explains best practices for managing data and for standardising project file structures and layouts. One of the tools he mentioned was cookiecutter,...

Visualising the bootstrap with shiny

Bootstrapping is a really useful statistical tool. It relies on re-sampling, with replacement, from a sample of data you have acquired. The idea is that by re-sampling your sample over and over again, you simulate running studies over and over...
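
The app itself is built with shiny, but the core resampling idea fits in a few lines of Python; here is a minimal sketch with made-up data:

```python
import numpy as np

rng = np.random.default_rng(0)
sample = rng.normal(loc=1.0, scale=2.0, size=50)   # made-up data

# Re-sample the sample with replacement many times and recompute
# the statistic of interest (here, the mean) each time.
boot_means = np.array([
    rng.choice(sample, size=sample.size, replace=True).mean()
    for _ in range(10_000)
])

# The spread of the bootstrap distribution approximates the sampling
# variability of the mean, e.g. as a 95% percentile interval:
low, high = np.percentile(boot_means, [2.5, 97.5])
print(f"mean = {sample.mean():.2f}, 95% CI = [{low:.2f}, {high:.2f}]")
```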

Research Projects


Binocular Rivalry

Binocular rivalry occurs when the two eyes see different images (using mirrors, polarisation, or coloured glasses). Rather than seeing these images superimposed, observers see a rhythmic alternation between them, with periods of superimposition or piecemeal images in between.

This alternation is thought to be driven by mutual inhibition between the neural representations of the two images. I have therefore tried to use it as a proxy measure of the excitation/inhibition ratio in the brain.
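
To illustrate the idea, here is a toy simulation of two mutually inhibiting, slowly adapting units (the parameters are illustrative; this is not the model from the papers below). Adaptation makes the currently dominant unit lose its grip, producing alternations:

```python
import numpy as np

dt, duration = 0.001, 20.0        # simulation step and length (seconds)
tau, tau_a = 0.02, 1.0            # fast activity vs. slow adaptation
drive, beta, g = 1.0, 1.5, 1.0    # input strength, inhibition, adaptation gain

steps = int(duration / dt)
e = np.zeros((steps, 2))          # unit activities over time
a = np.zeros(2)                   # adaptation states
e[0] = [0.6, 0.4]                 # small asymmetry to break the tie

for i in range(1, steps):
    inhibition = beta * e[i - 1, ::-1]            # each unit inhibits the other
    target = np.maximum(drive - inhibition - g * a, 0.0)
    e[i] = e[i - 1] + dt * (target - e[i - 1]) / tau
    a += dt * (e[i - 1] - a) / tau_a

# Dominance durations: stretches where one unit's activity exceeds the other's.
dominant = e[:, 0] > e[:, 1]
switches = np.flatnonzero(np.diff(dominant.astype(int)))
print(f"mean dominance duration: {np.diff(switches).mean() * dt:.2f} s")
```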

Read more:

“Slower Rate of Binocular Rivalry in Autism.” Journal of Neuroscience (PDF)

“Reduced Perceptual Exclusivity during Object and Grating Rivalry in Autism.” Journal of Vision (Open Access)

Try it

Binocular Adaptation

Given the interesting finding of a slower rate of binocular rivalry in autism, I was interested in whether adaptation (the process in which neural responses to an image reduce as it is viewed for an extended period) was causing it.

I developed a paradigm in which I adapted viewers to an image before binocular rivalry in two ways: at a very basic level of visual processing, and at a slightly higher level.

The paradigm showed no real difference in adaptation between an autistic group and a control group, and it also proved to be a neat way to study the contributions of different levels of the visual hierarchy to rivalry. You can find a draft of this project here, and try the code yourself below.

Try it

Visual Crowding

Visual crowding occurs when lots of objects are close to each other in peripheral vision. While it’s easy to see that there are objects in your peripheral vision, it becomes harder to identify each individual object.

This presents a fundamental limit on perceiving all the objects in our visual field. Since enhanced perception of detail is sometimes linked to autism, I recently tested whether crowding is weaker in autism.

In some ways this represents a follow-up to an earlier project I was involved in, where we found slightly more narrowly focused attention in autism. (PDF)

You can find the resulting paper here:

“Typical magnitude and spatial extent of crowding in autism.” (Open Access)

Try it

Face Categorisation

I’m also interested in face categorisation. At the moment, I am mostly curious about the early stages of face perception, in particular the simple distinction between a face and any other object. I’m using an experimental paradigm from the Face Categorisation Lab, who were kind enough to send me their stimuli. The experiment is designed to detect a signature of face processing using steady-state visual evoked potentials.

It does so by flashing random images rhythmically. Periodically, the flashed image is a face, and a clear response can be detected at the frequency at which the faces are shown.
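
Here is a minimal sketch of that logic on simulated data (the sampling rate and the base and face frequencies are assumptions, not the paradigm's actual values):

```python
import numpy as np

fs = 500.0                         # assumed EEG sampling rate (Hz)
base_rate, face_rate = 6.0, 1.2    # assumed flash rate and face rate (Hz)
t = np.arange(0, 20, 1 / fs)

# Toy signal: responses at both rates buried in noise.
rng = np.random.default_rng(0)
eeg = (0.5 * np.sin(2 * np.pi * base_rate * t)
       + 0.3 * np.sin(2 * np.pi * face_rate * t)
       + rng.normal(size=t.size))

spectrum = np.abs(np.fft.rfft(eeg)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)

# A face-selective response shows up as a peak at the face presentation
# frequency (and its harmonics), separate from the general visual
# response at the base flash rate.
for f in (base_rate, face_rate):
    idx = np.argmin(np.abs(freqs - f))
    print(f"{f:.1f} Hz amplitude: {spectrum[idx]:.3f}")
```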

Try it

CV

I'm planning to add an interactive CV here, but it's not working yet. For now, I only have a PDF.

CV

Side Projects


Eye Tracking

I’ve worked on some code that allows you to do eye tracking more easily with psychopy and Tobii eye trackers.

These are primarily minor tweaks to a script written by Sogo Hiroyuki, but they also include a few functions that make life much easier, such as a function that mirrors the eyes on the display to let you move the screen and participant into the right position, and a smoother calibration function.
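
To illustrate the mirroring idea, here is a rough sketch; the controller object and its get_current_eye_positions() method are hypothetical stand-ins, not the package's actual API:

```python
from psychopy import visual

win = visual.Window(units="norm")
left_dot = visual.Circle(win, radius=0.02, fillColor="red")
right_dot = visual.Circle(win, radius=0.02, fillColor="blue")

def mirror_eyes(controller, n_frames=600):
    """Draw each eye's reported position so the participant can be
    repositioned until both markers sit comfortably on the screen."""
    for _ in range(n_frames):
        left, right = controller.get_current_eye_positions()  # hypothetical
        left_dot.pos, right_dot.pos = left, right
        left_dot.draw()
        right_dot.draw()
        win.flip()
```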

Try it

SSVEP Analysis

I’ve also developed a package that makes it easier to analyse steady-state evoked potentials. Because I primarily work with steady-state visually evoked potentials (SSVEPs), it’s called ssvepy, but it works with any evoked-frequency / frequency-tagging data. It builds on MNE-python data structures: you simply initialise a data class with epoched data and your stimulation frequencies, and it automatically calculates your signal-to-noise ratio at the stimulation frequencies as well as their harmonics (and, in the future, intermodulation frequencies).
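
For a flavour of the workflow, here is a minimal sketch (the file name is hypothetical, and the exact argument names may differ; the example notebook has the full details):

```python
import mne
import ssvepy

epochs = mne.read_epochs("my_experiment-epo.fif")  # hypothetical file
data = ssvepy.Ssvep(epochs, [6.0, 7.5])  # epoched data + stimulation frequencies
# The resulting object holds signal-to-noise ratios at the stimulation
# frequencies and their harmonics, ready for plotting or further stats.
```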

You can check out the example notebook in the documentation, where you’ll also find the installation instructions.

Try it

Neuroimaging Widgets

At the 2017 Brainhack Global, I helped out on a project initially started by Bjoern Soergel from Cambridge. We both enjoy using Jupyter notebooks, and since we were working on a neuroimaging project at the time, we thought it would be neat to have interactive widgets displaying neuroimaging data in the notebook.

We turned this into a package that works with standard neuroimaging plotting functions, relying on ipywidgets.
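
A minimal sketch of the basic usage (the file path is hypothetical; see the package documentation for the full set of options):

```python
from niwidgets import NiftiWidget

# Inside a Jupyter notebook, this renders the volume with ipywidgets
# sliders for each spatial dimension.
widget = NiftiWidget("T1w.nii.gz")
widget.nifti_plotter()
```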

Try it