EEG SSVEP Classifier [The Basics]

SSVEP is often used in brain-computer interfaces (BCIs). BCIs are devices that use neural activity to help people communicate with or control external systems. They are especially popular in rehabilitation and enhancement research.

Among the many BCI paradigms, motor imagery, P300 and SSVEP are the most common. Here, we explore SSVEPs: what they are and how they work. We will also introduce examples of online and offline SSVEP classifiers developed using the Mentalab Explore system.

What is SSVEP?

Steady-state visually evoked potentials (SSVEPs) are brain signals that occur in response to a visual stimulus flickering at a fixed frequency. A rhythmic stimulus can entrain brain activity in the occipital lobe, which houses the visual cortex.

When the paradigm is set up correctly and a person looks at a target stimulus, the frequency of their neural activity correlates with the flicker frequency of that stimulus (Middendorf et al., 2000).

As such, it is possible to use EEG signals to determine which stimulus a participant is looking at in almost real-time. As long as the stimuli all flicker at different rates, the SSVEP will match only one of them: the one the participant is looking at.

SSVEPs are an important tool in neuroscience, clinical neurology and psychology, where they serve as an objective biomarker, especially in visual system studies. They are also popular in the field of BCI because of their reliability, accuracy, high information transfer rate, easy setup, and minimal training requirements.

SSVEPs demonstrate that when you look at a flickering stimulus, the neural activity in your brain “flickers” too.

Dependent or independent BCI?

SSVEP-based BCI is known as “dependent” BCI because it depends on the participant using their eye muscles to direct their gaze at a target stimulus.

Dependent SSVEP paradigms are widely employed. They have been used to control wheelchairs, user interfaces for computer programs, spellers, and many other devices.

However, there is evidence that humans can shift visual attention without shifting gaze. That is, “independent” SSVEP BCIs may be feasible (Allison et al., 2008). Despite having lower accuracy, these BCIs may assist users who do not have control over their gaze.


Before we dive into the details of Mentalab’s SSVEP implementation, please note that all the code and instructions for the experiments below are available for free in explorepy’s GitHub repository. Our community members can use this repository to develop their own SSVEP BCIs, and create more advanced BCI applications.

The experiment code is written in Python using explorepy, PsychoPy, MNE, and scikit-learn.

Experiment design

In the first experiment, we present the participant with two stimuli, one to the left and one to the right of their visual field. At the beginning of each trial, we ask the participant to look at the left or the right stimulus.

The two stimuli flicker at 10 Hz and 7.5 Hz, respectively. We chose these frequencies because they divide evenly into the refresh rate of most computer screens (usually 60 Hz), giving whole-frame flicker cycles of 6 and 8 frames.
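
For illustration, frame-locked flicker at these rates can be produced in PsychoPy roughly as follows. This is a minimal sketch, not the exact code from our repository; the window settings, stimulus shapes, and trial length are illustrative.

```python
from psychopy import visual, core, event

# On a 60 Hz screen, toggling a stimulus every 3 frames gives a 6-frame
# cycle (10 Hz); toggling every 4 frames gives an 8-frame cycle (7.5 Hz).
win = visual.Window(size=(1920, 1080), color="black", units="pix")
left = visual.Rect(win, width=200, height=200, pos=(-400, 0), fillColor="white")
right = visual.Rect(win, width=200, height=200, pos=(400, 0), fillColor="white")

frame = 0
clock = core.Clock()
while clock.getTime() < 5.0:          # one 5-second trial (illustrative length)
    if (frame // 3) % 2 == 0:         # "on" half of the 10 Hz cycle
        left.draw()
    if (frame // 4) % 2 == 0:         # "on" half of the 7.5 Hz cycle
        right.draw()
    win.flip()                        # blocks until the next screen refresh
    frame += 1
    if event.getKeys(["escape"]):
        break

win.close()
core.quit()
```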

We record EEG signals with a 4-channel Explore system, running at 250 Hz, and dry electrodes. The ground electrode is placed on either mastoid, and the other electrodes are placed over the occipital cortex, as outlined here.
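
For reference, connecting to an Explore device and recording to CSV with explorepy looks roughly like the sketch below. The device name and file name are placeholders, and this is not taken verbatim from the experiment script; check the explorepy documentation for the exact API of your version.

```python
import explorepy

# Placeholder Bluetooth name; replace with your own device ID.
explorer = explorepy.Explore()
explorer.connect(device_name="Explore_XXXX")

# Write EEG (ExG), ORN, and marker data to CSV files for 10 minutes.
explorer.record_data(file_name="ssvep_session", duration=600, file_type="csv")
```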

After running the experiment script, three CSV files containing EEG, ORN, and marker data are generated. The marker file records the start time of each trial, and the instruction presented to the participant in each trial is encoded in the marker code.

The number of trials, the trial length, and the frequencies and target positions of the stimuli can be modified in the script as needed. However, we recommend recording at least 50 trials.
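
To give a sense of the data the analysis starts from, here is a minimal sketch that loads the EEG and marker CSVs and cuts the recording into fixed-length trials. The file names, column names, and marker conventions are assumptions, not the exact format written by the script.

```python
import numpy as np
import pandas as pd

SFREQ = 250          # sampling rate of the Explore recording
TRIAL_LEN_S = 2.0    # analysis window per trial

# Assumed file and column names; adapt them to your recording.
eeg = pd.read_csv("ssvep_session_ExG.csv")         # TimeStamp, ch1..ch4
markers = pd.read_csv("ssvep_session_Marker.csv")  # TimeStamp, Code

data = eeg[["ch1", "ch2", "ch3", "ch4"]].to_numpy().T  # (n_channels, n_samples)
times = eeg["TimeStamp"].to_numpy()

epochs, labels = [], []
n_samples = int(TRIAL_LEN_S * SFREQ)
for _, row in markers.iterrows():
    start = np.searchsorted(times, row["TimeStamp"])   # first sample at/after the marker
    if start + n_samples <= data.shape[1]:
        epochs.append(data[:, start:start + n_samples])
        labels.append(row["Code"])                     # marker code encodes left/right

epochs = np.stack(epochs)   # (n_trials, n_channels, n_samples)
```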

Analysis

Common approaches to classifying SSVEP signals include the Fourier transform, the minimum energy combination, and canonical correlation analysis (CCA).

The simplest approach is to apply a Fourier transform to the signals and look for peaks at the stimulation frequencies. Although this method is fast and computationally cheap, it is very prone to noise.
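
As a sketch of the idea (assuming a single trial `epoch` of shape `(n_channels, n_samples)`, as in the loading example above), peak picking can be as simple as:

```python
import numpy as np

def classify_by_fft(epoch, sfreq=250, target_freqs=(10.0, 7.5)):
    """Return the target frequency with the largest spectral power, averaged over channels."""
    n_samples = epoch.shape[1]
    freqs = np.fft.rfftfreq(n_samples, d=1.0 / sfreq)
    spectrum = np.abs(np.fft.rfft(epoch, axis=1)).mean(axis=0)
    # Power at the FFT bin closest to each candidate frequency.
    powers = [spectrum[np.argmin(np.abs(freqs - f))] for f in target_freqs]
    return target_freqs[int(np.argmax(powers))]
```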

In contrast, CCA is much less error-prone. CCA is a statistical method that computes the correlation between two sets of random variables. For SSVEP classification, it maximizes the correlation between the EEG signal and reference signals constructed at each stimulus frequency. The stimulus whose references correlate best with the EEG signal is then chosen (Lin et al., 2007).
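
The core idea can be sketched with scikit-learn’s CCA: build sine and cosine reference signals at each candidate frequency (and its harmonics), then pick the frequency whose references correlate best with the multichannel EEG. This is a simplified illustration, not the exact implementation in our repository.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

def cca_score(epoch, freq, sfreq=250, n_harmonics=2):
    """Maximal canonical correlation between an EEG epoch and references at `freq`."""
    n_samples = epoch.shape[1]
    t = np.arange(n_samples) / sfreq
    refs = []
    for h in range(1, n_harmonics + 1):
        refs.append(np.sin(2 * np.pi * h * freq * t))
        refs.append(np.cos(2 * np.pi * h * freq * t))
    refs = np.column_stack(refs)                       # (n_samples, 2 * n_harmonics)

    cca = CCA(n_components=1)
    x_scores, y_scores = cca.fit_transform(epoch.T, refs)
    return np.corrcoef(x_scores[:, 0], y_scores[:, 0])[0, 1]

def classify_by_cca(epoch, target_freqs=(10.0, 7.5), sfreq=250):
    """Return the candidate frequency with the highest canonical correlation."""
    scores = [cca_score(epoch, f, sfreq) for f in target_freqs]
    return target_freqs[int(np.argmax(scores))]
```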

Results

Running the script on the sample recording gives the following plots.

Fig 1: Accuracy over trial duration.

The first plot shows the accuracy of the CCA predictions as a function of trial duration.

If the trial length is too short (250 ms), the system cannot predict the stimulus frequency: its success rate is about 50%, which is no better than chance with two stimuli.

However, CCA reaches an accuracy above 95% after only one second, and 100% after 1.5 seconds.
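
A curve like this can be reproduced by truncating each epoch to progressively longer windows and scoring the classifier against the trial labels. The sketch below reuses the hypothetical `epochs`, `labels`, and `classify_by_cca` from the earlier examples and assumes the marker codes map directly to the two flicker frequencies.

```python
import numpy as np

SFREQ = 250
FREQ_BY_LABEL = {"left": 10.0, "right": 7.5}   # assumed marker-code-to-frequency mapping

for win_s in np.arange(0.25, 2.25, 0.25):       # window lengths in seconds
    n = int(win_s * SFREQ)
    preds = [classify_by_cca(ep[:, :n]) for ep in epochs]
    truth = [FREQ_BY_LABEL[lab] for lab in labels]
    accuracy = np.mean([p == t for p, t in zip(preds, truth)])
    print(f"{win_s:.2f} s: {accuracy:.1%}")
```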


Fig 2: Time-frequency representation for each channel.

In the second plot, we see the time-frequency representations of each channel.

These representations clearly show the entrainment of the EEG signal for the two target frequencies (look at the dark red bands!).

The left-most plots correspond to the stimulus flickering at 10 Hz, and the right-most plots to the stimulus flickering at 7.5 Hz.


Fig 3: Enlarged time-frequency representation for channel O1, showing a strong band at 7.5 Hz.

For instance, when one of the time-frequency representations is enlarged, we see a band of dark red at 7.5 Hz (highlighted by the dotted line).
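
Time-frequency maps like these can be computed with MNE, for example using Morlet wavelets. A minimal sketch, assuming `epochs` is the `(n_trials, n_channels, n_samples)` array from the loading example; the channel names (other than O1) and the frequency grid are illustrative.

```python
import numpy as np
import mne

SFREQ = 250
# Illustrative channel names for a 4-channel occipital montage.
info = mne.create_info(ch_names=["O1", "O2", "Oz", "POz"], sfreq=SFREQ, ch_types="eeg")
# Note: MNE expects data in volts; rescale if your CSV is in microvolts.
mne_epochs = mne.EpochsArray(epochs, info)

freqs = np.arange(5.0, 20.0, 0.5)               # frequencies of interest
power = mne.time_frequency.tfr_morlet(
    mne_epochs, freqs=freqs, n_cycles=freqs / 2.0, return_itc=False
)
power.plot(picks="O1")                          # time-frequency map for one channel
```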

Mentalab Explore

We used this classifier to evaluate the performance of our EEG device: Mentalab Explore. The device performed excellently, even with dry electrodes, which are known to have higher impedances than gel electrodes.

Mentalab Explore could predict more than 95% of trials correctly using only one second of data. When the trial length was increased to 1.5 seconds, the performance was 100%.

The time-frequency representations showed strong entrainment of the EEG signals in the occipital cortex. These results indicate that Mentalab Explore is a suitable device for BCI setups and applications that require real-time mobile EEG data acquisition.

Online SSVEP

Taking this concept a step further, we built an online SSVEP application.

In the online SSVEP, not only is the data analyzed and classified in real-time, but the stimulus predictions are used as real-time feedback for the participant.

This script presents the participant with four stimuli, each flickering at a different frequency. It will predict which stimulus the participant is focused on based on a real-time EEG data stream.

An arrow then points, in real time, to the stimulus the participant is looking at.
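
Conceptually, the online loop looks like the sketch below: explorepy can forward its data to an LSL stream, and the classifier runs on a sliding one-second buffer pulled from that stream. The stream type, the four flicker frequencies, and the `classify_by_cca` helper are assumptions carried over from the earlier examples, not the exact code of the application.

```python
import numpy as np
from pylsl import StreamInlet, resolve_stream

SFREQ = 250
WINDOW_S = 1.0
TARGET_FREQS = (7.5, 8.57, 10.0, 12.0)          # illustrative set of four flicker rates

# Assumes the Explore device is already pushing its ExG data to LSL.
streams = resolve_stream("type", "ExG")          # the stream type name is an assumption
inlet = StreamInlet(streams[0])

buffer = np.empty((4, 0))                        # 4 channels, growing sample buffer
while True:
    chunk, _ = inlet.pull_chunk(timeout=0.1)
    if chunk:
        buffer = np.hstack([buffer, np.asarray(chunk).T])
    if buffer.shape[1] >= int(WINDOW_S * SFREQ):
        window = buffer[:, -int(WINDOW_S * SFREQ):]
        prediction = classify_by_cca(window, target_freqs=TARGET_FREQS)
        print(f"Looking at the {prediction} Hz stimulus")
        buffer = window                          # keep only the most recent window
```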

Fig 4: Stimuli for the online SSVEP classifier, showing the four distinct stimuli and the prediction arrow.

Conclusion

Simple BCI paradigms, like SSVEPs, can be used to design sophisticated and meaningful applications.

Here, we have presented an online and offline classifier that could be used to determine the accuracy of EEG recording equipment. All code and detailed instructions for these experiments are available in our public GitHub repository.

The offline experiment can be used to evaluate the performance of the system objectively. The online SSVEP provides real-time feedback to the user, giving them a sense of how accurate the system is.

Both online and offline applications serve as a base for researchers to create their own SSVEP applications and BCI experiments.

We are happy to discuss your ideas and provide guidance on how to use our application code. If you’d like to learn more, please contact us at

References

Allison, B. Z., McFarland, D. J., Schalk, G., Zheng, S. D., Jackson, M. M., & Wolpaw, J. R. (2008). Towards an independent brain-computer interface using steady state visual evoked potentials. Clinical Neurophysiology, 119(2), 399–408. https://doi.org/10.1016/j.clinph.2007.09.121

Lin, Z., Zhang, C., Wu, W., & Gao, X. (2007). Frequency recognition based on canonical correlation analysis for SSVEP-based BCIs. IEEE Transactions on Biomedical Engineering, 54(6 Pt 2), 1172–1176. https://doi.org/10.1109/tbme.2006.889197

Middendorf, M., McMillan, G., Calhoun, G., & Jones, K. S. (2000). Brain-computer interfaces based on the steady-state visual-evoked response. IEEE Transactions on Rehabilitation Engineering, 8(2), 211–214. https://doi.org/10.1109/86.847819