PhysioLabXR

Simplifying Physiological and Neuroimaging Experiments

[Screenshots: stream visualization, real-time DSP, replay, and scripting]
  • built with 95%+ Python for running physiological/neuroscience/human-computer interaction experiments involving EEG, fNIRS, eyetracking, fMRI, cameras, microphones, and more!

  • offers real-time visualization, synchronization, recording, and data processing (e.g., to apply filters and run machine learning models).

  • supports multiple experiment platforms (screen-based, VR, and AR).

  • works on all major operating systems (Windows, macOS, and Linux).

  • easy to install and use: install via pip and run with a single command, download the executable, or run from source.

  • designed for both academic researchers and industry practitioners.

  • open-source and community driven.

[Animation: streaming data]

Read the paper in the Journal of Open Source Software:

https://doi.org/10.21105/joss.05854

Cite the paper:

@article{Li2024,
        doi = {10.21105/joss.05854},
        url = {https://doi.org/10.21105/joss.05854},
        year = {2024},
        publisher = {The Open Journal},
        volume = {9},
        number = {93},
        pages = {5854},
        author = {Ziheng 'Leo' Li and Haowen 'John' Wei and Ziwen Xie and Yunxiang Peng and June Pyo Suh and Steven Feiner and Paul Sajda},
        title = {PhysioLabXR: A Python Platform for Real-Time, Multi-modal, Brain–Computer Interfaces and Extended Reality Experiments},
        journal = {Journal of Open Source Software}
}

Download

Run the executable

Get the latest executable of PhysioLabXR, which supports Windows, macOS, and Linux, from the release page.

Install with pip

You can also install PhysioLabXR with pip, which installs the latest release of PhysioLabXR and all of its dependencies. This is platform-independent and works on Windows, macOS, and Linux.

PhysioLabXR supports Python 3.9, 3.10, and 3.11. Support for 3.12 is coming soon.

If you are using pip, we recommend installing PhysioLabXR in a virtual environment. To create a virtual environment, run:

python -m venv physiolabxr-env

Then activate the virtual environment:

source physiolabxr-env/bin/activate
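
The command above is for macOS and Linux; on Windows, activate the environment with:

physiolabxr-env\Scripts\activate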

Install PhysioLabXR with:

pip install physiolabxr

Once installed, you can run the application with:

physiolabxr

Run from source

Alternatively, you can run the application from its source. Instructions are here.

Troubleshooting

If you run into any trouble installing or running PhysioLabXR, please refer to the troubleshooting page for common issues and solutions.

Community

Join our Community Slack to ask questions and get help from the community.

Get Started with a Simple Example

In this example, we will replay a prerecorded EEG experiment of the visual oddball paradigm and extract the event-related potential (ERP) of the P300 response. Download the example recording erp-example.p from here (~4 MB).

After launching PhysioLabXR, go to the Replay Tab and load the example recording. The streams in the replayed recording will be automatically added to the Visualization Tab. There, you can click on the start buttons to see the EEG and event marker stream in real-time, synchronized as they were recorded during the experiment.


Note

If at any time during the replay you see ‘Lost Connection to …’, the replay has finished and the replay streams are closed. When this happens, simply restart the replay from the Replay Tab by clicking Start Replay again.

The event marker stream has one channel, named DTN, that shows what type of stimulus is being presented to the participant. DTN stands for distractor, target, and novelty, the three stimulus types in the oddball paradigm. We should see an ERP in the EEG signal whenever a new value appears in the DTN stream, meaning a new stimulus has just been shown. However, each EEG channel has a different offset imposed by the EEG hardware, as you can see from the vertical scale of the EEG plot, with values ranging from -8000 to 6000. To bring the channels to the same level and make the ERP visible, we will apply a high-pass filter to the EEG stream.


Adding Filters to the EEG Stream

To add filters to the EEG stream:

  1. Click on the option button of the plot with EEG data (learn more about plot options here).

  2. Under Select data processor, choose ButterworthHighpassFilter and click on Add; a filter item will appear in the list of data processors.

  3. Set the Cutoff frequency of the high-pass filter to 1 Hz.

  4. To add a second filter, select ButterworthLowpassFilter, click on Add, and set its Cutoff frequency to 60 Hz.

  5. To activate the filters, click the checkbox in front of each filter.

The indicator in front of each filter will switch to its active state, meaning the filter is now applied to the stream. You will now be able to zoom in and see the ERPs in the filtered EEG signal.
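
If you are curious what these two filters do numerically, the short sketch below reproduces them offline with SciPy. This is an illustrative sketch only, not PhysioLabXR's internal implementation; the variable names, the toy data, and the 128 Hz sampling rate of the example recording are assumptions made for the demonstration.

    import numpy as np
    from scipy.signal import butter, sosfiltfilt

    srate = 128  # sampling rate of the example EEG recording, in Hz (assumed here)
    # stand-in for raw EEG: 8 channels, 10 s, each with a large, different DC offset
    eeg = np.random.randn(8, srate * 10) + np.arange(8)[:, None] * 1000

    # 1 Hz high-pass removes the per-channel offset; 60 Hz low-pass attenuates high-frequency noise
    hp = butter(4, 1, btype='highpass', fs=srate, output='sos')
    lp = butter(4, 60, btype='lowpass', fs=srate, output='sos')

    filtered = sosfiltfilt(lp, sosfiltfilt(hp, eeg, axis=-1), axis=-1)
    print(filtered.shape)  # same shape as the input, but every channel is now centered around 0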

Now, say we want to extract the ERP chunks from EEG, perhaps to save them for classification later. We can do this by adding a custom script.


Adding a Custom Script to Extract EEG Signals

We will add a script that uses the DTN stream as a trigger to extract the ERP chunks from the EEG stream (learn more about the scripting feature here). To do this:

  1. Go to the scripting tab, and click on Add to create a new script.

  2. Click on Create (not Locate, which is for loading an existing .py script) and choose a file location to save the script. Name the script something like ‘ERPExtraction’. Click on Save.

  3. A template script will open in your system’s default editor. Change it to the following code:

    import numpy as np
    
    from physiolabxr.scripting.RenaScript import RenaScript
    from physiolabxr.scripting.physio.epochs import get_event_locked_data, buffer_event_locked_data, get_baselined_event_locked_data
    
    
    class ERPExtraction(RenaScript):
        def __init__(self, *args, **kwargs):
            """
            Please do not edit this function
            """
            super().__init__(*args, **kwargs)
    
        # init will be called once when the Run button is hit.
        def init(self):
            self.events = (1, 2, 3)  # 1 is distractor, 2 is target, 3 is novelty
            self.tmin = -0.1  # Time before event marker to include in the epoch
            self.tmax = 0.8  # Time after event marker to include in the epoch
            self.baseline_time = 0.1  # Time period since the ERP epoch start to use as baseline
            self.erp_length = int((self.tmax - self.tmin) * 128)  # Length of the ERP epoch in samples
            self.event_locked_data_buffer = {}  # Dictionary to store event-locked data
            self.eeg_channels = self.get_stream_info('Example-BioSemi-Midline', 'ChannelNames')  # List of EEG channels
            self.srate = self.get_stream_info('Example-BioSemi-Midline', 'NominalSamplingRate')  # Sampling rate of the EEG data in Hz
    
        # loop is called <Run Frequency> times per second
        def loop(self):
            # first check if the inputs are available
            if 'Example-EventMarker' in self.inputs.keys() and 'Example-BioSemi-Midline' in self.inputs.keys():
                event_locked_data, last_event_time = get_event_locked_data(event_marker=self.inputs['Example-EventMarker'],
                                                                           data=self.inputs['Example-BioSemi-Midline'],
                                                                           events_of_interest=self.events,
                                                                           tmin=self.tmin,
                                                                           tmax=self.tmax,
                                                                           srate=128,
                                                                           return_last_event_time=True, verbose=1)
                self.inputs.clear_up_to(last_event_time)  # Clear the input buffer up to the last event time to avoid processing duplicate data
                self.event_locked_data_buffer = buffer_event_locked_data(event_locked_data, self.event_locked_data_buffer)  # Buffer the event-locked data for further processing
    
                if len(event_locked_data) > 0:  # if there's new data
                    if self.params['ChannelToPlot'] in self.eeg_channels:  # check if the channel to plot chosen in the params is valid
                        channel_index = self.eeg_channels.index(self.params['ChannelToPlot'])  # Get the index of the chosen EEG channel from the list
                        baselined_data = get_baselined_event_locked_data(self.event_locked_data_buffer, self.baseline_time, self.srate, pick=channel_index)  # Obtain baselined event-locked data for the chosen channel
                        erp_viz_data = np.zeros((self.erp_length, 2))  # Create a visualization data array for ERP
    
                        # Populate the visualization data with ERP values from different events (if available)
                        if 1 in baselined_data.keys():
                            erp_viz_data[:, 0] = np.mean(baselined_data[1], axis=0) if self.params['PlotAverage'] else baselined_data[1][-1]
                        if 2 in baselined_data.keys():
                            erp_viz_data[:, 1] = np.mean(baselined_data[2], axis=0) if self.params['PlotAverage'] else baselined_data[2][-1]
                        self.outputs['ERPs'] = np.array(erp_viz_data, dtype=np.float32)  # Set the output 'ERPs' with the visualization data
                    else:
                        print(f"Channel {self.params['ChannelToPlot']} not found")
    
        # cleanup is called when the stop button is hit
        def cleanup(self):
            print('Cleanup function is called')
    
  4. Save the script in the editor and return to the PhysioLabXR scripting tab. To have our script receive the EEG and event markers, we will add them as inputs. For visualization purposes, we will add an output called ‘ERPs’ and send the buffered ERPs to it whenever we have a new event marker.

    • In the inputs pane, type “Example-EventMarker”, hit enter or click add. Then type “Example-BioSemi-Midline”, hit enter or click add again. You should see the two inputs added to the list.

    • In the outputs pane, type “ERPs”. This is the stream name of the output. Hit enter or click add. In the output item that appears, change the number of output channels to 2, because we are plotting target and distractor ERPs separately.

  5. We will next add two parameters to control what’s being plotted. You can change the values of the parameters while the script is running to see the effect of the changes.

    • In the parameters pane, type ChannelToPlot, hit enter or click add. In the data type dropdown, change its data type to str (string). Set its value to “FPz”. This is the channel from which we will plot the ERPs.

    • Then type PlotAverage, hit enter or click add. We will leave the data type as bool (boolean). When it is set to true, the data sent to the ERPs output will be the average of all the ERPs of the chosen channel. When it is set to false, only the most recent ERP of the chosen channel will be plotted.

  6. Click on Run to run the script. Go back to the Visualization tab, type in ERPs, and click on Add or hit enter. You should see a new stream called ERPs added to the plots, marked as available at the bottom, meaning this stream is available on the network and you can start receiving and plotting it.

  7. Click on the Play button of the ERPs plot widget to start the data flow. You should see the ERPs of the chosen channel plotted in the Visualization tab. The first channel (red) shows the distractor ERPs and the second channel (blue) shows the target ERPs.

Feel free to play around with the parameters: change ChannelToPlot to any of the channels shown in the EEG plot in the Visualization tab, or set PlotAverage to true to see the averaged ERPs for the target and distractor events.
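
If you want to see what the event-locked extraction in the script does conceptually, here is a minimal offline sketch in plain NumPy. It is not PhysioLabXR's implementation of get_event_locked_data; the toy data, array shapes, and event times are assumptions for illustration only.

    import numpy as np

    srate = 128
    tmin, tmax = -0.1, 0.8                       # epoch window around each event, in seconds
    eeg = np.random.randn(8, srate * 60)         # 8 channels, 60 s of toy EEG
    eeg_times = np.arange(eeg.shape[1]) / srate  # timestamp of every EEG sample
    event_times = np.array([5.0, 12.4, 30.2])    # times at which DTN changed (new stimulus shown)

    epoch_len = int((tmax - tmin) * srate)       # epoch length in samples, like erp_length in the script
    epochs = []
    for t in event_times:
        start = np.searchsorted(eeg_times, t + tmin)  # first sample of the epoch
        stop = start + epoch_len                      # fixed number of samples per epoch
        if stop <= eeg.shape[1]:
            epochs.append(eeg[:, start:stop])
    epochs = np.stack(epochs)                    # (n_events, n_channels, n_samples)

    erp = epochs.mean(axis=0)                    # averaging across events gives the ERP per channel
    print(epochs.shape, erp.shape)               # (3, 8, 115) (8, 115)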

Further Information

Where can you take it from here? Check out the tutorial on building a P300 speller game, where you can spell words through a user interface enabled by the classification of ERP signals.

OR

When you have an eyetracker, the pupil size it captures can be a helpful feature for classifying ERPs (more info here). Take a look at this guide on how to build a multimodal classifier with PhysioLabXR, and this one on how to create a real-time fixation detection algorithm to know where exactly the user’s gaze is focused.

Other Topics