Python apps/tools for interacting with MATLAB TMSi streams via LSL.
- Install
- Set up Metadata Logging
- Set up Stream Logging
- View/Interact With Streams Online
- View Recorded Streams and Metadata
- Interacting with Logs
- Clone this repository.

  ```
  git clone git@github.com:Neuro-Mechatronics-Interfaces/TMSi_MATLAB_LSL.git && cd TMSi_MATLAB_LSL
  ```

- Create your local Python environment. Call `python` (the system Python interpreter) with the module option (`-m`) specifying `venv`, which should be a default package with standard Python installations. If `venv` is missing, you may need a more recent Python interpreter version:

  ```
  python -m venv .venv && call .venv\Scripts\activate
  ```

  Your terminal should now show `(.venv)` in front of the file path (thanks to `call .venv\Scripts\activate`; in this terminal, leading with `python` now refers to the executable installed at `.venv\Scripts\python.exe` instead of your usual Python interpreter). Note that you need Python >= 3.10 to satisfy the package dependencies. If you are on Windows 11, you can specify the Python version using the `py` alias:

  ```
  py -3.12 -m venv .venv && call .venv\Scripts\activate
  ```

- Add the Python requirements to your local environment:

  ```
  python -m pip install -r requirements.txt
  ```

  If you type `pip list`, you should see that the installed packages are similar or identical to the contents of `requirements.txt`.
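Since the dependencies require Python >= 3.10, a quick stdlib-only check (run with the environment activated) confirms which interpreter is active. This is just a convenience sketch, not part of the repository:

```python
import sys

# Report the active interpreter and whether it meets the >= 3.10 requirement.
ok = sys.version_info >= (3, 10)
print(f"Python {sys.version_info.major}.{sys.version_info.minor} -> {'OK' if ok else 'too old'}")
```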
After following the steps in Installation, you can start logging parameters and trial metadata using:
```
python metadata_logger.py
```

This should populate a window like this one:

Select the streams you'd like to log. The log binaries will automatically record all channels associated with the stream. Once you click Start Logging, the GUI will indicate you are logging:

Once the metadata logger is launched and logging is started, the app automatically creates a set of CSV log files inside the specified log directory (default: `logs/`). These files are prefixed with a timestamp and the custom suffix you provide in the GUI.
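The naming pattern can be illustrated with a short sketch. The exact timestamp format (`YYYYMMDD_HHMMSS`) is an assumption based on the session keys used later in this README, so check the filenames your logger actually produces:

```python
from datetime import datetime

def make_log_name(kind: str, suffix: str, when: datetime) -> str:
    """Compose a filename following the logger_<timestamp>_<suffix>_<kind>.csv pattern."""
    stamp = when.strftime("%Y%m%d_%H%M%S")  # assumed timestamp format
    return f"logger_{stamp}_{suffix}_{kind}.csv"

print(make_log_name("state", "DEFAULT", datetime(2025, 5, 26, 16, 36, 45)))
# -> logger_20250526_163645_DEFAULT_state.csv
```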
🔹 `logger_<timestamp>_<suffix>_state.csv`
Captures all state transitions sent by the MATLAB loop.
Columns
- `LSL_Timestamp`: time the message was received by the logger.
- `Loop_Timestamp`: original timestamp assigned in MATLAB.
- `Name`: always `"state"`.
- `Value`: current loop state, e.g. `"init"`, `"run"`, `"rec"`, `"idle"`, etc.
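A state log like this can be read back with nothing but the standard library. The rows below are hypothetical, matching the columns described above:

```python
import csv
import io

# Hypothetical state-log contents, matching the columns described above.
sample = """LSL_Timestamp,Loop_Timestamp,Name,Value
1000.125,999.980,state,init
1002.500,1002.360,state,run
1010.750,1010.610,state,rec
"""

with io.StringIO(sample) as f:
    rows = list(csv.DictReader(f))

# Sequence of loop states with the time each message was received.
states = [(float(r["LSL_Timestamp"]), r["Value"]) for r in rows]
print(states)  # -> [(1000.125, 'init'), (1002.5, 'run'), (1010.75, 'rec')]
```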
🔹 `logger_<timestamp>_<suffix>_filename.csv`
Records filename identifiers sent by the acquisition loop.
Columns
- `LSL_Timestamp`
- `Loop_Timestamp`
- `Name`: `"filename"`
- `Value`: filename string (e.g. `"subject01_block03"`)
🔹 `logger_<timestamp>_<suffix>_parameter.csv`
Logs all parameter messages received while in the `"rec"` state.
Columns
- `LSL_Timestamp`
- `Loop_Timestamp`
- `Name`: `"parameter"`
- `Value`: configuration string passed to the acquisition loop.
🔹 `logger_<timestamp>_<suffix>_trials.csv`
High-level log of each recording trial (start, stop, and parameter changes).
Columns
- `Timestamp`: LSL timestamp of the event.
- `File`: filename string active during the trial.
- `Event`: one of:
  - `"Recording Start"`
  - `"Recording End"`
  - A raw JSON string representing any parameter change during the recording window.
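Because each trial is bracketed by a `"Recording Start"` and `"Recording End"` row, the trials log can be reduced to a list of time windows. This is a sketch over hypothetical rows, not repository code:

```python
# Hypothetical trials-log rows, matching the columns described above.
rows = [
    {"Timestamp": 100.0, "File": "subject01_block03", "Event": "Recording Start"},
    {"Timestamp": 105.0, "File": "subject01_block03", "Event": '{"gain": 2}'},
    {"Timestamp": 130.0, "File": "subject01_block03", "Event": "Recording End"},
    {"Timestamp": 200.0, "File": "subject01_block04", "Event": "Recording Start"},
    {"Timestamp": 230.0, "File": "subject01_block04", "Event": "Recording End"},
]

windows = []
start = None
for r in rows:
    if r["Event"] == "Recording Start":
        start = r["Timestamp"]
    elif r["Event"] == "Recording End" and start is not None:
        windows.append((start, r["Timestamp"]))
        start = None
    # parameter-change rows (raw JSON strings) are ignored here

print(windows)  # -> [(100.0, 130.0), (200.0, 230.0)]
```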
🗒️ These files are appended live during logging and flushed to disk automatically.
After following the steps in Installation, you can start logging streams using:
```
python stream_logger.py
```

This should populate a window like this one:

Select the streams you'd like to log. The log binaries will automatically record all channels associated with the stream. Once you click Start Logging, the GUI will indicate you are logging:

As long as the streams are running and you have not clicked Stop Logging, samples will continue to be written to the generated `.bin` files, so they can grow very large if you forget and leave the logger running for a long time. If you are testing this out and have followed along with this section and the Logging Parameters instructions, you should have binary and CSV files in a structure something like this by the time you are done:

You can use the `StreamLogReader` class (and the `LogViewer` app, for convenience) to interface with these data.
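The earlier caution about `.bin` growth can be quantified with simple arithmetic. The channel count, sample rate, and 8-byte samples below are assumptions for illustration only, not the actual stream configuration:

```python
# Back-of-envelope growth rate for a stream .bin file.
channels = 70          # assumed channel count for one device
rate_hz = 4000         # assumed sampling rate (samples per second)
bytes_per_sample = 8   # assumed float64 samples

bytes_per_hour = channels * rate_hz * bytes_per_sample * 3600
print(f"~{bytes_per_hour / 1e9:.1f} GB per hour")  # -> ~8.1 GB per hour
```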
After following the steps in Installation, you can start viewing online streams using:
```
python stream_interactor.py
```

This populates a window like this one:

Select the streams you'd like to view by checking the checkboxes. Once you have the streams you want, click Connect to connect to them. This enables the Start Logging button as well as the + button in the plots pane below. Click the + button to pop up options for plot types. Currently, only TimeSeries and TimeSeriesArray plots are available.

Select the TimeSeriesArray option with SAGAB. You may need to expand the GUI window to view the plot easily.

You can close a given plot type by clicking the X at the top-right of the plot inset, and add more plots by clicking the + again. Add the other TMSi grid by clicking the + again.

By convention, SAGAA is the distal ring; modify the titles and colormaps by changing the layout in the dropdown selection.
As with `stream_logger.py`, you can generate log binaries in addition to plotting, although plotting takes a significant performance hit and the logs may have more dropped samples due to the increased load.
To view data from the .bin stream recordings as well as associated metadata, you can quickly scan individual channels from selected recordings:
```
python log_viewer.py
```

This should populate an interface which looks similar to the following:

If you selected a different folder for your Streams or Metadata than the default locations (`logs/metadata` and `logs/streams` relative to your repository root folder), then you should update those folders to match using the buttons at the top left.
Once you select a recording, it will expand to show all the available channels to plot. With this dataset, only the COUNTER channel has meaningful data (the sample count that increments with time), so it's selected to illustrate the working parsing of the recorded .bin format:

By clicking on the corresponding session metadata in the middle-left panel, it's possible to overlay the event timestamps of any metadata that was logged concurrently with the time-series streams, as shown:
The text next to each line indicates the raw values saved in the Event column associated with each timestamped line.
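Since the COUNTER channel is just an incrementing sample index, gaps in its successive differences reveal dropped samples. Here is a stdlib-only sketch on a synthetic list, not repository code:

```python
# COUNTER increments by 1 per sample, so any jump > 1 marks dropped samples.
counter = [0, 1, 2, 3, 7, 8, 9]  # synthetic: a gap of 3 samples after index 3

gaps = [
    (i, counter[i + 1] - counter[i] - 1)
    for i in range(len(counter) - 1)
    if counter[i + 1] - counter[i] != 1
]
print(gaps)  # -> [(3, 3)], i.e. 3 samples missing after position 3
```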
After you've logged your stream .bin files and associated parameter/metadata .csv files, you can load them into a consistent structure for analysis using the Data class:
```python
from nml.lsl.Data import Data
```

This class lets you combine streamed time-series recordings with trial-level event metadata from a shared logging session.
You'll typically instantiate a `Data` object using the datetime-stamped key from your `.bin` files and either:
- The same key (if the metadata matches the timestamp), or
- A suffix like `"DEFAULT"` (if that's how the metadata was labeled).
```python
# Example for a session started on May 26, 2025 at 16:36:45
d = Data('20250526_163645', 'DEFAULT')  # assumes metadata labeled with DEFAULT suffix
```

This loads:
- A 2D NumPy array `d.signal` of shape `[channels, samples]`
- A 1D NumPy array `d.timestamps` of corresponding LSL timestamps
- A dictionary of event DataFrames: `d.metadata['trials']`, `d.metadata['state']`, etc.
```python
import matplotlib.pyplot as plt
import numpy as np

from nml.lsl.Data import Data

# d = Data('20250526_163645', 'DEFAULT')  # Will grab first file in logs/metadata with "DEFAULT" suffix
d = Data('20250526_163645', '20250526_163724')  # Equivalent, if you only have one session with "DEFAULT"

sig, ts = d.get_stream_data()
trials = d.get_events('trials')
ch_names = d.metadata['stream']['channel_names']
print(ch_names)  # Saved in header metadata of binaries

plt.figure(figsize=(12, 4))
plt.plot(ts, sig[-1], label="COUNTER Channel")

# Add trial event markers
for _, row in trials.iterrows():
    plt.axvline(row['Timestamp'], color='red', linestyle='--')
    plt.text(row['Timestamp'], plt.ylim()[1]*0.9, row['Event'], rotation=90)

plt.xlabel("Time (s)")
plt.ylabel("Amplitude")
plt.title("COUNTER with Trial Markers")
plt.legend()
plt.tight_layout()
plt.show()
```

To segment the monopolar (UNI) textile-array channels by trial:

```python
uni_indices = [i for i, name in enumerate(ch_names) if "UNI" in name]
signal_uni = d.signal[uni_indices, :]  # shape: [UNI_channels, samples]

starts = d.get_events('trials')[lambda df: df['Event'] == 'Recording Start']['Timestamp'].values
ends = d.get_events('trials')[lambda df: df['Event'] == 'Recording End']['Timestamp'].values

segments = []
for start, end in zip(starts, ends):
    mask = (d.timestamps >= start) & (d.timestamps <= end)
    trial = signal_uni[:, mask]
    segments.append(trial)

print("Segments for each trial:")
print(segments)  # segments now contains the signal segments for monopolar textile arrays, for each trial
```

You now have a list of trial segments, each as a 2D array `[channels × trial_duration]`.
```python
# RMS per trial across UNI channels
print("RMS per trial across UNI channels:")
rms_uni_by_trial = [np.sqrt(np.mean(trial**2, axis=1)) for trial in segments]  # list of [n_UNI_channels] arrays
print(rms_uni_by_trial)
```

This gives you one RMS vector per trial, across channels, which is convenient for later statistical analysis.
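For group statistics it is often handy to stack those per-trial RMS vectors into a single matrix. This sketch uses synthetic constant-valued segments in place of real recordings:

```python
import numpy as np

# Synthetic stand-ins for `segments`: 2 trials, 3 UNI channels each.
segments = [np.full((3, 100), 2.0), np.full((3, 50), 4.0)]

rms_by_trial = [np.sqrt(np.mean(trial**2, axis=1)) for trial in segments]
rms_matrix = np.vstack(rms_by_trial)  # shape: [n_trials, n_UNI_channels]

print(rms_matrix.shape)  # -> (2, 3)
print(rms_matrix[:, 0])  # -> [2. 4.]
```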
You can also replicate these examples in one step using `python example_plot_data.py`, although you'll need to update the filenames with data from your local device.