PyBCI Structure and Modules

The BCI class

class BCI(config_file)

This is the main BCI class. The only thing you need to set up the BCI is a configuration file, which you can either create automatically (recommended) using the configuration module made for that purpose, or write manually.

To get the data you may use either get_datablock() (to get just the current block) or get_data(). In either case, the data is returned as a numpy array [channels][samples].
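
A minimal usage sketch might look like the following. The import path, file name and recording duration are assumptions; the method calls follow the reference below.

    from BCI import BCI                 # assumed import path

    bci = BCI('my_config.cfg')          # configuration file created with make_config()

    block = bci.get_datablock()         # just the current block: [channels][samples]
    data = bci.get_data(5)              # record for 5 seconds: [channels][samples]
    print(data.shape)                   # (number of channels, number of samples)

    bci.end_bci()                       # close data files and the server connection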

classmethod reset_security_mode()
Resets the counters for read and returned data arrays. Mainly used internally.
classmethod set_security_mode(mode)
Parameter: mode – True or False (default)

Sets the security mode. A warning is raised if the number of returned blocks does not equal the number of blocks read.

classmethod set_trigger_size(size)
Parameter: size (1 to 10, float or integer) – Size of the trigger

Sets the size of a shown trigger. Affects PyBCI only in the C++ signing mode (‘signs_enabled’).

classmethod set_returning_speed(level)

Resets the returning speed of data arrays.

Parameter: level – -9 (very slow) up to 9 (very fast). Two special values exist: -10 (the slowest possible level) and 10 (as fast as possible)
classmethod change_channellabels(channel, label)

With this function you can relabel the channels you want to get data from (see the short sketch after the parameter list).

Parameters:
  • channel – Position in the channel list channels.
  • label – Matching label for the specified channel. Be careful: the channel labels you specify here have to match the number # (not the label or the physical channel number) declared in the Brain Recorder software.
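
For illustration only (the position and the channel number below are made up, and the indexing of the channel list is an assumption):

    # Relabel the channel at position 0 in the 'channels' list so that it
    # reads from Brain Recorder channel number 3; both values are illustrative.
    bci.change_channellabels(0, 3)
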
classmethod get_datablock()

If you want to get just the current data block (that is, a single array in which the incoming data is stored, usually just a few samples), you may use this function.

Return type: numpy array [channels][samples]
classmethod get_data(time[, security_mode = True, supervision_mode = True])

‘Main’ function to get data. For the specified time (in seconds) data is stored and then returned in a numpy array [channels][samples].

If security_mode is set to True (the default), a warning is raised if the number of returned blocks does not equal the number of blocks read.

If supervision_mode is True (the default), the Python data array is deleted and the data reading process is restarted if the Brain Recorder has been stopped, to avoid ‘old’ data in the requested data array. Five zeros are written to the data file (if saving_mode is switched on) to mark this stopping.

Return type: numpy array [channels][samples]

Note

The order of the channels in the data array you get is the order in which the channels are listed in channels.
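
For instance, if channels = [1, 3, 5] in your configuration (values assumed), the rows of the returned array follow exactly that order:

    data = bci.get_data(10)            # record for 10 seconds
    row_channel_1 = data[0]            # samples of the first entry in 'channels'
    row_channel_5 = data[2]            # samples of the third entry in 'channels'
    n_channels, n_samples = data.shape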

trigger_sign(shape, size, time[, texture='R', text='NoText'])

You may use this function to give a sign with the shape shape in a separate window. It is shown for time milliseconds.

Parameters:
  • shape – 1 or ‘triangle’ for a triangular shape, 2 or ‘square’ for a square shape, 3 or ‘text’ for a text that you can specify via the parameter text, or 4 or ‘bmp’ for a bitmap that you can specify via the parameter texture. ‘triangle’ is used as the default if the shape you specify is invalid.
  • size – In the range from 0 (not at all) to 1 (whole window)
  • texture (only in C++ mode) – Implemented are ‘R’ (default), ‘L’, ‘r’ and ‘l’ on a grey background

When the sign is shown, a trigger (‘5’) is sent via the parallel port.
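
A sketch of two calls (all numbers are illustrative):

    # Show a square covering half the window for 500 ms; a trigger ('5')
    # is sent via the parallel port when it appears.
    bci.trigger_sign('square', 0.5, 500)

    # In the C++ signing mode, a bitmap texture can be requested instead.
    bci.trigger_sign('bmp', 0.5, 500, texture='L')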

classmethod save_data(data_file, format, data)

This function saves the specified data to data_file in the format format. The data is saved transposed, so that each column represents one channel.

Parameter: format – ‘plain’ (ASCII txt), ‘pickle’, ‘binary’ or ‘mat’ (MATLAB file)
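
For example (the file name, format and recording duration are assumptions, reusing the bci object from the sketch above):

    data = bci.get_data(5)
    bci.save_data('session01.mat', 'mat', data)   # each column holds one channel
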
classmethod end_bci()
The data files and the server connection are closed, and allocated memory is freed.

Other Classes

class Connect(numof_channels, mode, server)
This class is used internally only, to start the BCI in a separate thread. For explanations of the parameters, see the configuration module.
class Sign(shape, color_bg, color_trigger)

This class is used internally only, to start a new thread for giving C++-based signs. For explanations of the parameters, see the configuration module.

Equivalent classes for the other signing modes are

Sign_py(width_win, height_win, color_bg, color_trigger) and Sign_tk(width_win, height_win, color_bg, color_trigger).

The Configuration Module

This module helps you create a configuration file for the BCI. For this, you should use the function make_config, which is explained below.

make_config(outfile, sample_rate, numof_channels, mode[, server='localhost', security_mode=False,
saving_mode=False, data_file='Nofile', format='binary', resolution=0.1, returning_speed=8,
channels=[1...numof_channels], color_bg='white', color_trigger='black', size_window=(1000, 800)])
This is the function to create a BCI configuration file automatically.
Parameters:
  • outfile – Name of the configuration file you want to create
  • sample_rate – Sample Rate
  • numof_channels – Number of channels you want to get data from. This is not necessarily the number of channels the Brain Recorder is recording.
  • channels – Labels of the channels you want to get data from. The label is the number # in the Brain Recorder.
  • mode

    Mode for giving signs in a separate window by calling trigger_sign().

    Possible modes are: ‘signs_enabled’: OpenGL C++ signing mode. This is currently the only signing mode that can show bitmap textures.

    ‘signs_enabled_py’: OpenGL Python signing mode

    ‘signs_enabled_tk’: Tkinter signing mode. This is currently the only signing mode that can show text signs; it may be quite slow and partly unusable, though.

    ‘signs_disabled’.

  • server – Name of the server that receives the Brain Recorder data via the TCP/IP port. Skip this if it is the same computer this software is running on.
  • resolution – Resolution as declared in the Brain Recorder (usually either 0.1 (default) or 10). This is used for conversion to microvolts.
  • returning_speed – Speed of returning data arrays. If you depend on receiving the data as fast as possible, you should choose a high level. The possible levels range from -9 (very slow) up to 9 (very fast), with the special values -10 (the slowest possible level) and 10 (as fast as possible). The default is 8.
  • security_mode (True or False(default)) – A warning is raised if the number of returned blocks is not equal to the read ones.
  • saving_mode (True or False(default)) – A data_file with format format is opened, in which the data is written each time get_data is called.
  • data_file – The file the data is written in when saving_mode is True.
  • format (‘plain’(ascii txt), ‘pickle’, ‘binary’ or ‘mat’ (MATLAB-file)) – The file format the data is written in when saving_mode is True.
  • color_bg – Background color of the signing window
  • color_trigger – Color of the sign that is given
  • size_window (in pixels) – Size of the signing window as a tuple (width, height)
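
A sketch of a typical call, assuming the module is importable as shown (the import path and the concrete values are assumptions; the keyword names follow the parameter list above):

    from configuration import make_config      # assumed import path

    make_config('my_config.cfg',                # outfile
                1000,                           # sample_rate
                4,                              # numof_channels
                'signs_disabled',               # mode
                channels=[1, 2, 3, 4],          # Brain Recorder channel numbers (#)
                saving_mode=True,
                data_file='session01',
                format='mat')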

The Module for EOG corrections

Module to estimate the impact of eye movements and blinks and to remove this impact from EEG data.

tools.EyemoveCorrector.estimate_impact(baseline, artefact)

With this function, an impact array of eye movement artefacts is estimated. The difference between two conditions, namely a baseline or resting condition (without eye movements) and an artefact condition, is used to calculate the impact of the latter condition on all channels of the baseline condition. It is assumed that the difference for one channel is composed of the influence of all the other channels (and thus also of the EOG channels).

The two conditions are reflected in the two arguments for this function, with the following structure:

Parameters:
  • baseline – Baseline condition as a one-dimensional numpy array, containing the mean samples of the baseline for [numof_channels].
  • artefact

    Artefact conditions as a two-dimensional numpy array, containing mean samples for each condition in a separate array.

    Example: The conditions

    1. horizontal eye movements
    2. vertical eye movements
    3. eye blinks

    would thus result in a numpy array [[numof_channels(hor)],[numof_channels(ver)], [numof_channels(blink)]].

Note

For a valid estimation, the two data arrays must contain the same channels in the same order.

Returns: A numpy array with the estimated impact. If you take the dot product of this array with a measured EEG signal vector (for instance, by calling remove_impact()), you get a vector without any activity that caused exactly this difference between the two conditions.
tools.EyemoveCorrector.remove_impact(impact, signal)

Use this function to get a signal array without any activity that previously caused the difference between the two conditions used for calculating the impact array (usually estimated with estimate_impact()). In other words, activity that correlates with the difference between the two conditions that ‘produced’ the impact array is removed from the data.

Parameters:
  • impact – Square matrix as a numpy array [numof_channels][numof_channels]
  • signal – ‘Raw’ EEG signal vector to be artefact-corrected using the impact array. This vector has to be a numpy array [channels][samples], as the data is returned when calling the data functions of the BCI class.

Note

For a valid correction, the signal array must contain the same channels, in the same order, as the data arrays used for estimating the impact array.

Returns: Signal numpy array containing the corrected data values.
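
Putting the two functions together, a correction pipeline might look like the following sketch. The placeholder arrays and the import line are assumptions; in practice the baseline and artefact means come from your own calibration measurements, and the bci object is the one from the usage sketch above.

    import numpy as np
    from tools import EyemoveCorrector          # assumed import path

    numof_channels = 32
    # Placeholder calibration data; replace with your own measured means.
    baseline = np.random.randn(numof_channels)          # resting condition
    artefact = np.random.randn(3, numof_channels)       # horizontal, vertical, blink

    impact = EyemoveCorrector.estimate_impact(baseline, artefact)

    signal = bci.get_data(10)                           # raw EEG, [channels][samples]
    corrected = EyemoveCorrector.remove_impact(impact, signal)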
