
jsPsych Integration

This tutorial walks through adding Chiasm eye-tracking to an existing jsPsych experiment, one step at a time. By the end you will have a working experiment that calibrates the participant, records gaze data during each trial, and cleans up afterwards.

Chiasm provides a small helper library — chiasm-jspsych-integration.js — that fits naturally into the jsPsych timeline model so you can add eye-tracking with minimal changes to your experiment code.

Prerequisites

Before you begin, make sure you have:

  1. A working jsPsych experiment (v8+) served over https:// or http://localhost.
  2. An Experiment created in the Chiasm dashboard.
  3. Your auth token and experiment ID copied from the dashboard — you will need both in the code below.

Starting Point

Below is a simple jsPsych experiment that shows three images one at a time and asks the participant to press "y" or "n" for each. There is no eye-tracking yet — this is the code we will be modifying.

<!DOCTYPE html>
<html>
  <head>
    <meta charset="utf-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
    <title>jsPsych Demo</title>

    <!-- jsPsych Core -->
    <script src="https://unpkg.com/jspsych@8.2.2"></script>
    <link href="https://unpkg.com/jspsych@8.2.2/css/jspsych.css" rel="stylesheet" type="text/css" />

    <!-- jsPsych Plugins -->
    <script src="https://unpkg.com/@jspsych/plugin-preload@2.1.0"></script>
    <script src="https://unpkg.com/@jspsych/plugin-image-keyboard-response@2.1.0"></script>
  </head>
  <body></body>

  <script>
    const jsPsych = initJsPsych({
      default_iti: 250,
      on_finish: function () {
        jsPsych.data.displayData();
      }
    });

    const preload = {
      type: jsPsychPreload,
      auto_preload: true
    };

    const trial_1 = {
      type: jsPsychImageKeyboardResponse,
      stimulus: "https://images.unsplash.com/photo-1506744038136-46273834b3fb?w=600",
      choices: ['y', 'n'],
      render_on_canvas: false,
      stimulus_width: 800,
      trial_duration: 3000,
      prompt: '<p>Do you like this image? Press "y" for Yes or "n" for No.</p>'
    };

    const trial_2 = {
      type: jsPsychImageKeyboardResponse,
      stimulus: "https://images.unsplash.com/photo-1470071459604-3b5ec3a7fe05?w=600",
      choices: ['y', 'n'],
      render_on_canvas: false,
      stimulus_width: 800,
      trial_duration: 3000,
      prompt: '<p>Do you like this image? Press "y" for Yes or "n" for No.</p>'
    };

    const trial_3 = {
      type: jsPsychImageKeyboardResponse,
      stimulus: "https://images.unsplash.com/photo-1501785888041-af3ef285b470?w=600",
      choices: ['y', 'n'],
      render_on_canvas: false,
      stimulus_width: 800,
      trial_duration: 3000,
      prompt: '<p>Do you like this image? Press "y" for Yes or "n" for No.</p>'
    };

    jsPsych.run([preload, trial_1, trial_2, trial_3]);
  </script>
</html>

Step 1 — Load the Chiasm scripts

Add three new script tags alongside your existing jsPsych scripts. You need:

  1. The call-function plugin — used internally by the integration helper.
  2. The Chiasm tracker — the core eye-tracking library.
  3. The jsPsych integration helper — provides convenience functions that turn Chiasm calls into jsPsych timeline nodes.

<!-- jsPsych Plugins (add call-function) -->
<script src="https://unpkg.com/@jspsych/plugin-call-function@2.1.0"></script>

<!-- Chiasm -->
<script src="https://cdn.chiasm.eu/latest/chiasm-tracker.js"></script>
<script src="https://cdn.chiasm.eu/latest/chiasm-jspsych-integration.js"></script>

Step 2 — Define your credentials and IDs

Inside your script, add the auth token you copied from the Chiasm dashboard, together with your experiment and participant IDs.

const CHIASM_AUTH_TOKEN = "YOUR_AUTH_TOKEN";
const expId = "YOUR_EXPERIMENT_ID";
const ppId = "YOUR_PARTICIPANT_ID";
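
Leaving the placeholders in place is an easy mistake to make. As an optional guard, you can fail fast before the experiment starts. The helper below is our own addition, not part of the Chiasm API:

```javascript
// Hypothetical helper (not part of the Chiasm API): throw early if any
// credential still contains its "YOUR_..." placeholder value.
function assertConfigured(values) {
  for (const [name, value] of Object.entries(values)) {
    if (!value || value.startsWith("YOUR_")) {
      throw new Error(name + " is not set; copy it from the Chiasm dashboard");
    }
  }
}
```

Calling assertConfigured({ CHIASM_AUTH_TOKEN, expId, ppId }) right after the constants throws while the defaults are still in place, instead of failing later during calibration.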

Step 3 — Create the Chiasm setup trial

chiasmJsPsych.createChiasmSetup returns a jsPsych trial node that handles tracker initialization, screen-size calibration, webcam preview, and gaze calibration in one go. Add it to your timeline before any experimental trials.

const chiasmSetup = chiasmJsPsych.createChiasmSetup(
  expId,
  ppId,
  CHIASM_AUTH_TOKEN
);

Step 4 — Wrap each trial with attachToTrial

Wrap your existing trial objects with chiasmJsPsych.attachToTrial. This automatically starts recording when the trial begins and stops recording when it ends — no need to call startRecording / stopRecording yourself.

const trial_1 = chiasmJsPsych.attachToTrial({
  type: jsPsychImageKeyboardResponse,
  stimulus: "https://images.unsplash.com/photo-1506744038136-46273834b3fb?w=600",
  choices: ['y', 'n'],
  render_on_canvas: false,
  stimulus_width: 800,
  trial_duration: 3000,
  prompt: '<p>Do you like this image? Press "y" for Yes or "n" for No.</p>'
});
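
Under the hood, a wrapper like this needs to add on_start and on_finish hooks that bracket the trial with recording calls. The sketch below is a hypothetical illustration of that pattern, not the actual implementation in chiasm-jspsych-integration.js; the tracker object and its startRecording/stopRecording methods are stand-ins:

```javascript
// Illustration of the wrapping pattern (not the real attachToTrial):
// bracket the trial with recording calls, preserving any hooks the
// trial object already defines.
function withRecording(trial, tracker) {
  const { on_start, on_finish } = trial;
  return {
    ...trial,
    on_start: (t) => {
      tracker.startRecording();
      if (on_start) on_start(t);
    },
    on_finish: (data) => {
      tracker.stopRecording();
      if (on_finish) on_finish(data);
    }
  };
}
```

The point of the pattern is that the returned object is still an ordinary jsPsych trial, which is why wrapped and unwrapped trials can be mixed freely in one timeline.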
tip

You only need to wrap trials where you want gaze data recorded. Trials that don't need eye-tracking (e.g. instruction screens, fixation crosses, debrief pages) can be left unwrapped.
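
With many trials, this selective wrapping is easy to express as a single pass over the timeline. A minimal sketch (wrapTrials is our own helper, not part of the integration library; in a real experiment you would pass chiasmJsPsych.attachToTrial as the wrapper):

```javascript
// Wrap only the trials that need gaze recording; leave the rest untouched.
// `wrap` stands in for chiasmJsPsych.attachToTrial.
function wrapTrials(trials, needsGaze, wrap) {
  return trials.map(trial => (needsGaze(trial) ? wrap(trial) : trial));
}
```

For example, wrapTrials(allTrials, t => t.type === jsPsychImageKeyboardResponse, chiasmJsPsych.attachToTrial) would record gaze only during the image trials and skip instruction screens.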

Step 5 — Finalize in on_finish

Update the jsPsych on_finish callback to call chiasmJsPsych.finalize. This function does three things:

  1. Saves gaze data — flushes all pending predictions to the Chiasm dashboard.
  2. Matches gaze data to trials — associates each recording segment with the jsPsych trial that produced it, so you can look up gaze data per trial later.
  3. Displays results — adds the matched gaze data to jsPsych's data store so it appears alongside your trial data when you call jsPsych.data.displayData().

Because finalize is asynchronous, make the callback async and pass the jsPsych instance so it can access the trial data.

const jsPsych = initJsPsych({
  default_iti: 250,
  on_finish: async function () {
    await chiasmJsPsych.finalize(jsPsych);
    jsPsych.data.displayData();
  }
});
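
To make the matching step concrete: conceptually, each gaze sample belongs to the trial whose recording window contains the sample's timestamp. The sketch below illustrates that idea only. It is not Chiasm's actual implementation, and the field names (t, start, end, trial_index) are invented for the example:

```javascript
// Conceptual sketch of gaze-to-trial matching (not the real finalize logic):
// collect, for each trial, the samples whose timestamps fall inside the
// trial's [start, end] recording window.
function matchGazeToTrials(samples, trials) {
  return trials.map(trial => ({
    trial_index: trial.trial_index,
    gaze: samples.filter(sample => sample.t >= trial.start && sample.t <= trial.end)
  }));
}
```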

Step 6 — Update the timeline

Insert chiasmSetup into the timeline array after preload but before the first experimental trial:

jsPsych.run([preload, chiasmSetup, trial_1, trial_2, trial_3]);

The participant will see the calibration flow once, then proceed through your trials with gaze recording active.

Complete Code

Putting it all together. A runnable version of this example — including the chiasm-jspsych-integration.js helper — is available in the examples/jspsych folder of the docs repository. You can download that folder and serve it locally (e.g. npx serve examples/jspsych or python -m http.server) to try it out.

caution

Opening the file directly via file:// will not work — the webcam requires a secure context (https:// or http://localhost).

<!DOCTYPE html>
<html>
  <head>
    <meta charset="utf-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
    <title>Chiasm jsPsych Demo</title>

    <!-- jsPsych Core -->
    <script src="https://unpkg.com/jspsych@8.2.2"></script>
    <link href="https://unpkg.com/jspsych@8.2.2/css/jspsych.css" rel="stylesheet" type="text/css" />

    <!-- jsPsych Plugins -->
    <script src="https://unpkg.com/@jspsych/plugin-preload@2.1.0"></script>
    <script src="https://unpkg.com/@jspsych/plugin-image-keyboard-response@2.1.0"></script>
    <script src="https://unpkg.com/@jspsych/plugin-call-function@2.1.0"></script>

    <!-- Chiasm -->
    <script src="https://cdn.chiasm.eu/latest/chiasm-tracker.js"></script>
    <script src="https://cdn.chiasm.eu/latest/chiasm-jspsych-integration.js"></script>
  </head>
  <body></body>

  <script>
    const CHIASM_AUTH_TOKEN = "YOUR_AUTH_TOKEN";
    const expId = "YOUR_EXPERIMENT_ID";
    const ppId = "YOUR_PARTICIPANT_ID";

    const chiasmSetup = chiasmJsPsych.createChiasmSetup(
      expId,
      ppId,
      CHIASM_AUTH_TOKEN
    );

    const jsPsych = initJsPsych({
      default_iti: 250,
      on_finish: async function () {
        await chiasmJsPsych.finalize(jsPsych);
        jsPsych.data.displayData();
      }
    });

    const preload = {
      type: jsPsychPreload,
      auto_preload: true
    };

    const trial_1 = chiasmJsPsych.attachToTrial({
      type: jsPsychImageKeyboardResponse,
      stimulus: "https://images.unsplash.com/photo-1506744038136-46273834b3fb?w=600",
      choices: ['y', 'n'],
      render_on_canvas: false,
      stimulus_width: 800,
      trial_duration: 3000,
      prompt: '<p>Do you like this image? Press "y" for Yes or "n" for No.</p>'
    });

    const trial_2 = chiasmJsPsych.attachToTrial({
      type: jsPsychImageKeyboardResponse,
      stimulus: "https://images.unsplash.com/photo-1470071459604-3b5ec3a7fe05?w=600",
      choices: ['y', 'n'],
      render_on_canvas: false,
      stimulus_width: 800,
      trial_duration: 3000,
      prompt: '<p>Do you like this image? Press "y" for Yes or "n" for No.</p>'
    });

    const trial_3 = chiasmJsPsych.attachToTrial({
      type: jsPsychImageKeyboardResponse,
      stimulus: "https://images.unsplash.com/photo-1501785888041-af3ef285b470?w=600",
      choices: ['y', 'n'],
      render_on_canvas: false,
      stimulus_width: 800,
      trial_duration: 3000,
      prompt: '<p>Do you like this image? Press "y" for Yes or "n" for No.</p>'
    });

    jsPsych.run([preload, chiasmSetup, trial_1, trial_2, trial_3]);
  </script>
</html>

What Changed — Summary

Here is a quick recap of every change made to the original experiment:

| # | Change | Why |
|---|--------|-----|
| 1 | Added the plugin-call-function script tag | Required by the Chiasm integration helper |
| 2 | Added the chiasm-tracker.js and chiasm-jspsych-integration.js script tags | Loads the eye-tracking library and the jsPsych helper |
| 3 | Defined CHIASM_AUTH_TOKEN, expId, and ppId | Credentials for the Chiasm service |
| 4 | Called chiasmJsPsych.createChiasmSetup(…) | Creates a timeline node that initialises and calibrates the tracker |
| 5 | Wrapped each trial with chiasmJsPsych.attachToTrial(…) | Automatically records gaze data for the duration of each trial |
| 6 | Made on_finish async and called chiasmJsPsych.finalize(jsPsych) | Saves gaze data, matches it to jsPsych trials, and displays the combined results |
| 7 | Inserted chiasmSetup into the timeline array | Runs calibration before the first experimental trial |

Next Steps