Custom Code Integration
This tutorial walks through adding Chiasm eye-tracking to an existing HTML page, one step at a time. By the end you will have a working page that calibrates the participant, records gaze data while a stimulus is displayed, and cleans up afterwards.
Prerequisites
Before you begin, make sure you have:
- Created an Experiment in the Chiasm dashboard.
- Copied your auth token and experiment ID from the dashboard — you will need both in the code below.
Starting Point
Below is a minimal page that shows a random image for 3 seconds and then hides it. There is no eye-tracking yet — this is the code we will be modifying.
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>Chiasm Custom Code Demo</title>
</head>
<body style="margin:0;min-height:100vh;display:flex">
<img
id="stimulus"
src="https://picsum.photos/800/600"
style="max-width:100%;height:auto;margin:auto"
alt="Stimulus image"
/>
<script>
const DISPLAY_DURATION_MS = 3000;
const stimulus = document.getElementById("stimulus");
window.setTimeout(() => {
stimulus.style.display = "none";
}, DISPLAY_DURATION_MS);
</script>
</body>
</html>
Step 1 — Load the Chiasm tracker script
Add the tracker script tag before your own <script> block. This exposes the global initChiasmTracker function.
<script src="https://cdn.chiasm.eu/latest/chiasm-tracker.js"></script>
Step 2 — Define your credentials and IDs
Inside your script, add the auth token you copied from the Chiasm dashboard, together with your experiment and participant IDs.
const CHIASM_AUTH_TOKEN = "YOUR_AUTH_TOKEN";
const expId = "YOUR_EXPERIMENT_ID";
const ppId = "YOUR_PARTICIPANT_ID";
Step 3 — Initialize the tracker
initChiasmTracker returns a tracker object that you will use for every subsequent call. Because it is asynchronous, wrap the rest of the logic in an async IIFE.
(async () => {
const tracker = await initChiasmTracker({
authToken: CHIASM_AUTH_TOKEN,
});
// … remaining steps go here
})();
Step 4 — Register experiment info
Call setExpInfo to link this session with your experiment and participant. The third argument controls whether gaze data is saved to the Chiasm dashboard.
tracker.setExpInfo(expId, ppId, true);
Step 5 — Calibrate
Run screen-size calibration followed by the full tracker setup (webcam preview, gaze calibration, and validation). Both steps must finish before you show any stimuli.
await tracker.showScreenCalibration();
await tracker.setupTrackerWithRetries();
See showScreenCalibration and setupTrackerWithRetries for details.
Step 6 — Show the stimulus and record
Make the stimulus visible, then call startRecording. After the display duration elapses, call stopRecording and hide the stimulus.
stimulus.style.visibility = "visible";
await tracker.startRecording();
setTimeout(async () => {
await tracker.stopRecording();
stimulus.style.display = "none";
}, DISPLAY_DURATION_MS);
Keep the stimulus hidden during calibration so the participant does not see it prematurely: add visibility:hidden to the img element's inline style, and the visibility = "visible" line above then reveals it once calibration is complete.
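If you prefer to keep the recording sequence linear with await instead of nesting the stop logic inside a setTimeout callback, a small Promise-based delay does the same job. Note that sleep is our own helper here, not part of the Chiasm API:

```javascript
// sleep(ms): resolves after ms milliseconds, so the display duration can be
// awaited inline rather than handled in a nested callback.
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

// Usage inside the async IIFE (tracker, stimulus, and DISPLAY_DURATION_MS
// come from the tutorial code):
//
//   stimulus.style.visibility = "visible";
//   await tracker.startRecording();
//   await sleep(DISPLAY_DURATION_MS);
//   await tracker.stopRecording();
//   stimulus.style.display = "none";
```

Both versions behave the same; the await form simply keeps the steps in reading order.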
Step 7 — Clean up
After recording is finished, release tracker resources with cleanupTracker.
await tracker.cleanupTracker();
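If any of the earlier steps throws (for example, the participant denies webcam access during calibration), the cleanup call above would never run. One defensive pattern is to wrap the session body in try/finally. The runSession helper below is a hypothetical name of our own, not part of the Chiasm API; it only assumes the documented cleanupTracker call:

```javascript
// runSession(tracker, body): run the session steps in body(), guaranteeing
// that tracker.cleanupTracker() runs whether body() resolves or throws.
// Hypothetical helper -- not part of the Chiasm API.
async function runSession(tracker, body) {
  try {
    await body(); // calibration, recording, etc.
  } finally {
    // Always release the webcam and tracker resources.
    await tracker.cleanupTracker();
  }
}
```

Inside the tutorial's IIFE you could then write `await runSession(tracker, async () => { /* Steps 4–6 */ });` so cleanup happens even when a step fails.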
Complete Code
Putting it all together, the complete page looks like this. A runnable version of this example is available in the examples/custom-code folder of the docs repository. You can download that folder and serve it locally (e.g. npx serve examples/custom-code or python -m http.server) to try it out.
Opening the file directly via file:// will not work — the webcam requires a secure context (https:// or http://localhost).
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>Chiasm Custom Code Demo</title>
</head>
<body style="margin:0;min-height:100vh;display:flex">
<img
id="stimulus"
src="https://picsum.photos/800/600"
style="max-width:100%;height:auto;margin:auto;visibility:hidden"
alt="Stimulus image"
/>
<script src="https://cdn.chiasm.eu/latest/chiasm-tracker.js"></script>
<script>
const CHIASM_AUTH_TOKEN = "YOUR_AUTH_TOKEN";
const DISPLAY_DURATION_MS = 3000;
const expId = "YOUR_EXPERIMENT_ID";
const ppId = "YOUR_PARTICIPANT_ID";
const stimulus = document.getElementById("stimulus");
(async () => {
const tracker = await initChiasmTracker({
authToken: CHIASM_AUTH_TOKEN,
});
tracker.setExpInfo(expId, ppId, true);
await tracker.showScreenCalibration();
await tracker.setupTrackerWithRetries();
stimulus.style.visibility = "visible";
await tracker.startRecording();
setTimeout(async () => {
await tracker.stopRecording();
stimulus.style.display = "none";
await tracker.cleanupTracker();
}, DISPLAY_DURATION_MS);
})();
</script>
</body>
</html>
Optional — Save Predictions Locally
By default, gaze data is sent to the Chiasm dashboard. If you also want to capture predictions in the browser — for example to display them on screen or download them as a file — you can add a few extra lines.
1. Add an output container
Add a <pre> element to the page body where predictions can be displayed after the session:
<pre id="output" style="display:none;max-width:90vw;overflow:auto;font-size:0.8rem"></pre>
2. Create a predictions buffer and register a callback
Before starting the recording, create an array and pass a callback to setUserPredictionCallback. The tracker calls this function with each prediction as it arrives.
const predictions = [];
tracker.setUserPredictionCallback(pred => predictions.push({ ...pred }));
3. Wait for all predictions, then display and download
After stopRecording, call ensureAllPredictionsReturned to make sure every pending prediction has been delivered. Then render the data on screen and trigger a JSON download.
await tracker.ensureAllPredictionsReturned();
const output = document.getElementById("output");
output.textContent = JSON.stringify(predictions, null, 2);
output.style.display = "block";
const blob = new Blob([output.textContent], { type: "application/json" });
const a = document.createElement("a");
a.href = URL.createObjectURL(blob);
a.download = "chiasm_predictions.json";
a.click();
URL.revokeObjectURL(a.href);
Extended Complete Code
Here is the full page with the optional streaming and saving additions included. Copy it into an .html file and serve it locally to try it out:
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>Chiasm Custom Code Demo</title>
</head>
<body style="margin:0;min-height:100vh;display:flex;flex-direction:column;align-items:center;justify-content:center">
<img
id="stimulus"
src="https://picsum.photos/800/600"
style="max-width:100%;height:auto;visibility:hidden"
alt="Stimulus image"
/>
<pre id="output" style="display:none;max-width:90vw;overflow:auto;font-size:0.8rem"></pre>
<script src="https://cdn.chiasm.eu/latest/chiasm-tracker.js"></script>
<script>
const CHIASM_AUTH_TOKEN = "YOUR_AUTH_TOKEN";
const DISPLAY_DURATION_MS = 3000;
const expId = "YOUR_EXPERIMENT_ID";
const ppId = "YOUR_PARTICIPANT_ID";
const stimulus = document.getElementById("stimulus");
const predictions = [];
(async () => {
const tracker = await initChiasmTracker({
authToken: CHIASM_AUTH_TOKEN,
});
tracker.setExpInfo(expId, ppId, true);
tracker.setUserPredictionCallback(pred => predictions.push({ ...pred }));
await tracker.showScreenCalibration();
await tracker.setupTrackerWithRetries();
stimulus.style.visibility = "visible";
await tracker.startRecording();
setTimeout(async () => {
await tracker.stopRecording();
stimulus.style.display = "none";
await tracker.ensureAllPredictionsReturned();
const output = document.getElementById("output");
output.textContent = JSON.stringify(predictions, null, 2);
output.style.display = "block";
const blob = new Blob([output.textContent], { type: "application/json" });
const a = document.createElement("a");
a.href = URL.createObjectURL(blob);
a.download = "chiasm_predictions.json";
a.click();
URL.revokeObjectURL(a.href);
await tracker.cleanupTracker();
}, DISPLAY_DURATION_MS);
})();
</script>
</body>
</html>
Next Steps
- Get the Data — learn how to access gaze predictions
- setExpInfo reference — save data to the dashboard or keep it response-only
- startRecording reference — tag recording segments with event IDs