glaucosim

Glaucoma monitoring,
from home.

Author
Mauro Gobira, MD
Visiting Scholar in Glaucoma
Site
UC San Diego · Shiley Eye Institute
Hamilton Glaucoma Center · 2026
Background

Three gaps in glaucoma care.

A quarterly clinic model cannot see what determines outcomes.

01 · Adherence
~50%
of glaucoma patients are non-adherent on objective measurement, and clinicians cannot detect it before the next visit.
02 · Progression sampling
8.8%
of 380,029 US OAG enrollees received zero visual fields over the study window; only ~23% met the AAO ≥ 1 VF/year guideline. A single VF/year cannot separate noise from a 1.5 dB/year progressor for ~5 years.
03 · Drug side effects
up to 60%
of eyes on prostaglandin analogues for ≥ 3 months develop deepening of the upper eyelid sulcus (DUES) — 60% on bimatoprost, down to 18–24% on tafluprost / latanoprost. Surface and lid changes evolve between visits.
The platform

Continuous monitoring,
between visits.

Glaucosim is a browser-and-phone layer that runs a longitudinal home cadence of clinically grounded tests, captures medication adherence, and surfaces a trend to the eye-care professional before the next appointment.

  • Visual function — VF 24-2, visual acuity, contrast sensitivity
  • Anatomy — anterior-segment video, voice-guided
  • Pressure — acoustic IOP screen (β, research)
  • Patient-reported — NEI VFQ-25, symptom diary
  • Adherence — drop diary, reminders, missed-dose log
No dedicated hardware · No app store · Eye-care reviewed
glaucosim · dashboard · ● synced

PATIENT
Maria Reyes · 64F · OD POAG
Latanoprost 0.005% · 1× QHS · since 2023-08

MD trend · 24-2 OD · −0.62 dB / yr
[trend chart · Mar–Mar · −2.0 to −6.0 dB]
Drop adherence · last 30 d · 26 / 30

Latest results
Visual field 24-2 · OD — Mar 12 · MD −4.8 · PSD 6.2 · PROG?
Visual acuity · OU — Mar 14 · 0.10 / 0.08 logMAR · stable
Pelli-Robson CS — Mar 14 · 1.65 / 1.50 logCS · stable
Anterior segment — Mar 14 · 4 grades · review · new
NEI VFQ-25 — Feb 28 · 79/100 composite · −2 pts
Acoustic IOP (β) — Mar 12 · screen · research
DASHBOARD · CLINICIAN VIEW · FICTITIOUS PATIENT FOR DEMO
Landscape

What already exists.

Most home tools cover one test and depend on dedicated hardware. Glaucosim runs a multi-modality session on the devices the patient already owns.

[2×2 map · axes: hardware-bound ↔ software / BYOD · home / remote ↔ in-clinic only]

In-clinic · hardware — HFA (SITA) · Goldmann · Reichert ORA / 7CR · Octopus · Henson
In-clinic · software — Eyenuk EyeArt · M&S Smart System
Home · hardware-bound — iCare HOME / HOME2 · Sensimed Triggerfish · Implandata Eyemate · Olleyes VisuALL · Notal Vision (home OCT) · Heru VR · RadiusXR · Imo Vifa (Topcon CREWT) · NOVA VR (Bradley)
Home · software / BYOD — Melbourne Rapid Fields · iPad ZEST (Schulz) · EyeQue · Easee · Peek Vision · Glaucosim · multi-modality

Continuous IOP needs an implant or a contact lens (Eyemate, Triggerfish). Home perimetry needs a VR headset or a tablet kiosk (Olleyes, Heru, RadiusXR, Imo Vifa). Home anterior-segment imaging needs a clip-on lens. Home tools that ship without dedicated hardware cover a single test — refraction (EyeQue, Easee), VF (MRF, iPad ZEST), or screening (Peek Vision).

Glaucosim is the only point in the top-right quadrant covering visual function + anterior segment + IOP screen (β) + adherence in one home session, on devices the patient already owns. Peek Vision is the closest conceptual peer but is built for community-screening triage, not longitudinal glaucoma monitoring.

Per-measurement precision is lower than that of instrument-bound counterparts. The trade is an order-of-magnitude increase in sampling cadence: with the inter-test interval held fixed, slope-estimate variance falls as 1/n³ as test occasions are added.5
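The 1/n³ scaling above can be checked against the closed form for the variance of an ordinary-least-squares slope fitted to n equally spaced tests; this is a numerical sketch, not part of the platform's code:

```python
def slope_variance(n, dt=1.0, sigma=1.0):
    """Closed-form variance of the OLS slope from n equally spaced
    tests (interval dt, i.i.d. noise sigma): 12·sigma² / (dt²·n·(n²−1))."""
    return 12 * sigma**2 / (dt**2 * n * (n**2 - 1))

# Doubling the number of test occasions at the same interval cuts
# slope variance roughly 8x, i.e. the ~1/n³ behaviour:
ratio = slope_variance(6) / slope_variance(12)
```

With n = 6 vs n = 12 the ratio comes out near 8, which is the cube of the doubling.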

The hard part

A home test without integrity
checks is a screenshot of a screen.

How do we run clinical-grade tests remotely, without dedicated devices, and still trust the data?

Environment

Is the room within spec?

Visual acuity, contrast and perimetry each assume a different luminance window. A test outside its window is not interpretable.

Geometry

Is the patient where the test assumes?

Stimulus angle, optotype size, and pixel pitch all depend on the patient-to-screen distance — and on which eye is actually being tested.

Behavior

Is the patient actually fixating?

Peripheral perimetry assumes central fixation. A 4° saccade away from target makes the stimulus land at the wrong location.

Approach
Glaucosim runs an end-to-end sensor stack — five in-house ML models on the patient's own webcam and phone — that gates every trial against environment, geometry, and behavior before it counts.
In-house ML

Five models guard every trial.

No additional hardware. No data leaves the device until results are signed and synced.

01

Screen distance

Iris-pinhole projection from MediaPipe FaceMesh.

02

Eye cover

EAR + hand-landmark + iris occlusion fusion.

03

Gaze fixation

Iris-relative-to-canthi, Kalman-filtered.

04

Ambient light

Calibrated webcam-mean luminance proxy + glare.

05

Capture quality

Anterior-segment focus, exposure, framing scorer.

Each module reads all five channels before allowing a trial. Out-of-band readings prompt re-positioning or invalidate the affected stimulus. Every event is logged for retrospective audit.

Model 01 · Screen distance

Distance from a single RGB frame.

Open live demo

Principle

We use the interpupillary distance (IPD) as the real-world anchor — the population mean for adults is 63 mm (SD ~3.5 mm).7 MediaPipe FaceMesh returns the two iris-center landmarks (468 left, 473 right). We measure the IPD in pixels and recover patient-to-screen distance from the pinhole projection.

d = (f_px · 63 mm) / IPD_px
Distance

d patient-to-camera (mm) · f_px camera focal length (px), recovered with a one-time on-screen calibration step · IPD_px live pixel distance between iris centers (FaceMesh 468 ↔ 473).

Why IPD and not iris diameter: the iris edge is harder to segment reliably under variable lighting and lashes, while iris centers are detected by MediaPipe with sub-pixel stability and remain visible even when the lid covers part of the limbus.
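The pinhole recovery can be sketched in a few lines; the helper names and the calibration numbers below are hypothetical, and iris centers are assumed to be already extracted from FaceMesh:

```python
MEAN_IPD_MM = 63.0  # population mean interpupillary distance (the real-world anchor)

def focal_length_px(known_distance_mm, ipd_px_at_known):
    """One-time calibration: with the patient at a ruler-measured distance,
    record the pixel IPD and invert the pinhole relation for f_px."""
    return ipd_px_at_known * known_distance_mm / MEAN_IPD_MM

def distance_mm(f_px, ipd_px):
    """Pinhole projection: d = f_px · IPD_mm / IPD_px."""
    return f_px * MEAN_IPD_MM / ipd_px

# Calibrate at a measured 500 mm where the iris centers are 90 px apart,
# then estimate distance when the live IPD reads 75 px (IPD_px inverts with d):
f = focal_length_px(500.0, 90.0)
d = distance_mm(f, 75.0)   # ≈ 600 mm
```

Fewer pixels between the iris centers means the patient has moved back, exactly the similar-triangles picture the diagram describes.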

What we measure to trust it

  • Mean absolute error vs ruler-measured distance, 30–100 cm window
  • Sample variance across head pose (±20° yaw, ±15° pitch)
  • Drop-out rate under low light / glasses / blink frames
  • Latency budget < 33 ms / frame (30 Hz)

SIMILAR TRIANGLES · REAL IPD FIXED AT 63 MM · PIXEL IPD INVERTS WITH DISTANCE

Model 02 · Eye cover

Per-trial verification of which eye is occluded.

Open live demo

Principle

Monocular tests assume the operator knows which eye is tested. At home, a left-eye trial labelled as right-eye produces a clean, plausible, incorrect record. Glaucosim verifies cover state from three independent signals — any single one of which is brittle alone.

EAR = ( ‖p2−p6‖ + ‖p3−p5‖ ) / ( 2 · ‖p1−p4‖ )
Eye Aspect Ratio

Open eye ≈ 0.27–0.32; closed eye < 0.15. Threshold calibrated per subject over a 25-frame baseline at session start.8

  • EAR — lid aperture from FaceMesh landmarks
  • Hand landmarks — overlapping the orbital bounding box
  • Iris occlusion — skin or fabric inside iris ROI
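A minimal sketch of the EAR term and the fusion step; the 2-of-3 voting rule and the 50%-of-baseline lid threshold are illustrative assumptions (the deck specifies the three channels, not the exact combination rule):

```python
import math

def eye_aspect_ratio(p1, p2, p3, p4, p5, p6):
    """EAR = (||p2-p6|| + ||p3-p5||) / (2·||p1-p4||) from six eye landmarks."""
    dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    return (dist(p2, p6) + dist(p3, p5)) / (2 * dist(p1, p4))

def cover_state(ear, ear_open_baseline, hand_in_orbit, iris_visible):
    """Fuse the three channels with a 2-of-3 vote; each alone is brittle."""
    votes = [
        ear < 0.5 * ear_open_baseline,   # lid aperture collapsed vs baseline
        hand_in_orbit,                   # hand landmarks over the orbital box
        not iris_visible,                # skin/fabric inside the iris ROI
    ]
    return "covered" if sum(votes) >= 2 else "open"

# Open eye lands near the 0.27-0.32 band with these landmark positions:
ear_open = eye_aspect_ratio((0, 0), (1, 0.6), (3, 0.6), (4, 0), (3, -0.6), (1, -0.6))
```

A palm over the orbit plus a collapsed EAR flips the state to "covered"; a blink alone (one vote) does not.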

What we measure to trust it

  • Sensitivity / specificity per channel vs ground-truth video labels
  • Mis-occlusion rate (eye reported covered when it isn't)
  • Robustness against glasses, dark lashes, sleeves, palms

PER-EYE STATE GATES EVERY STIMULUS · LIVE @ ~30 HZ

Model 03 · Gaze fixation

Drift outside 4° invalidates the stimulus.

Open live demo

Principle

Perimetry assumes the patient is looking at the central target. If gaze drifts, the stimulus meant to land at 21° lands at 17° or 25°, and the threshold at the labelled location is wrong without the algorithm knowing.

Gaze is computed as iris position relative to the eye corners, in a head-relative frame — so translating the head does not move the vector; only a saccade does. A 1-D Kalman filter is applied to each component, with measurement noise inflated during blinks.

g = ( c_iris − c_canthi ) / w_eye
Eye-relative gaze

Reads as: the offset of the iris center from the eye's center, normalised by the width of the eye opening.

  • c_iris — pixel coordinate of the iris center (FaceMesh 468 / 473).
  • c_canthi — midpoint between the inner and outer canthus of the same eye (landmarks 33 ↔ 133 left, 263 ↔ 362 right). This is the eye's geometric center in a head-relative frame.
  • w_eye — distance between the same two canthi. Used as the normaliser so g is unitless: head pose, screen distance, and resolution drop out.

After a 30-frame baseline g₀ captured at session start, drift is Δ = g − g₀. Stimuli presented while ‖Δ‖ > 4° are flagged and excluded from the ZEST posterior update. Heijl-Krakau blind-spot catch trials run in parallel for the standard reliability indices.
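The head-relative property falls out of the normalisation; the sketch below omits the Kalman smoothing, and the degrees-per-unit calibration constant is a hypothetical placeholder:

```python
import math

def gaze_vector(iris, inner_canthus, outer_canthus):
    """g = (c_iris - c_canthi) / w_eye: iris offset from the eye's
    geometric center, normalised by eye width, so g is unitless."""
    cx = (inner_canthus[0] + outer_canthus[0]) / 2
    cy = (inner_canthus[1] + outer_canthus[1]) / 2
    w = math.hypot(outer_canthus[0] - inner_canthus[0],
                   outer_canthus[1] - inner_canthus[1])
    return ((iris[0] - cx) / w, (iris[1] - cy) / w)

def drift_deg(g, g0, deg_per_unit=30.0):
    """Angular drift vs the session baseline; deg_per_unit is a
    hypothetical per-subject calibration constant."""
    return deg_per_unit * math.hypot(g[0] - g0[0], g[1] - g0[1])

def trial_valid(g, g0, tolerance_deg=4.0):
    """Stimuli shown while drift exceeds the 4 deg tolerance are dropped."""
    return drift_deg(g, g0) <= tolerance_deg

# Translating the whole head moves iris AND canthi together, so g is unchanged;
# only a saccade (iris moving relative to the canthi) registers as drift.
g0 = gaze_vector((50, 30), (30, 30), (70, 30))        # baseline fixation
g_shift = gaze_vector((150, 80), (130, 80), (170, 80)) # head translated, same gaze
g_sacc = gaze_vector((58, 30), (30, 30), (70, 30))     # true saccade
```

The translated-head frame passes the gate; the saccade (0.2 eye-widths, about 6° under the assumed calibration) fails it.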

What we measure to trust it

  • Calibrated gaze accuracy at 9-point grid (mean ± SD in degrees)
  • Blink-window rejection sensitivity
  • Head-pose invariance across ±20° yaw / ±15° pitch

DRIFTED STIMULI ARE DROPPED FROM THE BAYESIAN POSTERIOR · FL / FP / FN COMPUTED IN PARALLEL

Model 04 · Ambient light

Reject the session if the room is off-spec.

Open live demo

Principle

Visual function thresholds are luminance-dependent. Acuity assumes ISO 8596 background; Pelli-Robson assumes ~85 cd/m²; perimetry assumes a dim room so stimulus contrast reaches operating range.

Glaucosim derives an operational ambient proxy from the webcam: mean greyscale intensity of the central patch, exposure-compensated, calibrated against an on-screen reference step at session start.

amb ≈ k · ⟨I_grey⟩ / e_cam
Ambient proxy

⟨I_grey⟩ mean intensity of central patch · e_cam camera exposure from MediaStream constraints · k per-device constant from a 5 s on-screen reference.
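The calibrate-then-gate flow can be sketched as below; the function names and the window bounds are illustrative assumptions around the per-module operating points:

```python
def calibrate_k(amb_ref, grey_mean_ref, exposure_ref):
    """Per-device constant from the 5 s on-screen reference step, chosen so
    amb = k * mean_grey / exposure reproduces the known reference level."""
    return amb_ref * exposure_ref / grey_mean_ref

def ambient_proxy(grey_mean, exposure, k):
    """Operational ambient estimate: a calibrated proxy, not photometric lux."""
    return k * grey_mean / exposure

def in_window(amb, window):
    """Each test defines its own luminance window; out-of-window
    sessions are tagged advisory or rejected."""
    lo, hi = window
    return lo <= amb <= hi

# Illustrative windows (cd/m2) around VF ~10, CS ~85, VA 80-320:
WINDOWS = {"VF": (5, 20), "CS": (60, 110), "VA": (80, 320)}

k = calibrate_k(85.0, 128.0, 0.5)    # reference frame: 85 cd/m2, grey 128, exposure 0.5
amb = ambient_proxy(128.0, 0.5, k)   # reproduces 85.0 on the same frame
```

The same room then passes the contrast-sensitivity gate but fails the dim-room perimetry gate, which is the intended behaviour.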

What we measure to trust it

  • Agreement (Pearson r, Bland-Altman) vs a calibrated lux meter
  • Per-module operating window pass-through rate
  • Off-axis glare detection: variance over the corneal reflection ROI
[per-module windows (cd/m²) · VF ~10 · CS ~85 · VA 80–320 · live L̂_amb 42 cd/m² · GATE ✓]

EACH TEST DEFINES ITS OWN WINDOW · OUT-OF-WINDOW SESSIONS ARE TAGGED ADVISORY OR REJECTED

Model 05 · Capture quality

Anterior segment frames graded in real time.

Open live demo

Principle

A patient's phone records a short anterior-segment clip per take. To be useful for surface review, each frame has to be in focus, well exposed, and framed on the iris. A quality scorer runs over every frame so the patient is guided in real time.

Q = α·F_var + β·E_hist + γ·R_iris − δ·M_blur
Quality score

F_var Laplacian variance (focus) · E_hist exposure flatness · R_iris iris coverage from FaceMesh ROI · M_blur motion blur from optical-flow magnitude.
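Two of the terms can be sketched directly in NumPy; the weights are illustrative, and the iris-coverage and motion-blur terms are taken as given inputs since they come from FaceMesh and optical flow:

```python
import numpy as np

def laplacian_variance(grey):
    """Focus term F_var: variance of a 4-neighbour Laplacian."""
    lap = (-4 * grey[1:-1, 1:-1] + grey[:-2, 1:-1] + grey[2:, 1:-1]
           + grey[1:-1, :-2] + grey[1:-1, 2:])
    return float(lap.var())

def exposure_flatness(grey, bins=16):
    """E_hist: histogram entropy normalised to [0, 1]; a well-spread
    exposure scores high, clipped or blocked frames score low."""
    hist, _ = np.histogram(grey, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum() / np.log(bins))

def quality(grey, r_iris, m_blur, a=1e-4, b=1.0, g=1.0, d=1.0):
    """Q = a*F_var + b*E_hist + g*R_iris - d*M_blur (weights illustrative)."""
    return (a * laplacian_variance(grey) + b * exposure_flatness(grey)
            + g * r_iris - d * m_blur)
```

A flat, defocused frame scores zero on both measured terms, so a textured, well-exposed frame always beats it at equal iris coverage and blur.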

What we measure to trust it

  • Frame-level agreement with rater quality grading (κ)
  • Take re-do rate vs unguided baseline capture
  • Lens-blur, light-flicker, off-axis rejection

Only takes that pass the threshold are kept. The voice avatar tells the patient to come a little closer, hold still, or retake.


ONE FRAME PER EYE · PHONE OR LAPTOP · IMAGES ENCRYPTED AT REST

Test 01 · Visual field

Visual field 24-2.

ZEST Bayesian thresholding on the 54-location grid · same family as SITA.

Open exam
01 · Why
  • Function — earliest, most actionable damage signal
  • Arcuate — RNFL anatomy maps to superior/inferior loss
  • Cadence — current 6–12 mo → home enables monthly
02 · Principle
P(t | r) ∝ P(r | t) · P(t) → threshold estimate t*
  • Posterior per location, not a single number
  • Stop when SD < 1.5 dB
  • Gaze-drifted stimuli are dropped from the update
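One ZEST update step can be sketched as a Bayes rule over candidate thresholds; the logistic psychometric function and its slope/false-response parameters are illustrative stand-ins, not the production values:

```python
import numpy as np

def zest_update(prior, stimulus_db, seen, slope=1.0, fp=0.03, fn=0.03):
    """One Bayesian update of the per-location threshold posterior:
    P(t | r) proportional to P(r | t) * P(t), with a logistic psychometric
    likelihood (slope, fp, fn are illustrative)."""
    t = np.arange(len(prior), dtype=float)          # candidate thresholds, dB
    p_seen = fp + (1 - fp - fn) / (1 + np.exp(slope * (stimulus_db - t)))
    like = p_seen if seen else 1.0 - p_seen
    post = prior * like
    return post / post.sum()

def posterior_stats(post):
    """Mean and SD of the posterior; testing stops when SD < 1.5 dB."""
    t = np.arange(len(post), dtype=float)
    mean = float((post * t).sum())
    sd = float(np.sqrt((post * (t - mean) ** 2).sum()))
    return mean, sd

# A "seen" response shifts the posterior mean up, "not seen" shifts it down;
# gaze-drifted trials simply skip this update.
flat = np.ones(41) / 41                             # flat 0-40 dB prior
up, _ = posterior_stats(zest_update(flat, 20.0, seen=True))
down, _ = posterior_stats(zest_update(flat, 20.0, seen=False))
```

Dropping a drifted trial means the posterior is left exactly as it was, which is why drift invalidates the stimulus rather than the whole run.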
03 · Method
  • 54 locations · 6° spacing · 24-2 grid
  • Calibrated display — luminance via Model 04
  • Gaze, cover, distance, light gated per trial
  • Fixation check via Heijl-Krakau on blind spot
04 · Output
[sample 24-2 thresholds · OD · superior arcuate defect]
MD −8.4 dB · PSD 6.7 dB · VFI 87% · GHT outside normal limits · FL / FP / FN
ZEST ≈ SITA accuracy with fewer presentations9 · iPad ZEST validated vs HFA10
Test 02 · Acoustic IOP (β · research)

Acoustic IOP.

Contactless pressure screen — laptop emits, phone listens. Research-only signal. Not a replacement for Goldmann.

Open exam
01 · Why
  • Diurnal IOP peaks escape the 6-month clinic window
  • Tonometry at home needs an implant or rebound device
  • Goal — contactless screen on own devices
02 · Principle
stiffer eye → higher f*
|X(f)| = FFT{ x_iris(t) }
  • Eye = viscoelastic ball under pressure
  • Resonance f* scales with corneal/scleral stiffness
  • Iris motion in video is the read-out (no mic)
03 · Method
  • Drive — 12→22 Hz chirp · 5 s · laptop or phone speaker
  • Capture — MediaPipe iris (468/473) per frame
  • Subtract head motion · ICD-normalise
  • FFT on aligned iris-displacement trace
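The read-out chain can be sketched end to end; this assumes a hypothetical 120 fps capture (a 17 Hz resonance would alias at plain 30 fps), synthetic traces, and omits the ICD normalisation:

```python
import numpy as np

def resonance_peak(iris_trace, head_trace, fs, band=(12.0, 22.0)):
    """Head-subtracted iris displacement -> windowed FFT -> peak f* inside
    the chirp band, plus a crude SNR for the noise gate (a sketch)."""
    x = np.asarray(iris_trace) - np.asarray(head_trace)  # cancel head motion
    x = (x - x.mean()) * np.hanning(len(x))              # detrend + window
    spec = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    f_star = float(freqs[in_band][np.argmax(spec[in_band])])
    snr = float(spec[in_band].max() / (np.median(spec[in_band]) + 1e-12))
    return f_star, snr

# Synthetic check: 5 s of iris motion with a 17.2 Hz resonance riding on
# common head sway; subtraction leaves only the resonance.
fs = 120.0
t = np.arange(0, 5, 1 / fs)
head = 0.5 * np.sin(2 * np.pi * 1.0 * t)
iris = head + 0.1 * np.sin(2 * np.pi * 17.2 * t)
f_star, snr = resonance_peak(iris, head, fs)
```

The 5 s record gives 0.2 Hz frequency resolution, so the synthetic 17.2 Hz peak lands on a bin and the noise gate sees a large SNR.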
04 · Output
[sample spectrum · f* = 17.2 Hz within the 14–20 Hz band]
Outputs: mmHg ± 95% CI · peak f* (Hz) · SNR with noise gate
Prior art: Coquart 1992 · Salz 2009 · Kim 2021 · Osmers 2020 · Davis 2014 · Wu 2012 · Luce 2005
FOR RESEARCH ONLY · not a tonometer · not a substitute for Goldmann
Test 03 · Visual acuity

Visual acuity exam.

ETDRS / Bailey-Lovie logMAR on a physically calibrated display, at the patient's measured distance.

Open exam
01 · Why
  • Universal baseline — every clinician reads it
  • Sanity gate for any session (rules out gross drops)
  • Trends tighten interpretation of VF / IOP
02 · Principle
h = d · tan(5·MAR)
  • 20/20 letter = 5 arcminutes visual angle
  • Patient ≠ 4 m from laptop → resize in real time
  • Same angular subtense at any distance
03 · Method
phone / laptop / monitor · UA + screen.{W,H} + DPR → device DB → DPI
p_mm/px = 25.4 / DPI
  • Pixel pitch from device fingerprint, not the patient
  • iPad / iPhone / MacBook / common monitors in internal DB
  • Distance from Model 01 · iris cross-check optional
  • 2-down-1-up staircase · 0.1 logMAR · 5 reversals
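Putting the two relations together, the on-screen letter height follows from the measured distance and the device's DPI; a small sketch (the 96-DPI example value is illustrative):

```python
import math

def letter_height_px(distance_mm, logmar, dpi):
    """ETDRS optotype sizing: a 20/20 (logMAR 0) letter subtends 5 arcmin;
    MAR scales as 10**logMAR; pixel pitch p = 25.4 / DPI (mm per px)."""
    mar_arcmin = 10 ** logmar                        # minimum angle of resolution
    angle_rad = math.radians(5 * mar_arcmin / 60.0)  # whole-letter visual angle
    height_mm = 2 * distance_mm * math.tan(angle_rad / 2)
    px_pitch_mm = 25.4 / dpi
    return height_mm / px_pitch_mm

# A logMAR 0.0 letter at 600 mm on a 96-DPI display is only ~3.3 px tall,
# which is why pixel pitch and live distance both have to be known.
px = letter_height_px(600.0, 0.0, 96.0)
```

Each 0.1 logMAR step multiplies the size by 10^0.1, so a logMAR 1.0 letter at the same distance is ten times taller.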
04 · Output
[per-eye result card · OD / OS]
Outputs: logMAR ± 95% CI · Snellen 20/x · conditions logged
Sloan optotypes · 2-down-1-up staircase · 0.1 logMAR step · 5 reversals12 · clinically meaningful Δ ≈ 0.1 logMAR13 · pixel pitch via core/calibration.js
Test 04 · Contrast sensitivity

Contrast sensitivity exam.

Pelli-Robson, age-normed. Background luminance gated by Model 04 before the run starts.

Open exam
01 · Why
  • CS loss precedes detectable VA loss in early glaucoma
  • Correlates with daily-life function better than VA
  • Sensitive to drug-induced surface change
02 · Principle
[Pelli-Robson triplets · last triplet with ≥ 2/3 correct → threshold]
log CS = log10(1 / C_th)
  • Letter size fixed above acuity → only contrast varies
  • Triplets step down 0.15 log units
  • 2/3 correct rule per triplet
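The triplet rule reduces to a short scoring loop; starting the first triplet at log CS 0.00 follows the standard chart layout, though the exact start value here is an assumption of the sketch:

```python
def pelli_robson_logcs(correct_per_triplet, step=0.15):
    """Score a run by the chart rule: triplets step down 0.15 log units in
    contrast; threshold log CS is that of the last triplet read with >= 2/3
    letters correct (first triplet taken as log CS 0.00)."""
    logcs = None
    for i, correct in enumerate(correct_per_triplet):
        if correct >= 2:
            logcs = i * step      # later triplet = lower contrast = higher log CS
        else:
            break
    return logcs

# Eleven steps past the first triplet before failing -> log CS 1.65,
# matching the sample value on the dashboard.
score = pelli_robson_logcs([3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 2, 1])
```

A run that fails the very first triplet returns no threshold, which the session would flag for retry rather than record.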
03 · Method
lux gated by Model 04
  • Luminance · ambient lux + display gain locked
  • Pixel pitch shared with VA pipeline
  • Distance from Model 01 — fixed letter size
  • Voice + tap input · skip/retry per triplet
04 · Output
[scale · ≤ 1.50 impaired · 1.50–1.80 mild · ≥ 1.80 normal · sample 1.65]
Outputs: log CS per eye · age-band z-score · slope vs prior
Normal log CS ≈ 1.95 · ≤ 1.5 impaired14 · clinically meaningful Δ ≈ 0.3 logCS · CS often falls before VA in early glaucoma
Test 05 · Anterior segment

Anterior segment exam.

Four graded outputs from a single frame per eye. Phone or laptop — patient picks the device.

Open exam
01 · Why
silent change between visits
  • PG side effects (redness, pigmentation, sulcus) develop slowly
  • Visit cadence misses onset by months
  • Home capture closes loop on tolerability
02 · Principle
[ROIs · lid · sulcus · bulbar · cheek-as-reference]
  • 4 grades from one primary-gaze frame per eye
  • ROIs anchored to FaceMesh landmarks
  • Cheek-as-reference cancels lighting + skin tone
03 · Method
  • Phone or laptop · 32 cm ± 10 cm
  • 2-sensor gate · distance + luminance
  • 3-still burst · Laplacian-variance pick
  • FaceMesh must lock or capture rejected
04 · Output
Sample · OD: sharp · Q 82 · Efron 0 (normal) · POHSS 0 (none) · PAP 0 (flat)
Scales: Q 0–1 · Efron 0–4 · POHSS 0–3 · PAP 0–3
Why each grade: hyperemia (PG side effect) · POHSS (cosmetic/compliance) · PAP (irreversible) · Q (capture confidence)
Every frame tagged: device · distance · lux · Q · model version · time
Test 05 · Anterior segment · models

Active-learning loop.

V0 ships day one (classical CV) · V1+ replaces it per grade once labels accumulate · IQ stays deterministic.

01 · V0 now
Q = α·F_var + β·E_hist + γ·R_iris − δ·M_blur
Hy = ⟨R/(R+G+B)⟩ vs cheek
PO = arctan((L*−50)/b*) vs cheek
PAP = (L_skin − L_sulcus) / L_skin
  • Deterministic · no training data needed
  • FaceMesh landmarks + per-pixel color
  • Calibrated against published reference photos
  • Confidence flagged low until labels validate
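The cheek-referenced idea behind the deterministic V0 grades can be sketched for two of them; the ROI pixels are assumed to be already cropped, and the example patches are synthetic:

```python
import numpy as np

def hyperemia_index(bulbar_rgb, cheek_rgb):
    """Hy = mean R/(R+G+B) over the bulbar ROI minus the same ratio on the
    cheek patch; the cheek reference cancels illumination and skin tone."""
    def redness(rgb):
        rgb = np.asarray(rgb, dtype=float)
        return float((rgb[..., 0] / rgb.sum(axis=-1)).mean())
    return redness(bulbar_rgb) - redness(cheek_rgb)

def pap_index(l_skin, l_sulcus):
    """PAP = (L_skin - L_sulcus) / L_skin: relative shadowing of the upper
    sulcus against adjacent lid skin."""
    return (l_skin - l_sulcus) / l_skin

# A reddened bulbar patch scores above its cheek reference; identical
# patches cancel to zero regardless of overall brightness or skin tone.
red_patch = np.ones((4, 4, 3)) * np.array([180.0, 60.0, 60.0])
cheek = np.ones((4, 4, 3)) * np.array([150.0, 120.0, 110.0])
hy = hyperemia_index(red_patch, cheek)
```

Because both ROIs sit under the same illumination, a dim room lowers both redness ratios together and the difference survives.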
02 · Label
dashboard review · Hy / POHSS / PAP · +1 labelled take per review
  • Each review → 3 ordinal labels per take
  • Clinician owns the label · patient never sees prediction-as-truth
  • The platform IS the labelling tool
03 · Train
CNN backbone → hyperemia / POHSS / PAP heads
  • ConvNeXt-Tiny · ImageNet-pretrained
  • 3 ordinal heads · CORN loss
  • Inputs = frame + (distance, lux, Q) metadata
  • Active sampling · low-confidence + disagreement
04 · Deploy
corpus N grows → κ rises
  • V_n replaces V0 per grade above κ threshold
  • Eval = weighted κ vs 2-grader consensus
  • Predictions never overwrite labels (versioned)
  • UCSD-labelled corpus stays UCSD-owned
Loop · home capture → V0 predict → clinician review → labelled set → retrain → deploy → repeat
Dashboard slider re-grade · every correction → training set
Test 06 · NEI VFQ-25

Vision-related quality of life.

The standard 25-item PRO, voice or tap, on a home cadence rather than annual.

Open exam
01 · Why
how the patient FEELS about vision
  • Objective tests can't measure perceived function
  • Anchors what the patient actually cares about
  • Cadence shift: annual → 90-day makes trajectory visible
02 · Principle
  • 25 items → 12 subscales → composite (0–100)
  • Each response rescaled 0–100
  • Composite = mean of vision-targeted subscales
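The composite step is a plain unweighted mean over the vision-targeted subscales; the subscale names and values below are hypothetical, and each subscale is assumed to be already rescaled to 0–100:

```python
def vfq_composite(subscale_scores, exclude=("general_health",)):
    """NEI VFQ-25 composite: unweighted mean of the vision-targeted
    subscale scores; the general-health subscale is excluded."""
    vals = [v for k, v in subscale_scores.items() if k not in exclude]
    return sum(vals) / len(vals)

# Hypothetical session: general health does not pull the composite down.
scores = {"general_health": 40.0, "general_vision": 80.0,
          "near_activities": 70.0, "distance_activities": 90.0}
composite = vfq_composite(scores)
```

Tracking this number on a 90-day cadence is what makes a −2-point drift visible between annual visits.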
03 · Method
voice or tap ~7 min · conditional branching
  • Voice or tap · patient picks per item
  • Adaptive branching skips irrelevant items
  • ~7 min total · saved offline if unreachable
  • 90-day default cadence (vs annual)
04 · Output
Sample: composite 79 · −2 vs baseline
Outputs: composite 0–100 · 12 subscales · Δ vs baseline
Calibrated and validated by Mangione et al. 200115 · cadence is the new variable here, not the instrument · voice or tap · ~7 min · 90-day default
Test 07 · Drop adherence

Medication adherence,
as a clinical variable.

Reminders, single-tap confirmation, structured missed-dose reason — then overlaid on visual-function trend.

Open exam
01 · Why
  • Self-report at visit overstates adherence ~31%3
  • Hidden variable behind every VF / IOP slope
  • Goal — adherence as a continuously logged signal
02 · Principle
DOSE 20:00 → TAP → REASON: forgot / travel / side effect / ran out
  • Reminder at every scheduled dose
  • Single-tap confirms
  • Missed dose → structured reason (not generic apology)
03 · Method
30-day · 26/30 · 87% green = confirmed · grey = missed
  • Push reminders per regimen (per drop, eye, time)
  • Tap window ± 30 min vs scheduled
  • Side-effect log piped into the same record
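The ±30 min tap window and the ribbon reduce to a few lines; the function names and the dict record shape are illustrative, not the platform's actual schema:

```python
from datetime import datetime, timedelta

TAP_WINDOW = timedelta(minutes=30)   # +/- 30 min vs the scheduled dose

def classify_dose(scheduled, tap=None, reason=None):
    """Confirmed if tapped within the window; otherwise a missed dose
    carrying a structured reason rather than a generic apology."""
    if tap is not None and abs(tap - scheduled) <= TAP_WINDOW:
        return {"status": "confirmed",
                "delta_min": (tap - scheduled).total_seconds() / 60.0}
    return {"status": "missed", "reason": reason or "unspecified"}

def ribbon(records):
    """30-day confirmed/total ratio for the dashboard, e.g. 26/30."""
    confirmed = sum(1 for r in records if r["status"] == "confirmed")
    return confirmed, len(records)

dose_t = datetime(2026, 3, 1, 20, 0)                       # 20:00 scheduled drop
on_time = classify_dose(dose_t, tap=dose_t + timedelta(minutes=12))
skipped = classify_dose(dose_t, reason="travel")
```

A tap 45 minutes late falls outside the window and is classified as missed, so late doses do not silently inflate the ribbon.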
04 · Output
[overlay · missed-dose cluster ↗ as MD ↘]
Outputs: 30-day ribbon · % at 30 / 90 / 365 d · MD × adherence
Adherence becomes a variable, not a self-report — behavioural conversations get a concrete artefact. Pushes via web push / iOS PWA · saved offline.
Why I built this

I built Glaucosim because two appointments a year can't catch a disease that damages the optic nerve silently, fiber by fiber, between visits.

Mauro Gobira
Founder · Ophthalmology MD · Visiting Scholar, Shiley Eye Institute
glaucosim.com · app.glaucosim.com