glaucosim

Glaucoma monitoring,
from home.

Author
Mauro Gobira, MD
Visiting Scholar in Glaucoma
Site
UC San Diego · Shiley Eye Institute
Hamilton Glaucoma Center · 2026
Background

Three gaps in glaucoma care.

A quarterly clinic model cannot see what determines outcomes.

01 · Adherence
~50%
of glaucoma patients are non-adherent on objective measurement, and clinicians cannot detect it before the next visit.
02 · Progression sampling
8.8%
of 380,029 US OAG enrollees received zero visual fields over the study window; only ~23% met the AAO ≥ 1 VF/year guideline. A single VF/year cannot separate noise from a 1.5 dB/year progressor for ~5 years.
03 · Drug side effects
up to 60%
of eyes on prostaglandin analogues for ≥ 3 months develop deepening of the upper eyelid sulcus (DUES) — 60% on bimatoprost, down to 18–24% on tafluprost / latanoprost. Surface and lid changes evolve between visits.
The platform

Continuous monitoring,
between visits.

Glaucosim is a browser-and-phone layer that runs a longitudinal home cadence of clinically grounded tests, captures medication adherence, and surfaces a trend to the eye-care professional before the next appointment.

  • Visual function — VF 24-2, visual acuity, contrast sensitivity
  • Anatomy — anterior-segment video, voice-guided
  • Pressure — acoustic IOP screen (β, research)
  • Patient-reported — NEI VFQ-25, symptom diary
  • Adherence — drop diary, reminders, missed-dose log
No dedicated hardware · No app store · Eye-care reviewed
glaucosim · dashboard — clinician view (fictitious patient for demo)

PATIENT — Maria Reyes · 64F · OD POAG · Latanoprost 0.005% · 1× QHS · since 2023-08
MD trend · 24-2 OD — −0.62 dB / yr
Drop adherence · last 30 d — 26 / 30

Latest results
  • Visual field 24-2 · OD — Mar 12 · MD −4.8 · PSD 6.2 · flagged PROG ?
  • Visual acuity · OU — Mar 14 · 0.10 / 0.08 logMAR · stable
  • Pelli-Robson CS — Mar 14 · 1.65 / 1.50 logCS · stable
  • Anterior segment — Mar 14 · 4 grades · review · new
  • NEI VFQ-25 — Feb 28 · 79/100 composite · −2 pts
  • Acoustic IOP (β) — Mar 12 · screen · research
Landscape

What already exists.

Most home tools cover one test and depend on dedicated hardware. Glaucosim runs a multi-modality session on the devices the patient already owns.

[Quadrant map — hardware-bound ↔ software / BYOD × in-clinic ↔ home / remote]

  • In-clinic · hardware — HFA · SITA, Goldmann, Reichert ORA / 7CR, Octopus · Henson, M&S Smart System
  • In-clinic · software — Eyenuk EyeArt
  • Home · hardware-bound — iCare HOME / HOME2, Sensimed Triggerfish, Implandata Eyemate, Olleyes VisuALL, Notal Vision (home OCT), Heru VR, RadiusXR, Imo Vifa (Topcon CREWT), NOVA VR (Bradley)
  • Home · software / BYOD — Melbourne Rapid Fields, iPad ZEST (Schulz), EyeQue, Easee, Peek Vision — and Glaucosim · multi-modality

Continuous IOP needs an implant or a contact lens (Eyemate, Triggerfish). Home perimetry needs a VR headset or a tablet kiosk (Olleyes, Heru, RadiusXR, Imo Vifa). Home anterior-segment imaging needs a clip-on lens. Home tools that ship without dedicated hardware cover a single test — refraction (EyeQue, Easee), VF (MRF, iPad ZEST), or screening (Peek Vision).

Glaucosim is the only point in the top-right quadrant covering visual function + anterior segment + IOP screen (β) + adherence in one home session, on devices the patient already owns. Peek Vision is the closest conceptual peer but is built for community-screening triage, not longitudinal glaucoma monitoring.

Per-measurement precision is lower than instrument-bound counterparts. The trade is an order-of-magnitude increase in sampling cadence, and slope-estimate variance falls as 1/n³ when test occasions are added.5
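The 1/n³ claim follows from ordinary-least-squares slope variance: Var(slope) = σ² / Σᵢ(tᵢ − t̄)², and at a fixed test interval the denominator grows as n³. A minimal sketch (illustrative, not platform code; the function name and per-test SD are assumptions):

```javascript
// OLS slope variance for n equally spaced test occasions at a fixed interval.
// Var(slope) = sigma^2 / sum_i (t_i - mean(t))^2, which shrinks roughly as 1/n^3.
function slopeVariance(nTests, intervalYears, sdPerTestDb) {
  const times = Array.from({ length: nTests }, (_, i) => i * intervalYears);
  const mean = times.reduce((a, b) => a + b, 0) / nTests;
  const sxx = times.reduce((a, t) => a + (t - mean) ** 2, 0);
  return (sdPerTestDb ** 2) / sxx; // units: (dB/yr)^2
}
```

Doubling the number of occasions at the same cadence cuts slope variance roughly eightfold, i.e. the slope SD by ~2.8×.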

The hard part

A home test without integrity
checks is a screenshot of a screen.

How do we run clinical-grade tests remotely, without dedicated devices, and still trust the data?

Environment

Is the room within spec?

Visual acuity, contrast and perimetry each assume a different luminance window. A test outside its window is not interpretable.

Geometry

Is the patient where the test assumes?

Stimulus angle, optotype size, and pixel pitch all depend on the patient-to-screen distance — and on which eye is actually being tested.

Behavior

Is the patient actually fixating?

Peripheral perimetry assumes central fixation. A 4° saccade away from target makes the stimulus land at the wrong location.

Approach
Glaucosim runs an end-to-end sensor stack — five in-house ML models on the patient's own webcam and phone — that gates every trial against environment, geometry, and behavior before it counts.
In-house ML

Five models guard every trial.

No additional hardware. No data leaves the device until results are signed and synced.

  • 01 · Screen distance — iris-pinhole projection from MediaPipe FaceMesh · live read-out in mm
  • 02 · Eye cover — EAR + hand-landmark + iris-occlusion fusion · per-eye COVERED / open state
  • 03 · Gaze fixation — iris relative to canthi, Kalman-filtered · drift in degrees
  • 04 · Ambient light — calibrated webcam mean-luminance proxy + glare · ≈ cd/m²
  • 05 · Capture quality — anterior-segment focus, exposure, framing scorer · QC pass / fail

Each module reads all five channels before allowing a trial. Out-of-band readings prompt re-positioning or invalidate the affected stimulus. Every event is logged for retrospective audit.

Model 01 · Screen distance

Distance from a single RGB frame.

Open live demo

Principle

We use the interpupillary distance (IPD) as the real-world anchor — the population mean for adults is 63 mm (SD ~3.5 mm).7 MediaPipe FaceMesh returns the two iris-center landmarks (468 left, 473 right). We measure the IPD in pixels and recover patient-to-screen distance from the pinhole projection.

d = ( f_px · 63 mm ) / IPD_px
Distance

d patient-to-camera (mm) · f_px camera focal length (px), recovered with a one-time on-screen calibration step · IPD_px live pixel distance between iris centers (FaceMesh 468 ↔ 473).

Why IPD and not iris diameter: the iris edge is harder to segment reliably under variable lighting and lashes, while iris centers are detected by MediaPipe with sub-pixel stability and remain visible even when the lid covers part of the limbus.
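As a sketch, the projection above is a few lines (illustrative; the landmark plumbing and the calibrated f_px are assumed to come from elsewhere):

```javascript
// Pinhole distance recovery: real IPD anchored at the 63 mm population mean,
// pixel IPD measured between the two FaceMesh iris centers (468, 473).
const MEAN_IPD_MM = 63;

function distanceMm(fPx, irisLeft, irisRight) {
  const ipdPx = Math.hypot(irisRight.x - irisLeft.x, irisRight.y - irisLeft.y);
  return (fPx * MEAN_IPD_MM) / ipdPx; // similar triangles: d = f_px * IPD / IPD_px
}
```

With f_px = 1000 and a measured 105 px iris span, the patient sits at 600 mm; moving away shrinks the pixel span and the estimate scales up accordingly.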

What we measure to trust it

  • Mean absolute error vs ruler-measured distance, 30–100 cm window
  • Sample variance across head pose (±20° yaw, ±15° pitch)
  • Drop-out rate under low light / glasses / blink frames
  • Latency budget < 33 ms / frame (30 Hz)
[Diagram — camera image plane, landmarks 468 ↔ 473 · similar triangles · real IPD fixed at 63 mm · pixel IPD inverts with distance, IPD_px ∝ 1 / d]

Model 02 · Eye cover

Per-trial verification of which eye is occluded.

Open live demo

Principle

Monocular tests assume the operator knows which eye is tested. At home, a left-eye trial labelled as right-eye produces a clean, plausible, incorrect record. Glaucosim verifies cover state from three independent signals — any single one of which is brittle alone.

EAR = ( ‖p2−p6‖ + ‖p3−p5‖ ) / ( 2 · ‖p1−p4‖ )
Eye Aspect Ratio

Open eye ≈ 0.27–0.32; closed eye < 0.15. Threshold calibrated per subject over a 25-frame baseline at session start.8

  • EAR — lid aperture from FaceMesh landmarks
  • Hand landmarks — overlapping the orbital bounding box
  • Iris occlusion — skin or fabric inside iris ROI
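The EAR channel can be sketched directly from the formula above (landmarks p1–p6 passed as an array; the 50%-of-baseline closed threshold here is an illustrative assumption — the platform calibrates per subject):

```javascript
// Eye Aspect Ratio: vertical lid openings over twice the horizontal span.
// points = [p1..p6]; p1/p4 are the eye corners, p2/p6 and p3/p5 the lid pairs.
function ear(points) {
  const d = (a, b) => Math.hypot(a.x - b.x, a.y - b.y);
  return (d(points[1], points[5]) + d(points[2], points[4])) / (2 * d(points[0], points[3]));
}

// Assumed rule for this sketch: "closed" when EAR drops below half the
// per-subject open baseline captured during the 25-frame calibration.
function lidState(earValue, openBaseline) {
  return earValue < 0.5 * openBaseline ? "closed" : "open";
}
```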

What we measure to trust it

  • Sensitivity / specificity per channel vs ground-truth video labels
  • Mis-occlusion rate (eye reported covered when it isn't)
  • Robustness against glasses, dark lashes, sleeves, palms
[Diagram — per-eye state: LEFT open · EAR 0.29 / RIGHT covered · EAR 0.10 · hand ✓ · per-eye state gates every stimulus · live @ ~30 Hz]

Model 03 · Gaze fixation

Drift outside 4° invalidates the stimulus.

Open live demo

Principle

Perimetry assumes the patient is looking at the central target. If gaze drifts, the stimulus meant to land at 21° lands at 17° or 25°, and the threshold at the labelled location is wrong without the algorithm knowing.

Gaze is computed as iris position relative to the eye corners, in a head-relative frame — so translating the head does not move the vector; only a saccade does. A 1-D Kalman filter is applied to each component, with measurement noise inflated during blinks.

g = ( c_iris − c_canthi ) / w_eye
Eye-relative gaze

Reads as: the offset of the iris center from the eye's center, normalised by the width of the eye opening.

  • c_iris — pixel coordinate of the iris center (FaceMesh 468 / 473).
  • c_canthi — midpoint between the inner and outer canthus of the same eye (landmarks 33 ↔ 133 left, 263 ↔ 362 right). This is the eye's geometric center in a head-relative frame.
  • w_eye — distance between the same two canthi. Used as the normaliser so g is unitless: head pose, screen distance, and resolution drop out.

After a 30-frame baseline g₀ captured at session start, drift is Δ = g − g₀. Stimuli presented while ‖Δ‖ > 4° are flagged and excluded from the ZEST posterior update. Heijl-Krakau blind-spot catches run in parallel for the standard reliability indices.
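A sketch of the gaze vector and the drift gate (the degrees-per-unit scale is an assumed per-session calibration constant; Kalman smoothing is omitted):

```javascript
// Eye-relative gaze: iris offset from the canthi midpoint, normalised by
// canthal width so head translation, distance and resolution drop out.
function gazeVector(iris, innerCanthus, outerCanthus) {
  const cx = (innerCanthus.x + outerCanthus.x) / 2;
  const cy = (innerCanthus.y + outerCanthus.y) / 2;
  const wEye = Math.hypot(outerCanthus.x - innerCanthus.x, outerCanthus.y - innerCanthus.y);
  return { x: (iris.x - cx) / wEye, y: (iris.y - cy) / wEye };
}

// Drift vs the session baseline g0, converted to degrees with an assumed
// per-session calibration factor; beyond tolerance the stimulus is invalid.
function stimulusValid(g, g0, degPerUnit, toleranceDeg = 4) {
  const driftDeg = Math.hypot(g.x - g0.x, g.y - g0.y) * degPerUnit;
  return driftDeg <= toleranceDeg;
}
```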

What we measure to trust it

  • Calibrated gaze accuracy at 9-point grid (mean ± SD in degrees)
  • Blink-window rejection sensitivity
  • Head-pose invariance across ±20° yaw / ±15° pitch
[Diagram — fixation target with ±4° tolerance; a stimulus at 21° lands off-target when gaze drifts · drifted stimuli are dropped from the Bayesian posterior · FL / FP / FN computed in parallel]

Model 04 · Ambient light

Reject the session if the room is off-spec.

Open live demo

Principle

Visual function thresholds are luminance-dependent. Acuity assumes ISO 8596 background; Pelli-Robson assumes ~85 cd/m²; perimetry assumes a dim room so stimulus contrast reaches operating range.

Glaucosim derives an operational ambient proxy from the webcam: mean greyscale intensity of the central patch, exposure-compensated, calibrated against an on-screen reference step at session start.

amb ≈ k · ⟨I_grey⟩ / e_cam
Ambient proxy

⟨I_grey⟩ mean intensity of central patch · e_cam camera exposure from MediaStream constraints · k per-device constant from a 5 s on-screen reference.
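In sketch form, over an RGBA frame buffer (the Rec. 601 luma weights and the middle-third patch are illustrative assumptions):

```javascript
// Exposure-compensated mean luma of the central patch, scaled toward cd/m^2
// by the per-device constant k from the on-screen reference step.
function ambientProxy(rgba, width, height, exposure, k) {
  const x0 = Math.floor(width / 3), x1 = Math.floor((2 * width) / 3);
  const y0 = Math.floor(height / 3), y1 = Math.floor((2 * height) / 3);
  let sum = 0, n = 0;
  for (let y = y0; y < y1; y++) {
    for (let x = x0; x < x1; x++) {
      const i = 4 * (y * width + x);
      sum += 0.299 * rgba[i] + 0.587 * rgba[i + 1] + 0.114 * rgba[i + 2]; // Rec. 601 luma
      n++;
    }
  }
  return (k * (sum / n)) / exposure;
}
```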

What we measure to trust it

  • Agreement (Pearson r, Bland-Altman) vs a calibrated lux meter
  • Per-module operating window pass-through rate
  • Off-axis glare detection: variance over the corneal reflection ROI
[Per-module operating windows (cd/m²) — VF ~10 · CS ~85 · VA 80–320 · live estimate gated before each run]

EACH TEST DEFINES ITS OWN WINDOW · OUT-OF-WINDOW SESSIONS ARE TAGGED ADVISORY OR REJECTED

Model 05 · Capture quality

Anterior segment frames graded in real time.

Open live demo

Principle

A patient's phone records a short anterior-segment clip per take. To be useful for surface review, each frame has to be in focus, well exposed, and framed on the iris. A quality scorer runs over every frame so the patient is guided in real time.

Q = α·F_var + β·E_hist + γ·R_iris − δ·M_blur
Quality score

F_var Laplacian variance (focus) · E_hist exposure flatness · R_iris iris coverage from FaceMesh ROI · M_blur motion blur from optical-flow magnitude.
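A sketch of the gate (weights and the keep threshold are illustrative placeholders, not shipped values; features assumed pre-normalised to [0, 1]):

```javascript
// Weighted quality score over the four per-frame features, clamped to [0, 1].
const W = { focus: 0.4, exposure: 0.2, iris: 0.3, blur: 0.3 }; // assumed weights

function qualityScore(f) {
  const q = W.focus * f.focusVar + W.exposure * f.exposureFlatness
          + W.iris * f.irisCoverage - W.blur * f.motionBlur;
  return Math.min(1, Math.max(0, q));
}

const KEEP_THRESHOLD = 0.6; // assumed: below this the voice avatar asks for a retake
```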

What we measure to trust it

  • Frame-level agreement with rater quality grading (κ)
  • Take re-do rate vs unguided baseline capture
  • Lens-blur, light-flicker, off-axis rejection

Only takes that pass the threshold are kept. The voice avatar tells the patient to come a little closer, hold still, or retake.

[Phone anterior-segment capture — Q 0.81 · keep]

ONE FRAME PER EYE · PHONE OR LAPTOP · IMAGES ENCRYPTED AT REST

Test 01 · Visual field

Visual field 24-2 exam.

ZEST Bayesian adaptive thresholding on the 54-location grid. Same family as SITA.

Open exam

Core principle

At each of the 54 locations, the threshold is treated as a probability distribution, not a single number. Every stimulus shifts that distribution toward the patient's true value. The test stops at a given location only when the distribution is tight enough to commit.

P(t | r_1:n) ∝ P(r_n | t) · P(t | r_1:n−1)
Bayesian update

Termination when posterior SD < 1.5 dB. Drifted-gaze stimuli (Model 03) are dropped from the update.
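One update step at a single location can be sketched as a discrete Bayes rule over candidate thresholds (the logistic psychometric slope here is an assumed stand-in for the real frequency-of-seeing curve):

```javascript
// One ZEST update: prior over candidate thresholds (dB), likelihood from a
// logistic frequency-of-seeing curve, renormalised posterior.
function zestUpdate(prior, dbLevels, stimulusDb, seen, slopeDb = 1.0) {
  const posterior = prior.map((p, i) => {
    const pSeen = 1 / (1 + Math.exp((stimulusDb - dbLevels[i]) / slopeDb));
    return p * (seen ? pSeen : 1 - pSeen);
  });
  const z = posterior.reduce((a, b) => a + b, 0);
  return posterior.map((p) => p / z);
}

// Termination check: commit when the posterior SD tightens below 1.5 dB.
function posteriorStats(post, dbLevels) {
  const mean = post.reduce((a, p, i) => a + p * dbLevels[i], 0);
  const sd = Math.sqrt(post.reduce((a, p, i) => a + p * (dbLevels[i] - mean) ** 2, 0));
  return { mean, sd, done: sd < 1.5 };
}
```

A "seen" response shifts probability mass toward thresholds above the stimulus intensity; trials flagged by the gaze gate are simply never fed into this update.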

Output

  • MD — mean deviation vs age-matched normal
  • PSD — pattern SD (focal departure after diffuse loss)
  • VFI — macula-weighted % of normal function
  • GHT — Glaucoma Hemifield Test (5 superior vs 5 inferior clusters)
  • Reliability — FL (Heijl-Krakau), FP, FN — gaze-gated

Turpin showed ZEST ≈ SITA in threshold accuracy with fewer presentations.9 Schulz validated iPad ZEST against HFA in glaucoma.10

[Sample 24-2 ZEST grid · OD — superior arcuate defect · MD −8.42 dB · PSD 6.71 dB · VFI 87% · GHT outside normal limits]
Test 02 · Acoustic IOP (β · research)

Acoustic IOP.

Contactless pressure screen — laptop emits, phone listens. Research-only signal. Not a replacement for Goldmann.

Open exam

Core principle

The eye is a viscoelastic ball under pressure. Drive it with low-frequency sound and it has a mechanical resonance whose frequency depends on the stiffness of the cornea + sclera — and within a patient, that stiffness is dominated by intraocular pressure. Higher IOP, stiffer eye, higher resonance frequency.

What the model actually captures

We don't listen for an echo with the microphone. The speaker drives the eye; the phone selfie camera tracks the iris landmarks frame-by-frame. The iris is rigidly coupled to the cornea, so its sub-pixel motion in the video is a direct read-out of the eye vibrating under acoustic excitation.

The pipeline:

  • Drive — 12 → 22 Hz linear chirp, 5 s, played through the laptop or phone speaker.
  • Capture — MediaPipe iris landmarks (468 L, 473 R) tracked at the camera frame rate.
  • Head-motion subtraction — iris position is referenced to head landmarks, then normalised by intercanthal distance, so the trace measures iris displacement relative to the face in size-invariant units.
  • FFT of the windowed iris-displacement trace aligned to the chirp.
  • Peak detection in the expected resonance band (14–20 Hz).
  • Peak Hz → mmHg via a per-patient calibration anchored to enrollment Goldmann.
f_inst(t) = f_0 + ( Δf / T ) · t  |  |X(f)| = FFT{ x_iris(t) }
Drive · Read-out

f_0 = 12 Hz, f_1 = 22 Hz, T = 5 s. Resonance peak f* expected in 14–20 Hz. Per-patient mapping f* → mmHg from iop-mmhg.js.
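The read-out stage can be sketched as a band-limited DFT scan for the resonance peak (sample rate, band edges and step are taken from the text or assumed; windowing and SNR gating are omitted):

```javascript
// Scan the expected resonance band for the peak magnitude of the
// iris-displacement trace x(t), evaluated by direct DFT at each candidate f.
function resonancePeakHz(trace, sampleRateHz, fLo = 14, fHi = 20, stepHz = 0.1) {
  let best = { f: fLo, mag: -1 };
  for (let f = fLo; f <= fHi + 1e-9; f += stepHz) {
    let re = 0, im = 0;
    for (let n = 0; n < trace.length; n++) {
      const phase = (2 * Math.PI * f * n) / sampleRateHz;
      re += trace[n] * Math.cos(phase);
      im -= trace[n] * Math.sin(phase);
    }
    const mag = Math.hypot(re, im);
    if (mag > best.mag) best = { f, mag };
  }
  return best.f; // f*, to be mapped to mmHg by the per-patient calibration
}
```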

Output

  • Estimated mmHg with 95% CI
  • Peak f* Hz and within-session stability across repeat takes
  • Signal-quality score from peak SNR + ambient-noise gate

Proposed validation at Shiley: acoustic mmHg estimate vs same-day Goldmann across IOP ranges, retest reliability over 4 weeks.

[Diagram — speaker drives a 12→22 Hz chirp · eye resonates · phone camera tracks iris at video fps · iris displacement x(t), 0→5 s, head-motion subtracted and ICD-normalised · spectrum |X(f)| peaks at f* = 17.2 Hz within the 14–20 Hz band · f* → mmHg via per-patient calibration]

Prior art · scientific basis

IOP MODULATES EYE RESONANCE
  • Coquart et al., J Biomech 1992 — FEM model: eye resonance frequencies are sensitive to IOP.
  • Kim et al., Sci Rep 2021 — Vibroacoustic resonance and CMVR scale monotonically with IOP (p < 0.0001).
ACOUSTIC TONOMETRY · IN-VIVO
  • Salz et al., J Glaucoma 2009 — Acoustic tonometry feasibility on porcine eyes: r = −0.98 vs IOP.
  • Osmers et al., TVST 2020 — First in-vivo human trial of an acoustic self-tonometer.
VIBRATION FROM A REGULAR CAMERA
  • Davis et al., ACM TOG · SIGGRAPH 2014 — "Visual Microphone": sub-pixel vibration recovered from ordinary video.
  • Wu et al., ACM TOG · SIGGRAPH 2012 — Eulerian Video Magnification reveals motions below pixel resolution.
CONTACTLESS DEFORMATION → IOP (PRECEDENT)
  • Luce, J Cataract Refract Surg 2005 — ORA: applied force → corneal deformation reads out biomechanics and IOP. Same logic, mechanical excitation rather than acoustic.

FOR RESEARCH ONLY · NOT A TONOMETER · NOT A SUBSTITUTE FOR GOLDMANN

Test 03 · Visual acuity

Visual acuity exam.

ETDRS / Bailey-Lovie logMAR on a physically calibrated display, at the patient's measured distance.

Open exam

Core principle

A 20/20 letter is defined as one that occupies exactly 5 arcminutes of visual angle. The patient is rarely 4 m from a laptop, so the optotype is physically resized in real time to preserve that same angular subtense at the live measured distance.

h = d · tan( 5 · MAR_arcmin )
Letter height (mm)

The harder half: pixel pitch

Computing the letter height in millimetres is the easy step. Rendering that height correctly on a screen the browser refuses to describe is the hard one — DOM physical units (1cm, 1mm) are reference units pinned to 96 DPI, not the actual display.

Glaucosim recovers the device's pixel pitch by identifying the screen, not by asking the patient. The user agent, screen.width × screen.height, and devicePixelRatio together fingerprint the device against an internal database of iPads, iPhones, MacBooks, Android flagships and common external monitors (Studio Display, Dell UltraSharp, LG UltraFine, BenQ PD27) — each indexed to a known CSS DPI. For external displays on macOS we read the monitor label exposed by the Window Management API, which the OS derives from the EDID.

p_mm/px = 25.4 / DPI_device  →  h_px = h_mm / p_mm/px
Pixel pitch & optotype pixels

25.4 mm/inch divided by the device's CSS DPI gives mm per CSS pixel — matching window.innerWidth. A webcam cross-check optionally validates the estimate by comparing measured iris-pair pixel span against the expected size at the live distance. Source: core/calibration.js.
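Both steps reduce to a few lines (sketch; function names are illustrative, and the CSS DPI is whatever the fingerprinting step recovered):

```javascript
// Step 1: angular spec -> physical height at the live measured distance.
// A 20/20 letter (MAR = 1 arcmin) subtends 5 arcmin total.
function letterHeightMm(distanceMm, marArcmin) {
  const rad = ((5 * marArcmin) / 60) * (Math.PI / 180); // arcmin -> degrees -> radians
  return distanceMm * Math.tan(rad);
}

// Step 2: physical height -> CSS pixels via the recovered device DPI.
function letterHeightPx(heightMm, cssDpi) {
  return heightMm / (25.4 / cssDpi); // 25.4 mm per inch
}
```

At 4 m a 20/20 letter comes out at ~5.82 mm; at a 60 cm laptop distance the same angular size is ~0.87 mm, which is why the pixel-pitch step has to be right.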

Output

  • Per-eye logMAR with 95% CI from staircase reversals
  • Snellen equivalent at 20/x
  • Conditions logged — distance, pixel pitch, ambient luminance, optotype size in mm and px

Sloan optotypes, 2-down-1-up staircase, 0.1 logMAR step, 5 reversals.12 Clinically meaningful Δ ≈ 0.1 logMAR.13

[Diagram — a letter subtending 5 arcmin: h = d · tan(5′) → 5.82 mm at 4 m · same angle at any distance]

DISTANCE FROM MODEL 01 · OPTOTYPE HEIGHT RECOMPUTED PER FRAME

Test 04 · Contrast sensitivity

Contrast sensitivity exam.

Pelli-Robson, age-normed. Background luminance gated by Model 04 before the run starts.

Open exam

Core principle

Pelli-Robson fixes letter size well above acuity threshold, then varies only one thing: contrast. Letters are shown in triplets that step down 0.15 log units of contrast. The contrast threshold is the last triplet the patient reads with at least two of three letters correct.

log CS = log₁₀( 1 / C_threshold )
Score

C Michelson contrast — ( L_max − L_min ) / ( L_max + L_min ). Normal log CS ≈ 1.95; ≤ 1.5 is impaired.14
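The triplet rule reduces to a short loop (sketch; assumes the chart's first triplet sits at log CS 0.00 with 0.15 log-unit steps):

```javascript
// Pelli-Robson scoring: walk the triplets in order; the score is the log CS
// of the last triplet with at least 2 of 3 letters correct.
function pelliRobsonScore(correctPerTriplet) {
  let logCS = null; // null if even the first (highest-contrast) triplet fails
  for (let i = 0; i < correctPerTriplet.length; i++) {
    if (correctPerTriplet[i] >= 2) logCS = 0.15 * i;
    else break; // first failed triplet ends the run
  }
  return logCS;
}
```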

Output

  • log CS per eye, last correct triplet rule
  • Age-band z-score against published norms
  • Slope vs prior tests in the longitudinal record

CS loss often precedes detectable acuity change in early glaucoma — and is sensitive to drug-induced ocular-surface change.

[Chart mock — 0.15 log-unit contrast steps per triplet: H V Z (1.05) · D S N (1.20) · C K R (1.35 · last read) · O N H (1.50) · V R S (1.65)]

LETTER SIZE FIXED · ONLY CONTRAST VARIES · LAST CORRECT TRIPLET = THRESHOLD

Test 05 · Anterior segment

Anterior segment exam.

Four graded outputs from a single frame per eye. Phone or laptop — patient picks the device.

Open exam

What we grade (each on every visit)

01 · Image quality

Q 0–1 · keep / retake

Per-frame score combining F_var (Laplacian focus), E_hist (exposure flatness), R_iris (iris ROI coverage from FaceMesh) and M_blur (motion blur). Reported alongside the three clinical grades so reviewers see how confident the capture is.

02 · Conjunctival hyperemia

Efron 0–4 · continuous redness index 0–1

MediaPipe FaceMesh segments the bulbar conjunctiva ROI in the primary-gaze frame. Redness index = ⟨R / (R + G + B)⟩ over the ROI, illumination-normalised against the patient's own ambient-lit cheek patch. Continuous score → ordinal Efron 0–4.

03 · Eyelid skin hyperpigmentation

POHSS 0–3 · ITA° in CIE L*a*b*

Upper-eyelid skin patch sampled in CIE L*a*b*. The melanin proxy ITA° = arctan( ( L* − 50 ) / b* ) · 180/π; we report the within-patient ΔITA° vs an infraorbital cheek reference patch, then map to the Periocular Hyperpigmentation Severity Scale (Sheth 2014).

04 · Orbital fat reabsorption

Aakalu PAP 0–3 · MRD1 in mm

Prostaglandin-associated periorbitopathy. MRD1 (pupil-center → upper-lid-margin) converted to mm via per-frame IPD scale, plus an upper-lid-sulcus depth proxy from shadow contrast. Mapped to the Aakalu 0–3 ordinal scale.
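The ITA° melanin proxy from grade 03 is direct arithmetic; a sketch (patch values assumed already averaged in CIE L*a*b*):

```javascript
// Individual Typology Angle in degrees from a patch's mean L* and b*.
function itaDegrees(Lstar, bstar) {
  return Math.atan((Lstar - 50) / bstar) * (180 / Math.PI);
}

// Within-patient delta: upper-lid patch vs infraorbital cheek reference.
// A more negative delta means a relatively darker (more pigmented) lid.
function deltaIta(lidPatch, cheekPatch) {
  return itaDegrees(lidPatch.L, lidPatch.b) - itaDegrees(cheekPatch.L, cheekPatch.b);
}
```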

Capture flow · phone or laptop

STANDARDISED CAPTURE · NO FLASH
  1 · Device choice — phone front cam or laptop webcam
  2 · Two-sensor gate — distance from Model 01 · ambient luminance from Model 04
  3 · One frame per eye — OD then OS · primary gaze · no flash, no gaze sweep
  4 · Per-frame quality (Model 05) — focus · exposure · iris ROI · blur · retake prompt if Q < threshold
  5 · Storage — frame + 4 grades + metadata · Supabase storage, org-scoped RLS

EVERY FRAME TAGGED WITH DEVICE · DISTANCE · LUX · Q · MODEL VERSION · TIME

Test 05 · Anterior segment · models

From classical CV today
to a trained model from our cohort.

V0 ships with hand-engineered features per output. V1+ is a multi-task CNN, on the three clinical grades only, trained on labels the clinician writes in the dashboard. Image quality stays deterministic.

V0 · ships day one

Classical CV, no training data required.

Each grading is computed deterministically from MediaPipe landmarks + per-pixel color in a stable ROI. Calibrated against published reference photographs of each scale.

Image quality

Q = α·F_var + β·E_hist + γ·R_iris − δ·M_blur · gates retake in real time · reported alongside the three grades so reviewers see the capture's confidence.

Hyperemia

ROI = bulbar conjunctiva mask (MediaPipe) · feature = ⟨R / (R+G+B)⟩ normalised against the patient's cheek patch · ordinal map to Efron 0–4 via reference-photo LUT.

Hyperpigmentation

Upper-lid skin patch in CIE L*a*b* · feature = ITA° + ΔITA° vs cheek · ordinal map to POHSS 0–3.

Orbital fat reabsorption

FaceMesh upper-lid + pupil → MRD1 (mm) via per-frame IPD scale · sulcus shadow contrast as a depth proxy · ordinal map to Aakalu PAP 0–3.

V1+ · trained from our cohort

Multi-task CNN, active-learning loop.

Every clinician review in the dashboard adds three ordinal labels per take. The platform is the labelling tool.

ACTIVE-LEARNING LOOP — home capture (phone / laptop) → V0 predicts 3 grades + confidence → clinician review (confirm / correct) → labelled set grows to N → retrain CNN V_n → V_n replaces V0 once above the κ threshold

Model

  • Backbone — ConvNeXt-Tiny or EfficientNet-B3 (ImageNet-pretrained)
  • Three heads — ordinal regression (CORN loss), one per grading
  • Inputs — auto-poster frame + paired metadata (distance, luminance, IQ score)
  • Eval — weighted κ vs grader consensus; per-grade ROC; calibration plot
  • Active sampling — prioritise takes where V0 confidence is low or Vn-vs-Vn−1 predictions disagree

Versioned model files; predictions never overwrite labels. The dashboard surface lets a fellow drag a slider to re-grade — every correction lands in the training set. UCSD-labelled corpus stays UCSD-owned.

Test 06 · NEI VFQ-25

Vision-related quality of life.

The standard 25-item PRO, voice or tap, on a home cadence rather than annual.

Open exam

Core principle

25 questions split into 12 subscales — general vision, near, distance, peripheral, ocular pain, role limitations, dependency, social, mental, color, driving, plus a general-health item. Each response is rescaled 0–100. Subscale = mean of items; composite = mean of vision-targeted subscales.

Composite = ( 1 / 11 ) · Σ Subscale_i
VFQ-25 scoring

Calibrated and validated by Mangione et al. 2001.15 The shift here is cadence: we run it every 90 days at home, so trajectory becomes visible.
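The scoring reduces to two means (sketch; items assumed already rescaled to 0–100 per the VFQ-25 manual):

```javascript
// Subscale = mean of its rescaled items.
function subscaleScore(items0to100) {
  return items0to100.reduce((a, b) => a + b, 0) / items0to100.length;
}

// Composite = mean of the 11 vision-targeted subscales (general health excluded).
function vfqComposite(visionSubscales) {
  if (visionSubscales.length !== 11) throw new Error("expected 11 vision-targeted subscales");
  return visionSubscales.reduce((a, b) => a + b, 0) / 11;
}
```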

Output

  • Composite score 0–100
  • Per-subscale bar chart, current vs baseline
  • Longitudinal Δ against the patient's own anchor
[Bar chart — 12 subscales scored 0–100 (general vision, near, distance, peripheral, ocular pain, role limits, dependency, social, mental, color vision, driving, general health) · composite 79 · −2 vs baseline]

VOICE OR TAP · ~7 MIN · 90-DAY CADENCE

Test 07 · Drop adherence

Medication adherence,
as a clinical variable.

Reminders, single-tap confirmation, structured missed-dose reason — then overlaid on visual-function trend.

Open exam

Core principle

Adherence is invisible because self-report at the next visit overstates it by ~31% versus objective measurement.3 Glaucosim turns adherence into a continuously logged variable: a reminder fires at every scheduled dose, the patient confirms with one tap, and missed doses are captured with a structured reason rather than a generic apology.

The clinician dashboard overlays missed-dose density on the MD trend, so adherence vs progression is one chart — and a behavioural conversation has a concrete artefact behind it.

Output

  • 30-day adherence ribbon
  • Rolling % at 30 / 90 / 365 days
  • Missed-dose timeline with reason categories
  • Adherence × MD overlay on the longitudinal chart
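The rolling figures above are a windowed ratio over the dose log; a sketch with an assumed {dueISO, confirmed} record shape:

```javascript
// Rolling adherence: confirmed doses over scheduled doses in a trailing window.
function rollingAdherence(doses, nowMs, windowDays) {
  const cutoff = nowMs - windowDays * 24 * 3600 * 1000;
  const inWindow = doses.filter((d) => {
    const t = Date.parse(d.dueISO);
    return t >= cutoff && t <= nowMs;
  });
  const taken = inWindow.filter((d) => d.confirmed).length;
  return {
    taken,
    scheduled: inWindow.length,
    pct: inWindow.length ? (100 * taken) / inWindow.length : null,
  };
}
```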
[Chart mock — 30-day ribbon 26 / 30 · MD trend × missed-dose density over 12 mo, −0.62 dB/yr · missed-dose cluster coincides with MD downturn]

ADHERENCE BECOMES A VARIABLE, NOT A SELF-REPORT

Why I built this

I built Glaucosim because two appointments a year can't catch a disease that damages the optic nerve silently, fiber by fiber, between visits.

Mauro Gobira
Founder · Ophthalmology MD · Visiting Scholar, Shiley Eye Institute
glaucosim.com · app.glaucosim.com
Mauro Gobira