2025 Essay Competition Winner “AI in Eye Care”

Introduction 

More than half a billion years ago, the first eyes appeared—small, simple organs that transformed life itself.

Today, that ancient organ meets a newborn mind, only decades old. This new intelligence holds the promise of clarity, but also the peril of distortion.

Ophthalmology is uniquely placed within this unfolding story. Its history is one of relentless progress, from the crude cataract needle of antiquity to the ophthalmoscope, OCT, and laser surgery, each advancing what could be observed and treated. AI, however, represents a departure from the innovations of the past: not a passive tool wielded by the ophthalmologist, but an independent reasoner capable of working alongside them. Yet like any newborn mind, it must be nurtured wisely, or it will inherit our deepest biases.

This essay does not aim to canonise AI as ophthalmology’s saviour. Rather, it asks: will artificial intelligence sharpen our vision—or deepen our blind spots?

The Promise 

In ophthalmology, few domains reveal the promise of artificial intelligence more vividly than population screening and disease prediction. The specialty’s imaging-rich nature makes it especially well-suited to AI, with diseases routinely captured in pictures primed for computational analysis. Diabetic retinopathy (DR) has become the clearest testing ground: millions of retinal images must be screened each year, a task well beyond the reach of many health systems, particularly in low- and middle-income countries. The search for cost-effective, scalable solutions has therefore become a global priority.

The earliest generation of automated DR screening used machine learning (ML) trained to detect predefined lesions such as microaneurysms or haemorrhages (1). More recently, deep learning (DL), a branch of ML, has employed multilayered neural networks that iteratively refine their parameters, learning directly from vast datasets rather than from manually annotated features (1). In Gulshan et al.’s landmark study of 128,175 retinal images, a DL algorithm achieved sensitivities and specificities above 90% for moderate-to-severe DR (2). The promise seems greatest in low-resource settings: in Rwanda, AI-assisted DR grading via the Cybersight telehealth platform enabled immediate referrals, same-day counselling, and higher adherence compared with human grading (3). Such systems could mean the difference between timely diagnosis and no diagnosis at all.

Beyond screening, AI is beginning to anticipate disease trajectories. In glaucoma, where progression often escapes detection until years after damage has begun, Yousefi et al. trained an unsupervised model on more than 2,000 eyes, enabling recognition of deterioration 3.5 years earlier than conventional methods (4). In age-related macular degeneration, Yim et al. developed a deep-learning system that could predict which patients with wet AMD in one eye would progress to late disease in the other (5). Predictive AI can help stratify patients by urgency, and in resource-limited settings it can direct scarce specialist time to those most at risk.

Oculomics, another emerging application of AI in eye care, studies the retina as a window to systemic health. In a landmark study, Poplin et al. showed that fundus photographs could predict age, sex, smoking status, and even cardiovascular events (6). Beyond vascular risk, oculomics has identified biomarkers of neurodegenerative disease, which show promise in aiding the diagnosis of Alzheimer’s disease and Parkinson’s disease (7). If realised, AI-driven oculomics could “turn the slit lamp into a stethoscope” of systemic health.

The promise of AI in ophthalmology is undeniably vast. Yet an ominous question remains: if its potential is so compelling, why is it met with as much scepticism as excitement? The very qualities that make AI transformative also make it vulnerable to bias and misuse. To understand AI’s role in eye care, we must weigh its potential for good against its risk of harm.

The Perils 

Artificial intelligence in healthcare does not exist in a vacuum. Algorithms are trained on human data, carrying the weight of racism, prejudice, and unequal access to care. Bias can infiltrate AI models at every stage. Sampling bias arises when training data are drawn largely from narrow, non-representative populations—the so-called WEIRD populations (Western, educated, industrialised, rich, and democratic)—leaving out underserved communities, where the need is greatest.

In ophthalmology, the risks are not merely theoretical. Deep learning systems for DR already show performance disparities: in the EyePACS dataset, accuracy reached 73% for lighter-skinned individuals but only 60.5% for darker-skinned ones (8). That these very groups are historically marginalised makes the danger sharper. Left unchecked, this may hail a new era in which historical injustices are not dismantled but perpetuated under the guise of innovation.

Mitigating these risks requires transparency, accountability, and continual scrutiny. Yet even if bias is addressed, deep learning brings another peril: its “black box” reputation—models whose inner workings are opaque (9). In everyday contexts this may not matter, but in healthcare, where decisions carry profound consequences, transparency is critical. Without insight into how conclusions are reached, errors are difficult to detect or correct. A system that is reliable in trials may falter in clinics, where diversity and complexity introduce unseen variables. Failures, often borne disproportionately by vulnerable groups, then become harder to expose.

When AI errs, patients suffer. Missed diagnoses raise urgent questions of liability: is responsibility with the ophthalmologist, the manufacturer, or the regulator? Current malpractice frameworks are ill-suited to answer. Policymakers are beginning to respond: the EU’s new AI Act designates medical AI as “high risk,” imposing strict standards of transparency, quality, and accountability (10). The principle is clear: innovation must not come at the expense of trust—the very currency on which ophthalmology depends.

Conclusion 

To say that AI will solve all of eye care’s problems would be somewhat myopic.

Its capacity to expand screening and increase access is real, but so are its risks. Overreliance could blunt clinical judgement and weaken empathy.

The future of ophthalmology should not be written by machines alone but guided by the human gaze. AI’s greatest contribution will be to ensure that innovation serves equity, compassion, and the patients who entrust us with their sight.

References 

1. Grauslund J. Diabetic retinopathy screening in the emerging era of artificial intelligence. Diabetologia. 2022 Sep;65(9):1415-23.

2. Gulshan V, Peng L, Coram M, Stumpe MC, Wu D, Narayanaswamy A, Venugopalan S, Widner K, Madams T, Cuadros J, Kim R. Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs. JAMA. 2016 Dec 13;316(22):2402-10.

3. Mathenge W, Whitestone N, Nkurikiye J, Patnaik JL, Piyasena P, Uwaliraye P, Lanouette G, Kahook MY, Cherwek DH, Congdon N, Jaccard N. Impact of artificial intelligence assessment of diabetic retinopathy on referral service uptake in a low-resource setting: the RAIDERS randomized trial. Ophthalmology Science. 2022 Dec 1;2(4):100168.

4. Yousefi S, Kiwaki T, Zheng Y, Sugiura H, Asaoka R, Murata H, Lemij H, Yamanishi K. Detection of longitudinal visual field progression in glaucoma using machine learning. American Journal of Ophthalmology. 2018 Sep 1;193:71-9.

5. Yim J, Chopra R, Spitz T, Winkens J, Obika A, Kelly C, Askham H, Lukic M, Huemer J, Fasler K, Moraes G. Predicting conversion to wet age-related macular degeneration using deep learning. Nature Medicine. 2020 Jun;26(6):892-9.

6. Poplin R, Varadarajan AV, Blumer K, Liu Y, McConnell MV, Corrado GS, Peng L, Webster DR. Prediction of cardiovascular risk factors from retinal fundus photographs via deep learning. Nature Biomedical Engineering. 2018 Mar;2(3):158-64.

7. Suh A, Ong J, Kamran SA, Waisberg E, Paladugu P, Zaman N, Sarker P, Tavakkoli A, Lee AG. Retina oculomics in neurodegenerative disease. Annals of Biomedical Engineering. 2023.

8. Burlina P, Joshi N, Paul W, Pacheco KD, Bressler NM. Addressing artificial intelligence bias in retinal diagnostics. Translational Vision Science & Technology. 2021 Feb 5;10(2):13.

9. Hassija V, Chamola V, Mahapatra A, Singal A, Goel D, Huang K, Scardapane S, Spinelli I, Mahmud M, Hussain A. Interpreting black-box models: a review on explainable artificial intelligence. Cognitive Computation. 2024 Jan;16(1):45-74.

10. European Union. Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts. Official Journal of the European Union. 2024 Aug 1;L 2024/1689:1-108.

Dr Shenelle Wickramarathna

Doctor at Basildon & Thurrock University Hospitals NHS Foundation Trust

https://www.linkedin.com/in/shenelle-wickramarathna-4b8487307/