Retinal Screener

Taking a Look Behind the Screens

Showing posts with label TAT. Show all posts

I've always said that retinal screeners and failsafe are a lot like teenagers and sex: we all talk about it; no one really knows how to do it; but we think everyone else is doing it all the time, so we say we’re doing it too. And until BARS holds another Failsafe Discussion Day (or sex education class) that's likely to remain the case.

But one area where I always assumed the National Diabetic Eye Screening Programme differed from the average teenage boy was in the overwhelming compulsion to draw penises. Frankly I didn't think we had any (compulsion, not penises). Admittedly, I've been known to explain retinal images to patients with the words "If it looks like Mars, it's normal; if it looks like Venus, it's ischaemic; and if it looks like Uranus, it could be a macula hole", but I've never had the balls to mention male genitalia, and the only thing I've drawn on is my experience.

That all looks set to change, however, following the arrival of last month's Test & Training set.

Back in 2013, the journal Psychological Science published a study entitled The Invisible Gorilla Strikes Again, which was based on the phenomenon of 'Inattentional Blindness', whereby observers miss unexpected, but salient, events while engaged in other tasks. It had previously been demonstrated by videos such as this one:


With the exception of Dian Fossey and the Harlem Globetrotters, however, most people viewing that video weren't specialists on the subject matter, so researchers at Harvard Medical School and Brigham & Women's Hospital in Boston decided to examine whether inattentional blindness also affects expert observers. They asked 24 radiologists to examine five CT scans of lungs for cancer nodules, a specialised task highly familiar to all of them. The last of those five scans looked like this...

Gorillas in the Midst


Despite being highly skilled observers, 20 of those 24 radiologists, or 83%, failed to spot the gorilla in the top right-hand corner. The other four went ape when they saw it.

The results indicate not that the radiologists weren't looking carefully enough, but rather that their brains were so focused on spotting cancer nodules, they were blind to anything else.

The resulting press coverage of this study prompted a lot of discussion in my screening programme about whether or not we would notice such a thing when grading a retinal image. Most graders found it hard to believe they could ever miss anything so out of place. But then the radiologists probably would have said the same thing too. And let's face it, you could hide Lord Lucan in a nasal left view being graded after four-thirty on a Friday afternoon, and we'd all be none the wiser.

My conclusion was that there could be no better environment to further test this phenomenon than a diabetic eye screening programme. Not only would we be reducing the risk of blindness, we'd be assessing the risk of inattentional blindness. Once you include the blind luck required to get 100% on the TAT, we'd be on the verge of designing a triple threat study of high quality sight-impaired research, straddling Diabetes Care, The British Journal of Ophthalmology and Psychobabble Weekly.

And if I was thinking that way, I felt sure the national team were too. For the past two years I've been quietly confident that somewhere in Gloucester, behind closed doors, in a dimly lit room, and possibly in the dead of night, someone was green-lighting a top-secret research project to test the inattentional blindness of screeners, and that one day I would open an apparently innocuous image on the TAT, only to be confronted with a picture of Steve Aldington, the silverback godfather of grading, beating his chest as he swings through the temporal arcade, grabbing veins like vines and shaking his fist at the camera.

Sadly it hasn't happened. Although if it had, 83% of us wouldn't know.

The thing about blind faith, however, is that it's eventually rewarded with a sight worth savouring. The April Test & Training set contained an unusual error in screen 005, which was assigned a ground-truthed grade of R0M0, despite the clear presence of retinopathy. My assertion is that this was a deliberate and subtle clue to the following month's invisible gorilla warfare...

Screen 035 in May's Test & Training set was this temporal left image, graded as R2M1:

Maculopathy


It's clearly M1. But in this case the M stands for Member...

Maculopathy With Knobs On


So there you have it. The April set featured a cock-up. The May set featured a cock. And I'm standing proud amongst the 17% who saw it.

There have been times over the past few months when I've felt that 'Year 3 of TAT' is so called because some of the images are so poor, they look like they were taken by a class of 7-year-olds. I'm still convinced that one or two were captured by candlelight with the patient's glasses still on, using a pinhole camera knocked up from an old occluder and a box of tropicamide.

I know we're meant to be saving sight, but at times I've been more worried about the vision of the photographer who felt they were acceptable images. We live in an age of digital photography, where the only cost of a bad photo is a few seconds of the operative's time, so surely we should be wiping those smudges off the lens, removing the dust from the microchip, and then bumping up the flash and having another go.

Of course, the whole point of Test & Training is to perform an EQA function, and it's definitely succeeding there. I've identified a few outliers myself, just from looking at the photos. Now we just need to find out which programmes they work for, and teach them how to use a camera.

Personally I think each Test & Training image should be coded to identify the photographer, and come with a scorecard, allowing us all to rate them on a scale of one to ten. Thus every image set would not only give an insight into the skill of the grader, but also provide quality assurance for the photographers. Screeners whose photos receive consistently low marks could be flagged up for more training, in the same way that graders are.

If anything, this is an even more vital aspect of EQA. It's all very well checking that your graders are up to the job, but if the images they're presented with are sub-standard, no amount of grading skill is going to compensate for that, and disease will undoubtedly be missed. The photos are the foundation on which the whole screening process is based, and if programmes make do with poor quality images, that house of cards will soon come crashing down around us.

On the plus side, of course, grading the occasional dodgy photo does give us a bit of practice with the more unusual and challenging images we all face from time to time. So have a go at this one...

Retina Display
Suffice it to say, there's a lot going on there. Fragments of the temporal arcade are still visible, particularly the superior artery, but the optic disc appears to have faded into the background, possibly as a result of papilloedema. The macula is ill-defined, with some haemorrhaging at the fovea, and some lighter patches which could be AMD, hard exudates or clumps of drusen. Overall, the picture looks so ischaemic that virtually the entire retinal blood supply has been shut down, with obvious implications for vision.

It's clearly not a well eye. In fact it's not an eye at all. It's actually a photo of the sun. But I think I'd go for R1M1 and a routine referral.

About this blog

I'm a Retinal Screener and Grader currently working for the NHS as part of a Diabetic Retinopathy Screening Programme somewhere in England.
