How does this work? In regular practice, doctors examine photographs of the back of the patient’s eye for lesions such as microaneurysms, hemorrhages, and hard exudates, all of which are indicative of bleeding and fluid leakage. What Google has done is create a dataset of 128,000 images, each of which was examined by 3–7 ophthalmologists drawn from a panel of 54, and feed that graded information to the computer to “train” it to distinguish a healthy eye from one that could show signs of diabetic retinopathy.
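The core idea here is ordinary supervised learning: graded examples in, a classifier out. Google’s actual system is a deep convolutional network trained on raw fundus photographs, but the principle can be sketched with a toy logistic-regression model on hypothetical hand-made features (lesion counts are stand-ins invented for illustration, not the real inputs):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(features, labels, lr=0.1, epochs=500):
    """Fit logistic regression by plain gradient descent."""
    w = [0.0] * len(features[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(features, labels):
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = p - y  # gradient of log-loss w.r.t. the logit
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

# Hypothetical graded "images", each reduced to three feature counts:
# [microaneurysms, hemorrhages, hard exudates]; 1 = signs of retinopathy.
X = [[0, 0, 0], [1, 0, 0], [0, 1, 0], [5, 3, 2], [4, 2, 1], [6, 1, 3]]
y = [0, 0, 0, 1, 1, 1]
w, b = train(X, y)

print(predict(w, b, [0, 0, 0]) < 0.5)  # clean eye scores low: True
print(predict(w, b, [5, 2, 2]) > 0.5)  # lesion-heavy eye scores high: True
```

The real model learns its features directly from pixels rather than from pre-counted lesions, which is precisely what makes the deep-learning approach attractive here.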
According to Google, they then tested their algorithm against a panel of 7 or 8 U.S. board-certified ophthalmologists and found that its results were on par with the doctors’. Beyond being a great example of machine learning in use, Google hopes the speed and accuracy of the algorithm could make it useful in parts of the world where access to doctors and scanning equipment is difficult, where it could quickly identify patients who might be in need.
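An evaluation like this needs a reference standard to compare against. One common way to build one from multiple graders is a simple majority vote per image; the sketch below uses that approach with hypothetical grades (Google’s actual adjudication protocol may differ):

```python
from collections import Counter

def majority_label(grades):
    """Return the most common grade given by a panel of graders."""
    return Counter(grades).most_common(1)[0][0]

# Hypothetical per-image grades (0 = healthy, 1 = referable retinopathy),
# with a different number of graders per image, as in the dataset.
panel_grades = [
    [0, 0, 1],        # 3 graders
    [1, 1, 1, 0, 1],  # 5 graders
    [0, 0, 0, 0],     # 4 graders
]
reference = [majority_label(g) for g in panel_grades]
print(reference)  # [0, 1, 0]

# Fraction of images where hypothetical algorithm output matches the
# majority-vote reference — a crude stand-in for the study's metrics.
algorithm = [0, 1, 1]
agreement = sum(a == r for a, r in zip(algorithm, reference)) / len(reference)
print(round(agreement, 2))  # 0.67
```

In practice, studies of this kind report sensitivity and specificity rather than raw agreement, since missing a diseased eye and flagging a healthy one carry very different costs.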