Image: Left: Overview of the ARM. A digital camera captures the same field of view (FoV) as the user and passes the image to an attached compute unit capable of running real-time inference of a machine-learning model. The results are fed back into a custom AR display, which is in line with the ocular lens and projects the model output onto the same plane as the slide. Right: A picture of the prototype, which has been retrofitted into a typical clinical-grade light microscope (Photo courtesy of Google).
A team of researchers at Google LLC (Menlo Park, CA, USA) has developed a prototype Augmented Reality Microscope (ARM) platform that could help accelerate and democratize the adoption of deep learning tools for pathologists around the world. The platform comprises a modified light microscope that performs real-time image analysis and presents the results of machine learning algorithms directly in the user's field of view. The ARM can be retrofitted into existing light microscopes in hospitals and clinics using low-cost, readily available components, and without the need to analyze whole-slide digital scans of the tissue.
In a talk delivered at the Annual Meeting of the American Association for Cancer Research (AACR), with an accompanying paper "An Augmented Reality Microscope for Real-time Automated Detection of Cancer" (under review), Google described how its researchers demonstrated the potential utility of the ARM by configuring it to run two different cancer detection algorithms: one that detects breast cancer metastases in lymph node specimens, and another that detects prostate cancer in prostatectomy specimens. These models can run at magnifications between 4x and 40x, and the result of a given model is displayed by outlining detected tumor regions with a green contour. These contours help draw the pathologist's attention to areas of interest without obscuring the underlying tumor cell appearance. Although both cancer models were originally trained on images from a whole-slide scanner with a significantly different optical configuration, they performed remarkably well on the ARM without additional retraining.
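The green-contour display described above can be sketched in a few lines. This is a minimal illustration, not code from the paper: it assumes the model's per-pixel output has already been thresholded into a binary tumor mask, and the function name and mask representation are hypothetical. The outline is computed as the set of tumor pixels that touch a non-tumor 4-neighbor, then painted green onto a copy of the camera frame.

```python
import numpy as np


def overlay_tumor_outline(frame, tumor_mask, color=(0, 255, 0)):
    """Draw a green outline around tumor regions on an RGB camera frame.

    frame:      (H, W, 3) uint8 image from the microscope camera.
    tumor_mask: (H, W) boolean array; True marks pixels the model
                classified as tumor (a hypothetical representation).
    Returns an annotated copy of the frame; the input is not modified.
    """
    mask = tumor_mask.astype(bool)
    # A mask pixel is 'interior' if all four of its direct neighbors are
    # also tumor pixels; the outline is the mask minus its interior.
    padded = np.pad(mask, 1, constant_values=False)
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                padded[1:-1, :-2] & padded[1:-1, 2:]) & mask
    outline = mask & ~interior
    annotated = frame.copy()
    annotated[outline] = color  # paint the contour in green
    return annotated
```

In the prototype this kind of annotation is rendered by the AR display in the optical path rather than on a monitor, so the contour appears superimposed on the slide itself while leaving the underlying cell morphology visible.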
Google believes that the ARM could have a significant impact on global health, especially for the diagnosis of infectious diseases, including tuberculosis and malaria, in developing countries. Additionally, even in hospitals that adopt a digital pathology workflow in the near future, the ARM could be used alongside that workflow in settings where scanners still face major challenges or where rapid turnaround is required (e.g. cytology, fluorescence imaging, or intra-operative frozen sections). The researchers will continue to explore how the ARM can help accelerate the adoption of machine learning for a positive impact around the world.