Today I read a paper titled “Simulation of Color Blindness and a Proposal for Using Google Glass as Color-correcting Tool”
The abstract is:
The human visual color response is driven by specialized cells called cones, which exist in three types, viz. R, G, and B.
Software is developed to simulate how color images are displayed for different types of color blindness.
Given the default color deficiency associated with a user, it generates a preview of the rainbow (in the visible range, from red to violet) and displays, side by side with a color image provided as input, the corresponding colorblind rendering.
The idea is to apply image processing after image acquisition to enable better perception of colors by the color blind.
Examples of pseudo-correction are shown for the case of Protanopia (red blindness).
The system is adapted to the screen of an iPad or a cellphone, on which the colorblind user observes the camera image processed to reveal color detail previously imperceptible to the naked eye.
Looking ahead, wearable computer glasses could be manufactured to provide corrected image playback.
The approach can also provide augmented reality for human vision by adding UV or IR responses as a new feature of Google Glass.
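The simulation-plus-pseudo-correction pipeline the abstract describes is usually implemented by transforming RGB into LMS cone space, zeroing out (here, reconstructing) the missing cone response, and then redistributing the lost color difference into the channels the viewer can still see (daltonization). Here is a minimal sketch for the protanopia case; the matrices and the 0.7 redistribution weights are standard values from the daltonization literature, my assumptions rather than anything taken from the paper:

```python
import numpy as np

# RGB -> LMS transform (values commonly attributed to Vienot, Brettel &
# Mollon, 1999); an assumption, not the paper's own matrix.
RGB2LMS = np.array([
    [17.8824,   43.5161,  4.11935],
    [3.45565,   27.1554,  3.86714],
    [0.0299566, 0.184309, 1.46709],
])
LMS2RGB = np.linalg.inv(RGB2LMS)

# Protanope simulation: the missing L response is estimated from M and S.
PROTAN = np.array([
    [0.0, 2.02344, -2.52581],
    [0.0, 1.0,      0.0],
    [0.0, 0.0,      1.0],
])

# Error-redistribution matrix for pseudo-correction (daltonization):
# shift information lost in the red channel into green and blue.
CORRECT = np.array([
    [0.0, 0.0, 0.0],
    [0.7, 1.0, 0.0],
    [0.7, 0.0, 1.0],
])

def simulate_protanopia(img):
    """img: float array of shape (..., 3), RGB in [0, 1]."""
    lms = img @ RGB2LMS.T           # per-pixel RGB -> LMS
    lms_p = lms @ PROTAN.T          # remove/replace the L response
    return np.clip(lms_p @ LMS2RGB.T, 0.0, 1.0)

def daltonize(img):
    """Add the color detail a protanope loses back into visible channels."""
    error = img - simulate_protanopia(img)
    return np.clip(img + error @ CORRECT.T, 0.0, 1.0)

# A pure red pixel: heavily altered in the simulation, so the corrected
# version shifts the lost red information into green and blue.
red = np.array([[1.0, 0.0, 0.0]])
sim = simulate_protanopia(red)
fixed = daltonize(red)
```

The same two functions cover the rainbow preview mentioned above: feed in a red-to-violet gradient and display the original and `simulate_protanopia` output side by side.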