PhD Candidate | Software Consultant


Photo of Lee Stearns

I am a PhD candidate at the University of Maryland, College Park, where I am completing my dissertation, "HandSight: A Touch-Based Wearable System to Increase Information Accessibility for People with Visual Impairments". I expect to graduate in August 2018. My co-advisors are Professors Rama Chellappa and Jon Froehlich.

I also provide software and web development consulting when my research schedule permits. I have designed and built utilities for processing and visualizing medical image data, as well as mobile and web applications that help professional and amateur shooters calculate ballistics for a client's custom rifle scope.

Download My C.V.


The goal of my research is to build a wearable device that helps people with visual impairments access information through touch: for example, reading and exploring printed text, identifying colors and textures, or gesturing on the surface of the body to control a mobile device. The key component is a small finger-mounted camera, combined with additional sensors and haptic feedback worn on the fingers or wrist. My research spans multiple disciplines: applying computer vision and machine learning techniques to process sensor readings, building wearable prototypes and custom circuitry, and designing and testing accessible interfaces through user studies.

Reading printed text with a finger-mounted camera

Reading and Exploring Printed Text

HandSight is a wearable system that assists visually impaired users in reading printed text and exploring the spatial layout of a page. It combines a small finger-mounted camera with speech output plus audio and haptic cues that help users explore or read physical documents. The reading experience was inspired by braille: users explore where text and images are positioned on the page through touch, and they control the reading speed as they move their fingers across the page.

Publications: ACVR 2014, TACCESS Nov 2016

The TouchCam prototype

Controlling Devices with On-Body Input

TouchCam uses a finger-mounted camera and several other finger- and wrist-worn sensors to let users control mobile devices through touch-based gestures on the surface of their own body. Because the sensors are positioned on the gesturing hand rather than on the upper body or the target interaction surface, TouchCam scales easily and supports intuitive mappings between body locations and applications anywhere within the user's reach. For example, users can tap their wrist to check the time or swipe on their thigh to interact with a health and fitness app.
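The core idea can be illustrated with a short sketch. This is not the published TouchCam implementation: the feature dimensions, location set, classifier, and application mapping below are all hypothetical placeholders, meant only to show how a classifier over finger-worn sensor data could route gestures to location-specific applications.

```python
# Illustrative sketch only -- not the published TouchCam pipeline.
# A classifier predicts which body location was touched from finger-worn
# sensor features, then the recognized gesture is routed to the app
# mapped to that location.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

LOCATIONS = ["wrist", "palm", "thigh"]  # hypothetical location set
APP_FOR_LOCATION = {"wrist": "clock", "palm": "home screen", "thigh": "fitness"}

# Placeholder training data: each row stands in for a feature vector built
# from finger-worn sensors (e.g., IMU orientation plus infrared reflectance).
rng = np.random.default_rng(0)
X_train = rng.random((300, 16))
y_train = rng.integers(0, len(LOCATIONS), size=300)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

def dispatch(sensor_features, gesture):
    """Classify the touched body location, then route the gesture there."""
    location = LOCATIONS[int(clf.predict(sensor_features.reshape(1, -1))[0])]
    print(f"'{gesture}' at {location} -> {APP_FOR_LOCATION[location]} app")

dispatch(rng.random(16), "swipe")
```

Because the model only sees signals from the gesturing hand, the same classifier works no matter which surface is touched, which is what makes the location-to-application mapping scale to many body locations.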

Publications: ICPR 2016, ASSETS 2017, IMWUT Dec 2017

TouchCam Video

Identifying clothing colors and textures

Identifying Colors and Textures

The finger-mounted camera system can support robust identification of colors and textures through touch-based interaction, mitigating the distance and lighting issues that affect many existing approaches. Our preliminary investigation achieved high (99.4%) texture classification accuracy on a small dataset of clothing patterns, suggesting that users could train the system to reliably recognize specific articles of clothing in their own closet from only a few training examples. Follow-up work will explore how well these results extend to a larger, more varied dataset and how best to convey visual texture and color information to visually impaired users.
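As a rough illustration of how touch-based texture recognition from a few examples could work, here is a minimal sketch using local binary pattern (LBP) histograms and a linear SVM. The feature choice and classifier are assumptions for illustration, not necessarily the pipeline from the ASSETS 2017 paper, and the image data is a random placeholder for close-up camera frames.

```python
# Illustrative sketch under stated assumptions: classify close-up fabric
# patches by texture using uniform-LBP histograms and a linear SVM,
# trained from only a few examples per garment.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

P, R = 8, 1  # LBP neighborhood: 8 samples at radius 1

def lbp_histogram(gray_image):
    """Normalized histogram of uniform LBP codes for one grayscale patch."""
    codes = local_binary_pattern(gray_image, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist

# Placeholder data: 4 hypothetical garments, 5 grayscale patches each.
rng = np.random.default_rng(0)
patches = rng.random((20, 64, 64))
labels = np.repeat(np.arange(4), 5)

features = np.array([lbp_histogram(p) for p in patches])
classifier = SVC(kernel="linear").fit(features, labels)
print(classifier.predict(features[:2]))  # predicted garment ids
```

LBP histograms are compact and fairly robust to illumination changes, which is one reason a contact-based camera can plausibly recognize fabrics from only a handful of training patches.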

Publication: ASSETS 2017

Augmented reality magnification of a magazine document using a finger-mounted camera

Magnifying with Augmented Reality

Augmented reality can magnify and enhance visual information, making the physical world more accessible for people with vision impairments. Unlike traditional CCTV or handheld magnifiers, a wearable AR system is portable and always available, provides a perceptually large display, and can show magnified output that is co-located with or even overlaid on top of the original content within the wearer’s field of view. Our preliminary investigation explores potential design dimensions, and follow-up work will evaluate their effectiveness and compare them with existing magnification approaches.

Publication: ASSETS 2017

In the News:

  • Big Ten Network: How Maryland researchers are improving reading for the visually impaired: BTN LiveBIG
  • WUSA9: UMD Researchers Hope to Help the Blind 'Experience the World'
  • TERP Magazine: A New Way With Words: Handy Device to Help Blind Read without Braille
  • New Scientist: Tiny fingertip camera helps blind people read without braille
  • PC Magazine: Fingertip Camera Reads to the Blind
  • Futurism: This New Tech Is Letting Blind People Read Without Braille
  • PSFK: Fingertip Cameras May Help The Blind Read Without Braille


My research group is large and diverse. Here are a few of the professors, students, and others with whom I have collaborated:

Yumeng Wang
Ji Hyuk Bae
Bridget Cheng
Tony Cheng
Meena Sengottuvelu
Darren Smith
David Ross


I have had the privilege of mentoring several talented undergraduate and high school students as they pursued their group or independent research projects:

Victor DeSouza
Alexander Medeiros
Meena Sengottuvelu
Chuan Chen
Jessica Yin
Harry Vancao
Eric Lancaster