Mission: A Chrome extension that processes all of the images on a webpage. If an image is missing an alt attribute, altML uses machine learning to generate one so that screen readers can actually describe it.
Created by May F., James K., and Jonathan A. during the CodeLabs Internship 2020. Mentored by Saharsh Yeruva.
altML is made up of three parts: the Chrome extension that the user interacts with, the machine learning model that generates captions, and the Flask server that provides an alternate way to receive captions.
If you want to contribute to the project, check out the CONTRIBUTING.md file!
altML is licensed under the MIT License. You can find the full license in LICENSE.md.
This project, and everyone participating in it, is governed by the altML Code of Conduct. By participating, you are expected to uphold this code. Please report unacceptable behavior to any of the project organizers.
A neural network that generates captions for an image using a CNN and an RNN with beam search.
See the Image Caption Generator-specific README.md
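For readers unfamiliar with beam search, the sketch below shows the general decoding idea in Python. The `predict_next_word_probs` function, the `START`/`END` tokens, and the default parameters are hypothetical placeholders standing in for the project's real CNN+RNN model and vocabulary; see the caption generator's README for the actual implementation.

```python
# Minimal beam-search decoding sketch (illustrative only).
# `predict_next_word_probs`, START, and END are hypothetical placeholders
# for the project's real CNN+RNN model and vocabulary.
import math

def beam_search_caption(image_features, predict_next_word_probs,
                        START="<start>", END="<end>",
                        beam_width=3, max_len=20):
    # Each beam is (list_of_tokens, cumulative_log_probability).
    beams = [([START], 0.0)]

    for _ in range(max_len):
        candidates = []
        for tokens, score in beams:
            if tokens[-1] == END:            # finished captions carry over unchanged
                candidates.append((tokens, score))
                continue
            # The model scores every vocabulary word given the image and the
            # caption so far; each beam is expanded with its continuations.
            probs = predict_next_word_probs(image_features, tokens)
            for word, p in probs.items():
                candidates.append((tokens + [word], score + math.log(p + 1e-12)))

        # Keep only the `beam_width` highest-scoring partial captions.
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_width]

        if all(tokens[-1] == END for tokens, _ in beams):
            break

    best_tokens, _ = beams[0]
    return " ".join(t for t in best_tokens if t not in (START, END))
```

Compared with greedy decoding, which commits to the single most likely word at every step, keeping several candidate captions alive usually produces more fluent descriptions.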
The altML Chrome Extension is built with HTML, CSS, and JavaScript and runs in the browser. Its job is to scan webpages, find images that are missing their alt attributes, and contact the caption server to fill them in. A sketch of that logic follows the README pointer below.
See the Chrome Extension-specific README.md
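The actual extension does this in JavaScript against the live DOM; the Python sketch below (using `requests` and `BeautifulSoup`) only illustrates the scanning logic. The `/caption` endpoint and its JSON shape are hypothetical placeholders, not the extension's real API.

```python
# Illustrative Python sketch of the extension's scanning logic only --
# the real extension does this in JavaScript against the live DOM.
# The /caption endpoint and its JSON shape are hypothetical placeholders.
import requests
from bs4 import BeautifulSoup

CAPTION_ENDPOINT = "http://localhost:5000/caption"  # hypothetical

def find_images_missing_alt(html):
    """Return the src of every <img> with a missing or empty alt attribute."""
    soup = BeautifulSoup(html, "html.parser")
    return [img.get("src") for img in soup.find_all("img")
            if not img.get("alt") and img.get("src")]

def fill_in_captions(page_url):
    """Request a caption for every image on the page that lacks alt text."""
    html = requests.get(page_url).text
    captions = {}
    for src in find_images_missing_alt(html):
        # Ask the caption server to describe the image; the extension would
        # then write the result back into the img element's alt attribute.
        resp = requests.post(CAPTION_ENDPOINT, json={"image_url": src})
        captions[src] = resp.json().get("caption", "")
    return captions
```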
The Flask server passes caption requests from the client side to the server side. It can handle sensitive data such as API credentials without exposing anything in the extension.
The Flask server is built in Python and is still early in development. The Chrome Extension is not yet connected to it, but that connection is planned for the near future.
It is open-source and modular, so that anyone can audit the code that passes caption data along to credential-backed API requests, and so that anyone can run their own instance of the server.
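As a rough illustration of that design, here is a minimal Flask sketch of such a proxy. The `/caption` route, the request/response shape, and the downstream `CAPTION_API_URL` call are assumptions made for illustration, not the project's actual API; credentials are read from the environment so nothing sensitive ships inside the extension.

```python
# Minimal sketch of a credential-holding caption proxy in Flask.
# The /caption route, request/response shape, and CAPTION_API_URL are
# assumptions for illustration; see the Flask server's own code for specifics.
import os
import requests
from flask import Flask, jsonify, request

app = Flask(__name__)

# Credentials stay on the server, read from the environment.
CAPTION_API_URL = os.environ.get("CAPTION_API_URL", "https://example.com/v1/caption")
CAPTION_API_KEY = os.environ.get("CAPTION_API_KEY", "")

@app.route("/caption", methods=["POST"])
def caption():
    data = request.get_json(silent=True) or {}
    image_url = data.get("image_url")
    if not image_url:
        return jsonify({"error": "image_url is required"}), 400

    # Forward the request to the credential-backed captioning API.
    resp = requests.post(
        CAPTION_API_URL,
        json={"image_url": image_url},
        headers={"Authorization": f"Bearer {CAPTION_API_KEY}"},
        timeout=10,
    )
    return jsonify(resp.json()), resp.status_code

if __name__ == "__main__":
    app.run(port=5000)
```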