- Clone the app to your local environment from your terminal using the following command:

  ```
  git clone https://github.com/IBM-Cloud/openwhisk-darkvisionapp.git
  ```
- or download and extract the source code from this archive.
Note: if you have existing instances of these services, you don't need to create new instances. You can simply reuse the existing ones.
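To see which service instances already exist in your org and space, you can list them from the command line:

```
cf services
```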
- Open the IBM Bluemix console. (If you prefer the command line, see the sketch after this list.)
- Create a Cloudant NoSQL DB service instance named cloudant-for-darkvision.
- Open the Cloudant service dashboard and create a new database named openwhisk-darkvision.
- Create a Watson Visual Recognition service instance named visualrecognition-for-darkvision.
- Create a Watson Speech to Text service instance named stt-for-darkvision.
- Create a Natural Language Understanding service instance named nlu-for-darkvision.
- Optionally, create an Object Storage service instance named objectstorage-for-darkvision. If configured, media files will be stored in Object Storage instead of Cloudant, and a container named openwhisk-darkvision will be created automatically.
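If you prefer the command line, the same instances can be created with cf create-service. The service labels and plan names below are assumptions that can vary by region and over time; check the output of cf marketplace for the exact values:

```
# Check the exact service labels and plans available in your region
cf marketplace

# Assumed labels and plans -- verify them against the marketplace output
cf create-service cloudantNoSQLDB Lite cloudant-for-darkvision
cf create-service watson_vision_combined free visualrecognition-for-darkvision
cf create-service speech_to_text standard stt-for-darkvision
cf create-service natural-language-understanding free nlu-for-darkvision
```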
This simple web user interface is used to upload videos or images and to visualize the results of each frame analysis.
- Change to the web directory:

  ```
  cd openwhisk-darkvisionapp/web
  ```
- If in the previous section you decided to use existing services instead of creating new ones, open manifest.yml and update the Cloudant service name.
- If you configured an Object Storage service, make sure to add its name to the services section of manifest.yml, or uncomment the existing objectstorage-for-darkvision entry, as in the sketch below.
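  A minimal sketch of what the services section might look like with the default instance names used above (the rest of the manifest is omitted here):

  ```
  applications:
  - name: openwhisk-darkvision
    services:
    - cloudant-for-darkvision
    # - objectstorage-for-darkvision
  ```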
- Push the application to Bluemix:

  ```
  cf push
  ```
By default, anyone can upload/delete/reset videos and images. You can restrict access to these actions by defining the environment variables ADMIN_USERNAME and ADMIN_PASSWORD on your application. This can be done in the Bluemix console or with the command line:
```
cf set-env openwhisk-darkvision ADMIN_USERNAME admin
cf set-env openwhisk-darkvision ADMIN_PASSWORD aNotTooSimplePassword
```
You will need to restage the application for the change to take effect:

```
cf restage openwhisk-darkvision
```
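You can check that the variables are in place with cf env, which lists the user-provided environment variables of the application:

```
cf env openwhisk-darkvision
```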
Extracting frames and audio from a video is achieved with ffmpeg. ffmpeg is not available to a Cloud Functions action written in JavaScript or Swift. Fortunately, Cloud Functions allows an action to be packaged as a Docker image, which it can retrieve from Docker Hub.
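The deploy script used later in this README takes care of creating the Docker-based action, but for illustration, such an action is created by pointing the CLI at a public Docker Hub image, roughly like this (the action and image names here are placeholders):

```
bx wsk action create extractor --docker youruserid/yourimagename
```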
To build the extractor image, follow these steps:
- Change to the processing/extractor directory.
- Ensure your Docker environment works and that you are logged in to Docker Hub. To log in, use:

  ```
  docker login
  ```

- Run:

  ```
  ./buildAndPush.sh youruserid/yourimagename
  ```

  Note: On some systems this command needs to be run with sudo.
- After a while, your image will be available in Docker Hub, ready for Cloud Functions.
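For reference, the script essentially wraps the standard Docker build-and-publish flow; the manual equivalent is roughly:

```
# Build the image from the Dockerfile in processing/extractor
docker build -t youruserid/yourimagename .

# Publish it to Docker Hub so Cloud Functions can pull it
docker push youruserid/yourimagename
```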
- Change to the root directory of the checkout.
- Copy the file named template-local.env into local.env:

  ```
  cp template-local.env local.env
  ```
- Get the service credentials for the services created above and replace the placeholders in local.env with the corresponding values (usernames, passwords, URLs). These properties will be injected into a package so that all actions can access the services. If you configured an Object Storage service, specify its properties in this file too by uncommenting the placeholder variables. (A sketch of some of these entries follows this list.)
- Update the value of STT_CALLBACK_URL with the organization and space where the Cloud Functions actions will be deployed.
- Update the value of DOCKER_EXTRACTOR_NAME with the name of the Docker image you created in the previous section.
- Ensure your Cloud Functions command line interface is properly configured with:

  ```
  bx wsk list
  ```

  This shows the packages, actions, triggers and rules currently deployed in your Cloud Functions namespace.
- Get the dependencies used by the deployment script:

  ```
  npm install
  ```

  ⚠️ Node.js >= 6.9.1 is required.
- Create the action, trigger and rule using the script from the root directory:

  ```
  node deploy.js --install
  ```

  The script can also be used to --uninstall the Cloud Functions artifacts or to --update them if you change the action code.
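For illustration, the entries mentioned above end up in local.env looking roughly like this (all values are placeholders to be replaced with your own):

```
# Speech to Text credentials and callback (placeholders)
STT_URL=<speech-to-text-url>
STT_USERNAME=<speech-to-text-username>
STT_PASSWORD=<speech-to-text-password>
STT_CALLBACK_URL=<callback-url-including-your-org-and-space>

# Docker image built in the previous section
DOCKER_EXTRACTOR_NAME=youruserid/yourimagename
```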
We need to tell the Speech to Text service where to call back when it has completed the audio processing.
- Register the callback:

  ```
  node deploy.js --register_callback
  ```
This command reuses the values of the variables STT_URL, STT_USERNAME, STT_PASSWORD, and STT_CALLBACK_URL defined in your local.env file.
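Under the hood, this goes through the Speech to Text asynchronous HTTP interface. The manual equivalent is roughly the following call, reusing the same variables:

```
curl -X POST -u "$STT_USERNAME:$STT_PASSWORD" \
  "$STT_URL/v1/register_callback?callback_url=$STT_CALLBACK_URL"
```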
That's it! Use the web application to upload images/videos and view the results! You can also view the results using an iOS application as shown further down the README.
- Change to the web directory.
- Get dependencies:

  ```
  npm install
  ```
- Start the application:

  ```
  npm start
  ```
Note: When running locally, the application uses the environment variables defined in local.env in the previous steps to find the Cloudant database (and Object Storage) to connect to.
- Upload videos through the web user interface. Wait for Cloud Functions to process the videos. Refresh the page to look at the results.
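To follow the processing as it happens, you can watch the Cloud Functions activation log from another terminal:

```
bx wsk activation poll
```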