This project was built for the AI Hack Melbourne 2023 hackathon, which challenged teams to tackle Australia's social and community problems with artificial intelligence. It was developed in 7 days and is not maintained.
Warden Wombat (previously Wombat AI) is a voice AI tool to quickly check on people's safety during disasters like floods or fires. It makes automated calls to many people, especially where mobile networks might be down. The AI can ask if they're safe, tell them about evacuations, and give them vital info quickly. It's meant to help first responders and the government reach out fast and efficiently in emergencies, making sure everyone gets the help they need.
Click the thumbnail to watch the pitch recording on YouTube.
- Vincent
- Vejith
- Liz
- Olga
- Ricardo
- AWS API Gateway - API
- AWS CloudFront - CDN
- AWS DynamoDB - Database
- AWS Lambda - Microservices
- AWS SQS - Queue
- OpenAI - Large Language Models
- Twilio - Telephony
- Supabase - Auth
- Tome - Presentations
- AWS CloudFormation - Infrastructure as Code (Prototyping)
- Terraform - Infrastructure as Code (Production)
- GitHub Actions - CI/CD
- Pre-commit - Code Quality
- Make - Build Tool
- Next.js - Frontend
- TailwindCSS - CSS Framework
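
To give a feel for how these pieces fit together, here is a minimal sketch of an SQS-triggered Lambda that dials a queued contact through Twilio and records the attempt in DynamoDB. The environment variable names, table name, and webhook URL are illustrative assumptions, not the project's actual configuration.

```python
import json
import os

import boto3
from twilio.rest import Client

dynamodb = boto3.resource("dynamodb")
# Hypothetical table used to track call attempts
calls_table = dynamodb.Table(os.environ.get("CALLS_TABLE", "warden-wombat-calls"))

twilio_client = Client(os.environ["TWILIO_ACCOUNT_SID"], os.environ["TWILIO_AUTH_TOKEN"])


def handler(event, context):
    """Consume queued contact records from SQS and dial each one."""
    for record in event["Records"]:           # SQS batch delivered to the Lambda
        contact = json.loads(record["body"])  # e.g. {"phone": "+61400000000", "name": "..."}

        call = twilio_client.calls.create(
            to=contact["phone"],
            from_=os.environ["TWILIO_FROM_NUMBER"],
            # Webhook returning TwiML that drives the AI conversation (hypothetical endpoint)
            url=os.environ["CALL_WEBHOOK_URL"],
        )

        # Track the attempt so a dashboard can show delivery status
        calls_table.put_item(
            Item={"call_sid": call.sid, "phone": contact["phone"], "status": "initiated"}
        )

    return {"dialled": len(event["Records"])}
```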
During MVP development we identified a number of enhancements that could be made to the platform quickly. These are listed below:
- Support for multiple languages (already possible, just need to tweak workflows)
- Support for keypad input for people with hearing disabilities (already possible, just need to tweak workflows)
- Replace OpenAI with Amazon Bedrock. Due to time constraints and difficulty obtaining credits, this was only tested through the UI and never integrated with the platform. A sketch of the swap appears after this list.
- Pulling in call recordings from Twilio and storing them in S3 for future reference. This was not implemented due to time constraints; a sketch appears after this list.
- Use of websockets to our own TTS and STT models for smoother conversation flow. This was not implemented due to time constraints, although we did validate the approach. If a solution like this were to go to production at scale, better control over the audio models would be needed, and this would be the preferred option.
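
For the Bedrock replacement mentioned above, a rough sketch of what the swap might look like is shown below: the same conversational prompt is sent to a Bedrock-hosted model via boto3's `bedrock-runtime` client instead of the OpenAI API. The model ID and prompt wording are placeholders, not settings we shipped.

```python
import json

import boto3

bedrock = boto3.client("bedrock-runtime")


def check_in_reply(transcript: str) -> str:
    """Generate the next line of the safety check-in call from the conversation so far."""
    body = {
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 256,
        "messages": [
            {
                "role": "user",
                "content": f"You are a disaster check-in assistant. Continue the call:\n{transcript}",
            },
        ],
    }
    response = bedrock.invoke_model(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",  # example model ID, pick per region/availability
        body=json.dumps(body),
    )
    payload = json.loads(response["body"].read())
    return payload["content"][0]["text"]
```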
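
For archiving Twilio call recordings, a minimal sketch of the planned flow follows: list a call's recordings via the Twilio REST API, download the audio, and copy it into S3. The bucket name and key layout are assumptions for illustration.

```python
import os

import boto3
import requests
from twilio.rest import Client

s3 = boto3.client("s3")
twilio_client = Client(os.environ["TWILIO_ACCOUNT_SID"], os.environ["TWILIO_AUTH_TOKEN"])

# Hypothetical bucket for archived audio
RECORDINGS_BUCKET = os.environ.get("RECORDINGS_BUCKET", "warden-wombat-recordings")


def archive_call_recordings(call_sid: str) -> None:
    """Copy every recording attached to a call into S3 for later reference."""
    for recording in twilio_client.recordings.list(call_sid=call_sid):
        # recording.uri points at the JSON resource; swapping the extension fetches the audio
        media_url = f"https://api.twilio.com{recording.uri.replace('.json', '.mp3')}"
        audio = requests.get(
            media_url,
            auth=(os.environ["TWILIO_ACCOUNT_SID"], os.environ["TWILIO_AUTH_TOKEN"]),
        )
        audio.raise_for_status()
        s3.put_object(
            Bucket=RECORDINGS_BUCKET,
            Key=f"recordings/{call_sid}/{recording.sid}.mp3",
            Body=audio.content,
        )
```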