Dockerize API and Integrate BERT/FastText Models for Easy Deployment #1
Comments
Update on Dockerization Plan

Hello everyone,

We wanted to provide an update on our plans for dockerizing OTS (Open Text Shield). After reviewing our objectives and considering community feedback, we've decided to expand the scope:

- Model Retraining Support
- Code Refactoring

Next Steps

We'll be updating the repository soon with these changes, along with detailed documentation to guide you through the setup and deployment process.

Community Contributions

We highly value community input and contributions. If you have experience with Docker, GPU optimization, or code refactoring, we'd love to hear from you.

Thank you for your continued support!
Good news! OTS is now dockerized with pre-trained models bundled in. You can deploy a working OTS in just a few minutes and start making predictions on any text you provide.

Quick Start with Docker

Setting up Open Text Shield is quick and easy with Docker. Follow these steps to get started:

1. Pull the latest Docker image:

```
docker pull telecomsxchange/opentextshield:latest
```

2. Run the Docker container:

```
docker run -d -p 8002:8002 telecomsxchange/opentextshield:latest
```

3. Send a message for prediction. Once the container is running, you can send HTTP requests to the API to classify messages. Example:

```
curl -X POST "http://localhost:8002/predict/" \
  -H "accept: application/json" \
  -H "Content-Type: application/json" \
  -d "{\"text\":\"Your SMS content here\",\"model\":\"bert\"}"
```

Example response:

```json
{
  "label": "ham",
  "probability": 0.9971883893013,
  "processing_time": 0.6801116466522217,
  "Model_Name": "OTS_mBERT",
  "Model_Version": "bert-base-uncased",
  "Model_Author": "TelecomsXChange (TCXC)",
  "Last_Training": "2024-03-20"
}
```
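The same call can be made programmatically. Here is a minimal Python sketch using the `requests` library, assuming only the endpoint and payload shown in the curl example above:

```python
import requests

# POST a message to the dockerized OTS API (endpoint and payload
# taken from the curl example above).
resp = requests.post(
    "http://localhost:8002/predict/",
    json={"text": "Your SMS content here", "model": "bert"},
    timeout=10,
)
resp.raise_for_status()
result = resp.json()
print(result["label"], result["probability"])
```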
An x86-arch Docker image has also been released.
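Presumably the Quick Start commands above apply to it as well, though the x86 image may ship under a different tag (an assumption, not confirmed in this thread):

```
docker pull telecomsxchange/opentextshield:latest
docker run -d -p 8002:8002 telecomsxchange/opentextshield:latest
```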
Description
We aim to simplify the deployment process of our FastAPI application, which serves as an interface to our BERT and FastText models for SMS classification. The current setup process is manual and requires several steps, including setting up Python environments, installing dependencies, and loading pre-trained models. To enhance usability and facilitate a smoother setup for developers and users alike, we propose dockerizing the application along with the BERT and FastText models.
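For context, here is a minimal sketch of the kind of FastAPI endpoint being containerized, matching the request and response shapes from the Quick Start comment above; the handler and the `classify` helper are hypothetical stand-ins, not code from the OTS repository:

```python
import time

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class PredictionRequest(BaseModel):
    text: str
    model: str = "bert"  # "bert" or "fasttext" per the issue title; exact accepted values unverified

def classify(text: str, model: str) -> tuple[str, float]:
    # Hypothetical placeholder: the real service would run a pre-loaded
    # BERT or FastText classifier here and return its label and score.
    return "ham", 0.99

@app.post("/predict/")
def predict(req: PredictionRequest):
    start = time.time()
    label, probability = classify(req.text, req.model)
    return {
        "label": label,
        "probability": probability,
        "processing_time": time.time() - start,
    }
```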
Objective
Tasks
- A `Dockerfile` that specifies the environment, installs dependencies, and sets up the application (a minimal sketch follows below).
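A minimal sketch of such a `Dockerfile`, assuming a FastAPI app served with uvicorn on port 8002 as in the `docker run` example above; the base image, file names, and entrypoint are illustrative assumptions, not taken from the OTS repository:

```dockerfile
# Illustrative sketch only; base image, paths, and entrypoint are assumptions.
FROM python:3.11-slim

WORKDIR /app

# Install Python dependencies first so this layer is cached between builds.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code and the bundled pre-trained BERT/FastText models.
COPY . .

# The API listens on 8002, matching the docker run example above.
EXPOSE 8002
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8002"]
```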
Requirements

- A `Dockerfile` and documentation to guide new users through the Docker setup and deployment process.

Discussion Points
Contributions
Contributions are welcome, and this issue serves as a starting point for discussion, planning, and implementation. If you have experience with Docker or Python environments, or insights into deploying machine learning models in production, your input would be highly valued.