- All the information below is mainly from nvidia.com, except for the additional shell scripts (and related documentation) that I created.
- RTX 3090 or newer GPUs are supported in this Docker Container (RTX 3090 tested and working).
- Notes:
- This Container image is relatively large! Make sure you have ample memory, CPU, and GPU resources to run it. This Container is not intended for low-end GPUs and CPUs running Deep Learning models.
```
If [ you are looking for a common AI/ML/DL base Container with
     the latest CUDA + the latest PyTorch ]:
Then [ this one may be for you ]
```
- (New) Support for building the Container behind a Corporate SSL Proxy using the `Dockerfile-proxy-SSL` file (see instructions below).
- Many corporate AI/ML scientists encounter build failures behind their company's security environment because SSL certificates are not trusted (in short, your company performs man-in-the-middle SSL interception for security protection). This release automates the setup of your corporate SSL certificates so the Container builds without 'SSL certificates not trusted' errors.
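Under the hood, trusting a corporate proxy certificate on Ubuntu follows a standard pattern. The fragment below is a minimal sketch of what `Dockerfile-proxy-SSL` likely automates; the exact paths and steps in this project may differ:

```dockerfile
# Copy corporate proxy certificates into the image and register them
# with Ubuntu's CA trust store so apt/curl accept the proxy's TLS.
COPY certificates/*.crt /usr/local/share/ca-certificates/
RUN update-ca-certificates
# Some tools (pip, requests) use their own bundle; point them at the
# system store so they also trust the proxy certificates.
ENV REQUESTS_CA_BUNDLE=/etc/ssl/certs/ca-certificates.crt \
    SSL_CERT_FILE=/etc/ssl/certs/ca-certificates.crt
```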
- A base Container with no root access (except via `sudo ...`; you can remove it with `sudo apt-get remove sudo` to harden your Container).
- As most AI/ML/DL scientists know, the first big hurdle in applying these technologies is setting up the ML/DL Python environment (compatible versions across all software libs, GPU support, CUDA, PyTorch, GPU card hardware/driver versions, etc.).
- The goal of this project (Deep Learning Container) is to provide a ready-to-use Docker Container for your ML/DL Python experiments, saving you tons of time and frustration dealing with versions, compatibilities, unsupported GPU cards, etc.
- The default Jupyter Notebook app is automatically set up and ready to use.
- Ubuntu 20.04 with a Python 3.8 environment
- NVIDIA CUDA 11.6.0+
- cuBLAS 11.8.1.74+
- NVIDIA cuDNN 8.3.2.44+
- NVIDIA NCCL 2.11.4+ (optimized for NVLink™)
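A quick way to check that a host toolkit meets these minimum versions is a `sort -V` comparison. The `version_ge` helper below is illustrative (not part of this project); you would feed it a version string parsed from `nvcc --version` or `nvidia-smi` on your host:

```shell
#!/usr/bin/env bash
# version_ge A B: succeeds if version A >= version B (uses sort -V).
version_ge() {
    [ "$(printf '%s\n' "$1" "$2" | sort -V | tail -n 1)" = "$1" ]
}

# Example: compare an installed CUDA version against the 11.6.0 minimum.
cuda_installed="11.8.0"   # e.g., parsed from `nvcc --version` on your host
if version_ge "$cuda_installed" "11.6.0"; then
    echo "CUDA $cuda_installed meets the 11.6.0+ requirement"
else
    echo "CUDA $cuda_installed is too old" >&2
fi
```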
- Simply:
make build
- This new release automatically sets up the necessary files using your corporate SSL Proxy's certificates (which you obtain and place in the ./certificates folder).
- Obtain your Corporate SSL Proxy certificates (e.g., OpenKBS.crt, or whatever yours are named; there may be multiple) and place them in the ./certificates folder.
- Copy Dockerfile-proxy-SSL over the default Dockerfile.
- Then,
make build
- That's it! The automation configures all the necessary files inside the Docker OS so the build succeeds without 'SSL certificates not trusted' errors.
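The steps above can be sketched as a small helper script. The function name `stage_proxy_build` is illustrative; the file and folder names follow the instructions above, and `make build` is the project's own build target:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Stage a proxy-aware build: verify certificates exist, swap in the
# proxy Dockerfile, then build. Run from the repository root.
stage_proxy_build() {
    # At least one corporate certificate must be present.
    ls ./certificates/*.crt >/dev/null 2>&1 || {
        echo "No .crt files found in ./certificates" >&2
        return 1
    }
    # Overwrite the default Dockerfile with the proxy-SSL variant.
    cp Dockerfile-proxy-SSL Dockerfile
    make build
}
```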
To test whether the GPU functions properly inside the Container, and to make sure the Host's NVIDIA driver is properly installed:
- In the host computer, run command:
nvidia-smi
- Run the container with the command below to make sure the Container also has access to GPU functions:
./run.sh nvidia-smi
- The default run auto-detects the Host's NVIDIA GPU (e.g., RTX 3090 or any other) and uses all available CUDA units.
./run.sh                  (foreground)
./run.sh -d               (detached)
./run.sh -d -r always     (daemon + always-up)
- You need the access token from the log, and where to find it depends on whether you use './run.sh' or 'make up'.
- If you use './run.sh' to start, the console output will contain a line with 'token'. Copy and paste it to log in.
- If you use 'make up' to start, use the command below to find the 'token', then copy it to log in:
./log.sh
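If you prefer to grab the token non-interactively, a grep over the log output works. This is a sketch assuming Jupyter's default log format, where the access URL looks like `http://127.0.0.1:8888/?token=<hex>`; the `extract_token` name is illustrative:

```shell
#!/usr/bin/env bash
# extract_token: print the first Jupyter access token found on stdin.
extract_token() {
    grep -oE 'token=[0-9a-f]+' | head -n 1 | cut -d= -f2
}

# Example usage (pipe the project's log helper into it):
#   ./log.sh | extract_token
```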
- Then, go to your Web Browser to access Jupyter Notebook:
http://<Host-IP>:8888/tree or http://localhost:8888/tree