Focus on writing your code and let LLMs write the documentation for you.
With just a few keystrokes in your terminal, using OpenAI or 100% local LLMs with no data ever leaving your machine.
Built with langchain, treesitter, llama.cpp and ollama
- 📝 Generate documentation comment blocks for all methods in a file
- e.g. Javadoc, JSDoc, Docstring, Rustdoc etc.
- ✍️ Generate inline documentation comments in method bodies
- 🌳 Treesitter integration
- 💻 Local LLM support
- 🌐 Azure OpenAI support
Note
Documentation will only be added to files without unstaged changes, so nothing is overwritten.
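If the file you want to document has pending edits, one way to get it into a clean state first is to commit them (a sketch assuming the file is tracked by git; the path is illustrative):
git status -- src/calculator.py
git add src/calculator.py && git commit -m "WIP before generating docs"
aicomment src/calculator.py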
Create documentation for every method in the file specified by <RELATIVE_FILE_PATH> with the GPT-3.5-Turbo model:
aicomment <RELATIVE_FILE_PATH>
Also create documentation comments inside method bodies:
aicomment <RELATIVE_FILE_PATH> --inline
Guided mode, which asks for confirmation before documenting each method:
aicomment <RELATIVE_FILE_PATH> --guided
Use GPT-4 model:
aicomment <RELATIVE_FILE_PATH> --gpt4
Use GPT-3.5-Turbo-16k model:
aicomment <RELATIVE_FILE_PATH> --gpt3_5-16k
Use Azure OpenAI:
aicomment <RELATIVE_FILE_PATH> --azure-deployment <DEPLOYMENT_NAME>
Use a local LLM via llama.cpp:
aicomment <RELATIVE_FILE_PATH> --local_model <MODEL_PATH>
Use local Ollama:
aicomment <RELATIVE_FILE_PATH> --ollama-model <OLLAMA_MODEL>
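For example, assuming Ollama is running locally and you have pulled a code model first (the model name is only an illustration):
ollama pull codellama
aicomment <RELATIVE_FILE_PATH> --ollama-model codellama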
Note
For how to download models from Hugging Face for local usage, see the Local LLM usage section.
Note
If very extensive and descriptive documentation is needed, consider using GPT-4, GPT-3.5-Turbo-16k, or a comparably capable local model.
Important
The results of using a local LLM depend heavily on the selected model. To get results comparable to GPT-3.5/GPT-4, you need to select very large models, which require powerful hardware.
- Python
- Typescript
- Javascript
- Java
- Rust
- Kotlin
- Go
- C++
- C
- C#
- Haskell
- Python >= 3.9
Install in an isolated environment with pipx:
pipx install doc-comments-ai
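If the aicomment command is not found afterwards, pipx's binary directory may not be on your PATH yet; pipx can add it for you (restart your shell afterwards):
pipx ensurepath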
If you are facing issues using pipx, you can also install directly from PyPI with:
pip install doc-comments-ai
However, it is recommended to use pipx instead to benefit from isolated environments for the dependencies.
For further help visit the Troubleshooting section.
Create your personal OpenAI API key and add it as $OPENAI_API_KEY to your environment with:
export OPENAI_API_KEY=<YOUR_API_KEY>
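To persist the key across terminal sessions, you can append the export to your shell profile (the file depends on your shell, e.g. ~/.bashrc or ~/.zshrc):
echo 'export OPENAI_API_KEY=<YOUR_API_KEY>' >> ~/.bashrc
source ~/.bashrc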
Add the following variables to your environment:
export AZURE_API_BASE="https://<your-endpoint>.openai.azure.com/"
export AZURE_API_KEY=<YOUR_AZURE_OPENAI_API_KEY>
export AZURE_API_VERSION="2023-05-15"
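With these variables set, a run against an Azure deployment could look like this (the deployment name below is only an example; use the name of the deployment you created in your Azure OpenAI resource):
aicomment <RELATIVE_FILE_PATH> --azure-deployment my-gpt-35-deployment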
When using a local LLM, no API key is required. On first usage of --local_model you will be asked for confirmation to install llama-cpp-python with its dependencies.
The installation process takes care of a hardware-accelerated build tailored to your hardware and OS. For further details see installation-with-hardware-acceleration.
To download a model from Hugging Face for local usage, the most convenient way is to use the huggingface-cli:
huggingface-cli download TheBloke/CodeLlama-13B-Python-GGUF codellama-13b-python.Q5_K_M.gguf
This will download the codellama-13b-python.Q5_K_M model to ~/.cache/huggingface/.
After the download has finished, the absolute path of the .gguf file is printed to the console, which can be used as the value for --local_model.
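Putting it together, a full local run might look like this (the model path below is only a placeholder; use the absolute path printed after the download):
huggingface-cli download TheBloke/CodeLlama-13B-Python-GGUF codellama-13b-python.Q5_K_M.gguf
aicomment <RELATIVE_FILE_PATH> --local_model /path/to/codellama-13b-python.Q5_K_M.gguf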
Important
Since llama.cpp is used, the model must be in the .gguf format.
- pip failed to build package: tiktoken
  Some possibly relevant errors from pip install:
  error: subprocess-exited-with-error
  error: can't find Rust compiler

  Make sure the Rust compiler is installed on your system from here.
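If you hit this error, installing the Rust toolchain via rustup usually resolves it; one common way (see rustup.rs for alternatives):
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh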
If you are missing a feature or facing a bug, don't hesitate to open an issue or raise a PR. Any kind of contribution is highly appreciated!