diff --git a/README.md b/README.md
index de3daa7..8b6edf5 100644
--- a/README.md
+++ b/README.md
@@ -4,103 +4,76 @@
-**Current Version: v1.0.1**
+**Latest Version: v1.0.2**
-### What's New in v1.0.1
+[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
-- **New Feature**: Integration with `llama.cpp` framework, enhancing the tool's capabilities.
-- **Bug Fixed**: Remove prompt input for better user experiments
-- **Documentation Updates**: Release notes added along with updated instructions for working with llama.cpp
+Enhance your coding and command-line efficiency with `terminalmind`, now known as `temi`. Get relevant code snippets, process text and PDF data without leaving your terminal, and maintain a seamless workflow.
-For more detailed information, please see the [release notes](https://github.com/namtranase/terminalmind/releases/tag/v1.0.0).
-[](https://opensource.org/licenses/MIT)
+### 🚀 What's New
+
+- Simplified configuration via `config.json` (default shown below)
+- Enhanced helper interface for user commands
+- Cleaner and more accessible documentation
+
+Explore the [release notes](https://github.com/namtranase/terminalmind/releases/tag/v1.0.2) for more details.
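+
+For reference, the default configuration shipped in `config/temi_config.json` points at a local `.gguf` model (the `~` is expanded at runtime):
+
+```json
+{
+  "model_path": "~/llm_models/model.gguf",
+  "type": "local"
+}
+```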
-`terminalmind`, now accessible via the `temi` command, is a command-line tool designed to enhance coding efficiency by suggesting relevant code snippets and facilitating text and PDF data processing directly in the terminal. This approach promotes a seamless development workflow, keeping developers focused within their coding environment.
+## 🙏 Acknowledgments
-## Acknowledgments
+A huge shoutout to [Georgi Gerganov](https://github.com/ggerganov) and all contributors of [llama.cpp](https://github.com/ggerganov/llama.cpp) for their pioneering work.
-`terminalmind` builds on the work of the [llama.cpp](https://github.com/ggerganov/llama.cpp) project. Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and contributors for their groundbreaking work in creating a C++ interface for the LLM model. We encourage `terminalmind` users and developers to support and contribute to `llama.cpp`.
-## Supported Features
+## ✨ Supported Features
Demo:
-- [x] **Q&A in Terminal**: Interactive question-and-answer functionality directly in your command line.
-- [x] **PDF Information Retrieval**: Extract and analyze information from both local and web-based PDF files based on keyword.
-- [x] **Online Article Summarization**: Quickly summarize the content of online articles with a simple command.
-- [x] **temi integrates the llama.cpp framework**: `temi` now supports multiple models provided by the llama.cpp framework, enhancing its functionality.
-- [ ] **Command Execution Based on User Input**: *Upcoming feature* to execute relevant terminal commands automatically based on user queries.
-- [ ] **GPU Acceleration for Enhanced Performance**: *In development* – leveraging GPU acceleration for faster response times and improved efficiency.
-- [ ] **Integration with llama.cpp Server**: Enhance response times and model management by interfacing with llama.cpp's dedicated server.
-- [ ] **Interactive App and Web Launcher**: Execute applications and navigate to websites directly through keyword-triggered commands.
-- [ ] **Docker Compatibility**: Run temi seamlessly in Docker containers for isolated and consistent development environments.
-- [ ] **User-Friendly Settings Interface**: Customize temi with ease using a graphical settings panel.
-
-
- Table of Contents
-
- - Description
- - Installation
- - Usage
- - Contributing
- - License
-
-
-
-## Description
-
-`terminalmind`, accessible via the `temi` command, offers functionalities like text summarization, content retrieval from PDFs, and more, using an integrated language model. Designed for terminal use, it caters to developers, researchers, and technology enthusiasts.
-
-## Installation
+- **Q&A in Terminal**: Get instant answers within your command line.
+- **PDF Information Retrieval**: Extract information from local and web-based PDFs by keyword.
+- **Online Article Summarization**: Summarize online articles with a single command.
+- **llama.cpp Integration**: Multiple model support for extended capabilities.
+- **User-Friendly Settings**: Customize temi through a simple JSON config file.
+- *Upcoming*: Command execution, GPU acceleration, llama.cpp server integration, app and web launcher, and Docker support.
+
+
+## 📖 Description
+
+`temi` is your go-to CLI companion for quick text summarization, efficient content retrieval, and more. It's an indispensable tool for developers, researchers, and anyone who loves efficiency.
+
+## 🛠 Installation
Clone the repository:
```bash
git clone https://github.com/namtranase/terminalmind.git
cd terminalmind
-```
-
-Install requirement packages:
-```bash
-sudo apt install python3-requests python3-bs4 python3-pypdf2
-```
-
-Download the model:
-```bash
+# Install required packages
+sudo apt install python3-requests python3-bs4 python3-pypdf2 jq
+# Download the model
./scripts/download_model.sh
-```
-
-Install the package by scripts:
-```
+# Install the package via the install script
./scripts/install_temi.sh
```
-Or build step by step.
+Optionally, build and install the Debian package manually:
-Build the Debian package:
```bash
dpkg-deb --build temi-packaging temi.deb
-```
-
-Install package using `dpkg`:
-```bash
sudo dpkg -i temi.deb
```
-## Usage
-
-After installation, you can use temi but first, it needs to update the model file. Start temi for the first time and paste the absolute path to the model downloaded above. For example:
+## 🖥 Usage
+On first use, call the built-in helper to get an overview of temi's commands:
```bash
-temi Hello
-# Model file not found at /llm_models/model.gguf.
-# Please enter the absolute path to your .gguf model file:
-models/model.gguf
+temi --help
```
+
+![temi helper output](assets/helper.png)
+
-And now you can use `temi` as the terminal assistant:
+Use temi for a variety of tasks:
```bash
temi how to make a python package
# To create a Python package, follow these steps:
@@ -113,11 +86,11 @@ temi how to make a python package
# 6. To distribute, use tools like setuptools, Twine and PyPI to create a distribution package.
```
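+
+The remaining commands follow the same pattern; these examples come straight from `temi --help`:
+
+```bash
+# Summarize the content of an online article
+temi summary https://example.com/article
+
+# Retrieve information from a PDF by keyword
+temi retrieve with keyword: test, path/to/file.pdf
+
+# Interactively change the model path
+temi change_model
+
+# Switch to a different JSON configuration file
+temi update_config path/to/config.json
+```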
-## Contributing
+## 🤝 Contributing
-Contributions to `temi` are greatly appreciated, whether they involve submitting pull requests, reporting bugs, or suggesting enhancements.
+We welcome contributions of all kinds: pull requests, bug reports, and feature suggestions!
-### Getting Started
+### Quick Start for Contributors
For detailed instructions on working with the llama.cpp submodule, including setup and usage, refer to the WORK_WITH_LLAMACPP.md file.
1. Clone the repo and its submodules:
@@ -135,14 +108,6 @@ pip install -r requirements/dev_requirements.txt
pre-commit install
```
-### Making Contributions
-
-- Fork and Clone the Repository: Start by forking the repository and then clone your fork to your local machine.
-
-- Create a New Branch: Work on your changes in a new git branch. This helps to keep your modifications organized and separate from the main codebase.
-
-- Submit a Pull Request: Once your changes are complete, push them to your fork and submit a pull request to the main terminalmind repository.
-
## License
`terminalmind` is MIT licensed, as found in the [LICENSE](https://github.com/namtranase/terminalmind/blob/main/LICENSE) file.
diff --git a/assets/helper.png b/assets/helper.png
new file mode 100644
index 0000000..be303a7
Binary files /dev/null and b/assets/helper.png differ
diff --git a/config/temi_config.json b/config/temi_config.json
new file mode 100644
index 0000000..3dbc720
--- /dev/null
+++ b/config/temi_config.json
@@ -0,0 +1,4 @@
+{
+ "model_path": "~/llm_models/model.gguf",
+ "type": "local"
+}
diff --git a/llama.cpp b/llama.cpp
index 6f9939d..1182cf4 160000
--- a/llama.cpp
+++ b/llama.cpp
@@ -1 +1 @@
-Subproject commit 6f9939d119b2d004c264952eb510bd106455531e
+Subproject commit 1182cf4d4f6ee383b92695c2e3fe438086dcdba7
diff --git a/temi-packaging/usr/local/bin/config_setup.sh b/temi-packaging/usr/local/bin/config_setup.sh
new file mode 100644
index 0000000..0c55de8
--- /dev/null
+++ b/temi-packaging/usr/local/bin/config_setup.sh
@@ -0,0 +1,61 @@
+#!/bin/bash
+
+# Define the directory where the script is located
+SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" &>/dev/null && pwd)"
+
+# Path to the configuration directory and default configuration file
+CONFIG_DIR="$HOME/.config/temi"
+CONFIG_FILE="$CONFIG_DIR/temi_config.json"
+
+# Ensure jq is installed
+if ! command -v jq &>/dev/null; then
+ echo "jq is not installed. Please install jq to use this script."
+ exit 1
+fi
+
+# Function to load the configuration from the JSON file
+load_config() {
+ if [ ! -f "$CONFIG_FILE" ]; then
+ # Provide a default configuration if the config file is missing
+ mkdir -p "$CONFIG_DIR"
+ echo '{"model_path": "~/llm_models/model.gguf", "type": "local"}' > "$CONFIG_FILE"
+ echo "Default configuration created at '$CONFIG_FILE'."
+ fi
+
+ MODEL_PATH=$(jq -r '.model_path' "$CONFIG_FILE")
+ MODEL_TYPE=$(jq -r '.type' "$CONFIG_FILE")
+
+ # Resolve tilde and parameter expansion (note: eval also expands command substitutions, so the config file must be trusted)
+ MODEL_PATH=$(eval echo "$MODEL_PATH")
+
+ # Check if the model path is valid
+ if [ "$MODEL_TYPE" == "local" ] && [ ! -f "$MODEL_PATH" ]; then
+ echo "Invalid model path in configuration: '$MODEL_PATH'"
+ fi
+}
+
+# Function to update the configuration file
+update_config() {
+ local new_config_path="$1"
+
+ if [ ! -f "$new_config_path" ]; then
+ echo "The provided configuration file does not exist: '$new_config_path'"
+ exit 1
+ fi
+
+ # Copy the new configuration file to the standard location
+ cp "$new_config_path" "$CONFIG_FILE"
+ echo "Configuration updated successfully."
+
+ # Reload the configuration
+ load_config
+}
+
+# Check if the user wants to update the configuration
+if [[ "$1" == "update_config" ]]; then
+ update_config "$2"
+ exit 0
+fi
+
+# Load configuration settings
+load_config
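+
+# Note: this script is intended to be sourced (the temi entry script runs
+# `source config_setup.sh`), so MODEL_PATH and MODEL_TYPE set above remain
+# available to the caller.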
diff --git a/temi-packaging/usr/local/bin/temi b/temi-packaging/usr/local/bin/temi
index ab07e2d..9e9a6f1 100755
--- a/temi-packaging/usr/local/bin/temi
+++ b/temi-packaging/usr/local/bin/temi
@@ -3,13 +3,14 @@
# Get the directory of the current script
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" &>/dev/null && pwd)"
-# Use paths relative to the script directory
-CONFIG_FILE="$HOME/.temi_config"
EXTRACT_PDF_SCRIPT="$SCRIPT_DIR/extract_pdf_content.py"
FETCH_ARTICLE_SCRIPT="$SCRIPT_DIR/fetch_article_content.py"
PROMPT_FILE="$SCRIPT_DIR/prompt.txt"
TEMICORE="$SCRIPT_DIR/temicore"
+# Source the configuration setup script
+source "$SCRIPT_DIR/config_setup.sh"
+
# Function to validate if the model path ends with .gguf
validate_model_path() {
if [[ $1 != *.gguf ]]; then
@@ -17,6 +18,38 @@ validate_model_path() {
exit 1
fi
}
+
+show_help() {
+ echo -e "\033[1mUsage:\033[0m"
+ echo " temi [COMMAND] [ARGUMENTS]... [OPTIONS]"
+ echo
+ echo -e "\033[1mCommands:\033[0m"
+ echo " summary|summarize Summarize the content from a given URL."
+ echo " RAG|retrieve|Retrieve KEYWORD Retrieve information given keyword from a PDF file."
+ echo " change_model Change the model configuration."
+ echo " update_config FILE Update the configuration with the given JSON file."
+ echo
+ echo -e "\033[1mOptions:\033[0m"
+ echo " --help Show this help message and exit."
+ echo
+ echo -e "\033[1mExamples:\033[0m"
+ echo " temi summary https://example.com/article"
+ echo " temi retrieve with keyword: test, path/to/file.pdf"
+ echo " temi change_model"
+ echo " temi update_config path/to/config.json"
+ echo
+ echo "For more information, visit: https://github.com/namtranase/terminalmind"
+ echo
+ echo -e "\033[1mNote:\033[0m Replace [COMMAND] with one of the commands above,"
+ echo "and [ARGUMENTS]... with the appropriate input for the command."
+}
+
+# Check for --help option
+if [[ "$1" == "--help" ]]; then
+ show_help
+ exit 0
+fi
+
# Function to extract URL from input
extract_url() {
echo "$1" | grep -o 'http[s]*://[^ ]*'
@@ -42,11 +75,6 @@ extract_keyword() {
echo "$keyword"
}
-# Load model path from the configuration file if it exists
-if [ -f "$CONFIG_FILE" ]; then
- source "$CONFIG_FILE"
-fi
-
change_model() {
echo "Please enter the new absolute path to your .gguf model file:"
read -r new_model_path