
[CI]: fix and pass pre-commit hook (InternLM#666)
Liqu1d-G authored Jan 26, 2024
1 parent 1cb9870 commit 78bcb07
Showing 28 changed files with 638 additions and 547 deletions.
4 changes: 2 additions & 2 deletions .github/workflows/daily_tests.yaml
@@ -26,8 +26,8 @@ jobs:
pip install transformers
pip install sentencepiece
srun -p ${SLURM_PARTITION} --kill-on-bad-exit=1 --job-name=${GITHUB_RUN_ID}-${GITHUB_JOB} --gpus-per-task=2 pytest -s -v --color=yes ./tests/test_hf_model.py
conda deactivate
conda deactivate
clear_env:
if: ${{ !cancelled() }}
needs: [HF_model]
12 changes: 0 additions & 12 deletions .github/workflows/lint_check.yaml
@@ -24,15 +24,3 @@ jobs:
run: |
pip install isort==5.12.0
isort --check --profile=black .
- name: lint-black
run: |
pip install black==22.8.0
BLACK_EXCLUDE_SETTINGS='\.venv/|\.local/|\.cache/|\.git/'
black --line-length=120 --check --exclude $BLACK_EXCLUDE_SETTINGS ./chat/web_demo.py
- name: lint-pylint
run: |
pip install pylint==v2.17.2
PYLINT_DISABLE_LIST="C0114,C0415,W0212,W0235,W0238,W0621,C0103,R1735,C2801,E0402,C0412,W0719,R1728,W1514,W0718,W0105,W0707,C0209,W0703,W1203"
pylint --rcfile .pylintrc --disable=$PYLINT_DISABLE_LIST ./chat/web_demo.py
81 changes: 36 additions & 45 deletions .pre-commit-config.yaml
@@ -1,53 +1,44 @@
# See https://pre-commit.com for more information
# See https://pre-commit.com/hooks.html for more hooks
repos:
- repo: https://github.com/psf/black
rev: '22.8.0'
- repo: https://github.com/PyCQA/flake8
rev: 5.0.4
hooks:
- id: black
args:
- --line-length=120
- repo: https://github.com/pycqa/isort
rev: '5.12.0'
- id: flake8
- repo: https://github.com/PyCQA/isort
rev: 5.11.5
hooks:
- id: isort
name: isort
files: "\\.(py)$"
args:
- --profile=black
- repo: https://github.com/PyCQA/flake8
rev: '3.8.4'
- id: isort
- repo: https://github.com/pre-commit/mirrors-yapf
rev: v0.32.0
hooks:
- id: flake8
args:
- --ignore=F403,F405,W504,W503,E203
- --max-line-length=120
- repo: https://github.com/pre-commit/pygrep-hooks
rev: v1.9.0
- id: yapf
- repo: https://github.com/codespell-project/codespell
rev: v2.2.1
hooks:
- id: python-check-blanket-noqa
- repo: https://github.com/pre-commit/pre-commit-hooks
- id: codespell
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v4.3.0
hooks:
- id: trailing-whitespace
- id: end-of-file-fixer
- id: check-added-large-files
args: ['--maxkb=100',--enforce-all]
- id: check-json
- id: check-docstring-first
- id: check-yaml
- id: debug-statements
- id: mixed-line-ending
- repo: https://github.com/PyCQA/pylint/
rev: v2.17.2
- id: trailing-whitespace
- id: check-yaml
- id: end-of-file-fixer
- id: requirements-txt-fixer
- id: double-quote-string-fixer
- id: check-merge-conflict
- id: fix-encoding-pragma
args: ["--remove"]
- id: mixed-line-ending
args: ["--fix=lf"]
- repo: https://github.com/executablebooks/mdformat
rev: 0.7.9
hooks:
- id: pylint
name: pylint
entry: pylint
language: system
types: [python]
args:
[
'--rcfile=.pylintrc',
'--disable=C0114,C0415,W0212,W0235,W0238,W0621,C0103,R1735,C2801,E0402,C0412,W0719,R1728,W1514,W0718,W0105,W0707,C0209,W0703,W1203'
]
- id: mdformat
args: ["--number", "--table-width", "200"]
additional_dependencies:
- mdformat-openmmlab
- mdformat_frontmatter
- linkify-it-py
- repo: https://github.com/myint/docformatter
rev: v1.3.1
hooks:
- id: docformatter
args: ["--in-place", "--wrap-descriptions", "79"]
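The side-by-side diff above is rendered as interleaved old/new lines, which makes the final state hard to read. Reading only the added (new-side) lines, the resulting `.pre-commit-config.yaml` appears to be approximately the following — a reconstruction, not an authoritative copy of the committed file:

```yaml
# See https://pre-commit.com for more information
# See https://pre-commit.com/hooks.html for more hooks
repos:
  - repo: https://github.com/PyCQA/flake8
    rev: 5.0.4
    hooks:
      - id: flake8
  - repo: https://github.com/PyCQA/isort
    rev: 5.11.5
    hooks:
      - id: isort
  - repo: https://github.com/pre-commit/mirrors-yapf
    rev: v0.32.0
    hooks:
      - id: yapf
  - repo: https://github.com/codespell-project/codespell
    rev: v2.2.1
    hooks:
      - id: codespell
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.3.0
    hooks:
      - id: trailing-whitespace
      - id: check-yaml
      - id: end-of-file-fixer
      - id: requirements-txt-fixer
      - id: double-quote-string-fixer
      - id: check-merge-conflict
      - id: fix-encoding-pragma
        args: ["--remove"]
      - id: mixed-line-ending
        args: ["--fix=lf"]
  - repo: https://github.com/executablebooks/mdformat
    rev: 0.7.9
    hooks:
      - id: mdformat
        args: ["--number", "--table-width", "200"]
        additional_dependencies:
          - mdformat-openmmlab
          - mdformat_frontmatter
          - linkify-it-py
  - repo: https://github.com/myint/docformatter
    rev: v1.3.1
    hooks:
      - id: docformatter
        args: ["--in-place", "--wrap-descriptions", "79"]
```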
86 changes: 45 additions & 41 deletions README.md


85 changes: 44 additions & 41 deletions README_zh-CN.md


14 changes: 7 additions & 7 deletions agent/README.md
@@ -4,18 +4,18 @@ English | [简体中文](README_zh-CN.md)

## Introduction

InternLM-Chat-7B v1.1 has been released as the first open-source model with code interpreter capabilities, supportting external tools such as Python code interpreter and search engine.
InternLM-Chat-7B v1.1 has been released as the first open-source model with code interpreter capabilities, supporting external tools such as Python code interpreter and search engine.

InternLM2-Chat, open sourced on January 17, 2024, further enhances its capabilities in code interpreter and general tool utilization. With improved and more generalized instruction understanding, tool selection, and reflection abilities, InternLM2-Chat can more reliably support complex agents and multi-step tool calling for more intricate tasks. InternLM2-Chat exhibits decent computational and reasoning abilities even without external tools, surpassing ChatGPT in mathematical performance. When combined with a code interpreter, InternLM2-Chat-20B obtains comparable results to GPT-4 on GSM8K and MATH. Leveraging strong foundational capabilities in mathematics and tools, InternLM2-Chat provides practical data analysis capabilities.

The results of InternLM2-Chat-20B on math code interpreter is as below:

| | GSM8K | MATH |
| :---: | :---: | :--: |
| InternLM2-Chat-20B | 79.6 | 32.5 |
| InternLM2-Chat-20B with Code Interpreter | 84.5 | 51.2 |
| ChatGPT (GPT-3.5) | 78.2 | 28.0 |
| GPT-4 | 91.4 | 45.8 |
| | GSM8K | MATH |
| :--------------------------------------: | :---: | :--: |
| InternLM2-Chat-20B | 79.6 | 32.5 |
| InternLM2-Chat-20B with Code Interpreter | 84.5 | 51.2 |
| ChatGPT (GPT-3.5) | 78.2 | 28.0 |
| GPT-4 | 91.4 | 45.8 |

## Usages

12 changes: 6 additions & 6 deletions agent/README_zh-CN.md
@@ -10,12 +10,12 @@ InternLM2-Chat 进一步提高了它在代码解释和通用工具调用方面

以下是 InternLM2-Chat-20B 在数学代码解释器上的结果。

| | GSM8K | MATH |
| :---: | :---: | :--: |
| InternLM2-Chat-20B 单纯依靠内在能力 | 79.6 | 32.5 |
| InternLM2-Chat-20B 配合代码解释器 | 84.5 | 51.2 |
| ChatGPT (GPT-3.5) | 78.2 | 28.0 |
| GPT-4 | 91.4 | 45.8 |
| | GSM8K | MATH |
| :---------------------------------: | :---: | :--: |
| InternLM2-Chat-20B 单纯依靠内在能力 | 79.6 | 32.5 |
| InternLM2-Chat-20B 配合代码解释器 | 84.5 | 51.2 |
| ChatGPT (GPT-3.5) | 78.2 | 28.0 |
| GPT-4 | 91.4 | 45.8 |

## 体验

2 changes: 1 addition & 1 deletion agent/lagent_zh-CN.md
@@ -40,7 +40,7 @@ streamlit run examples/react_web_demo.py

## 用 InternLM-Chat 构建一个 ReAct 智能体

**注意:**如果你想要启动一个 HuggingFace 的模型,请先运行 pip install -e .[all]
\*\*注意:\*\*如果你想要启动一个 HuggingFace 的模型,请先运行 pip install -e .\[all\]

```python
# Import necessary modules and classes from the "lagent" library.
29 changes: 15 additions & 14 deletions agent/pal_inference.md
@@ -21,20 +21,21 @@ python pal_inference.py \
```

Parameter explanation:
| Parameter | Description |
| :--------: | :--------------------: |
| \<model\> | Path to the model used for inference |
| \<out_dir\> | Generated code will be saved in the specified output folder |
| --dataset <dataset> | Name of the dataset used for code generation (defaults to gsm8k) |
| --max_length <length> | Maximum input token length for the model (defaults to 2048) |
| --top_p <threshold> | Probability threshold for the sum of candidate tokens (defaults to 0.8) |
| --eoh <end token> | User input end identifier (defaults to "") |
| --eoa <end token> | Model input end identifier (defaults to "") |
| --eos <end token> | System input end identifier (defaults to "") |
| --temperature, -t <temp> | Sampling temperature during generation (defaults to 1.0) |
| --time_out <time> | Maximum time (in seconds) for executing generated code (defaults to 100) |
| --verbose, -v | Print code error messages (optional) |
| --append, -a | Append output to historical results (optional) |

| Parameter | Description |
| :-----------------------: | :----------------------------------------------------------------------: |
| \<model> | Path to the model used for inference |
| \<out_dir> | Generated code will be saved in the specified output folder |
| --dataset <dataset> | Name of the dataset used for code generation (defaults to gsm8k) |
| --max_length <length> | Maximum input token length for the model (defaults to 2048) |
| --top_p <threshold> | Probability threshold for the sum of candidate tokens (defaults to 0.8) |
| --eoh <end token> | User input end identifier (defaults to "") |
| --eoa <end token> | Model input end identifier (defaults to "") |
| --eos <end token> | System input end identifier (defaults to "") |
| --temperature, -t <temp> | Sampling temperature during generation (defaults to 1.0) |
| --time_out <time> | Maximum time (in seconds) for executing generated code (defaults to 100) |
| --verbose, -v | Print code error messages (optional) |
| --append, -a | Append output to historical results (optional) |

A simple usage example is as follows:
