From 9cda6b2df5f212f6cb5322e8edfabc976f86395c Mon Sep 17 00:00:00 2001 From: Muhammad Luqman Date: Sun, 6 Aug 2023 13:41:27 +0700 Subject: [PATCH 1/8] build: migrate to poetry --- README.md | 172 ++++++++++++---------- poetry.lock | 366 +++++++++++++++++++++++++++++++++++++++++++++++ pyproject.toml | 33 ++--- requirements.txt | 17 ++- 4 files changed, 490 insertions(+), 98 deletions(-) create mode 100644 poetry.lock diff --git a/README.md b/README.md index 13a427f..786f6d9 100644 --- a/README.md +++ b/README.md @@ -1,10 +1,10 @@ # STADATA - Simplified Access to [WebAPI](https://webapi.bps.go.id/developer/) BPS -[![pyversion](https://img.shields.io/pypi/pyversions/stadata)](https://img.shields.io/pypi/pyversions/stadata) -[![pypi](https://img.shields.io/pypi/v/stadata)](https://img.shields.io/pypi/v/stadata) -[![status](https://img.shields.io/pypi/status/stadata)](https://img.shields.io/pypi/status/stadata) -[![downloads](https://img.shields.io/pypi/dm/stadata.svg)](https://img.shields.io/pypi/dm/stadata.svg) -[![sourcerank](https://img.shields.io/librariesio/sourcerank/pypi/stadata.svg)](https://img.shields.io/librariesio/sourcerank/pypi/stadata.svg) +[![pyversion](https://img.shields.io/pypi/pyversions/stadata-semver)](https://img.shields.io/pypi/pyversions/stadata-semver) +[![pypi](https://img.shields.io/pypi/v/stadata-semver)](https://img.shields.io/pypi/v/stadata-semver) +[![status](https://img.shields.io/pypi/status/stadata-semver)](https://img.shields.io/pypi/status/stadata-semver) +[![downloads](https://img.shields.io/pypi/dm/stadata-semver.svg)](https://img.shields.io/pypi/dm/stadata-semver.svg) +[![sourcerank](https://img.shields.io/librariesio/sourcerank/pypi/stadata-semver.svg)](https://img.shields.io/librariesio/sourcerank/pypi/stadata-semver.svg) [![contributors](https://img.shields.io/github/contributors/bps-statistics/stadata)](https://img.shields.io/github/contributors/bps-statistics/stadata) 
[![license](https://img.shields.io/github/license/bps-statistics/stadata)](https://img.shields.io/github/license/bps-statistics/stadata) @@ -25,25 +25,26 @@ The key features of STADATA include: - Easy Installation: The package can be easily installed using pip, making it accessible to Python users. - Convenient API Methods: STADATA offers simple and straightforward API methods for listing domains, static tables, dynamic tables, and viewing specific tables. - Language Support: Users can choose between Indonesian ('ind') and English ('eng') languages to display the retrieved data. - + ## Table of Contents -* [Installation](#installation) -* [Requirements](#requirements) -* [Usage](#usage) - * [Getting Started](#getting-started) - * [API Methods](#api-methods) - * [List Domain](#list-domain) - * [List Static Table](#list-static-table) - * [List Dynamic Table](#list-dynamic-table) - * [List Press Release](#list-press-release) - * [List Publication](#list-publication) - * [View Static Table](#view-static-table) - * [View Dynamic Table](#view-dynamic-table) - * [View Press Release](#view-press-release) - * [View Publication](#view-publication) +- [Installation](#installation) +- [Requirements](#requirements) +- [Usage](#usage) + - [Getting Started](#getting-started) + - [API Methods](#api-methods) + - [List Domain](#list-domain) + - [List Static Table](#list-static-table) + - [List Dynamic Table](#list-dynamic-table) + - [List Press Release](#list-press-release) + - [List Publication](#list-publication) + - [View Static Table](#view-static-table) + - [View Dynamic Table](#view-dynamic-table) + - [View Press Release](#view-press-release) + - [View Publication](#view-publication) ## Installation + To install STADATA, use the following pip command: ```python @@ -61,10 +62,9 @@ STADATA is designed for Python 3.7 and above. 
To use the package, the following With the necessary requirements in place, you can easily start utilizing STADATA to access the WebAPI BPS and retrieve statistical data from BPS - Statistics Indonesia directly in your Python scripts. - ## Usage -To begin using STADATA, you must first install the package and satisfy its requirements, as mentioned in the previous section. Once you have the package installed and the dependencies in place, you can start accessing statistical data from BPS - Statistics Indonesia through the WebAPI BPS. +To begin using STADATA, you must first install the package and satisfy its requirements, as mentioned in the previous section. Once you have the package installed and the dependencies in place, you can start accessing statistical data from BPS - Statistics Indonesia through the WebAPI BPS. ### Getting Started @@ -79,25 +79,25 @@ client = stadata.Client('token') ``` Parameter: -* `token` (str, *required*): Your personal API token provided by the WebAPI BPS Developer portal. This token is necessary to authenticate and access the API. Make sure to replace `token` with your actual API token. - +- `token` (str, _required_): Your personal API token provided by the WebAPI BPS Developer portal. This token is necessary to authenticate and access the API. Make sure to replace `token` with your actual API token. ### API Methods The STADATA package provides the following API methods: -* [List Domain](#list-domain): This method returns a list of BPS's webpage domains from the national level to the district/region level. Domains are used to specify the region from which data is requested. -* [List Static Table](#list-static-table): This method returns a list of all static tables available on the BPS's webpage. -* [List Dynamic Table](#list-dynamic-table): This method returns a list of all dynamic tables available on the BPS's webpage. 
-* [List Press Release](#list-press-release): This method returns a list of all press release available on the BPS's webpage. -* [List Publication](#list-publication): This method returns a list of all publication available on the BPS's webpage. -* [View Static Table](#view-static-table): This method returns data from a specific static table. -* [View Dynamic Table](#view-dynamic-table): This method returns data from a specific dynamic table. -* [View Press Release](#view-press-release): This method returns data from a specific press release content. -* [View Publication](#view-publicatione): This method returns data from a specific publication. +- [List Domain](#list-domain): This method returns a list of BPS's webpage domains from the national level to the district/region level. Domains are used to specify the region from which data is requested. +- [List Static Table](#list-static-table): This method returns a list of all static tables available on the BPS's webpage. +- [List Dynamic Table](#list-dynamic-table): This method returns a list of all dynamic tables available on the BPS's webpage. +- [List Press Release](#list-press-release): This method returns a list of all press releases available on the BPS's webpage. +- [List Publication](#list-publication): This method returns a list of all publications available on the BPS's webpage. +- [View Static Table](#view-static-table): This method returns data from a specific static table. +- [View Dynamic Table](#view-dynamic-table): This method returns data from a specific dynamic table. +- [View Press Release](#view-press-release): This method returns data from a specific press release. +- [View Publication](#view-publication): This method returns data from a specific publication. #### List Domain + This method returns a list of BPS's webpage domains from the national level to the district level. Domains are used to specify the region from which data is requested.
```python @@ -105,9 +105,11 @@ client.list_domain() ``` Returns: + - `domains`: A list of domain IDs for different regions, e.g., provinces, districts, or the national level. #### List Static Table + This method returns a list of all static tables available on the BPS's webpage. You can specify whether to get all static tables from all domains or only from specific domains. ```python # Get all static tables from all domains client.list_statictable(all=True) # Get static tables from specific domains client.list_statictable(all=False, domain=['domain_id-1', 'domain_id-2']) ``` Parameters: -- `all` (bool, *optional*): A boolean indicating whether to get all static tables from all domains (*True*) or only from specific domains (*False*). -- `domain` (list of str, *required* if `all` is *False*): A list of domain IDs which you want to retrieve static tables from. + +- `all` (bool, _optional_): A boolean indicating whether to get all static tables from all domains (_True_) or only from specific domains (_False_). +- `domain` (list of str, _required_ if `all` is _False_): A list of domain IDs from which you want to retrieve static tables. Returns: -- `data`: A list of static table information - ``` - table_id|title|subj_id|subj|updt_date|size|domain - ``` + +- `data`: A list of static table information + ``` + table_id|title|subj_id|subj|updt_date|size|domain + ``` #### List Dynamic Table -This method returns a list of all dynamic tables available on the BPS's webpage. You can specify whether to get all dynamic tables from all domains or only from specific domains. +This method returns a list of all dynamic tables available on the BPS's webpage. You can specify whether to get all dynamic tables from all domains or only from specific domains. ```python # Get all dynamic tables from all domains client.list_dynamictable(all=True) # Get dynamic tables from specific domains client.list_dynamictable(all=False, domain=['domain_id-1', 'domain_id-2']) ``` Parameters: -- `all` (bool, *optional*): A boolean indicating whether to get all static tables from all domains (*True*) or only from specific domains (*False*).
-- `domain` (list of str, *required* if `all` is *False*): A list of domain IDs which you want to retrieve static tables from. + +- `all` (bool, _optional_): A boolean indicating whether to get all dynamic tables from all domains (_True_) or only from specific domains (_False_). +- `domain` (list of str, _required_ if `all` is _False_): A list of domain IDs from which you want to retrieve dynamic tables. Returns: + - `data`: A list of dynamic table information - ``` - var_id|title|sub_id|sub_name|subcsa_id|subcsa_name|notes|vertical|unit|graph_id|graph_name|domain - ``` + ``` + var_id|title|sub_id|sub_name|subcsa_id|subcsa_name|notes|vertical|unit|graph_id|graph_name|domain + ``` #### List Publication -This method returns a list of all publication available on the BPS's webpage. You can specify whether to get all publication from all domains or only from specific domains. You can also specify month and year when publication published to get specific publication. +This method returns a list of all publications available on the BPS's webpage. You can specify whether to get all publications from all domains or only from specific domains. You can also specify the month and year of publication to retrieve specific publications. ```python # Get all publications from all domains client.list_publication(all=True, year='year') # Get publications from specific domains for a specific published month and year client.list_publication(all=False, domain=['domain_id-1', 'domain_id-2'], month='month', year='year') ``` Parameters: -- `all` (bool, *optional*): A boolean indicating whether to get all publication from all domains (*True*) or only from specific domains (*False*). -- `domain` (list of str, *required* if `all` is *False*): A list of domain IDs which you want to retrieve publication from. -- `month` (str, *optional*): A month when publication published. -- `year` (str, *required*): A year when publication published. + +- `all` (bool, _optional_): A boolean indicating whether to get all publications from all domains (_True_) or only from specific domains (_False_).
+- `domain` (list of str, _required_ if `all` is _False_): A list of domain IDs from which you want to retrieve publications. +- `month` (str, _optional_): The month in which the publication was published. +- `year` (str, _required_): The year in which the publication was published. Returns: + - `data`: A list of publications - ``` - pub_id|title|issn|sch_date|rl_date|updt_date|size|domain - ``` + ``` + pub_id|title|issn|sch_date|rl_date|updt_date|size|domain + ``` #### List Press Release -This method returns a list of all press release available on the BPS's webpage. You can specify whether to get all press release content from all domains or only from specific domains. You can also specify month and year when press release published to get specific press release. +This method returns a list of all press releases available on the BPS's webpage. You can specify whether to get all press releases from all domains or only from specific domains. You can also specify the month and year of release to retrieve specific press releases. ```python # Get all press releases from all domains client.list_pressrelease(all=True, year='year') # Get press releases from specific domains for a specific published month and year client.list_pressrelease(all=False, domain=['domain_id-1', 'domain_id-2'], month='month', year='year') ``` Parameters: -- `all` (bool, *optional*): A boolean indicating whether to get press release from all domains (*True*) or only from specific domains (*False*). -- `domain` (list of str, *required* if `all` is *False*): A list of domain IDs which you want to retrieve press release from. -- `month` (str, *optional*): A month when press release published. -- `year` (str, *required*): A year when press release published. + +- `all` (bool, _optional_): A boolean indicating whether to get press releases from all domains (_True_) or only from specific domains (_False_). +- `domain` (list of str, _required_ if `all` is _False_): A list of domain IDs from which you want to retrieve press releases. +- `month` (str, _optional_): The month in which the press release was published.
+- `year` (str, _required_): The year in which the press release was published. Returns: + - `data`: A list of press releases - ``` - brs_id|subj_id|subj|title|rl_date|updt_date|size|domain - ``` + ``` + brs_id|subj_id|subj|title|rl_date|updt_date|size|domain + ``` #### View Static Table + This method returns data from a specific static table. You need to provide the domain ID and the table ID, which you can get from the list of static tables. ```python @@ -218,62 +228,76 @@ client.view_statictable(domain='domain_id', table_id='table_id', lang='ind') ``` Parameters: -- `domain` (str, *required*): The domain ID where the static table is located. -- `table_id` (str, *required*): The ID of the specific static table you want to retrieve data from. -- `lang` (str, *optional*, default: `ind`): The language in which the table data should be displayed (`ind` for Indonesian, `eng` for English). + +- `domain` (str, _required_): The domain ID where the static table is located. +- `table_id` (str, _required_): The ID of the specific static table you want to retrieve data from. +- `lang` (str, _optional_, default: `ind`): The language in which the table data should be displayed (`ind` for Indonesian, `eng` for English). Returns: -- `data`: The static table data in the specified language. +- `data`: The static table data in the specified language. #### View Dynamic Table + This method returns data from a specific dynamic table. You need to provide the domain ID, variable ID, and the period (year) for the dynamic table. ```python # View dynamic table with a specific period client.view_dynamictable(domain='domain_id', var='variable_id', th='year') ``` + Parameters: -- `domain` (str, *required*): The domain ID where the dynamic table is located. -- `var` (str, *required*): The ID of the specific variable in the dynamic table you want to retrieve data from. -- `th` (str, *optional*, default: ''): The period (year) of the dynamic table data you want to retrieve.
+ +- `domain` (str, _required_): The domain ID where the dynamic table is located. +- `var` (str, _required_): The ID of the specific variable in the dynamic table you want to retrieve data from. +- `th` (str, _optional_, default: ''): The period (year) of the dynamic table data you want to retrieve. Returns: + - `data`: The dynamic table data for the specified variable and period. #### View Publication + This method returns data from a specific publication. You need to provide the domain ID and the publication ID. ```python # View a specific publication client.view_publication(domain='domain_id', idx='publication_id') ``` + Parameters: - +- `domain` (str, _required_): The domain ID where the publication is located. +- `idx` (str, _required_): The ID of the specific publication in the list of publications you want to retrieve data from. Returns: + - `Material`: Object interface for publication and press release content. Methods: + - `desc()`: Show all detail data of a specific publication - `download(url)`: Download publication content in PDF - #### View Press Release + This method returns data from a specific press release. You need to provide the domain ID and the press release ID. ```python # View a specific press release client.view_pressrelease(domain='domain_id', idx='press_release_id') ``` + Parameters: - +- `domain` (str, _required_): The domain ID where the press release is located.
+- `idx` (str, _required_): The ID of the specific press release in the list of press releases you want to retrieve data from. Returns: + - `Material`: Object interface for publication and press release content. Methods: + - `desc()`: Show all detail data of a specific press release -- `download(url)`: Download press release content in PDF \ No newline at end of file +- `download(url)`: Download press release content in PDF diff --git a/poetry.lock b/poetry.lock new file mode 100644 index 0000000..3a6a538 --- /dev/null +++ b/poetry.lock @@ -0,0 +1,366 @@ +# This file is automatically @generated by Poetry 1.5.1 and should not be changed by hand. + +[[package]] +name = "certifi" +version = "2023.7.22" +description = "Python package for providing Mozilla's CA Bundle." +optional = false +python-versions = ">=3.6" +files = [ + {file = "certifi-2023.7.22-py3-none-any.whl", hash = "sha256:92d6037539857d8206b8f6ae472e8b77db8058fec5937a1ef3f54304089edbb9"}, + {file = "certifi-2023.7.22.tar.gz", hash = "sha256:539cc1d13202e33ca466e88b2807e29f4c13049d6d87031a3c110744495cb082"}, +] + +[[package]] +name = "charset-normalizer" +version = "3.2.0" +description = "The Real First Universal Charset Detector. Open, modern and actively maintained alternative to Chardet."
+optional = false +python-versions = ">=3.7.0" +files = [ + {file = "charset-normalizer-3.2.0.tar.gz", hash = "sha256:3bb3d25a8e6c0aedd251753a79ae98a093c7e7b471faa3aa9a93a81431987ace"}, + {file = "charset_normalizer-3.2.0-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:0b87549028f680ca955556e3bd57013ab47474c3124dc069faa0b6545b6c9710"}, + {file = "charset_normalizer-3.2.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:7c70087bfee18a42b4040bb9ec1ca15a08242cf5867c58726530bdf3945672ed"}, + {file = "charset_normalizer-3.2.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:a103b3a7069b62f5d4890ae1b8f0597618f628b286b03d4bc9195230b154bfa9"}, + {file = "charset_normalizer-3.2.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:94aea8eff76ee6d1cdacb07dd2123a68283cb5569e0250feab1240058f53b623"}, + {file = "charset_normalizer-3.2.0-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:db901e2ac34c931d73054d9797383d0f8009991e723dab15109740a63e7f902a"}, + {file = "charset_normalizer-3.2.0-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:b0dac0ff919ba34d4df1b6131f59ce95b08b9065233446be7e459f95554c0dc8"}, + {file = "charset_normalizer-3.2.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:193cbc708ea3aca45e7221ae58f0fd63f933753a9bfb498a3b474878f12caaad"}, + {file = "charset_normalizer-3.2.0-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:09393e1b2a9461950b1c9a45d5fd251dc7c6f228acab64da1c9c0165d9c7765c"}, + {file = "charset_normalizer-3.2.0-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:baacc6aee0b2ef6f3d308e197b5d7a81c0e70b06beae1f1fcacffdbd124fe0e3"}, + {file = "charset_normalizer-3.2.0-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:bf420121d4c8dce6b889f0e8e4ec0ca34b7f40186203f06a946fa0276ba54029"}, + {file = "charset_normalizer-3.2.0-cp310-cp310-musllinux_1_1_ppc64le.whl", hash = 
"sha256:c04a46716adde8d927adb9457bbe39cf473e1e2c2f5d0a16ceb837e5d841ad4f"}, + {file = "charset_normalizer-3.2.0-cp310-cp310-musllinux_1_1_s390x.whl", hash = "sha256:aaf63899c94de41fe3cf934601b0f7ccb6b428c6e4eeb80da72c58eab077b19a"}, + {file = "charset_normalizer-3.2.0-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:d62e51710986674142526ab9f78663ca2b0726066ae26b78b22e0f5e571238dd"}, + {file = "charset_normalizer-3.2.0-cp310-cp310-win32.whl", hash = "sha256:04e57ab9fbf9607b77f7d057974694b4f6b142da9ed4a199859d9d4d5c63fe96"}, + {file = "charset_normalizer-3.2.0-cp310-cp310-win_amd64.whl", hash = "sha256:48021783bdf96e3d6de03a6e39a1171ed5bd7e8bb93fc84cc649d11490f87cea"}, + {file = "charset_normalizer-3.2.0-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:4957669ef390f0e6719db3613ab3a7631e68424604a7b448f079bee145da6e09"}, + {file = "charset_normalizer-3.2.0-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:46fb8c61d794b78ec7134a715a3e564aafc8f6b5e338417cb19fe9f57a5a9bf2"}, + {file = "charset_normalizer-3.2.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:f779d3ad205f108d14e99bb3859aa7dd8e9c68874617c72354d7ecaec2a054ac"}, + {file = "charset_normalizer-3.2.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f25c229a6ba38a35ae6e25ca1264621cc25d4d38dca2942a7fce0b67a4efe918"}, + {file = "charset_normalizer-3.2.0-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:2efb1bd13885392adfda4614c33d3b68dee4921fd0ac1d3988f8cbb7d589e72a"}, + {file = "charset_normalizer-3.2.0-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:1f30b48dd7fa1474554b0b0f3fdfdd4c13b5c737a3c6284d3cdc424ec0ffff3a"}, + {file = "charset_normalizer-3.2.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:246de67b99b6851627d945db38147d1b209a899311b1305dd84916f2b88526c6"}, + {file = 
"charset_normalizer-3.2.0-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:9bd9b3b31adcb054116447ea22caa61a285d92e94d710aa5ec97992ff5eb7cf3"}, + {file = "charset_normalizer-3.2.0-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:8c2f5e83493748286002f9369f3e6607c565a6a90425a3a1fef5ae32a36d749d"}, + {file = "charset_normalizer-3.2.0-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:3170c9399da12c9dc66366e9d14da8bf7147e1e9d9ea566067bbce7bb74bd9c2"}, + {file = "charset_normalizer-3.2.0-cp311-cp311-musllinux_1_1_ppc64le.whl", hash = "sha256:7a4826ad2bd6b07ca615c74ab91f32f6c96d08f6fcc3902ceeedaec8cdc3bcd6"}, + {file = "charset_normalizer-3.2.0-cp311-cp311-musllinux_1_1_s390x.whl", hash = "sha256:3b1613dd5aee995ec6d4c69f00378bbd07614702a315a2cf6c1d21461fe17c23"}, + {file = "charset_normalizer-3.2.0-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:9e608aafdb55eb9f255034709e20d5a83b6d60c054df0802fa9c9883d0a937aa"}, + {file = "charset_normalizer-3.2.0-cp311-cp311-win32.whl", hash = "sha256:f2a1d0fd4242bd8643ce6f98927cf9c04540af6efa92323e9d3124f57727bfc1"}, + {file = "charset_normalizer-3.2.0-cp311-cp311-win_amd64.whl", hash = "sha256:681eb3d7e02e3c3655d1b16059fbfb605ac464c834a0c629048a30fad2b27489"}, + {file = "charset_normalizer-3.2.0-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:c57921cda3a80d0f2b8aec7e25c8aa14479ea92b5b51b6876d975d925a2ea346"}, + {file = "charset_normalizer-3.2.0-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:41b25eaa7d15909cf3ac4c96088c1f266a9a93ec44f87f1d13d4a0e86c81b982"}, + {file = "charset_normalizer-3.2.0-cp37-cp37m-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:f058f6963fd82eb143c692cecdc89e075fa0828db2e5b291070485390b2f1c9c"}, + {file = "charset_normalizer-3.2.0-cp37-cp37m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:a7647ebdfb9682b7bb97e2a5e7cb6ae735b1c25008a70b906aecca294ee96cf4"}, + {file = 
"charset_normalizer-3.2.0-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:eef9df1eefada2c09a5e7a40991b9fc6ac6ef20b1372abd48d2794a316dc0449"}, + {file = "charset_normalizer-3.2.0-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:e03b8895a6990c9ab2cdcd0f2fe44088ca1c65ae592b8f795c3294af00a461c3"}, + {file = "charset_normalizer-3.2.0-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:ee4006268ed33370957f55bf2e6f4d263eaf4dc3cfc473d1d90baff6ed36ce4a"}, + {file = "charset_normalizer-3.2.0-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:c4983bf937209c57240cff65906b18bb35e64ae872da6a0db937d7b4af845dd7"}, + {file = "charset_normalizer-3.2.0-cp37-cp37m-musllinux_1_1_ppc64le.whl", hash = "sha256:3bb7fda7260735efe66d5107fb7e6af6a7c04c7fce9b2514e04b7a74b06bf5dd"}, + {file = "charset_normalizer-3.2.0-cp37-cp37m-musllinux_1_1_s390x.whl", hash = "sha256:72814c01533f51d68702802d74f77ea026b5ec52793c791e2da806a3844a46c3"}, + {file = "charset_normalizer-3.2.0-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:70c610f6cbe4b9fce272c407dd9d07e33e6bf7b4aa1b7ffb6f6ded8e634e3592"}, + {file = "charset_normalizer-3.2.0-cp37-cp37m-win32.whl", hash = "sha256:a401b4598e5d3f4a9a811f3daf42ee2291790c7f9d74b18d75d6e21dda98a1a1"}, + {file = "charset_normalizer-3.2.0-cp37-cp37m-win_amd64.whl", hash = "sha256:c0b21078a4b56965e2b12f247467b234734491897e99c1d51cee628da9786959"}, + {file = "charset_normalizer-3.2.0-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:95eb302ff792e12aba9a8b8f8474ab229a83c103d74a750ec0bd1c1eea32e669"}, + {file = "charset_normalizer-3.2.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:1a100c6d595a7f316f1b6f01d20815d916e75ff98c27a01ae817439ea7726329"}, + {file = "charset_normalizer-3.2.0-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:6339d047dab2780cc6220f46306628e04d9750f02f983ddb37439ca47ced7149"}, + {file = 
"charset_normalizer-3.2.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e4b749b9cc6ee664a3300bb3a273c1ca8068c46be705b6c31cf5d276f8628a94"}, + {file = "charset_normalizer-3.2.0-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:a38856a971c602f98472050165cea2cdc97709240373041b69030be15047691f"}, + {file = "charset_normalizer-3.2.0-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:f87f746ee241d30d6ed93969de31e5ffd09a2961a051e60ae6bddde9ec3583aa"}, + {file = "charset_normalizer-3.2.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:89f1b185a01fe560bc8ae5f619e924407efca2191b56ce749ec84982fc59a32a"}, + {file = "charset_normalizer-3.2.0-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:e1c8a2f4c69e08e89632defbfabec2feb8a8d99edc9f89ce33c4b9e36ab63037"}, + {file = "charset_normalizer-3.2.0-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:2f4ac36d8e2b4cc1aa71df3dd84ff8efbe3bfb97ac41242fbcfc053c67434f46"}, + {file = "charset_normalizer-3.2.0-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:a386ebe437176aab38c041de1260cd3ea459c6ce5263594399880bbc398225b2"}, + {file = "charset_normalizer-3.2.0-cp38-cp38-musllinux_1_1_ppc64le.whl", hash = "sha256:ccd16eb18a849fd8dcb23e23380e2f0a354e8daa0c984b8a732d9cfaba3a776d"}, + {file = "charset_normalizer-3.2.0-cp38-cp38-musllinux_1_1_s390x.whl", hash = "sha256:e6a5bf2cba5ae1bb80b154ed68a3cfa2fa00fde979a7f50d6598d3e17d9ac20c"}, + {file = "charset_normalizer-3.2.0-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:45de3f87179c1823e6d9e32156fb14c1927fcc9aba21433f088fdfb555b77c10"}, + {file = "charset_normalizer-3.2.0-cp38-cp38-win32.whl", hash = "sha256:1000fba1057b92a65daec275aec30586c3de2401ccdcd41f8a5c1e2c87078706"}, + {file = "charset_normalizer-3.2.0-cp38-cp38-win_amd64.whl", hash = "sha256:8b2c760cfc7042b27ebdb4a43a4453bd829a5742503599144d54a032c5dc7e9e"}, + {file = 
"charset_normalizer-3.2.0-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:855eafa5d5a2034b4621c74925d89c5efef61418570e5ef9b37717d9c796419c"}, + {file = "charset_normalizer-3.2.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:203f0c8871d5a7987be20c72442488a0b8cfd0f43b7973771640fc593f56321f"}, + {file = "charset_normalizer-3.2.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:e857a2232ba53ae940d3456f7533ce6ca98b81917d47adc3c7fd55dad8fab858"}, + {file = "charset_normalizer-3.2.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5e86d77b090dbddbe78867a0275cb4df08ea195e660f1f7f13435a4649e954e5"}, + {file = "charset_normalizer-3.2.0-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:c4fb39a81950ec280984b3a44f5bd12819953dc5fa3a7e6fa7a80db5ee853952"}, + {file = "charset_normalizer-3.2.0-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:2dee8e57f052ef5353cf608e0b4c871aee320dd1b87d351c28764fc0ca55f9f4"}, + {file = "charset_normalizer-3.2.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8700f06d0ce6f128de3ccdbc1acaea1ee264d2caa9ca05daaf492fde7c2a7200"}, + {file = "charset_normalizer-3.2.0-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1920d4ff15ce893210c1f0c0e9d19bfbecb7983c76b33f046c13a8ffbd570252"}, + {file = "charset_normalizer-3.2.0-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:c1c76a1743432b4b60ab3358c937a3fe1341c828ae6194108a94c69028247f22"}, + {file = "charset_normalizer-3.2.0-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:f7560358a6811e52e9c4d142d497f1a6e10103d3a6881f18d04dbce3729c0e2c"}, + {file = "charset_normalizer-3.2.0-cp39-cp39-musllinux_1_1_ppc64le.whl", hash = "sha256:c8063cf17b19661471ecbdb3df1c84f24ad2e389e326ccaf89e3fb2484d8dd7e"}, + {file = "charset_normalizer-3.2.0-cp39-cp39-musllinux_1_1_s390x.whl", hash = 
"sha256:cd6dbe0238f7743d0efe563ab46294f54f9bc8f4b9bcf57c3c666cc5bc9d1299"}, + {file = "charset_normalizer-3.2.0-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:1249cbbf3d3b04902ff081ffbb33ce3377fa6e4c7356f759f3cd076cc138d020"}, + {file = "charset_normalizer-3.2.0-cp39-cp39-win32.whl", hash = "sha256:6c409c0deba34f147f77efaa67b8e4bb83d2f11c8806405f76397ae5b8c0d1c9"}, + {file = "charset_normalizer-3.2.0-cp39-cp39-win_amd64.whl", hash = "sha256:7095f6fbfaa55defb6b733cfeb14efaae7a29f0b59d8cf213be4e7ca0b857b80"}, + {file = "charset_normalizer-3.2.0-py3-none-any.whl", hash = "sha256:8e098148dd37b4ce3baca71fb394c81dc5d9c7728c95df695d2dca218edf40e6"}, +] + +[[package]] +name = "colorama" +version = "0.4.6" +description = "Cross-platform colored terminal text." +optional = false +python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,!=3.5.*,!=3.6.*,>=2.7" +files = [ + {file = "colorama-0.4.6-py2.py3-none-any.whl", hash = "sha256:4f1d9991f5acc0ca119f9d443620b77f9d6b33703e51011c16baf57afb285fc6"}, + {file = "colorama-0.4.6.tar.gz", hash = "sha256:08695f5cb7ed6e0531a20572697297273c47b8cae5a63ffc6d6ed5c201be6e44"}, +] + +[[package]] +name = "idna" +version = "3.4" +description = "Internationalized Domain Names in Applications (IDNA)" +optional = false +python-versions = ">=3.5" +files = [ + {file = "idna-3.4-py3-none-any.whl", hash = "sha256:90b77e79eaa3eba6de819a0c442c0b4ceefc341a7a2ab77d7562bf49f425c5c2"}, + {file = "idna-3.4.tar.gz", hash = "sha256:814f528e8dead7d329833b91c5faa87d60bf71824cd12a7530b5526063d02cb4"}, +] + +[[package]] +name = "numpy" +version = "1.24.4" +description = "Fundamental package for array computing in Python" +optional = false +python-versions = ">=3.8" +files = [ + {file = "numpy-1.24.4-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:c0bfb52d2169d58c1cdb8cc1f16989101639b34c7d3ce60ed70b19c63eba0b64"}, + {file = "numpy-1.24.4-cp310-cp310-macosx_11_0_arm64.whl", hash = 
"sha256:ed094d4f0c177b1b8e7aa9cba7d6ceed51c0e569a5318ac0ca9a090680a6a1b1"}, + {file = "numpy-1.24.4-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:79fc682a374c4a8ed08b331bef9c5f582585d1048fa6d80bc6c35bc384eee9b4"}, + {file = "numpy-1.24.4-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7ffe43c74893dbf38c2b0a1f5428760a1a9c98285553c89e12d70a96a7f3a4d6"}, + {file = "numpy-1.24.4-cp310-cp310-win32.whl", hash = "sha256:4c21decb6ea94057331e111a5bed9a79d335658c27ce2adb580fb4d54f2ad9bc"}, + {file = "numpy-1.24.4-cp310-cp310-win_amd64.whl", hash = "sha256:b4bea75e47d9586d31e892a7401f76e909712a0fd510f58f5337bea9572c571e"}, + {file = "numpy-1.24.4-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:f136bab9c2cfd8da131132c2cf6cc27331dd6fae65f95f69dcd4ae3c3639c810"}, + {file = "numpy-1.24.4-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:e2926dac25b313635e4d6cf4dc4e51c8c0ebfed60b801c799ffc4c32bf3d1254"}, + {file = "numpy-1.24.4-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:222e40d0e2548690405b0b3c7b21d1169117391c2e82c378467ef9ab4c8f0da7"}, + {file = "numpy-1.24.4-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7215847ce88a85ce39baf9e89070cb860c98fdddacbaa6c0da3ffb31b3350bd5"}, + {file = "numpy-1.24.4-cp311-cp311-win32.whl", hash = "sha256:4979217d7de511a8d57f4b4b5b2b965f707768440c17cb70fbf254c4b225238d"}, + {file = "numpy-1.24.4-cp311-cp311-win_amd64.whl", hash = "sha256:b7b1fc9864d7d39e28f41d089bfd6353cb5f27ecd9905348c24187a768c79694"}, + {file = "numpy-1.24.4-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:1452241c290f3e2a312c137a9999cdbf63f78864d63c79039bda65ee86943f61"}, + {file = "numpy-1.24.4-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:04640dab83f7c6c85abf9cd729c5b65f1ebd0ccf9de90b270cd61935eef0197f"}, + {file = "numpy-1.24.4-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = 
"sha256:a5425b114831d1e77e4b5d812b69d11d962e104095a5b9c3b641a218abcc050e"}, + {file = "numpy-1.24.4-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:dd80e219fd4c71fc3699fc1dadac5dcf4fd882bfc6f7ec53d30fa197b8ee22dc"}, + {file = "numpy-1.24.4-cp38-cp38-win32.whl", hash = "sha256:4602244f345453db537be5314d3983dbf5834a9701b7723ec28923e2889e0bb2"}, + {file = "numpy-1.24.4-cp38-cp38-win_amd64.whl", hash = "sha256:692f2e0f55794943c5bfff12b3f56f99af76f902fc47487bdfe97856de51a706"}, + {file = "numpy-1.24.4-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:2541312fbf09977f3b3ad449c4e5f4bb55d0dbf79226d7724211acc905049400"}, + {file = "numpy-1.24.4-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:9667575fb6d13c95f1b36aca12c5ee3356bf001b714fc354eb5465ce1609e62f"}, + {file = "numpy-1.24.4-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f3a86ed21e4f87050382c7bc96571755193c4c1392490744ac73d660e8f564a9"}, + {file = "numpy-1.24.4-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d11efb4dbecbdf22508d55e48d9c8384db795e1b7b51ea735289ff96613ff74d"}, + {file = "numpy-1.24.4-cp39-cp39-win32.whl", hash = "sha256:6620c0acd41dbcb368610bb2f4d83145674040025e5536954782467100aa8835"}, + {file = "numpy-1.24.4-cp39-cp39-win_amd64.whl", hash = "sha256:befe2bf740fd8373cf56149a5c23a0f601e82869598d41f8e188a0e9869926f8"}, + {file = "numpy-1.24.4-pp38-pypy38_pp73-macosx_10_9_x86_64.whl", hash = "sha256:31f13e25b4e304632a4619d0e0777662c2ffea99fcae2029556b17d8ff958aef"}, + {file = "numpy-1.24.4-pp38-pypy38_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:95f7ac6540e95bc440ad77f56e520da5bf877f87dca58bd095288dce8940532a"}, + {file = "numpy-1.24.4-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:e98f220aa76ca2a977fe435f5b04d7b3470c0a2e6312907b37ba6068f26787f2"}, + {file = "numpy-1.24.4.tar.gz", hash = "sha256:80f5e3a4e498641401868df4208b74581206afbee7cf7b8329daae82676d9463"}, +] + +[[package]] +name = "numpy" 
+version = "1.25.2" +description = "Fundamental package for array computing in Python" +optional = false +python-versions = ">=3.9" +files = [ + {file = "numpy-1.25.2-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:db3ccc4e37a6873045580d413fe79b68e47a681af8db2e046f1dacfa11f86eb3"}, + {file = "numpy-1.25.2-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:90319e4f002795ccfc9050110bbbaa16c944b1c37c0baeea43c5fb881693ae1f"}, + {file = "numpy-1.25.2-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:dfe4a913e29b418d096e696ddd422d8a5d13ffba4ea91f9f60440a3b759b0187"}, + {file = "numpy-1.25.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f08f2e037bba04e707eebf4bc934f1972a315c883a9e0ebfa8a7756eabf9e357"}, + {file = "numpy-1.25.2-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:bec1e7213c7cb00d67093247f8c4db156fd03075f49876957dca4711306d39c9"}, + {file = "numpy-1.25.2-cp310-cp310-win32.whl", hash = "sha256:7dc869c0c75988e1c693d0e2d5b26034644399dd929bc049db55395b1379e044"}, + {file = "numpy-1.25.2-cp310-cp310-win_amd64.whl", hash = "sha256:834b386f2b8210dca38c71a6e0f4fd6922f7d3fcff935dbe3a570945acb1b545"}, + {file = "numpy-1.25.2-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:c5462d19336db4560041517dbb7759c21d181a67cb01b36ca109b2ae37d32418"}, + {file = "numpy-1.25.2-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:c5652ea24d33585ea39eb6a6a15dac87a1206a692719ff45d53c5282e66d4a8f"}, + {file = "numpy-1.25.2-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0d60fbae8e0019865fc4784745814cff1c421df5afee233db6d88ab4f14655a2"}, + {file = "numpy-1.25.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:60e7f0f7f6d0eee8364b9a6304c2845b9c491ac706048c7e8cf47b83123b8dbf"}, + {file = "numpy-1.25.2-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:bb33d5a1cf360304754913a350edda36d5b8c5331a8237268c48f91253c3a364"}, + {file = "numpy-1.25.2-cp311-cp311-win32.whl", 
hash = "sha256:5883c06bb92f2e6c8181df7b39971a5fb436288db58b5a1c3967702d4278691d"}, + {file = "numpy-1.25.2-cp311-cp311-win_amd64.whl", hash = "sha256:5c97325a0ba6f9d041feb9390924614b60b99209a71a69c876f71052521d42a4"}, + {file = "numpy-1.25.2-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:b79e513d7aac42ae918db3ad1341a015488530d0bb2a6abcbdd10a3a829ccfd3"}, + {file = "numpy-1.25.2-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:eb942bfb6f84df5ce05dbf4b46673ffed0d3da59f13635ea9b926af3deb76926"}, + {file = "numpy-1.25.2-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3e0746410e73384e70d286f93abf2520035250aad8c5714240b0492a7302fdca"}, + {file = "numpy-1.25.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d7806500e4f5bdd04095e849265e55de20d8cc4b661b038957354327f6d9b295"}, + {file = "numpy-1.25.2-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:8b77775f4b7df768967a7c8b3567e309f617dd5e99aeb886fa14dc1a0791141f"}, + {file = "numpy-1.25.2-cp39-cp39-win32.whl", hash = "sha256:2792d23d62ec51e50ce4d4b7d73de8f67a2fd3ea710dcbc8563a51a03fb07b01"}, + {file = "numpy-1.25.2-cp39-cp39-win_amd64.whl", hash = "sha256:76b4115d42a7dfc5d485d358728cdd8719be33cc5ec6ec08632a5d6fca2ed380"}, + {file = "numpy-1.25.2-pp39-pypy39_pp73-macosx_10_9_x86_64.whl", hash = "sha256:1a1329e26f46230bf77b02cc19e900db9b52f398d6722ca853349a782d4cff55"}, + {file = "numpy-1.25.2-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4c3abc71e8b6edba80a01a52e66d83c5d14433cbcd26a40c329ec7ed09f37901"}, + {file = "numpy-1.25.2-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:1b9735c27cea5d995496f46a8b1cd7b408b3f34b6d50459d9ac8fe3a20cc17bf"}, + {file = "numpy-1.25.2.tar.gz", hash = "sha256:fd608e19c8d7c55021dffd43bfe5492fab8cc105cc8986f813f8c3c048b38760"}, +] + +[[package]] +name = "pandas" +version = "2.0.3" +description = "Powerful data structures for data analysis, time series, and statistics" +optional = false +python-versions = 
">=3.8" +files = [ + {file = "pandas-2.0.3-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:e4c7c9f27a4185304c7caf96dc7d91bc60bc162221152de697c98eb0b2648dd8"}, + {file = "pandas-2.0.3-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:f167beed68918d62bffb6ec64f2e1d8a7d297a038f86d4aed056b9493fca407f"}, + {file = "pandas-2.0.3-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ce0c6f76a0f1ba361551f3e6dceaff06bde7514a374aa43e33b588ec10420183"}, + {file = "pandas-2.0.3-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ba619e410a21d8c387a1ea6e8a0e49bb42216474436245718d7f2e88a2f8d7c0"}, + {file = "pandas-2.0.3-cp310-cp310-win32.whl", hash = "sha256:3ef285093b4fe5058eefd756100a367f27029913760773c8bf1d2d8bebe5d210"}, + {file = "pandas-2.0.3-cp310-cp310-win_amd64.whl", hash = "sha256:9ee1a69328d5c36c98d8e74db06f4ad518a1840e8ccb94a4ba86920986bb617e"}, + {file = "pandas-2.0.3-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:b084b91d8d66ab19f5bb3256cbd5ea661848338301940e17f4492b2ce0801fe8"}, + {file = "pandas-2.0.3-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:37673e3bdf1551b95bf5d4ce372b37770f9529743d2498032439371fc7b7eb26"}, + {file = "pandas-2.0.3-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:b9cb1e14fdb546396b7e1b923ffaeeac24e4cedd14266c3497216dd4448e4f2d"}, + {file = "pandas-2.0.3-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d9cd88488cceb7635aebb84809d087468eb33551097d600c6dad13602029c2df"}, + {file = "pandas-2.0.3-cp311-cp311-win32.whl", hash = "sha256:694888a81198786f0e164ee3a581df7d505024fbb1f15202fc7db88a71d84ebd"}, + {file = "pandas-2.0.3-cp311-cp311-win_amd64.whl", hash = "sha256:6a21ab5c89dcbd57f78d0ae16630b090eec626360085a4148693def5452d8a6b"}, + {file = "pandas-2.0.3-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:9e4da0d45e7f34c069fe4d522359df7d23badf83abc1d1cef398895822d11061"}, + {file = 
"pandas-2.0.3-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:32fca2ee1b0d93dd71d979726b12b61faa06aeb93cf77468776287f41ff8fdc5"}, + {file = "pandas-2.0.3-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:258d3624b3ae734490e4d63c430256e716f488c4fcb7c8e9bde2d3aa46c29089"}, + {file = "pandas-2.0.3-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9eae3dc34fa1aa7772dd3fc60270d13ced7346fcbcfee017d3132ec625e23bb0"}, + {file = "pandas-2.0.3-cp38-cp38-win32.whl", hash = "sha256:f3421a7afb1a43f7e38e82e844e2bca9a6d793d66c1a7f9f0ff39a795bbc5e02"}, + {file = "pandas-2.0.3-cp38-cp38-win_amd64.whl", hash = "sha256:69d7f3884c95da3a31ef82b7618af5710dba95bb885ffab339aad925c3e8ce78"}, + {file = "pandas-2.0.3-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:5247fb1ba347c1261cbbf0fcfba4a3121fbb4029d95d9ef4dc45406620b25c8b"}, + {file = "pandas-2.0.3-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:81af086f4543c9d8bb128328b5d32e9986e0c84d3ee673a2ac6fb57fd14f755e"}, + {file = "pandas-2.0.3-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:1994c789bf12a7c5098277fb43836ce090f1073858c10f9220998ac74f37c69b"}, + {file = "pandas-2.0.3-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5ec591c48e29226bcbb316e0c1e9423622bc7a4eaf1ef7c3c9fa1a3981f89641"}, + {file = "pandas-2.0.3-cp39-cp39-win32.whl", hash = "sha256:04dbdbaf2e4d46ca8da896e1805bc04eb85caa9a82e259e8eed00254d5e0c682"}, + {file = "pandas-2.0.3-cp39-cp39-win_amd64.whl", hash = "sha256:1168574b036cd8b93abc746171c9b4f1b83467438a5e45909fed645cf8692dbc"}, + {file = "pandas-2.0.3.tar.gz", hash = "sha256:c02f372a88e0d17f36d3093a644c73cfc1788e876a7c4bcb4020a77512e2043c"}, +] + +[package.dependencies] +numpy = [ + {version = ">=1.20.3", markers = "python_version < \"3.10\""}, + {version = ">=1.21.0", markers = "python_version >= \"3.10\""}, + {version = ">=1.23.2", markers = "python_version >= \"3.11\""}, +] +python-dateutil = ">=2.8.2" +pytz = 
">=2020.1" +tzdata = ">=2022.1" + +[package.extras] +all = ["PyQt5 (>=5.15.1)", "SQLAlchemy (>=1.4.16)", "beautifulsoup4 (>=4.9.3)", "bottleneck (>=1.3.2)", "brotlipy (>=0.7.0)", "fastparquet (>=0.6.3)", "fsspec (>=2021.07.0)", "gcsfs (>=2021.07.0)", "html5lib (>=1.1)", "hypothesis (>=6.34.2)", "jinja2 (>=3.0.0)", "lxml (>=4.6.3)", "matplotlib (>=3.6.1)", "numba (>=0.53.1)", "numexpr (>=2.7.3)", "odfpy (>=1.4.1)", "openpyxl (>=3.0.7)", "pandas-gbq (>=0.15.0)", "psycopg2 (>=2.8.6)", "pyarrow (>=7.0.0)", "pymysql (>=1.0.2)", "pyreadstat (>=1.1.2)", "pytest (>=7.3.2)", "pytest-asyncio (>=0.17.0)", "pytest-xdist (>=2.2.0)", "python-snappy (>=0.6.0)", "pyxlsb (>=1.0.8)", "qtpy (>=2.2.0)", "s3fs (>=2021.08.0)", "scipy (>=1.7.1)", "tables (>=3.6.1)", "tabulate (>=0.8.9)", "xarray (>=0.21.0)", "xlrd (>=2.0.1)", "xlsxwriter (>=1.4.3)", "zstandard (>=0.15.2)"] +aws = ["s3fs (>=2021.08.0)"] +clipboard = ["PyQt5 (>=5.15.1)", "qtpy (>=2.2.0)"] +compression = ["brotlipy (>=0.7.0)", "python-snappy (>=0.6.0)", "zstandard (>=0.15.2)"] +computation = ["scipy (>=1.7.1)", "xarray (>=0.21.0)"] +excel = ["odfpy (>=1.4.1)", "openpyxl (>=3.0.7)", "pyxlsb (>=1.0.8)", "xlrd (>=2.0.1)", "xlsxwriter (>=1.4.3)"] +feather = ["pyarrow (>=7.0.0)"] +fss = ["fsspec (>=2021.07.0)"] +gcp = ["gcsfs (>=2021.07.0)", "pandas-gbq (>=0.15.0)"] +hdf5 = ["tables (>=3.6.1)"] +html = ["beautifulsoup4 (>=4.9.3)", "html5lib (>=1.1)", "lxml (>=4.6.3)"] +mysql = ["SQLAlchemy (>=1.4.16)", "pymysql (>=1.0.2)"] +output-formatting = ["jinja2 (>=3.0.0)", "tabulate (>=0.8.9)"] +parquet = ["pyarrow (>=7.0.0)"] +performance = ["bottleneck (>=1.3.2)", "numba (>=0.53.1)", "numexpr (>=2.7.1)"] +plot = ["matplotlib (>=3.6.1)"] +postgresql = ["SQLAlchemy (>=1.4.16)", "psycopg2 (>=2.8.6)"] +spss = ["pyreadstat (>=1.1.2)"] +sql-other = ["SQLAlchemy (>=1.4.16)"] +test = ["hypothesis (>=6.34.2)", "pytest (>=7.3.2)", "pytest-asyncio (>=0.17.0)", "pytest-xdist (>=2.2.0)"] +xml = ["lxml (>=4.6.3)"] + +[[package]] +name = 
"python-dateutil" +version = "2.8.2" +description = "Extensions to the standard Python datetime module" +optional = false +python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,>=2.7" +files = [ + {file = "python-dateutil-2.8.2.tar.gz", hash = "sha256:0123cacc1627ae19ddf3c27a5de5bd67ee4586fbdd6440d9748f8abb483d3e86"}, + {file = "python_dateutil-2.8.2-py2.py3-none-any.whl", hash = "sha256:961d03dc3453ebbc59dbdea9e4e11c5651520a876d0f4db161e8674aae935da9"}, +] + +[package.dependencies] +six = ">=1.5" + +[[package]] +name = "pytz" +version = "2023.3" +description = "World timezone definitions, modern and historical" +optional = false +python-versions = "*" +files = [ + {file = "pytz-2023.3-py2.py3-none-any.whl", hash = "sha256:a151b3abb88eda1d4e34a9814df37de2a80e301e68ba0fd856fb9b46bfbbbffb"}, + {file = "pytz-2023.3.tar.gz", hash = "sha256:1d8ce29db189191fb55338ee6d0387d82ab59f3d00eac103412d64e0ebd0c588"}, +] + +[[package]] +name = "requests" +version = "2.31.0" +description = "Python HTTP for Humans." 
+optional = false +python-versions = ">=3.7" +files = [ + {file = "requests-2.31.0-py3-none-any.whl", hash = "sha256:58cd2187c01e70e6e26505bca751777aa9f2ee0b7f4300988b709f44e013003f"}, + {file = "requests-2.31.0.tar.gz", hash = "sha256:942c5a758f98d790eaed1a29cb6eefc7ffb0d1cf7af05c3d2791656dbd6ad1e1"}, +] + +[package.dependencies] +certifi = ">=2017.4.17" +charset-normalizer = ">=2,<4" +idna = ">=2.5,<4" +urllib3 = ">=1.21.1,<3" + +[package.extras] +socks = ["PySocks (>=1.5.6,!=1.5.7)"] +use-chardet-on-py3 = ["chardet (>=3.0.2,<6)"] + +[[package]] +name = "six" +version = "1.16.0" +description = "Python 2 and 3 compatibility utilities" +optional = false +python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*" +files = [ + {file = "six-1.16.0-py2.py3-none-any.whl", hash = "sha256:8abb2f1d86890a2dfb989f9a77cfcfd3e47c2a354b01111771326f8aa26e0254"}, + {file = "six-1.16.0.tar.gz", hash = "sha256:1e61c37477a1626458e36f7b1d82aa5c9b094fa4802892072e49de9c60c4c926"}, +] + +[[package]] +name = "tqdm" +version = "4.65.0" +description = "Fast, Extensible Progress Meter" +optional = false +python-versions = ">=3.7" +files = [ + {file = "tqdm-4.65.0-py3-none-any.whl", hash = "sha256:c4f53a17fe37e132815abceec022631be8ffe1b9381c2e6e30aa70edc99e9671"}, + {file = "tqdm-4.65.0.tar.gz", hash = "sha256:1871fb68a86b8fb3b59ca4cdd3dcccbc7e6d613eeed31f4c332531977b89beb5"}, +] + +[package.dependencies] +colorama = {version = "*", markers = "platform_system == \"Windows\""} + +[package.extras] +dev = ["py-make (>=0.1.0)", "twine", "wheel"] +notebook = ["ipywidgets (>=6)"] +slack = ["slack-sdk"] +telegram = ["requests"] + +[[package]] +name = "tzdata" +version = "2023.3" +description = "Provider of IANA time zone data" +optional = false +python-versions = ">=2" +files = [ + {file = "tzdata-2023.3-py2.py3-none-any.whl", hash = "sha256:7e65763eef3120314099b6939b5546db7adce1e7d6f2e179e3df563c70511eda"}, + {file = "tzdata-2023.3.tar.gz", hash = 
"sha256:11ef1e08e54acb0d4f95bdb1be05da659673de4acbd21bf9c69e94cc5e907a3a"}, +] + +[[package]] +name = "urllib3" +version = "2.0.4" +description = "HTTP library with thread-safe connection pooling, file post, and more." +optional = false +python-versions = ">=3.7" +files = [ + {file = "urllib3-2.0.4-py3-none-any.whl", hash = "sha256:de7df1803967d2c2a98e4b11bb7d6bd9210474c46e8a0401514e3a42a75ebde4"}, + {file = "urllib3-2.0.4.tar.gz", hash = "sha256:8d22f86aae8ef5e410d4f539fde9ce6b2113a001bb4d189e0aed70642d602b11"}, +] + +[package.extras] +brotli = ["brotli (>=1.0.9)", "brotlicffi (>=0.8.0)"] +secure = ["certifi", "cryptography (>=1.9)", "idna (>=2.0.0)", "pyopenssl (>=17.1.0)", "urllib3-secure-extra"] +socks = ["pysocks (>=1.5.6,!=1.5.7,<2.0)"] +zstd = ["zstandard (>=0.18.0)"] + +[metadata] +lock-version = "2.0" +python-versions = "^3.8" +content-hash = "37218d8b537e0d4f5686aa1d2b67ebd8dac946fb22625f4cfc9c4679e871f2c5" diff --git a/pyproject.toml b/pyproject.toml index f3132b3..7ba0b30 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -1,17 +1,9 @@ -[build-system] -requires = ["setuptools", "setuptools-scm"] -build-backend = "setuptools.build_meta" - -[project] -name = "stadata" +[tool.poetry] +name = "stadata-semver" version = "0.1.1" description = "API for get all statistics data from BPS" urls = {homepage = "https://github.com/bps-statistics/stadata"} -requires-python = ">=3.7" -authors = [ - {name = "Ignatius Sandyawan", email = "isandyawan@gmail.com"} -] -license = {text = "MIT"} +license = "MIT" classifiers = [ "Development Status :: 3 - Alpha", "Intended Audience :: Developers", @@ -31,18 +23,17 @@ classifiers = [ keywords = [ "bps dataset utility indonesia" ] -dynamic = ["readme", "dependencies"] +packages = [{include = "stadata_semver"}] -[tool.setuptools] -packages = ["stadata"] +[tool.poetry.dependencies] +python = "^3.8" +requests = "^2.31.0" +tqdm = "^4.65.0" +pandas = "^2.0.3" -# Taken from 
https://github.com/pypa/setuptools/blob/d138ec08efc2dbaebb8752e215e324f38bd807a2/setuptools/tests/config/test_pyprojecttoml.py#L68 -[tool.setuptools.dynamic.readme] -file = ["README.md"] -content-type = "text/markdown" - -[tool.setuptools.dynamic.dependencies] -file = ["requirements.txt"] +[build-system] +requires = ["poetry-core"] +build-backend = "poetry.core.masonry.api" [tool.coverage.run] branch = true diff --git a/requirements.txt b/requirements.txt index 06d760b..25e982a 100644 --- a/requirements.txt +++ b/requirements.txt @@ -1,3 +1,14 @@ -requests>=2.0 -pandas>=0.25 -tqdm \ No newline at end of file +certifi==2023.7.22 ; python_version >= "3.8" and python_version < "4.0" +charset-normalizer==3.2.0 ; python_version >= "3.8" and python_version < "4.0" +colorama==0.4.6 ; python_version >= "3.8" and python_version < "4.0" and platform_system == "Windows" +idna==3.4 ; python_version >= "3.8" and python_version < "4.0" +numpy==1.24.4 ; python_version >= "3.8" and python_version < "3.9" +numpy==1.25.2 ; python_version >= "3.9" and python_version < "4.0" +pandas==2.0.3 ; python_version >= "3.8" and python_version < "4.0" +python-dateutil==2.8.2 ; python_version >= "3.8" and python_version < "4.0" +pytz==2023.3 ; python_version >= "3.8" and python_version < "4.0" +requests==2.31.0 ; python_version >= "3.8" and python_version < "4.0" +six==1.16.0 ; python_version >= "3.8" and python_version < "4.0" +tqdm==4.65.0 ; python_version >= "3.8" and python_version < "4.0" +tzdata==2023.3 ; python_version >= "3.8" and python_version < "4.0" +urllib3==2.0.4 ; python_version >= "3.8" and python_version < "4.0" From 4b205ac8eaa6c1041901e048a6e77e4ef523dcb2 Mon Sep 17 00:00:00 2001 From: Muhammad Luqman Date: Sun, 6 Aug 2023 13:41:41 +0700 Subject: [PATCH 2/8] ci: add ci with github action --- .github/workflows/ci.yml | 30 ++++++++++++++++++++++++++++++ 1 file changed, 30 insertions(+) create mode 100644 .github/workflows/ci.yml diff --git a/.github/workflows/ci.yml 
b/.github/workflows/ci.yml new file mode 100644 index 0000000..a12d066 --- /dev/null +++ b/.github/workflows/ci.yml @@ -0,0 +1,30 @@ +# This workflow will install Python dependencies, run tests and lint with a single version of Python +# For more information see: https://help.github.com/actions/language-and-framework-guides/using-python-with-github-actions + +name: CI +on: + push: + branches: [main] + pull_request: + branches: [main] + +jobs: + Release: + if: github.event_name == 'push' && github.ref == 'refs/heads/main' && !contains(github.event.head_commit.message, 'chore(release):') + runs-on: ubuntu-latest + steps: + - uses: actions/setup-python@v3 + with: + python-version: 3.8.17 + - name: Checkout Code + uses: actions/checkout@v3 + - name: Install Python Poetry + uses: abatilo/actions-poetry@v2.1.0 + with: + poetry-version: 1.5.1 + - name: Semantic Release + uses: bjoluc/semantic-release-config-poetry@v2 + with: + GITHUB_TOKEN: ${{ secrets.GH_TOKEN }} + PYPI_TOKEN: ${{ secrets.PYPI_TOKEN }} + RELEASE_BRANCH: main From 8d0c8f83dd51916cb45703685c0ec3e7f454fd14 Mon Sep 17 00:00:00 2001 From: Muhammad Luqman Date: Sun, 6 Aug 2023 13:51:24 +0700 Subject: [PATCH 3/8] refactor: use black code style --- .vscode/settings.json | 6 + stadata/main.py | 789 ++++++++++++++++++++++++++---------------- stadata/material.py | 20 +- 3 files changed, 500 insertions(+), 315 deletions(-) create mode 100644 .vscode/settings.json diff --git a/.vscode/settings.json b/.vscode/settings.json new file mode 100644 index 0000000..751bf1b --- /dev/null +++ b/.vscode/settings.json @@ -0,0 +1,6 @@ +{ + "[python]": { + "editor.defaultFormatter": "ms-python.black-formatter", + "editor.formatOnSave": true + } +} diff --git a/stadata/main.py b/stadata/main.py index a0ea59c..82391ed 100644 --- a/stadata/main.py +++ b/stadata/main.py @@ -1,25 +1,44 @@ -import requests +import html import warnings + import pandas as pd +import requests from tqdm import tqdm -import html + from .material import 
Material BASE_URL = "https://webapi.bps.go.id/v1/" + class Client(object): """ Object to connect with webapi """ + TOKEN = "" + def __init__(self, token): """ Initialize client object :param token: token from webapi website """ self.TOKEN = token - - def __get_list(self,lang = 'ind',domain='0000',model='statictable',keyword='',page=1,var='',turvar='',vervar='',th='',turth='',month='',year=''): + + def __get_list( + self, + lang="ind", + domain="0000", + model="statictable", + keyword="", + page=1, + var="", + turvar="", + vervar="", + th="", + turth="", + month="", + year="", + ): """ Method to get list data based on model :param lang: Language to display data. Default value: ind. Allowed values: "ind", "eng" @@ -35,29 +54,58 @@ def __get_list(self,lang = 'ind',domain='0000',model='statictable',keyword='',pa :param month: Month of publication or press release in int :param year: Year of publication or press release """ - if(model=='data'): - if(th != ''): - url_th = '/th/'f'{th}' + if model == "data": + if th != "": + url_th = "/th/" f"{th}" else: - url_th = '' - res = requests.get(f'{BASE_URL}api/list/model/'f'{model}/perpage/100000/lang/'f'{lang}/domain/'f'{domain}/key/'f'{self.TOKEN}/keyword/'f'{keyword}/page/'f'{str(page)}/var/'f'{str(var)}'f'{url_th}') - elif((model=='pressrelease')|(model=='publication')): - res = requests.get('https://webapi.bps.go.id/v1/api/list/model/'+model+'/perpage/100000/lang/'+lang+'/domain/'+domain+'/key/'+key+'/keyword/'+keyword+'/page/'+str(page)+ - (('/month/'+str(month)) if month != '' else '')+ - (('/year/'+str(year)) if year != '' else '')) + url_th = "" + res = requests.get( + f"{BASE_URL}api/list/model/" + f"{model}/perpage/100000/lang/" + f"{lang}/domain/" + f"{domain}/key/" + f"{self.TOKEN}/keyword/" + f"{keyword}/page/" + f"{str(page)}/var/" + f"{str(var)}" + f"{url_th}" + ) + elif (model == "pressrelease") | (model == "publication"): + res = requests.get( + "https://webapi.bps.go.id/v1/api/list/model/" + + model + + 
"/perpage/100000/lang/" + + lang + + "/domain/" + + domain + + "/key/" + + key + + "/keyword/" + + keyword + + "/page/" + + str(page) + + (("/month/" + str(month)) if month != "" else "") + + (("/year/" + str(year)) if year != "" else "") + ) else: - res = requests.get(f'{BASE_URL}api/list/model/'f'{model}/perpage/100000/lang/'f'{lang}/domain/'f'{domain}/key/'f'{self.TOKEN}/keyword/'f'{keyword}/page/'f'{str(page)}') - if(res.status_code!=200): + res = requests.get( + f"{BASE_URL}api/list/model/" + f"{model}/perpage/100000/lang/" + f"{lang}/domain/" + f"{domain}/key/" + f"{self.TOKEN}/keyword/" + f"{keyword}/page/" + f"{str(page)}" + ) + if res.status_code != 200: warnings.warn("Connection failed") else: res = res.json() - if(res['status']!='OK'): - raise Exception(res['message']) + if res["status"] != "OK": + raise Exception(res["message"]) return res - - - def __get_view(self,domain,model,lang,idx): + def __get_view(self, domain, model, lang, idx): """ Based Method view statictable :param lang: Language to display data. Default value: ind. 
Allowed values: "ind", "eng" @@ -65,20 +113,26 @@ def __get_view(self,domain,model,lang,idx): :param model: Type data to display :idx : ID static table to show """ - res = requests.get(f'{BASE_URL}api/view/model/'f'{model}/lang/'f'{lang}/domain/'f'{domain}/id/'f'{idx}/key/'+self.TOKEN+'/') - if(res.status_code!=200): + res = requests.get( + f"{BASE_URL}api/view/model/" + f"{model}/lang/" + f"{lang}/domain/" + f"{domain}/id/" + f"{idx}/key/" + self.TOKEN + "/" + ) + if res.status_code != 200: warnings.warn("Connection failed") else: res = res.json() - if(res['status']!='OK'): - raise Exception(res['message']) + if res["status"] != "OK": + raise Exception(res["message"]) return res - - def __format_list(self,list): - list['domain'] = list['domain'].map('{0:0>4}'.format) + + def __format_list(self, list): + list["domain"] = list["domain"].map("{0:0>4}".format) return list - - def __get_variable(self,domain='0000'): + + def __get_variable(self, domain="0000"): """ Based Method to get all variable of dynamic table :param domain: ID domain data @@ -97,312 +151,416 @@ def __get_variable(self,domain='0000'): unit = [] graph_id = [] graph_name = [] - df = self.__get_list(lang='ind',domain=domain,model='var',page=page) - pages = df['data'][0]['pages'] - if(pages>1): - for page in tqdm(range(1,pages+1)): - df = self.__get_list(lang='ind',domain=domain,model='var',page=page) - for item in df['data'][1]: - var_id.append(item.get('var_id')) - title.append(item.get('title')) - sub_id.append(item.get('sub_id')) - sub_name.append(item.get('sub_name')) - subcsa_id.append(item.get('subcsa_id')) - subcsa_name.append(item.get('subcsa_name')) - def_.append(item.get('def')) - notes.append(item.get('notes')) - vertical.append(item.get('vertical')) - unit.append(item.get('unit')) - graph_id.append(item.get('graph_id')) - graph_name.append(item.get('graph_name')) + df = self.__get_list(lang="ind", domain=domain, model="var", page=page) + pages = df["data"][0]["pages"] + if pages > 1: + for 
page in tqdm(range(1, pages + 1)): + df = self.__get_list(lang="ind", domain=domain, model="var", page=page) + for item in df["data"][1]: + var_id.append(item.get("var_id")) + title.append(item.get("title")) + sub_id.append(item.get("sub_id")) + sub_name.append(item.get("sub_name")) + subcsa_id.append(item.get("subcsa_id")) + subcsa_name.append(item.get("subcsa_name")) + def_.append(item.get("def")) + notes.append(item.get("notes")) + vertical.append(item.get("vertical")) + unit.append(item.get("unit")) + graph_id.append(item.get("graph_id")) + graph_name.append(item.get("graph_name")) df = { - 'var_id':var_id, - 'title':title, - 'sub_id':sub_id, - 'sub_name':sub_name, - 'subcsa_id':subcsa_id, - 'subcsa_name':subcsa_name, - 'def':def_, - 'notes':notes, - 'vertical':vertical, - 'unit':unit, - 'graph_id':graph_id, - 'graph_name':graph_name + "var_id": var_id, + "title": title, + "sub_id": sub_id, + "sub_name": sub_name, + "subcsa_id": subcsa_id, + "subcsa_name": subcsa_name, + "def": def_, + "notes": notes, + "vertical": vertical, + "unit": unit, + "graph_id": graph_id, + "graph_name": graph_name, } - return pd.DataFrame(df) - - def __get_pressRelease(self,domain="0000",year='',month='',keyword=''): - df = pd.DataFrame({ - 'brs_id':[], - 'subj_id':[], - 'subj':[], - 'title':[], - 'abstract':[], - 'rl_date':[], - 'updt_date':[], - 'pdf':[], - 'size':[], - 'slide':[], - 'thumbnail':[] - }) - res = self.__get_list(domain=domain,model='pressrelease',keyword=keyword,year=year,month=month) - if(res['data']==''): + return pd.DataFrame(df) + + def __get_pressRelease(self, domain="0000", year="", month="", keyword=""): + df = pd.DataFrame( + { + "brs_id": [], + "subj_id": [], + "subj": [], + "title": [], + "abstract": [], + "rl_date": [], + "updt_date": [], + "pdf": [], + "size": [], + "slide": [], + "thumbnail": [], + } + ) + res = self.__get_list( + domain=domain, model="pressrelease", keyword=keyword, year=year, month=month + ) + if res["data"] == "": print(res) return df 
- for item in res['data'][1]: - df = pd.concat([df, pd.DataFrame({ - 'brs_id':[item.get('brs_id')], - 'subj_id':[item.get('subj_id')], - 'subj':[item.get('subj')], - 'title':[item.get('title')], - 'abstract':[item.get('abstract')], - 'rl_date':[item.get('rl_date')], - 'updt_date':[item.get('updt_date')], - 'pdf':[item.get('pdf')], - 'size':[item.get('size')], - 'slide':[item.get('slide')], - 'thumbnail':[item.get('thumbnail')] - })], axis=0, ignore_index=True) - pages = res['data'][0]['pages'] - if(res['data'][0]['total']<2): + for item in res["data"][1]: + df = pd.concat( + [ + df, + pd.DataFrame( + { + "brs_id": [item.get("brs_id")], + "subj_id": [item.get("subj_id")], + "subj": [item.get("subj")], + "title": [item.get("title")], + "abstract": [item.get("abstract")], + "rl_date": [item.get("rl_date")], + "updt_date": [item.get("updt_date")], + "pdf": [item.get("pdf")], + "size": [item.get("size")], + "slide": [item.get("slide")], + "thumbnail": [item.get("thumbnail")], + } + ), + ], + axis=0, + ignore_index=True, + ) + pages = res["data"][0]["pages"] + if res["data"][0]["total"] < 2: print(res) - if(pages>1): - for i in tqdm(range(2,pages)): - res = self.__get_list(domain=domain,model='pressrelease',keyword=keyword,year=year,month=month,page=i) - if(res['data']==''): + if pages > 1: + for i in tqdm(range(2, pages)): + res = self.__get_list( + domain=domain, + model="pressrelease", + keyword=keyword, + year=year, + month=month, + page=i, + ) + if res["data"] == "": break - for item in res['data'][1]: - df = pd.concat([df, pd.DataFrame({ - 'brs_id':[item.get('brs_id')], - 'subj_id':[item.get('subj_id')], - 'subj':[item.get('subj')], - 'title':[item.get('title')], - 'abstract':[item.get('abstract')], - 'rl_date':[item.get('rl_date')], - 'updt_date':[item.get('updt_date')], - 'pdf':[item.get('pdf')], - 'size':[item.get('size')], - 'slide':[item.get('slide')], - 'thumbnail':[item.get('thumbnail')] - })], axis=0, ignore_index=True) + for item in res["data"][1]: + df = 
pd.concat( + [ + df, + pd.DataFrame( + { + "brs_id": [item.get("brs_id")], + "subj_id": [item.get("subj_id")], + "subj": [item.get("subj")], + "title": [item.get("title")], + "abstract": [item.get("abstract")], + "rl_date": [item.get("rl_date")], + "updt_date": [item.get("updt_date")], + "pdf": [item.get("pdf")], + "size": [item.get("size")], + "slide": [item.get("slide")], + "thumbnail": [item.get("thumbnail")], + } + ), + ], + axis=0, + ignore_index=True, + ) return df - - def __get_publication(self,domain="0000",year='',month='',keyword=''): - df = pd.DataFrame({ - 'pub_id':[], - 'title':[], - 'abstract':[], - 'issn':[], - 'sch_date':[], - 'rl_date':[], - 'updt_date':[], - 'cover':[], - 'pdf':[], - 'size':[] - }) - res = self.__get_list(domain=domain,model='pressrelease',keyword=keyword,year=year,month=month) - if(res['data']==''): + + def __get_publication(self, domain="0000", year="", month="", keyword=""): + df = pd.DataFrame( + { + "pub_id": [], + "title": [], + "abstract": [], + "issn": [], + "sch_date": [], + "rl_date": [], + "updt_date": [], + "cover": [], + "pdf": [], + "size": [], + } + ) + res = self.__get_list( + domain=domain, model="publication", keyword=keyword, year=year, month=month + ) + if res["data"] == "": return df - for item in res['data'][1]: - df = pd.concat([df, pd.DataFrame({ - 'pub_id':[item.get('pub_id')], - 'title':[item.get('title')], - 'abstract':[item.get('abstract')], - 'issn':[item.get('issn')], - 'sch_date':[item.get('sch_date')], - 'rl_date':[item.get('rl_date')], - 'updt_date':[item.get('updt_date')], - 'cover':[item.get('cover')], - 'pdf':[item.get('pdf')], - 'size':[item.get('size')] - })], axis=0, ignore_index=True) - pages = res['data'][0]['pages'] - if(pages>1): - for i in tqdm(range(2,pages)): - res = self.__get_list(domain=domain,model='pressrelease',keyword=keyword,year=year,month=month,page=i) - if(res['data']==''): + for item in res["data"][1]: + df = pd.concat( + [ + df, + pd.DataFrame( + { + "pub_id":
[item.get("pub_id")], + "title": [item.get("title")], + "abstract": [item.get("abstract")], + "issn": [item.get("issn")], + "sch_date": [item.get("sch_date")], + "rl_date": [item.get("rl_date")], + "updt_date": [item.get("updt_date")], + "cover": [item.get("cover")], + "pdf": [item.get("pdf")], + "size": [item.get("size")], + } + ), + ], + axis=0, + ignore_index=True, + ) + pages = res["data"][0]["pages"] + if pages > 1: + for i in tqdm(range(2, pages)): + res = self.__get_list( + domain=domain, + model="pressrelease", + keyword=keyword, + year=year, + month=month, + page=i, + ) + if res["data"] == "": break - for item in res['data'][1]: - df = pd.concat([df, pd.DataFrame({ - 'pub_id':[item.get('pub_id')], - 'title':[item.get('title')], - 'abstract':[item.get('abstract')], - 'issn':[item.get('issn')], - 'sch_date':[item.get('sch_date')], - 'rl_date':[item.get('rl_date')], - 'updt_date':[item.get('updt_date')], - 'cover':[item.get('cover')], - 'pdf':[item.get('pdf')], - 'size':[item.get('size')] - })], axis=0, ignore_index=True) + for item in res["data"][1]: + df = pd.concat( + [ + df, + pd.DataFrame( + { + "pub_id": [item.get("pub_id")], + "title": [item.get("title")], + "abstract": [item.get("abstract")], + "issn": [item.get("issn")], + "sch_date": [item.get("sch_date")], + "rl_date": [item.get("rl_date")], + "updt_date": [item.get("updt_date")], + "cover": [item.get("cover")], + "pdf": [item.get("pdf")], + "size": [item.get("size")], + } + ), + ], + axis=0, + ignore_index=True, + ) return df - - def __get_statictable(self,domain='0000',keyword=''): + + def __get_statictable(self, domain="0000", keyword=""): """ Based Method to get all static table :param domain: ID domain data :param keyword: keyword to search specific table """ - df = pd.DataFrame({ - 'table_id':[], - 'title':[], - 'subj_id':[], - 'subj':[], - 'updt_date':[], - 'size':[], - 'excel':[] - }) - res = self.__get_list(domain=domain,model='statictable',keyword=keyword) - if(res['data']==''): + df = 
pd.DataFrame( + { + "table_id": [], + "title": [], + "subj_id": [], + "subj": [], + "updt_date": [], + "size": [], + "excel": [], + } + ) + res = self.__get_list(domain=domain, model="statictable", keyword=keyword) + if res["data"] == "": return df - for item in res['data'][1]: - df = pd.concat([df, pd.DataFrame({ - 'table_id':[item.get('table_id')], - 'title':[item.get('title')], - 'subj_id':[item.get('subj_id')], - 'subj':[item.get('subj')], - 'updt_date':[item.get('updt_date')], - 'size':[item.get('size')], - 'excel':[item.get('excel')] - })], axis=0, ignore_index=True) - pages = res['data'][0]['pages'] - if(pages>1): - for i in tqdm(range(2,pages)): - res = self.__get_list(domain=domain,model='statictable',keyword=keyword,page=i) - if(res['data']==''): + for item in res["data"][1]: + df = pd.concat( + [ + df, + pd.DataFrame( + { + "table_id": [item.get("table_id")], + "title": [item.get("title")], + "subj_id": [item.get("subj_id")], + "subj": [item.get("subj")], + "updt_date": [item.get("updt_date")], + "size": [item.get("size")], + "excel": [item.get("excel")], + } + ), + ], + axis=0, + ignore_index=True, + ) + pages = res["data"][0]["pages"] + if pages > 1: + for i in tqdm(range(2, pages)): + res = self.__get_list( + domain=domain, model="statictable", keyword=keyword, page=i + ) + if res["data"] == "": break - for item in res['data'][1]: - df = pd.concat([df, pd.DataFrame({ - 'table_id':[item.get('table_id')], - 'title':[item.get('title')], - 'subj_id':[item.get('subj_id')], - 'subj':[item.get('subj')], - 'updt_date':[item.get('updt_date')], - 'size':[item.get('size')], - 'excel':[item.get('excel')] - })], axis=0, ignore_index=True) + for item in res["data"][1]: + df = pd.concat( + [ + df, + pd.DataFrame( + { + "table_id": [item.get("table_id")], + "title": [item.get("title")], + "subj_id": [item.get("subj_id")], + "subj": [item.get("subj")], + "updt_date": [item.get("updt_date")], + "size": [item.get("size")], + "excel": [item.get("excel")], + } + ), + ], + 
axis=0, + ignore_index=True, + ) return df - - def list_statictable(self, all=False, domain=[],latest=False): + + def list_statictable(self, all=False, domain=[], latest=False): """ Method to get all static table :param domain: array of ID domain data :param all: get all data from whole domain or not :param latest:get last data from webapi """ - if(not latest): - allStaticTable = pd.read_csv('https://gist.githubusercontent.com/isandyawan/31c29bd92039c4ff7b736826a7065028/raw/allStaticTable.csv',sep="|") - if(not all): + if not latest: + allStaticTable = pd.read_csv( + "https://gist.githubusercontent.com/isandyawan/31c29bd92039c4ff7b736826a7065028/raw/allStaticTable.csv", + sep="|", + ) + if not all: domain = [int(numeric_string) for numeric_string in domain] - allStaticTable.loc[allStaticTable['domain'].isin(domain)] - else: - if(all): + allStaticTable.loc[allStaticTable["domain"].isin(domain)] + else: + if all: warnings.warn("It will take around 2 hour") domain = self.list_domain() - domain = domain['domain_id'].values + domain = domain["domain_id"].values allStaticTable = [] index = 0 for row in domain: res = self.__get_statictable(domain=row) - res['domain'] = row - if(index==0): + res["domain"] = row + if index == 0: allStaticTable = res else: - allStaticTable = pd.concat([allStaticTable,res]) + allStaticTable = pd.concat([allStaticTable, res]) index += 1 allStaticTable = self.__format_list(allStaticTable) return allStaticTable - - def list_dynamictable(self, all=False, domain=[],latest=False): + + def list_dynamictable(self, all=False, domain=[], latest=False): """ Method to get all dynamic table :param domain: array of ID domain data :param all: get all data from whole domain or not :param latest:get last data from webapi """ - if(not latest): - allVariable = pd.read_csv('https://gist.githubusercontent.com/isandyawan/4d3efaeea4608c11b1e22b8a51fd0e4d/raw/allVariable.csv',sep="|") - if(not all): + if not latest: + allVariable = pd.read_csv( + 
"https://gist.githubusercontent.com/isandyawan/4d3efaeea4608c11b1e22b8a51fd0e4d/raw/allVariable.csv", + sep="|", + ) + if not all: domain = [int(numeric_string) for numeric_string in domain] - allVariable.loc[allVariable['domain'].isin(domain)] - else: + allVariable.loc[allVariable["domain"].isin(domain)] + else: index = 0 allVariable = [] - if(all): + if all: warnings.warn("It will take around 2 hour") domain = self.list_domain() - domain = domain['domain_id'].values + domain = domain["domain_id"].values for row in domain: res = self.__get_variable(domain=row) - res['domain'] = row - if(index==0): + res["domain"] = row + if index == 0: allVariable = res else: - allVariable = pd.concat([allVariable,res]) + allVariable = pd.concat([allVariable, res]) index += 1 allVariable = self.__format_list(allVariable) return allVariable - - def list_pressrelease(self, all=True, domain=[], month="",year="",latest=False): + + def list_pressrelease(self, all=True, domain=[], month="", year="", latest=False): """ Method to get all press release :param domain: array of ID domain data :param all: get all data from whole domain or not :param latest:get last data from webapi """ - if(not latest): - allPressRelease = pd.read_csv('https://gist.githubusercontent.com/isandyawan/4e67a8cf452838e914187e3597bf70c4/raw/allPressRelease.csv',sep="|", index_col=[0]) - if(not all): + if not latest: + allPressRelease = pd.read_csv( + "https://gist.githubusercontent.com/isandyawan/4e67a8cf452838e914187e3597bf70c4/raw/allPressRelease.csv", + sep="|", + index_col=[0], + ) + if not all: domain = [int(numeric_string) for numeric_string in domain] - allPressRelease.loc[allPressRelease['domain'].isin(domain)] - if((month!="") & (year !="")): - allPressRelease = allPressRelease.loc[allPressRelease['rl_date'].str.contains(year+'-'+'{0:0>2}'.format(month))] - else: - if(all): + allPressRelease.loc[allPressRelease["domain"].isin(domain)] + if (month != "") & (year != ""): + allPressRelease = 
allPressRelease.loc[ + allPressRelease["rl_date"].str.contains( + year + "-" + "{0:0>2}".format(month) + ) + ] + else: + if all: warnings.warn("It will take around 4 hour") domain = self.list_domain() - domain = domain['domain_id'].values + domain = domain["domain_id"].values allPressRelease = [] index = 0 for row in domain: - res = self.__get_pressRelease(domain=row,month=month,year=year) - res['domain'] = row - if(index==0): + res = self.__get_pressRelease(domain=row, month=month, year=year) + res["domain"] = row + if index == 0: allPressRelease = res else: - allPressRelease = pd.concat([allPressRelease,res]) + allPressRelease = pd.concat([allPressRelease, res]) index += 1 allPressRelease = self.__format_list(allPressRelease) return allPressRelease - - def list_publication(self, all=True, domain=[], month="",year="",latest=False): + + def list_publication(self, all=True, domain=[], month="", year="", latest=False): """ Method to get all publication :param domain: array of ID domain data :param all: get all data from whole domain or not :param latest:get last data from webapi """ - if(not latest): - allPublication = pd.read_csv('https://gist.githubusercontent.com/isandyawan/31b48670d76a199bc88fba3ec3c0672f/raw/allPublication.csv',sep="|", index_col=[0]) - if(not all): + if not latest: + allPublication = pd.read_csv( + "https://gist.githubusercontent.com/isandyawan/31b48670d76a199bc88fba3ec3c0672f/raw/allPublication.csv", + sep="|", + index_col=[0], + ) + if not all: domain = [int(numeric_string) for numeric_string in domain] - allPublication = allPublication.loc[allPublication['domain'].isin(domain)] - if((month!="") & (year !="")): - allPublication = allPublication.loc[allPublication['rl_date'].str.contains(year+'-'+'{0:0>2}'.format(month))] - else: - if(all): + allPublication = allPublication.loc[ + allPublication["domain"].isin(domain) + ] + if (month != "") & (year != ""): + allPublication = allPublication.loc[ + allPublication["rl_date"].str.contains( + year 
+ "-" + "{0:0>2}".format(month) + ) + ] + else: + if all: warnings.warn("It will take around 4 hour") domain = self.list_domain() - domain = domain['domain_id'].values + domain = domain["domain_id"].values allPublication = [] index = 0 for row in domain: - res = self.__get_publication(domain=row,month=month,year=year) - res['domain'] = row - if(index==0): + res = self.__get_publication(domain=row, month=month, year=year) + res["domain"] = row + if index == 0: allPublication = res else: - allPublication = pd.concat([allPublication,res]) + allPublication = pd.concat([allPublication, res]) index += 1 allPublication = self.__format_list(allPublication) return allPublication @@ -411,97 +569,118 @@ def list_domain(self): """ Method to get all domain ID in level country till city """ - res = requests.get(f'{BASE_URL}api/domain/type/all/key/'f'{self.TOKEN}/') - if(res.status_code!=200): + res = requests.get(f"{BASE_URL}api/domain/type/all/key/" f"{self.TOKEN}/") + if res.status_code != 200: warnings.warn("Connection failed") return None else: res = res.json() - if(res['status']!='OK'): - raise Exception(res['message']) + if res["status"] != "OK": + raise Exception(res["message"]) domain_id = [] domain_name = [] domain_url = [] - for item in res['data'][1]: - domain_id.append(item['domain_id']) - domain_name.append(item['domain_name']) - domain_url.append(item['domain_url']) + for item in res["data"][1]: + domain_id.append(item["domain_id"]) + domain_name.append(item["domain_name"]) + domain_url.append(item["domain_url"]) df = { - 'domain_id':domain_id, - 'domain_name':domain_name, - 'domain_url':domain_url + "domain_id": domain_id, + "domain_name": domain_name, + "domain_url": domain_url, } result = pd.DataFrame(df) - result['level'] = 'kota/kabupaten' - result.loc[result['domain_id'].str.match('^.*00$'),'level'] = 'provinsi' - result.loc[result['domain_id'].str.match('0000'),'level'] = 'nasional' + result["level"] = "kota/kabupaten" + 
result.loc[result["domain_id"].str.match("^.*00$"), "level"] = "provinsi" + result.loc[result["domain_id"].str.match("0000"), "level"] = "nasional" return result - - def view_statictable(self,domain,table_id,lang='ind'): + + def view_statictable(self, domain, table_id, lang="ind"): """ Method to view one static table :param domain: Domains that will be displayed variable (see master domain on http://sig.bps.go.id/bridging-kode/index) :param table_id: ID static table :param lang: Language to display data. Default value: ind. Allowed values: "ind", "eng" """ - res = self.__get_view(domain,'statictable',lang,table_id) - res_clean = html.unescape(res['data']['table']) + res = self.__get_view(domain, "statictable", lang, table_id) + res_clean = html.unescape(res["data"]["table"]) df = pd.read_html(res_clean)[0] return df - - def view_(self,domain,table_id,lang='ind'): + + def view_(self, domain, table_id, lang="ind"): """ Method to view one static table :param domain: Domains that will be displayed variable (see master domain on http://sig.bps.go.id/bridging-kode/index) :param table_id: ID static table :param lang: Language to display data. Default value: ind. 
Allowed values: "ind", "eng" """ - res = self.__get_view(domain,'statictable',lang,table_id) - res_clean = html.unescape(res['data']['table']) + res = self.__get_view(domain, "statictable", lang, table_id) + res_clean = html.unescape(res["data"]["table"]) df = pd.read_html(res_clean)[0] return df - def view_pressrelease(self,domain,id): - res = self.__get_view(domain=domain,model='pressrelease',idx=id,lang='ind') - return Material(res['data']) - - def view_publication(self,domain,id): - res = self.__get_view(domain=domain,model='publication',idx=id,lang='ind') - return Material(res['data']) - - def view_dynamictable(self,domain,var,th=''): + def view_pressrelease(self, domain, id): + res = self.__get_view(domain=domain, model="pressrelease", idx=id, lang="ind") + return Material(res["data"]) + + def view_publication(self, domain, id): + res = self.__get_view(domain=domain, model="publication", idx=id, lang="ind") + return Material(res["data"]) + + def view_dynamictable(self, domain, var, th=""): """ Method to view one dynamic table :param var: Variable ID selected to display data :param th: Period data ID selected to display data """ - res = self.__get_list(lang = 'ind',domain=domain,model='data',page=1,var=var,th=th) - if(res['data']==''): + res = self.__get_list( + lang="ind", domain=domain, model="data", page=1, var=var, th=th + ) + if res["data"] == "": return None - res['datacontent'].values() - datacontent = pd.DataFrame({ - 'key':res['datacontent'].keys(), - 'value':res['datacontent'].values() - }) - - datacontent = datacontent.sort_values('key',ignore_index=True) + res["datacontent"].values() + datacontent = pd.DataFrame( + {"key": res["datacontent"].keys(), "value": res["datacontent"].values()} + ) + + datacontent = datacontent.sort_values("key", ignore_index=True) + + vervar = pd.DataFrame( + list(map(lambda x: [x["val"], x["label"]], res["vervar"])), + columns=["id_var", "variable"], + ) + vervar = vervar.sort_values("id_var", ignore_index=True) + + 
turvar = pd.DataFrame( + list(map(lambda x: [x["val"], x["label"]], res["turvar"])), + columns=["id_tur_var", "turunan variable"], + ) + turvar = turvar.sort_values("id_tur_var", ignore_index=True) + + result = vervar.merge(turvar, how="cross") - vervar = pd.DataFrame(list(map(lambda x: [x['val'],x['label']], res['vervar'])),columns=['id_var','variable']) - vervar = vervar.sort_values('id_var',ignore_index=True) + tahun = pd.DataFrame( + list(map(lambda x: [x["val"], x["label"]], res["tahun"])), + columns=["val", "label"], + ) + tahun = tahun.sort_values("val", ignore_index=True) - turvar = pd.DataFrame(list(map(lambda x: [x['val'],x['label']], res['turvar'])),columns=['id_tur_var','turunan variable']) - turvar = turvar.sort_values('id_tur_var',ignore_index=True) - - result = vervar.merge(turvar,how='cross') - - tahun = pd.DataFrame(list(map(lambda x: [x['val'],x['label']], res['tahun'])),columns=['val','label']) - tahun = tahun.sort_values('val',ignore_index=True) - for index, row in tahun.iterrows(): - result[row['label']]='' + result[row["label"]] = "" for index_result, row_result in result.iterrows(): - cell = datacontent.loc[datacontent['key'].str.match('^'+str(result.loc[index_result,'id_var'])+str(var)+str(result.loc[index_result,'id_tur_var'])+str(row['val'])),'value'] - if(len(cell)==0): + cell = datacontent.loc[ + datacontent["key"].str.match( + "^" + + str(result.loc[index_result, "id_var"]) + + str(var) + + str(result.loc[index_result, "id_tur_var"]) + + str(row["val"]) + ), + "value", + ] + if len(cell) == 0: continue - result.loc[index_result,str(row['label'])] = cell.reset_index(drop=True)[0] - return result \ No newline at end of file + result.loc[index_result, str(row["label"])] = cell.reset_index( + drop=True + )[0] + return result diff --git a/stadata/material.py b/stadata/material.py index 0b8fa9c..848394d 100644 --- a/stadata/material.py +++ b/stadata/material.py @@ -1,20 +1,20 @@ import requests + class Material(object): - DATA=None - 
CONTENT=None - + DATA = None + CONTENT = None + def __init__(self, data): - self.DATA=data - response = requests.get(data['pdf']) + self.DATA = data + response = requests.get(data["pdf"]) self.CONTENT = response.content - - + def desc(self): return self.DATA - def download(self,url): - pdf = open(url+"/"+self.DATA['title']+".pdf", 'wb') + def download(self, url): + pdf = open(url + "/" + self.DATA["title"] + ".pdf", "wb") pdf.write(self.CONTENT) pdf.close() - print("Download content success") \ No newline at end of file + print("Download content success") From bbb289fb4de60f3218bd617b49ef5cb98dd52725 Mon Sep 17 00:00:00 2001 From: Muhammad Luqman Date: Sun, 6 Aug 2023 13:52:51 +0700 Subject: [PATCH 4/8] feat: add new dummy feat --- stadata/semver_demo.py | 1 + 1 file changed, 1 insertion(+) create mode 100644 stadata/semver_demo.py diff --git a/stadata/semver_demo.py b/stadata/semver_demo.py new file mode 100644 index 0000000..f74a29e --- /dev/null +++ b/stadata/semver_demo.py @@ -0,0 +1 @@ +ADDED = "new feat" From 331bc63894ef8a3460f0827c271eabb4ee71f2e7 Mon Sep 17 00:00:00 2001 From: Muhammad Luqman Date: Sun, 6 Aug 2023 13:55:33 +0700 Subject: [PATCH 5/8] ci: add required dummy author --- pyproject.toml | 1 + 1 file changed, 1 insertion(+) diff --git a/pyproject.toml b/pyproject.toml index 7ba0b30..e5bd47d 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -2,6 +2,7 @@ name = "stadata-semver" version = "0.1.1" description = "API for get all statistics data from BPS" +authors = ["Muhammad Luqman "] urls = {homepage = "https://github.com/bps-statistics/stadata"} license = "MIT" classifiers = [ From e75d3856d5964ac777543b4bc4099460b46b20d6 Mon Sep 17 00:00:00 2001 From: Muhammad Luqman Date: Sun, 6 Aug 2023 14:00:49 +0700 Subject: [PATCH 6/8] ci: fix package folder --- pyproject.toml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/pyproject.toml b/pyproject.toml index e5bd47d..e6cb80d 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -24,7 
+24,7 @@ classifiers = [ keywords = [ "bps dataset utility indonesia" ] -packages = [{include = "stadata_semver"}] +packages = [{include = "stadata"}] [tool.poetry.dependencies] python = "^3.8" From 1c62a075a58b6f03bbb7947d28e82da3fb6ded76 Mon Sep 17 00:00:00 2001 From: Muhammad Luqman Date: Mon, 7 Aug 2023 07:31:45 +0700 Subject: [PATCH 7/8] chore: cleanup before PR --- README.md | 10 +++++----- pyproject.toml | 4 ++-- stadata/semver_demo.py | 1 - 3 files changed, 7 insertions(+), 8 deletions(-) delete mode 100644 stadata/semver_demo.py diff --git a/README.md b/README.md index 786f6d9..538bedb 100644 --- a/README.md +++ b/README.md @@ -1,10 +1,10 @@ # STADATA - Simplified Access to [WebAPI](https://webapi.bps.go.id/developer/) BPS -[![pyversion](https://img.shields.io/pypi/pyversions/stadata-semver)](https://img.shields.io/pypi/pyversions/stadata-semver) -[![pypi](https://img.shields.io/pypi/v/stadata-semver)](https://img.shields.io/pypi/v/stadata-semver) -[![status](https://img.shields.io/pypi/status/stadata-semver)](https://img.shields.io/pypi/status/stadata-semver) -[![downloads](https://img.shields.io/pypi/dm/stadata-semver.svg)](https://img.shields.io/pypi/dm/stadata-semver.svg) -[![sourcerank](https://img.shields.io/librariesio/sourcerank/pypi/stadata-semver.svg)](https://img.shields.io/librariesio/sourcerank/pypi/stadata-semver.svg) +[![pyversion](https://img.shields.io/pypi/pyversions/stadata)](https://img.shields.io/pypi/pyversions/stadata) +[![pypi](https://img.shields.io/pypi/v/stadata)](https://img.shields.io/pypi/v/stadata) +[![status](https://img.shields.io/pypi/status/stadata)](https://img.shields.io/pypi/status/stadata) +[![downloads](https://img.shields.io/pypi/dm/stadata.svg)](https://img.shields.io/pypi/dm/stadata.svg) +[![sourcerank](https://img.shields.io/librariesio/sourcerank/pypi/stadata.svg)](https://img.shields.io/librariesio/sourcerank/pypi/stadata.svg) 
 [![contributors](https://img.shields.io/github/contributors/bps-statistics/stadata)](https://img.shields.io/github/contributors/bps-statistics/stadata)
 [![license](https://img.shields.io/github/license/bps-statistics/stadata)](https://img.shields.io/github/license/bps-statistics/stadata)
 
diff --git a/pyproject.toml b/pyproject.toml
index e6cb80d..cea5ecb 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -1,8 +1,8 @@
 [tool.poetry]
-name = "stadata-semver"
+name = "stadata"
 version = "0.1.1"
 description = "API for get all statistics data from BPS"
-authors = ["Muhammad Luqman "]
+authors = ["Ignatius Sandyawan "]
 urls = {homepage = "https://github.com/bps-statistics/stadata"}
 license = "MIT"
 classifiers = [
diff --git a/stadata/semver_demo.py b/stadata/semver_demo.py
deleted file mode 100644
index f74a29e..0000000
--- a/stadata/semver_demo.py
+++ /dev/null
@@ -1 +0,0 @@
-ADDED = "new feat"

From fffcc54eeba5b603ed8ab8d87054dafc495700b4 Mon Sep 17 00:00:00 2001
From: Sandyawan
Date: Sat, 21 Oct 2023 17:44:52 +0700
Subject: [PATCH 8/8] Update .gitignore

---
 .gitignore | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/.gitignore b/.gitignore
index a577810..5726f01 100644
--- a/.gitignore
+++ b/.gitignore
@@ -2,4 +2,7 @@
 dist/
 stadata.egg-info/
 .DS_Store
-stadata/__pycache__
\ No newline at end of file
+stadata/__pycache__
+*.pyc
+.pytest_cache/v/cache/lastfailed
+.pytest_cache/v/cache/stepwise