Open Data Sentence Corpora

This civic science project aims to analyze and sentenize all the open data of Riksdagen (the Swedish parliament) and other sources using spaCy, creating an easily linkable dataset of sentences that can be referred to from Wikidata lexemes and other resources.
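The sentenizing itself is done with spaCy. A minimal sketch of that step, using the Swedish pipeline installed under Installation below (the example sentence is made up):

import spacy

# Load the Swedish pipeline (see Installation for the download command).
nlp = spacy.load("sv_core_news_lg")

doc = nlp("Detta är en mening. Detta är en annan mening.")
for sentence in doc.sents:
    print(sentence.text)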

Such a dataset is hugely valuable from a language perspective. The sentences carry information about what is going on in society, and they are full of words, phrases and idioms of interest to anyone studying the language. The roughly 600k documents to be analyzed contain a lot of political dialogue as well as written documents from institutions of the Swedish state.

Keywords: NLP, data science, open data, Swedish, open government data, Riksdagen, Sweden, API

Author

Dennis Priskorn.

Idea

Use spaCy to create the first version. All sentences are language detected and given a UUID which is unique for each release.

As better sentenizing becomes available, or as Riksdagen improves its data over time, the hashes and UUIDs will change, but every released version is locked in time and can always be referred to consistently and reliably.
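One possible scheme for such release-scoped identifiers (an illustration, not necessarily the exact scheme used in this project) is to derive a deterministic UUID from a release identifier plus a hash of the sentence text, so identifiers stay stable within a release but change whenever the sentenizing or the source data changes:

import hashlib
import uuid

RELEASE = "2023-01"  # hypothetical release identifier

def sentence_uuid(sentence: str, release: str = RELEASE) -> uuid.UUID:
    # Hash the sentence text, then derive a name-based (version 5) UUID
    # from the release identifier and the hash.
    digest = hashlib.sha256(sentence.encode("utf-8")).hexdigest()
    return uuid.uuid5(uuid.NAMESPACE_URL, f"{release}/{digest}")

print(sentence_uuid("Detta är en mening."))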

The resulting dataset is planned to be released on Zenodo and is expected to be around 1 TB.

Features

  • Reliability
  • Locked in time
  • Referenceable
  • Language detected (using fasttext langdetect; see the sketch after this list)
  • Uniquely identifiable
  • Linkable (individual sentences are not planned to be linkable at this stage, but each release is, and line numbers or UUIDs can be used to link without ambiguity)
  • Named-entity recognition (NER) entities for each sentence and document
  • An evolvable API
    • /lookup endpoint to get sentences to use as usage examples for Wikidata lexemes (based on the needs of Luthor)
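For the language detection feature above, a minimal sketch using the fasttext-langdetect wrapper (assuming its detect function; the exact call used in this project may differ):

from ftlangdetect import detect

# Detect the language of a single sentence; low_memory trades
# accuracy for a smaller model.
result = detect(text="Detta är en mening på svenska.", low_memory=True)
print(result)  # e.g. {'lang': 'sv', 'score': 0.99}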

Scope

This way of chopping up open data can be applied to any open data, provided it is in a machine-readable form such as plain text, XML, JSON or HTML.

Riksdagen has about 600k documents that can be downloaded as open data.

This project is a stepping stone to an even larger database of sentences and tokens that we can use to enrich the lexicographic data in Wikidata.

Statistics

See STATISTICS.md

Design

API design inspired by

Data model

Data model diagram (see UML source).

Installation

Clone the repo

Run

$ pip install poetry && poetry install

Also download the required spaCy model (about 250 MB)

$ python -m spacy download sv_core_news_lg

Now download some of the source datasets from Riksdagen and put them in a data/sv/ folder hierarchy.
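The exact file names depend on which Riksdagen datasets you download; a hypothetical layout could look like this:

data/sv/
    anforande-2021.json
    motion-2021.json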

Use

$ python riksdagen_analyzer --analyze

Sources

Mostly unilingual

Related corpora

Inspiration

Alice Zhao https://www.youtube.com/watch?v=8Fw1nh8lR54

Thanks

Thanks to Nicolas Vigneron and Asaf Bartov for discussions about the needs of Luthor and how to make this project most suitable as a source of sentences used in usage examples on Wikidata lexemes.

License

GPLv3+

What I learned

  • the default sentenizer for Swedish in spaCy is not ideal
  • fasttext langdetect cannot reliably detect the language of sentences with only one token/word
  • ChatGPT can write good code, but it still outputs wonky code sometimes
  • ChatGPT is very good at creating SQL queries!
  • working with NLP on millions of sentences takes time even on a fast machine like my 8th-gen 8-core i5 laptop
  • Python langdetect was too slow and only utilized one CPU core; switching to fasttext langdetect was a bit challenging because I had to fix the Python module
  • it's so nice to work with classes and small methods and to combine them in ways that make sense. KISS!