To run the Carmen frontend, see:

$ python -m carmen.cli --help
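
For programmatic use, resolving a tweet typically looks like the minimal sketch below; treat the input file name and the exact return shape of `resolve_tweet` as assumptions to verify against the package documentation:

```
import json

import carmen

# Read one tweet payload from disk (the file name is only an example).
with open("tweet.json", encoding="utf-8") as f:
    tweet = json.load(f)

resolver = carmen.get_resolver()
resolver.load_locations()  # loads the default location database

result = resolver.resolve_tweet(tweet)
if result is not None:
    # Assumed return shape: a (provisional, Location) pair.
    provisional, location = result
    print(location)
```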

### Carmen 2.0 Improvements
We are excited to release the improved Carmen Twitter geotagger, Carmen 2.0! We have implemented the following improvements:
- A new location database derived from the open-source [GeoNames](https://www.geonames.org/) geographical database. This multilingual database improves the coverage and robustness of Carmen, as shown in our analysis paper "[Changes in Tweet Geolocation over Time: A Study with Carmen 2.0](https://aclanthology.org/2022.wnut-1.1/)".
- Compatibility with Twitter API v2.
- A geocode resolver that is up to 10x faster.

### GeoNames Mapping

We provide two location databases:
- `carmen/data/geonames_locations_combined.json` is the new GeoNames database introduced in Carmen 2.0. It was created by replacing the arbitrary location IDs used in the original version of Carmen with GeoNames IDs. This database is used by default.
- `carmen/data/locations.json` is the database from the original Carmen. It is faster but less powerful than the new database; pass the `--locations` flag to switch to it for backward compatibility.

We refer readers to the Carmen 2.0 paper repository for more details on the GeoNames mapping: https://github.com/AADeLucia/carmen-wnut22-submission

The mappings were generated as follows. First, get the data from
http://download.geonames.org/export/dump/. The required files are
`countryInfo.txt`, `admin1CodesASCII.txt`, `admin2Codes.txt`, and
`cities1000.txt`. Download these files and move them into
`carmen/data/dump/`.
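
If you want to script this download step, here is a minimal sketch; the exact layout of the GeoNames dump (e.g. whether `cities1000` ships as a zip archive) should be checked against http://download.geonames.org/export/dump/:

```
import io
import os
import urllib.request
import zipfile

# Target directory and file names come from the step above.
DUMP_URL = "http://download.geonames.org/export/dump/"
DEST = "carmen/data/dump"
os.makedirs(DEST, exist_ok=True)

# These three files are published as plain text.
for name in ["countryInfo.txt", "admin1CodesASCII.txt", "admin2Codes.txt"]:
    urllib.request.urlretrieve(DUMP_URL + name, os.path.join(DEST, name))

# Assumption: cities1000 is distributed as a zip archive containing
# cities1000.txt; adjust if the dump provides it as plain text.
with urllib.request.urlopen(DUMP_URL + "cities1000.zip") as resp:
    with zipfile.ZipFile(io.BytesIO(resp.read())) as archive:
        archive.extract("cities1000.txt", DEST)
```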

Next, we need to format the data. Delete the comment lines in
`countryInfo.txt` (a small sketch for this step follows the commands below), then run the following.

$ python3 format_admin1_codes.py
$ python3 format_admin2_codes.py
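
The comment removal from `countryInfo.txt` mentioned above can also be scripted; here is a minimal sketch, assuming comment lines are exactly those starting with `#`:

```
# Strip comment lines from countryInfo.txt in place
# (assumes comments are the lines starting with '#').
path = "carmen/data/dump/countryInfo.txt"
with open(path, encoding="utf-8") as f:
    kept = [line for line in f if not line.startswith("#")]
with open(path, "w", encoding="utf-8") as f:
    f.writelines(kept)
```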

Then, we need to set up a PostgreSQL database; this makes it significantly
easier to find relations between the original Carmen IDs and GeoNames IDs.
Create a PostgreSQL database named `carmen` and run the following SQL
script:

$ psql -f carmen/sql/populate_db.sql carmen
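
The `carmen` database itself can be created beforehand with `createdb carmen`, or from Python as in the minimal sketch below using `psycopg2`; the connection parameters are assumptions about a local PostgreSQL setup:

```
import psycopg2

# Connection parameters are assumptions about a local setup;
# adjust dbname/user/host/password as needed.
conn = psycopg2.connect(dbname="postgres", user="postgres", host="localhost")
conn.autocommit = True  # CREATE DATABASE cannot run inside a transaction block
with conn.cursor() as cur:
    cur.execute("CREATE DATABASE carmen")
conn.close()
```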

Now we can construct the mappings from Carmen IDs to
GeoNames IDs. Run the following scripts.

$ python3 map_cities.py > ../mappings/cities.txt
$ python3 map_regions.py > ../mappings/regions.txt

With the mappings constructed, we can finally convert the
`locations.json` file into one that uses GeoNames IDs. To do this, run
the following.

$ python3 rewrite_json.py

### Building for Release

1. In the repo root folder, run `python setup.py sdist bdist_wheel` to create the wheels in the `dist/` directory
2. Run `python -m twine upload --repository testpypi dist/*` to upload to TestPyPI
3. **Create a brand new environment** and run `pip install -i https://test.pypi.org/simple/ --extra-index-url https://pypi.org/simple carmen` to make sure the package installs correctly from TestPyPI
4. After checking correctness, run `python -m twine upload dist/*` to publish on the actual PyPI

### Reference
If you use the Carmen 2.0 package, please cite the following work:
```
@inproceedings{zhang-etal-2022-changes,
    title = "Changes in Tweet Geolocation over Time: A Study with Carmen 2.0",
    author = "Zhang, Jingyu  and
      DeLucia, Alexandra  and
      Dredze, Mark",
    booktitle = "Proceedings of the Eighth Workshop on Noisy User-generated Text (W-NUT 2022)",
    month = oct,
    year = "2022",
    address = "Gyeongju, Republic of Korea",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.wnut-1.1",
    pages = "1--14",
    abstract = "Researchers across disciplines use Twitter geolocation tools to filter data for desired locations. These tools have largely been trained and tested on English tweets, often originating in the United States from almost a decade ago. Despite the importance of these tools for data curation, the impact of tweet language, country of origin, and creation date on tool performance remains largely unknown. We explore these issues with Carmen, a popular tool for Twitter geolocation. To support this study we introduce Carmen 2.0, a major update which includes the incorporation of GeoNames, a gazetteer that provides much broader coverage of locations. We evaluate using two new Twitter datasets, one for multilingual, multiyear geolocation evaluation, and another for usage trends over time. We found that language, country origin, and time does impact geolocation tool performance.",
}
```
