feat: use cpm (characters per minute) for more accurate result #54
base: master
Conversation
- `countWords` ignores non-word characters at the beginning and end of a paragraph
- `isWordOrChar` function is declared for better readability
The main logic for counting words remains the same: if a non-word boundary is followed by a non-CJK character, it is counted as a word. However, if a character is a CJK character, it is counted as a char instead of a word. The main problem with the previous logic was that, unlike Chinese and Japanese, a single Korean character is not a word, and since it was counted as one, the reading time differed significantly from the actual time. This commit suggests a change to the Latin word counting logic as well. Previously, a link was counted as a single word. I suggest counting all the words that make up a link. For example, `https://google.com` is now a three-word paragraph. Hopefully, this change will improve the accuracy of the result.
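A rough sketch of the counting idea described above (this is not the PR's actual implementation; the function and helper names are hypothetical):

```typescript
// Sketch: CJK characters are counted individually as chars;
// everything else is grouped into whitespace-separated words.
// Unicode property escapes cover Han, Hiragana, Katakana, and Hangul.
const isCJK = (ch: string): boolean =>
  /[\p{Script=Han}\p{Script=Hiragana}\p{Script=Katakana}\p{Script=Hangul}]/u.test(ch)

function countWordsAndChars(text: string): { words: number; chars: number } {
  let words = 0
  let chars = 0
  for (const token of text.split(/\s+/).filter(Boolean)) {
    let hasNonCJK = false
    for (const ch of token) {
      if (isCJK(ch)) chars++ // each CJK character counts on its own
      else hasNonCJK = true
    }
    if (hasNonCJK) words++ // any non-CJK run in the token counts as a word
  }
  return { words, chars }
}

console.log(countWordsAndChars('hello 안녕하세요'))
// → { words: 1, chars: 5 }
```

Here `안녕하세요` contributes five chars rather than one word, which is the distinction the PR relies on.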
The reading time of CJK text is now calculated based on cpm (characters per minute) and added to the non-CJK reading time for better multi-language support. This changes the structure of the output object.
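A minimal sketch of how the two rates combine; the 500 cpm default comes from this PR's description, while the 200 wpm default and the function shape are assumptions for illustration:

```typescript
// Sketch: word-based time plus character-based time, summed.
// Defaults here are illustrative (500 cpm per the PR description).
interface Counts { words: number; chars: number }

function readingMinutes(
  { words, chars }: Counts,
  wordsPerMinute = 200,
  charactersPerMinute = 500
): number {
  // non-CJK words at wpm + CJK characters at cpm
  return words / wordsPerMinute + chars / charactersPerMinute
}

console.log(readingMinutes({ words: 100, chars: 500 }))
// → 1.5  (0.5 min for the words + 1 min for the chars)
```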
Hello, @ngryman. It's a new year, so I just wanted to follow up on this pull request. I was wondering if there is any chance that it could be reviewed in the near future? I'm excited to contribute to the project and would really appreciate any feedback. Thank you for your time and consideration. I look forward to your response.
@jcha0713 isn't it better to use WPM for Korean and CPM only for Japanese and Chinese? Anyway, it's a shame this PR hasn't been merged yet.
@ngryman Project dead? Please review (and merge) this MR.
Hi: I'm co-maintaining. I'm not sure if @ngryman has time to review at all. I think it's very hard to gauge what a "word" means and whether reading speed can really be accurately measured by either "words" or "characters". Even in Chinese, I would say two-character words can be read faster than two words of a single character each. If anything, we should use …
Sounds possible, so I prototyped a naive implementation:

```js
// Korean wpm example
// example text taken from Korean wikipedia: https://ko.wikipedia.org/wiki/%ED%95%9C%EA%B5%AD%EC%96%B4
const text = `한국어(韓國語, 문화어: 조선말)는 대한민국과 조선민주주의인민공화국의 공용어이다. 둘은 표기나 문법에서는 차이가 없지만 표현에서 차이가 있다.`
const w = [
  ...new Intl.Segmenter(undefined, {
    granularity: 'word',
  }).segment(text),
].filter(({ isWordLike }) => isWordLike).length
// Korean wpm source: https://www.jkos.org/upload/pdf/JKOS057-04-17.pdf
console.log(w / 202.3)
// result: 0.08403361344537814m ≈ 5.0420168067s
```

But the locale matters. If we leave the locale …
I agree that we should use a more robust method if possible. I'm not sure if measuring the reading speed based on semantic segments is the best way, but I guess it's the most optimal solution we have as of now. In fact, I first created a library that uses the …

I'm happy to do some more work based on this if wanted.
Motivation
The library treats each CJK character as a separate word. However, unlike Chinese and Japanese, Korean characters should not be treated as words, as a single character is often meaningless on its own and is instead used to form a word. For this reason, the result does not seem very accurate when computing reading time for Korean text.
I'm not an expert in Chinese or Japanese, but this might also be true for both languages, so there is a possibility that this library is giving wrong results for those languages as well.
As a solution, I suggest counting all CJK characters as individual characters (rather than as words) and using cpm (characters per minute) for more accurate results. This way, we can count CJK characters and Latin words separately and get two reading time values that we can simply add up.
Major changes
In this PR, I made several changes and added more test cases to ensure everything is working fine.
- First, I changed the `WordCountStats` type to have two variables, `words` and `chars`, instead of `total`. Then I replaced `words` in the `ReadingTimeResult` type with a `counts` object to group `words` and `chars` together. I also changed the `Options` type to take an optional `charactersPerMinute` value. The default for cpm is 500 (ref: Medium).
- As mentioned above, it now calculates two different reading time values for CJK characters and non-CJK words and adds the numbers together to get `minutes`.
- I fixed a bug which occurred when the first character of the text is a punctuation mark.
- I introduced some new variables to improve the readability of the code.
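The reshaped types might look roughly like this; the names `WordCountStats`, `ReadingTimeResult`, `Options`, `words`, `chars`, `counts`, and `charactersPerMinute` come from the description above, while any other fields are assumptions:

```typescript
// Sketch of the reshaped types; fields beyond those named in the
// PR description are assumptions for illustration.
interface WordCountStats {
  words: number // non-CJK word count
  chars: number // CJK character count
}

interface ReadingTimeResult {
  minutes: number
  counts: WordCountStats // replaces the old `words: number` field
}

interface Options {
  wordsPerMinute?: number
  charactersPerMinute?: number // new option; defaults to 500
}

const result: ReadingTimeResult = {
  minutes: 1.5,
  counts: { words: 100, chars: 500 },
}
console.log(result.counts.words)
```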
Another proposal
Currently, `countWords` handles links as one-word texts. For example, `https://google.com` or `[google](https://google.com)` would be treated as one word. However, I believe we should count these as multi-word texts, as it's more natural to read a link word by word. So I changed the logic to count all the words in the link and altered the test cases accordingly. Please let me know if you have any concerns with this.

I believe this PR will help CJK users get a much more accurate reading time estimate. In fact, when I tested it with a blog post of mine written in Korean, it reported 13 minutes, which is pretty accurate (previously it was 28 min 😓).
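One way to sketch the link-counting idea (a hypothetical helper, not the PR's actual code): treat each run of letters or digits in the URL as a readable word.

```typescript
// Sketch: split a link into readable words instead of counting it as one.
// Hypothetical helper for illustration only.
function countLinkWords(link: string): number {
  // Runs of letters/digits: "https", "google", "com" → 3 words
  return (link.match(/[\p{L}\p{N}]+/gu) ?? []).length
}

console.log(countLinkWords('https://google.com')) // → 3
```

This matches the example above, where `https://google.com` counts as three words.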