The Real First Universal Charset Detector, Rust version
A library that helps you read text from an unknown charset encoding.
Motivated by the original Python version of charset-normalizer, I'm trying to resolve the issue by taking a new approach. All IANA character set names for which the Rust encoding library provides codecs are supported.
This project is a port of the original Python version of Charset Normalizer. The biggest difference between the Python and Rust versions is the number of supported encodings, since each language has its own encoding / decoding library. The Rust version only supports encodings from the WHATWG standard. The Python version supports more encodings, but many of them are old and almost unused.
| Feature | Chardet | Charset Normalizer | cChardet |
|--------------------------------------------|:------------------------:|:------------------:|:------------------------:|
| Fast | ❌ | ✅ | ✅ |
| Universal** | ❌ | ✅ | ❌ |
| Reliable without distinguishable standards | ❌ | ✅ | ✅ |
| Reliable with distinguishable standards | ✅ | ✅ | ✅ |
| License | LGPL-2.1 (restrictive) | MIT | MPL-1.1 (restrictive) |
| Native Python | ✅ | ✅ | ❌ |
| Detect spoken language | ❌ | ✅ | N/A |
| UnicodeDecodeError Safety | ❌ | ✅ | ❌ |
| Whl Size | 193.6 kB | 40 kB | ~200 kB |
| Supported Encoding | 33 | 🎉 90 | 40 |
** : They are clearly using specific code for a specific encoding, even if it covers most of the encodings in common use.
This package offers better performance than its counterpart Chardet. Here are some numbers.
| Package | Accuracy | Mean per file (ms) | File per sec (est) |
|--------------------|:--------:|:------------------:|:------------------:|
| chardet | 86 % | 200 ms | 5 file/sec |
| charset-normalizer | 98 % | 10 ms | 100 file/sec |
| Package | 99th percentile | 95th percentile | 50th percentile |
|--------------------|:---------------:|:---------------:|:---------------:|
| chardet | 1200 ms | 287 ms | 23 ms |
| charset-normalizer | 100 ms | 50 ms | 5 ms |
Chardet's performance on larger files (1 MB+) can be very poor. Expect a huge difference on large payloads.
Stats are generated using 400+ files with default parameters. These results might change at any time, and the dataset can be updated to include more files. The actual delays depend heavily on your CPU capabilities, but the relative factors should remain the same. The Rust version's dataset has been reduced because the number of supported encodings is lower than in the Python version.
Using cargo:

```sh
cargo add charset-normalizer-rs
```
This package comes with a CLI, which is intended to be compatible with the Python version's CLI tool.
```
Usage: normalizer [OPTIONS] <FILES>...

Arguments:
  <FILES>...

Options:
  -v, --verbose           Display complementary information about file if any. Stdout will contain logs about the detection process
  -a, --with-alternative  Output complementary possibilities if any. Top-level JSON WILL be a list
  -n, --normalize         Permit to normalize input file. If not set, program does not write anything
  -m, --minimal           Only output the charset detected to STDOUT. Disabling JSON output
  -r, --replace           Replace file when trying to normalize it instead of creating a new one
  -f, --force             Replace file without asking if you are sure, use this flag with caution
  -t, --threshold
```
```bash
normalizer ./data/sample.1.fr.srt
```
The CLI produces an easily usable stdout result in JSON format (it should be the same as in the Python version).
```json
{
"path": "/home/default/projects/charset_normalizer/data/sample.1.fr.srt",
"encoding": "cp1252",
"encoding_aliases": [
"1252",
"windows_1252"
],
"alternative_encodings": [
"cp1254",
"cp1256",
"cp1258",
"iso8859_14",
"iso8859_15",
"iso8859_16",
"iso8859_3",
"iso8859_9",
"latin_1",
"mbcs"
],
"language": "French",
"alphabets": [
"Basic Latin",
"Latin-1 Supplement"
],
"has_sig_or_bom": false,
"chaos": 0.149,
"coherence": 97.152,
"unicode_path": null,
"is_preferred": true
}
```
Just print out normalized text:

```python
from charset_normalizer import from_path

results = from_path('./my_subtitle.srt')

print(str(results.best()))
```
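For the Rust crate itself, the equivalent would look roughly like the sketch below. This is a minimal sketch that assumes the crate exposes `from_path`, `get_best()` and `decoded_payload()` mirroring the Python API, and the file path is just an example; please check the crate documentation for the exact function names and signatures.

```rust
// Minimal sketch; assumes `from_path`, `get_best()` and `decoded_payload()`
// mirror the Python API. Verify names and signatures against the crate docs.
use std::path::PathBuf;

use charset_normalizer_rs::from_path;

fn main() {
    // Analyse the file and get the ranked list of charset matches.
    let results = from_path(&PathBuf::from("./my_subtitle.srt"), None)
        .expect("analysis failed");

    // Take the most plausible match and print its decoded content, if any.
    if let Some(best) = results.get_best() {
        if let Some(text) = best.decoded_payload() {
            println!("{}", text);
        }
    }
}
```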
Upgrade your code without effort
```python
from charset_normalizer import detect
```
The above code will behave the same as chardet. We ensure that we offer the best (reasonable) backward-compatible result possible.
See the docs for advanced usage: readthedocs.io
When I started using Chardet (Python version), I noticed that it did not meet my expectations, so I wanted to propose a reliable alternative using a completely different method. Also, I never back down from a good challenge!
I don't care about the originating charset encoding, because two different tables can produce two identical rendered strings. What I want is to get readable text, the best I can.
In a way, I'm brute-forcing text decoding. How cool is that? 😎
Wait a minute, what is noise/mess and coherence according to YOU?
Noise: I opened hundreds of text files, written by humans, with the wrong encoding table. I observed, and then I established some ground rules about what is obvious when it looks like a mess. I know my interpretation of what noise is is probably incomplete; feel free to contribute in order to improve or rewrite it.
Coherence: For each language on earth, we have computed ranked letter-appearance occurrences (as best we can). I figured that intel is worth something here, so I use those records against decoded text to check whether I can detect intelligent design. A rough illustration of the idea is sketched below.
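To make the coherence idea concrete, here is a purely illustrative sketch, not the crate's actual scoring code: it compares the most frequent letters of a decoded text against a reference ranking for a language and treats a larger overlap as higher coherence. The English reference ranking shown is only an approximation.

```rust
use std::collections::HashMap;

/// Return the `top` most frequent alphabetic characters of `text`, lowercased.
fn ranked_letters(text: &str, top: usize) -> Vec<char> {
    let mut counts: HashMap<char, usize> = HashMap::new();
    for c in text.chars().filter(|c| c.is_alphabetic()) {
        let lower = c.to_lowercase().next().unwrap_or(c);
        *counts.entry(lower).or_insert(0) += 1;
    }
    let mut ranked: Vec<(char, usize)> = counts.into_iter().collect();
    ranked.sort_by(|a, b| b.1.cmp(&a.1));
    ranked.into_iter().take(top).map(|(c, _)| c).collect()
}

/// Fraction of the reference's most frequent letters that also appear among
/// the most frequent letters of the decoded text (1.0 = perfect overlap).
fn coherence_score(decoded: &str, reference_ranking: &[char]) -> f64 {
    let observed = ranked_letters(decoded, reference_ranking.len());
    let hits = reference_ranking
        .iter()
        .filter(|&c| observed.contains(c))
        .count();
    hits as f64 / reference_ranking.len() as f64
}

fn main() {
    // Rough approximation of the most frequent letters in English.
    let english = ['e', 't', 'a', 'o', 'i', 'n', 's', 'h', 'r', 'd'];
    let sample = "this is a perfectly ordinary english sentence";
    println!("coherence = {:.2}", coherence_score(sample, &english));
}
```

The real detector weighs many more signals than raw letter ranks, but the principle is the same: text decoded with the right table should statistically resemble some human language.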
Contributions, issues and feature requests are very much welcome.
Feel free to check the issues page if you want to contribute.
Copyright © Nikolay Yarovoy @nickspring - porting to Rust.
Copyright © Ahmed TAHRI @Ousret - original Python version and some parts of this document.
This project is MIT licensed.
Character frequencies used in this project © 2012 Denny Vrandečić