= i18n_lexer
Rizzen Yazston
:DataProvider: https://docs.rs/icu_provider/1.2.0/icu_provider/trait.DataProvider.html
:url-unicode: https://home.unicode.org/
:CLDR: https://cldr.unicode.org/
:icu4x: https://github.com/unicode-org/icu4x
String lexer and resultant tokens.

The `Lexer` is initialised using any data provider implementing the {DataProvider}[`DataProvider`] trait to a {url-unicode}[Unicode Consortium] {CLDR}[CLDR] data repository (even a custom database). Usually the repository is just a local copy of the CLDR in the application's data directory. Once the `Lexer` has been initialised it may be used to tokenise strings, without re-initialising the `Lexer` before each use.
Consult the {icu4x}[ICU4X] website for instructions on generating a suitable data repository for the application, leaving out data that the application does not use.
Strings are tokenised using the `tokenise()` method, which takes a string slice and a vector of grammar syntax characters.
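To illustrate the general idea, here is a minimal, self-contained sketch (plain `std` only, not the crate's implementation): each grammar syntax character becomes a single-character token, and the runs of text between them become the remaining tokens.

```
// Toy illustration of grammar-character tokenising; the real crate also
// classifies tokens (grammar, syntax, white space, etc.) using ICU data.
fn toy_tokenise(input: &str, grammar: &[char]) -> Vec<String> {
    let mut tokens = Vec::new();
    let mut current = String::new();
    for ch in input.chars() {
        if grammar.contains(&ch) {
            // Flush any accumulated text, then emit the grammar
            // character as its own token.
            if !current.is_empty() {
                tokens.push(current.clone());
                current.clear();
            }
            tokens.push(ch.to_string());
        } else {
            current.push(ch);
        }
    }
    if !current.is_empty() {
        tokens.push(current);
    }
    tokens
}

fn main() {
    let tokens = toy_tokenise("Hello {name}!", &['{', '}']);
    println!("{:?}", tokens); // prints ["Hello ", "{", "name", "}", "!"]
}
```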
== Acknowledgement
Stefano Angeleri for advice on various design aspects of implementing the components of the internationalisation project, and also providing the Italian translation of error message strings.
== Cargo.toml
```
[dependencies]
i18n_icu-rizzen-yazston = "0.6.1"
icu_provider = "1.2.0"
icu_properties = "1.2.0"
icu_segmenter = "1.2.0"
icu_plurals = "1.2.0"
icu_decimal = "1.2.0"
icu_calendar = "1.2.0"
icu_datetime = "1.2.0"

[dependencies.fixed_decimal]
version = "0.5.3"
features = [ "ryu" ]
```
== Examples
A sketch of tokenising a string that contains grammar syntax characters. The function body is a reconstruction; the exact `tokenise()` signature and token counts may differ between crate versions, so consult the crate documentation for the version in use:

```
use i18n_icu::IcuDataProvider;
use i18n_lexer::{Token, TokenType, tokenise};
use icu_testdata::buffer;
use icu_provider::serde::AsDeserializingBufferProvider;
use std::rc::Rc;
use std::error::Error;

fn test_tokenise() -> Result<(), Box<dyn Error>> {
    // Wrap the buffered test data so it can serve deserialised ICU data.
    let buffer_provider = buffer();
    let data_provider = buffer_provider.as_deserializing();
    let icu_data_provider = IcuDataProvider::try_new(&data_provider)?;

    // Tokenise a string, treating '{' and '}' as grammar syntax characters.
    // Note: this call shape is an assumption reconstructed from the imports.
    let tokens = tokenise(
        "String contains a {placeholder}.",
        &vec!['{', '}'],
        &Rc::new(icu_data_provider),
    );

    // Count the grammar tokens produced for the two braces.
    let mut grammar = 0;
    for token in tokens.0.iter() {
        if token.token_type == TokenType::Grammar {
            grammar += 1;
        }
    }
    assert_eq!(grammar, 2, "Supposed to be 2 grammar tokens.");
    Ok(())
}
```