Web: https://deepcausality.com
DeepCausality is a hyper-geometric computational causality library that enables fast, deterministic, context-aware causal reasoning over complex multi-stage causality models. DeepCausality adds only minimal overhead and is therefore suitable for real-time applications without additional acceleration hardware. Take a look at how DeepCausality differs from deep learning.
1) DeepCausality is written in Rust with safety, reliability, and performance in mind.
2) DeepCausality provides recursive causal data structures that concisely express arbitrarily complex causal structures.
3) DeepCausality enables context awareness across data-like, time-like, space-like, and spacetime-like entities stored within (multiple) context hyper-graphs.
4) DeepCausality simplifies modeling of complex tempo-spatial patterns.
5) DeepCausality comes with a Causal State Machine (CSM).
In your project folder, just run in a terminal:

```bash
cargo add deep_causality
```
See:
A causal state machine models a context-free system where each cause maps to a known effect. The example below models a sensor network that screens an industrial site for smoke, fire, and explosions. Because the sensors are reliable, an alert is raised whenever a sensor exceeds a certain threshold. You could implement this kind of system in many different ways, but as the example shows, the causal state machine makes the system relatively easy to maintain. New sensors, for example from a drone inspection, can be added and evaluated dynamically.
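The idea can be sketched in plain Rust as a map from sensor causes to known effects. This is a minimal illustration of the cause-to-effect mapping described above, not the actual `deep_causality` CSM API; all names and thresholds here are hypothetical.

```rust
use std::collections::HashMap;

// Sketch of a causal state machine: each cause (a sensor threshold check)
// maps to a known effect (an alert action). Illustrative types only.
type Cause = fn(f64) -> bool;
type Effect = fn() -> &'static str;

// Build the initial sensor network (hypothetical sensors and thresholds).
fn build_csm() -> HashMap<&'static str, (Cause, Effect)> {
    let mut csm: HashMap<&'static str, (Cause, Effect)> = HashMap::new();
    csm.insert("smoke", (|v| v > 0.6, || "smoke alert"));
    csm.insert("fire", (|v| v > 0.8, || "fire alert"));
    csm.insert("explosion", (|v| v > 0.9, || "evacuate"));
    csm
}

// Evaluate one sensor reading; fire the effect only if the cause holds.
fn eval(
    csm: &HashMap<&'static str, (Cause, Effect)>,
    sensor: &str,
    reading: f64,
) -> Option<&'static str> {
    csm.get(sensor)
        .and_then(|(cause, effect)| if cause(reading) { Some(effect()) } else { None })
}

fn main() {
    let mut csm = build_csm();
    // New sensors, e.g. from a drone inspection, can be added dynamically.
    csm.insert("drone_heat", (|v| v > 0.7, || "heat alert"));
    println!("{:?}", eval(&csm, "smoke", 0.9)); // prints Some("smoke alert")
}
```

Because causes and effects live in a plain map, maintaining the system amounts to inserting or removing entries at runtime.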
DeepCausality uses the causaloid as its central data structure. A causaloid encodes a causal relation as a causal function that maps input data to an output decision: whether the causal relation holds for the given input data.
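Conceptually, a causaloid can be pictured as an id plus a causal function, along the lines of the sketch below. The struct and field names are illustrative assumptions for this README, not the crate's actual types.

```rust
// Sketch: a causaloid pairs an id and description with a causal function
// that maps observed input data to a boolean decision.
type CausalFn = fn(f64) -> bool;

struct Causaloid {
    id: usize,
    description: &'static str,
    causal_fn: CausalFn,
}

impl Causaloid {
    // Apply the causal function to one observation.
    fn evaluate(&self, obs: f64) -> bool {
        (self.causal_fn)(obs)
    }
}

fn main() {
    // Hypothetical smoke-sensor relation: the cause holds above a threshold.
    let smoke = Causaloid {
        id: 1,
        description: "smoke level exceeds 0.6",
        causal_fn: |x| x > 0.6,
    };
    println!("causaloid {} ({}): {}", smoke.id, smoke.description, smoke.evaluate(0.8)); // true
}
```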
A causaloid can be a singleton, a collection, or a graph. The causaloid graph itself is a hypergraph in which each node is a causaloid. This recursive structure means a sub-graph can be encapsulated as a causaloid, which then becomes a node of a graph. Likewise, a HashMap of causes can be encapsulated as a causaloid and embedded into the same graph. The entire causaloid graph can then be analyzed in a variety of ways, for example:
As long as the causal mechanisms can be expressed as a hyper-graph, the graph is guaranteed to evaluate. That means any combination of single cause, multi-cause, or partial cause can be expressed across many layers.
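The recursive structure described above can be sketched as a small enum: a causaloid is either a single causal function, a collection, or a graph whose nodes are themselves causaloids, so a whole sub-graph collapses into one node. This is an illustrative model, not the crate's API; the sketch uses all-of semantics for simplicity, whereas the library supports single-, multi-, and partial-cause evaluation.

```rust
// Sketch of the recursive causaloid idea (illustrative types only).
enum Causaloid {
    Singleton(fn(f64) -> bool),
    Collection(Vec<Causaloid>), // e.g. built from a HashMap of causes
    Graph(Vec<Causaloid>),      // simplified: a graph reduced to its node set
}

impl Causaloid {
    // All-of semantics for this sketch: every nested causaloid must hold.
    fn evaluate(&self, obs: f64) -> bool {
        match self {
            Causaloid::Singleton(f) => f(obs),
            Causaloid::Collection(cs) | Causaloid::Graph(cs) => {
                cs.iter().all(|c| c.evaluate(obs))
            }
        }
    }
}

fn main() {
    // A sub-graph (here a collection) is embedded as a single node
    // of a larger graph.
    let sub = Causaloid::Collection(vec![
        Causaloid::Singleton(|x| x > 0.2),
        Causaloid::Singleton(|x| x < 0.9),
    ]);
    let graph = Causaloid::Graph(vec![sub, Causaloid::Singleton(|x| x > 0.5)]);
    println!("{}", graph.evaluate(0.7)); // prints "true"
}
```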
Also note that, once activated, a causaloid stays activated until a different dataset evaluates negatively, which then deactivates it. Therefore, if parts of the dataset remain unchanged, the corresponding causaloids remain active.
By default, the causaloid ID is matched to the data index. For example, the root causaloid at index 0 matches the data at index 0, and the data at index 0 is used to evaluate the root causaloid. If, for any reason, the dataset is ordered differently, an optional data_index parameter can be specified: essentially a hashmap that maps each causaloid ID to a custom data index position.
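The lookup rule can be sketched as follows: by default causaloid ID `i` reads `data[i]`, and an optional hashmap overrides that mapping. The function name and signature here are illustrative assumptions, not the actual `deep_causality` API.

```rust
use std::collections::HashMap;

// Sketch: resolve the observation for a causaloid ID, honoring an
// optional data_index remapping (illustrative, not the crate signature).
fn obs_for(id: usize, data: &[f64], data_index: Option<&HashMap<usize, usize>>) -> f64 {
    let idx = data_index
        .and_then(|m| m.get(&id).copied())
        .unwrap_or(id); // default: causaloid ID matches the data index
    data[idx]
}

fn main() {
    let data = [0.9, 0.1, 0.5];

    // Default: causaloid 0 reads data[0].
    println!("{}", obs_for(0, &data, None)); // prints 0.9

    // Custom ordering: causaloid 0 reads data[2] instead.
    let mut map = HashMap::new();
    map.insert(0usize, 2usize);
    println!("{}", obs_for(0, &data, Some(&map))); // prints 0.5
}
```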
Reasoning performance for basic causality functions is guaranteed sub-second for graphs below 10,000 nodes and in the microsecond range for graphs below 1,000 nodes. However, graphs well above 100,000 nodes may require a large amount of memory (> 10 GB) because of the underlying matrix representation.
See tests as code examples:
Cargo works as expected, but in addition to cargo, a makefile abstracts over several additional tools that you may have to install before all make commands work:
```bash
make build      # Builds the code base incrementally (fast).
make bench      # Runs all benchmarks across all crates.
make check      # Checks the code base for security vulnerabilities.
make coverage   # Checks test coverage and generates an HTML report.
make example    # Runs the code examples.
make fix        # Auto-fixes linting issues as reported by cargo and clippy.
make test       # Runs all tests across all crates.
```
Contributions are welcome, especially those related to documentation, example code, and fixes. If unsure where to start, open an issue and ask. For more significant code contributions, please run `make test` and `make check` locally before opening a PR.
Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in deep_causality by you, shall be licensed under the MIT license without additional terms or conditions.
For details:
The project took inspiration from several researchers and their projects in the field:
Parts of the implementation are inspired by:
Finally, inspiration, especially related to the hypergraph structure, was derived from reading Quanta Magazine.
This project is licensed under the MIT license.
For details about security, please read the security policy.