A TensorFlow library for building models over structured data, using dynamic batching to support dynamically shaped computation graphs.
TensorFlow Fold is a library for creating TensorFlow models that consume structured data, where the computation graph dynamically adapts to each input's structure. It solves the problem of handling arbitrarily shaped data, such as parse trees in natural language processing, by using dynamic batching to rewrite a batch of varying computation graphs into a single static graph that TensorFlow can execute efficiently. This enables deep learning on non-uniform data without manual padding or fixed-size constraints.
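The core idea behind dynamic batching can be illustrated without the library itself. The sketch below is a pure-Python toy (not Fold's actual implementation): it collects the subtrees of several differently shaped trees and groups them by depth, mirroring how Fold schedules all same-depth operations as one batched op per level inside a static graph.

```python
# Toy illustration of dynamic batching (pure Python, no TensorFlow):
# nodes at the same depth across differently shaped trees are grouped,
# so each depth level could run as a single batched operation.

def tree_depth(tree):
    """Depth of a nested-tuple tree; leaves are ints."""
    if isinstance(tree, int):
        return 0
    return 1 + max(tree_depth(child) for child in tree)

def group_by_depth(trees):
    """Collect every subtree from a batch of trees, keyed by depth.

    Conceptually, each depth level becomes one batched op, regardless
    of the individual tree shapes in the batch.
    """
    levels = {}

    def visit(tree):
        levels.setdefault(tree_depth(tree), []).append(tree)
        if not isinstance(tree, int):
            for child in tree:
                visit(child)

    for t in trees:
        visit(t)
    return levels

# Two trees with different shapes: (1, 2) and ((3, 4), 5).
levels = group_by_depth([(1, 2), ((3, 4), 5)])
print(sorted(levels[0]))  # all five leaves, batched across both trees
print(len(levels[1]))     # two depth-1 subtrees: (1, 2) and (3, 4)
```

In the real library, this scheduling happens automatically and each level's batched op executes inside a static `tf.while_loop`, so the graph is built once and reused for any input shape.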
Machine learning researchers and developers working with structured or variable-sized data, such as natural language parse trees, graphs, or recursive structures, who need to build models in TensorFlow.
Developers choose TensorFlow Fold for its ability to seamlessly integrate structured data into TensorFlow with dynamic computation graphs and batching, offering flexibility and performance without the overhead of manual graph management. Its unique approach allows training on arbitrary data shapes while leveraging TensorFlow's optimized execution.
Deep learning with dynamic computation graphs in TensorFlow
Enables processing of arbitrarily shaped inputs like parse trees, as shown in the TreeLSTM example for sentiment analysis, without manual padding.
Transforms batches of variable graphs into static graphs for optimized TensorFlow execution, improving performance on structured data through operation batching.
Specifically designed for consuming structured data, making it essential for NLP tasks such as recursive neural networks on parse trees.
Provides the Loom API to create TensorFlow ops like concat and while_loop before variable initialization, offering fine-grained control for advanced users.
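Above the low-level Loom API, Fold's higher-level blocks API composes pipelines with the `>>` operator, e.g. `td.Map(td.Scalar()) >> td.Sum()` to sum a variable-length sequence. The toy below mimics only that composition style in pure Python (the `Block`, `Map`, and `Sum` names here are illustrative stand-ins, not Fold's real classes):

```python
# Toy mimic of Fold's combinator style (pure Python, not the real td.* API):
# blocks are callables composed with >> into a pipeline that accepts
# inputs of any length.

class Block:
    def __init__(self, fn):
        self.fn = fn

    def __rshift__(self, other):
        # self >> other: feed this block's output into the next block.
        return Block(lambda x: other.fn(self.fn(x)))

    def __call__(self, x):
        return self.fn(x)

def Map(f):
    """Apply f to each element of a sequence."""
    return Block(lambda xs: [f(x) for x in xs])

Sum = Block(sum)  # reduce a sequence to a single value

# Analogous in spirit to td.Map(td.Scalar()) >> td.Sum():
pipeline = Map(float) >> Sum
print(pipeline([1, 2, 3]))  # 6.0
print(pipeline([4, 5]))     # 9.0 (different length, same pipeline)
```

The same pipeline handles sequences of any length, which is the property that lets Fold express models over variable-shaped inputs declaratively.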
Its narrow focus on dynamic computation graphs over structured data makes it less relevant for general deep learning tasks with uniform, fixed-size inputs.
Requires understanding of both TensorFlow and structured data handling, with non-trivial setup steps highlighted in the documentation links.
As noted in the README, it is not an official Google product, which may lead to fewer updates and community resources over time.
The dynamic batching mechanism introduces computational overhead that may not be justified for models with fixed-size or uniformly structured data.