A web UI and optimization library for running and fine-tuning open-source AI models locally, offering 2x faster training with 70% less VRAM.
An open-source framework for building LLM-powered applications with data ingestion, indexing, and retrieval capabilities.
TensorFlow implementation and pre-trained models for BERT, a bidirectional Transformer for language understanding.
An open-source framework for financial large language models, enabling cost-effective fine-tuning for tasks like sentiment analysis and forecasting.
A comprehensive library for post-training foundation models using reinforcement learning and fine-tuning techniques.
Official JAX/Flax implementation of Vision Transformer (ViT) and MLP-Mixer for image recognition, with pre-trained models.
Unofficial Go client library for the OpenAI API, supporting ChatGPT, GPT-4, DALL·E, Whisper, and more.
A collection of libraries for optimizing AI model performance through inference acceleration, infrastructure efficiency, and fine-tuning optimization.
A curated collection of resources for building, training, serving, and optimizing production-grade Large Language Model applications.
A free course on designing, training, and deploying a production-ready, real-time financial advisor LLM system using RAG and LLMOps.
A high-performance, scalable LLM library and reference implementation written in pure Python/JAX for training on TPUs and GPUs.
A JAX research toolkit for building, editing, and visualizing neural networks as legible, functional pytree data structures.
A Python package for fine-tuning and generating text with GPT-2 and GPT Neo models using PyTorch and Hugging Face Transformers.
Open-Awesome is built by the community, for the community. Submit a project, suggest an awesome list, or help improve the catalog on GitHub.