All Framework Listings
evals
Evals is a framework for evaluating LLMs and LLM systems, and an open-source registry of benchmarks.
swarm
Educational framework exploring ergonomic, lightweight multi-agent orchestration. Managed by the OpenAI Solutions team.
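A minimal sketch of the handoff pattern, adapted from the repository's documented usage (an OpenAI API key is assumed to be set in the environment; agent names and instructions are illustrative):

    from swarm import Swarm, Agent

    client = Swarm()  # uses the OpenAI API under the hood

    # Agent B is defined first so the handoff function below can return it.
    agent_b = Agent(
        name="Agent B",
        instructions="Only speak in haikus.",
    )

    def transfer_to_agent_b():
        """Handoff: returning another Agent transfers the conversation to it."""
        return agent_b

    agent_a = Agent(
        name="Agent A",
        instructions="You are a helpful agent.",
        functions=[transfer_to_agent_b],
    )

    response = client.run(
        agent=agent_a,
        messages=[{"role": "user", "content": "I want to talk to Agent B."}],
    )
    print(response.messages[-1]["content"])
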
lm-evaluation-harness
A framework for few-shot evaluation of language models.
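A minimal sketch of a programmatic run, assuming the library's simple_evaluate entry point; the model checkpoint, task name, and example limit below are illustrative:

    import lm_eval

    # Evaluate a small Hugging Face model on one task, zero-shot, over a
    # handful of examples as a quick smoke test.
    results = lm_eval.simple_evaluate(
        model="hf",                                      # Hugging Face backend
        model_args="pretrained=EleutherAI/pythia-160m",  # example checkpoint
        tasks=["lambada_openai"],                        # example task
        num_fewshot=0,                                   # few-shot count is configurable
        limit=10,                                        # small subset for a fast check
    )
    print(results["results"])  # per-task metrics
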
NeMo
A scalable generative AI framework built for researchers and developers working on Large Language Models, Multimodal Models, and Speech AI (Automatic Speech Recognition and Text-to-Speech).
PyRIT
The Python Risk Identification Tool for generative AI (PyRIT) is an open-source framework built to empower security professionals and engineers to proactively identify risks in generative AI systems.
Qwen-Agent
Agent framework and applications built upon Qwen>=2.0, featuring Function Calling, Code Interpreter, RAG, and a Chrome extension.
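A rough sketch of the Assistant agent with the Code Interpreter tool enabled; the model name and model_server value are assumptions that depend on your deployment (DashScope or an OpenAI-compatible endpoint):

    from qwen_agent.agents import Assistant

    # LLM backend configuration; values here are placeholders for your setup.
    llm_cfg = {"model": "qwen-max", "model_server": "dashscope"}

    bot = Assistant(llm=llm_cfg, function_list=["code_interpreter"])

    messages = [{"role": "user", "content": "Use Python to compute 2**32."}]
    responses = []
    for responses in bot.run(messages=messages):  # responses stream incrementally
        pass
    print(responses)  # final list of assistant/tool messages
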
MemGPT
Letta (formerly MemGPT) is a framework for creating LLM services with memory.
Fay
Fay is an open-source digital human framework integrating language models and digital characters. It offers retail, assistant, and agent versions for diverse applications such as virtual shopping guides, broadcasters, assistants, waiters, teachers, and voice- or text-based mobile assistants.
sglang
SGLang is a fast serving framework for large language models and vision language models.
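A minimal client-side sketch, assuming a server has already been launched (for example with `python -m sglang.launch_server --model-path <model> --port 30000`) and exposes its OpenAI-compatible endpoint locally; the port and model identifier are illustrative:

    from openai import OpenAI

    # Point the standard OpenAI client at the locally running SGLang server.
    client = OpenAI(base_url="http://localhost:30000/v1", api_key="EMPTY")

    resp = client.chat.completions.create(
        model="default",  # served model identifier; depends on how the server was launched
        messages=[{"role": "user", "content": "Summarize what a serving framework does."}],
        temperature=0.2,
    )
    print(resp.choices[0].message.content)
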
haystack
AI orchestration framework to build customizable, production-ready LLM applications. Connect components (models, vector DBs, file converters) to pipelines or agents that can interact with your data. With advanced retrieval methods, it's best suited for building RAG, question answering, semantic search, or conversational agent chatbots.
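A minimal pipeline sketch against the Haystack 2.x API, assuming an OpenAI API key in the environment; the component names, prompt template, and model are illustrative, and a real RAG pipeline would add a retriever and a document store:

    from haystack import Pipeline
    from haystack.components.builders import PromptBuilder
    from haystack.components.generators import OpenAIGenerator

    # Build a two-component pipeline: a prompt template feeding a generator.
    pipe = Pipeline()
    pipe.add_component("prompt_builder", PromptBuilder(template="Answer briefly: {{ question }}"))
    pipe.add_component("llm", OpenAIGenerator(model="gpt-4o-mini"))
    pipe.connect("prompt_builder.prompt", "llm.prompt")

    result = pipe.run({"prompt_builder": {"question": "What is retrieval-augmented generation?"}})
    print(result["llm"]["replies"][0])
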