All Evals Listings
Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams
Rigorous evaluation of LLM-synthesized code - NeurIPS 2023 & COLM 2024
Code for the paper "Evaluating Large Language Models Trained on Code"
[ICLR 2024] SWE-bench: Can Language Models Resolve Real-World GitHub Issues?
Evals is a framework for evaluating LLMs and LLM systems, and an open-source registry of benchmarks.
A framework for few-shot evaluation of language models (a minimal usage sketch follows the listings below).
The Python Risk Identification Tool for generative AI (PyRIT) is an open source framework built to empower security professionals and engineers to proactively identify risks in generative AI systems.
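The listing described as "a framework for few-shot evaluation of language models" matches the tagline of EleutherAI's lm-evaluation-harness. Below is a minimal sketch of how an evaluation run with its Python API might look, assuming the v0.4+ interface; the model checkpoint, task name, and exact keyword arguments are illustrative placeholders and may differ between releases.

```python
# Minimal sketch (assumed lm-evaluation-harness v0.4+ interface): evaluate a small
# Hugging Face model on a single benchmark task. Names below are placeholders.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",                                      # Hugging Face transformers backend
    model_args="pretrained=EleutherAI/pythia-160m",  # placeholder checkpoint
    tasks=["lambada_openai"],                        # placeholder task from the task registry
    num_fewshot=0,                                   # zero-shot evaluation
)

# Per-task metrics (accuracy, perplexity, etc.) are returned under "results".
print(results["results"])
```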