We evaluate DeepCode on PaperBench, a rigorous benchmark released by OpenAI that requires AI agents to reproduce 20 ICML 2024 papers from scratch. The benchmark comprises 8,316 ...
This is a high-performance C utility for computing basic fairness metrics over large tabular datasets. It provides a lightweight alternative to the FairBench Python framework for bias and fairness ...