There is no shortage of AI benchmarks available today, with popular options like Humanity's Last Exam (HLE), ARC-AGI-2 and GDPval, among numerous others.
AI agents excel at solving abstract math problems and passing the PhD-level exams that most benchmarks are based on, but Databricks has a question for the enterprise: Can they actually handle the document-heavy work most enterprises need them to do?
The answer, according to new research from the data and AI platform company, is sobering. Even the best-performing AI agents achieve less than 45% accuracy on tasks that mirror real enterprise workloads, exposing a critical gap between academic benchmarks and enterprise reality.
"If we focus our analysis efforts on getting higher at [existing benchmarks], then we're most likely not fixing the proper issues to make Databricks a greater platform," Erich Elsen, principal analysis scientist at Databricks, defined to VentureBeat. "In order that's why we have been trying round. How can we create a benchmark that, if we get higher at it, we're really getting higher at fixing the issues that our clients have?"
The result is OfficeQA, a benchmark designed to test AI agents on grounded reasoning: answering questions based on complex proprietary datasets containing unstructured documents and tabular data. Unlike existing benchmarks that focus on abstract capabilities, OfficeQA proxies for the economically useful tasks enterprises actually perform.
Why academic benchmarks miss the enterprise mark
Popular AI benchmarks have numerous shortcomings from an enterprise perspective, according to Elsen.
HLE features questions requiring PhD-level expertise across diverse fields. ARC-AGI evaluates abstract reasoning through visual manipulation of colored grids. Both push the frontiers of AI capabilities, but neither reflects daily enterprise work. Even GDPval, which was specifically created to evaluate economically useful tasks, misses the target.
"We come from a reasonably heavy science or engineering background, and generally we create evals that mirror that," Elsen stated. " In order that they're both extraordinarily math-heavy, which is a good, helpful process, however advancing the frontiers of human arithmetic shouldn’t be what clients try to do with Databricks."
While AI is often used for customer support and coding apps, Databricks' customer base has a broader set of requirements. Elsen noted that answering questions about documents or corpora of documents is a common enterprise task. These require parsing complex tables with nested headers, retrieving information across dozens or hundreds of documents and performing calculations where a single-digit error can cascade into organizations making incorrect business decisions.
Building a benchmark that mirrors enterprise document complexity
To create a meaningful test of grounded reasoning capabilities, Databricks needed a dataset that approximates the messy reality of proprietary enterprise document corpora, while remaining freely available for evaluation. The team landed on U.S. Treasury Bulletins, published monthly for five decades beginning in 1939 and quarterly thereafter.
The Treasury Bulletins check every box for enterprise document complexity. Each bulletin runs 100 to 200 pages and consists of prose, complex tables, charts and figures describing Treasury operations: where federal money came from, where it went and how it financed government operations. The corpus spans roughly 89,000 pages across eight decades. Until 1996, the bulletins were scans of physical documents; afterwards, they were digitally produced PDFs. USAFacts, an organization whose mission is "to make government data easier to access and understand," partnered with Databricks to develop the benchmark, identifying the Treasury Bulletins as ideal and ensuring questions reflected realistic use cases.
The 246 questions require agents to handle messy, real-world document challenges: scanned images, hierarchical table structures, temporal data spanning multiple reports and the need for external knowledge like inflation adjustments. Questions range from simple value lookups to multi-step analysis requiring statistical calculations and cross-year comparisons.
To ensure the benchmark requires actual document-grounded retrieval, Databricks filtered out questions that LLMs could answer using parametric knowledge or web search alone. This removed simpler questions and some surprisingly complex ones where models leveraged historical financial data memorized during pre-training.
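In spirit, that filtering step amounts to asking a model each question with no documents attached and discarding anything it already answers correctly. A minimal sketch, assuming a hypothetical `ask_model` helper (not Databricks' actual pipeline):

```python
from dataclasses import dataclass

@dataclass
class Question:
    text: str
    ground_truth: str

def ask_model(prompt: str) -> str:
    """Hypothetical LLM call with *no* retrieval context attached."""
    raise NotImplementedError  # wire up your own model client here

def normalize(s: str) -> str:
    return s.strip().lower().rstrip(".")

def filter_document_grounded(questions: list[Question]) -> list[Question]:
    """Drop questions a model can answer from memorized (parametric) knowledge."""
    kept = []
    for q in questions:
        answer = normalize(ask_model(q.text))
        if answer != normalize(q.ground_truth):
            kept.append(q)  # the model needed the documents; keep this question
    return kept
```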
Every question has a validated ground truth answer (usually a number, sometimes dates or small lists), enabling automated evaluation without human judging. This design choice matters: it enables reinforcement learning (RL) approaches that require verifiable rewards, similar to how models train on coding problems.
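Because the expected answers are mostly numbers, dates or short lists, grading can be a deterministic comparison. A minimal sketch of what such an exact-match grader might look like (an illustration, not Databricks' published scorer):

```python
def grade(prediction: str, ground_truth: str, tol: float = 1e-6) -> bool:
    """Return True iff a prediction matches the verified answer.

    A boolean signal like this doubles as the verifiable reward
    that an RL training loop would consume.
    """
    def norm(s: str) -> str:
        return s.strip().lower().replace(",", "").lstrip("$").rstrip("%.")

    p, g = norm(prediction), norm(ground_truth)
    try:
        return abs(float(p) - float(g)) <= tol  # numeric answers
    except ValueError:
        return p == g  # dates and short lists compared as normalized text
```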
Current performance exposes fundamental gaps
Databricks tested Claude Opus 4.5 Agent (using Claude's SDK) and GPT-5.1 Agent (using OpenAI's File Search API). The results should give pause to any enterprise betting heavily on current agent capabilities.
When provided with raw PDF documents:
- Claude Opus 4.5 Agent (with default thinking=high) achieved 37.4% accuracy.
- GPT-5.1 Agent (with reasoning_effort=high) achieved 43.5% accuracy.
However, performance improved markedly when the agents were provided with pre-parsed versions of the pages generated by Databricks' ai_parse_document (sketched after the results below), indicating that the poor raw-PDF performance stems from LLM APIs struggling with parsing rather than reasoning. Even with parsed documents, the experiments show room for improvement.
When provided with documents parsed using Databricks' ai_parse_document:
- Claude Opus 4.5 Agent achieved 67.8% accuracy (a +30.4 percentage point improvement).
- GPT-5.1 Agent achieved 52.8% accuracy (a +9.3 percentage point improvement).
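For reference, ai_parse_document is exposed as a SQL function on Databricks. A hedged sketch of what pre-parsing a PDF corpus from PySpark might look like (the volume path is a placeholder, and exact signatures and availability depend on your workspace and runtime, so check the Databricks documentation):

```python
# Hedged sketch: pre-parse a PDF corpus with ai_parse_document before
# handing the text to an agent. Assumes a Databricks notebook where
# `spark` is already defined; the volume path below is a placeholder.
parsed = spark.sql("""
    SELECT
      path,
      ai_parse_document(content) AS parsed_doc
    FROM read_files(
      '/Volumes/main/default/treasury_bulletins/*.pdf',
      format => 'binaryFile'
    )
""")
parsed.write.saveAsTable("main.default.parsed_bulletins")
```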
Three findings that matter for enterprise deployments
The testing identified critical insights for practitioners:
Parsing remains the fundamental blocker: Complex tables with nested headers, merged cells and unusual formatting frequently produce misaligned values. Even when given the exact oracle pages, agents struggled primarily due to parsing errors, although performance roughly doubled with pre-parsed documents.
Document versioning creates ambiguity: Financial and regulatory documents get revised and reissued, meaning multiple valid answers exist depending on the publication date. Agents often stop searching once they find a plausible answer, missing more authoritative sources.
Visual reasoning is a gap: About 3% of questions require chart or graph interpretation, where current agents consistently fail. For enterprises where data visualizations communicate critical insights, this represents a significant capability limitation.
How enterprises can use OfficeQA
The benchmark's design enables specific improvement paths beyond simple scoring.
"Because you're ready to take a look at the proper reply, it's straightforward to inform if the error is coming from parsing," Elsen defined.
This automated evaluation enables rapid iteration on parsing pipelines. The verified ground truth answers also enable RL training similar to coding benchmarks, since there's no human judgment required.
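In practice, that diagnosis can be mechanical. An illustrative sketch (not a Databricks tool): run the same question against both the raw PDF and a pre-parsed version, and compare each answer to the verified ground truth.

```python
def attribute_error(question, raw_pdf, parsed_text, run_agent, grade):
    """Label a failure as parsing vs. reasoning by comparing two runs.

    `run_agent` and `grade` are placeholders for your agent harness and
    an exact-match check against the question's verified answer.
    """
    raw_ok = grade(run_agent(question, raw_pdf), question.ground_truth)
    parsed_ok = grade(run_agent(question, parsed_text), question.ground_truth)
    if raw_ok and parsed_ok:
        return "correct"
    if parsed_ok:  # fails on the raw PDF but succeeds on clean text
        return "parsing error"
    return "reasoning/retrieval error"  # fails even with clean text
```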
Elsen said the benchmark provides "a really strong feedback signal" for developers working on search solutions. However, he cautioned against treating it as training data.
"Not less than in my creativeness, the purpose of releasing that is extra as an eval and never as a supply of uncooked coaching information," he stated. "Should you tune too particularly into this surroundings, then it's not clear how generalizable your agent outcomes can be."
What this means for enterprise AI deployments
For enterprises currently deploying or planning document-heavy AI agent systems, OfficeQA provides a sobering reality check. Even the latest frontier models achieve only 43% accuracy on unprocessed PDFs and fall short of 70% accuracy even with optimal document parsing. Performance on the hardest questions plateaus at 40%, indicating substantial room for improvement.
Three immediate implications:
Evaluate your document complexity: If your documents resemble the complexity profile of the Treasury Bulletins (scanned images, nested table structures, cross-document references), expect accuracy well below vendor marketing claims. Test on your actual documents before production deployment (a minimal harness sketch follows this list).
Plan for the parsing bottleneck: The test results indicate that parsing remains a fundamental blocker. Budget time and resources for custom parsing solutions rather than assuming off-the-shelf OCR will suffice.
Plan for hard-question failure modes: Even with optimal parsing, agents plateau at 40% on complex multi-step questions. For mission-critical document workflows that require multi-document analysis, statistical calculations or visual reasoning, current agent capabilities may not be ready without significant human oversight.
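As a starting point for that pre-deployment testing, the check can be as simple as the sketch below, where `run_agent` and `grade` are hypothetical stand-ins for your agent harness and an exact-match checker, as in the earlier sketches:

```python
def smoke_test(examples, run_agent, grade) -> float:
    """Measure agent accuracy on hand-labeled (question, docs, answer)
    triples drawn from your own corpus, before trusting vendor numbers."""
    correct = sum(
        grade(run_agent(question, docs), answer)
        for question, docs, answer in examples
    )
    accuracy = correct / len(examples)
    print(f"Accuracy on in-house set: {accuracy:.1%}")
    return accuracy
```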
For enterprises looking to lead in AI-powered document intelligence, this benchmark provides a concrete evaluation framework and identifies specific capability gaps that need fixing.