# SaaS-Bench Docker Images
Docker image archives for the SaaS-Bench benchmark — a suite of 23 self-hosted SaaS applications used to evaluate computer-use LLM agents on real, multi-step business workflows.
This repository hosts the prebuilt .tar images (≈ 52 GB total) so you
can reproduce the benchmark environment without rebuilding each app from
source. The eval harness, task definitions, and verifiers live in the main
SaaS-Bench repository.
Paper: SaaS-Bench: Can Computer-Use Agents Leverage Real-World SaaS to Solve Professional Workflows?
## Overview
SaaS-Bench evaluates browser-driving LLM agents on 106 task instances
across 6 domains, running on 23 self-hosted SaaS applications.
Each task asks the agent to complete a multi-step workflow (e.g. create a
purchase order, configure a project board, schedule a patient visit); a
per-task `verify.py` script then inspects the running application's state
(DB rows, API responses, filesystem) and returns a pass/fail.
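The actual `verify.py` scripts live in the main SaaS-Bench repository; as a rough illustration of the pattern (field names and data here are invented, not taken from the benchmark), a verifier reduces to a predicate over the app's observed state:

```python
def verify_state(rows: list[dict], expected: list[dict]) -> bool:
    """Pass iff every expected record matches some observed row.

    `rows` would be gathered from the app under test (DB query, REST
    API call, or files on disk); `expected` encodes the task's goal.
    """
    return all(
        any(all(row.get(k) == v for k, v in want.items()) for row in rows)
        for want in expected
    )

# Illustrative only: did the agent create the purchase order?
observed = [{"type": "purchase_order", "vendor": "Acme", "status": "open"}]
goal = [{"type": "purchase_order", "vendor": "Acme"}]
print(verify_state(observed, goal))  # True -> task passes
```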
| Track | Domain | Tasks | Representative apps |
|---|---|---|---|
| uni-m | BOF | 15 | Twenty, Bigcapital, HRMS, Pretix |
| uni-m | HA | 16 | OpenEMR, OnlyOffice, OpnForm |
| uni-m | SEPM | 31 | Baserow, OpenProject, code-server, Metabase |
| uni-m | TCDW | 12 | OnlyOffice, Mattermost, RoundcubeMail, ownCloud |
| multi-m | AASC | 12 | Grocy, farmOS, Recipya, e-label |
| multi-m | IMC | 20 | SiYuan, Watcharr, BookLore, PhotoPrism, MediaCMS |
Domains: BOF = Business Operations & Finance · HA = Healthcare & Administration · SEPM = Software Eng. & Project Mgmt. · TCDW = Team Comms & Document Workflows · AASC = Agriculture, Authoring & Supply Chain · IMC = Information Mgmt. & Creative.
## Contents
23 Docker image archives (`mw-*.tar`) covering every app used by the
benchmark.
| File | App / Stack | Size |
|---|---|---|
| `mw-baserow.tar` | Baserow | 2.86 GB |
| `mw-bigcapital.tar` | Bigcapital | 2.74 GB |
| `mw-booklore.tar` | BookLore | 1.45 GB |
| `mw-code-server.tar` | code-server | 4.60 GB |
| `mw-elabel.tar` | e-label | 1.86 GB |
| `mw-farmos.tar` | farmOS | 1.05 GB |
| `mw-grocy.tar` | Grocy | 273 MB |
| `mw-hrms.tar` | HRMS | 5.47 GB |
| `mw-mattermost.tar` | Mattermost (+Postgres) | 1.42 GB |
| `mw-mediacms.tar` | MediaCMS | 1.64 GB |
| `mw-metabase.tar` | Metabase | 847 MB |
| `mw-onlyoffice.tar` | OnlyOffice Community | 9.74 GB |
| `mw-openemr.tar` | OpenEMR | 2.61 GB |
| `mw-openproject.tar` | OpenProject | 2.11 GB |
| `mw-opnform.tar` | OpnForm | 474 MB |
| `mw-owncloud.tar` | ownCloud | 1.97 GB |
| `mw-photoprism.tar` | PhotoPrism | 3.56 GB |
| `mw-pretix.tar` | Pretix | 2.23 GB |
| `mw-recipya.tar` | Recipya | 593 MB |
| `mw-roundcubemail.tar` | Roundcube Mail | 1.25 GB |
| `mw-siyuan.tar` | SiYuan Notes | 2.85 GB |
| `mw-twenty.tar` | Twenty CRM | 1.97 GB |
| `mw-watcharr.tar` | Watcharr | 239 MB |
Each tar already contains the `:latest` tag; image names follow the
`mw-<app>[-<component>]` convention so loaders can resolve them
deterministically.
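Once downloaded, the archives can be loaded into the local Docker daemon in one pass. A minimal sketch (the directory path is illustrative and assumes the download layout used below):

```shell
# Load every SaaS-Bench image archive found in the given directory.
load_saas_images() {
  local dir="${1:-docker/images}"
  local tar found=0
  for tar in "$dir"/mw-*.tar; do
    [ -e "$tar" ] || continue
    found=1
    echo "loading $tar"
    docker load -i "$tar"
  done
  [ "$found" -eq 1 ] || { echo "no mw-*.tar archives in $dir" >&2; return 1; }
}
```

Invoke as `load_saas_images docker/images` after the download step; `docker load` restores each image under its original `mw-<app>` name and `:latest` tag.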
## Download
```python
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="Marti844/SaaS-Bench-docker",
    repo_type="dataset",
    local_dir="docker/images",
    allow_patterns=["*.tar"],
)
```
Or with the CLI:
```shell
huggingface-cli download anonymous8722/SaaS-Bench \
  --repo-type dataset --local-dir docker \
  --include "*.tar"
```
## System requirements
- Disk: ≥ 60 GB free for the image archives plus loaded images.
- RAM: ≥ 500 GB recommended if you run the full eval with the default 4-way parallelism — most stacks bundle their own DB / search / document-server, so total memory grows quickly under concurrency.
- Host OS: Linux. Tested on Ubuntu 22.04 and Alibaba Cloud Linux.
- Docker: 24+ with the `compose` plugin.
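Before loading ~52 GB of archives it may be worth a quick preflight check. A sketch (the 60 GB threshold comes from the disk requirement above; assumes GNU `df`):

```shell
# Sanity-check the host before loading the image archives.
preflight() {
  local dir="${1:-.}" need_gb="${2:-60}" free_gb
  docker compose version >/dev/null 2>&1 \
    || { echo "docker with the compose plugin is required" >&2; return 1; }
  free_gb=$(df -BG --output=avail "$dir" | tail -n 1 | tr -dc '0-9')
  [ "$free_gb" -ge "$need_gb" ] \
    || { echo "need >= ${need_gb} GB free in $dir, have ${free_gb} GB" >&2; return 1; }
  echo "preflight ok: ${free_gb} GB free in $dir"
}
```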
## Licensing
- This card: Apache 2.0.
- Each bundled Docker image retains the license of its upstream project (e.g. OnlyOffice — AGPLv3, Mattermost — MIT/AGPLv3 dual, OpenEMR — GPLv3, etc.). The images are redistributed for benchmarking convenience only. Verify upstream terms before any non-research use.