# AI Infrastructure Index

## Dataset Description
The AI Infrastructure Index is a comprehensive open-source reference for AI hardware specifications, cloud GPU pricing, and infrastructure intelligence. It catalogs major AI hardware platforms currently in production, covering data center GPUs, custom AI accelerators (TPUs, LPUs, IPUs, WSEs), cloud pricing, benchmarks, and cost optimization data.
- Homepage: https://github.com/alpha-one-index/ai-infra-index
- Repository: https://github.com/alpha-one-index/ai-infra-index
- Live Data: https://alpha-one-index.github.io/ai-infra-index/
- API Access: https://alphaoneindex.com/api-access/
- Research: https://alphaoneindex.com/research/
- Point of Contact: Alpha One Index
## Dataset Summary
This dataset provides structured, machine-readable data on:
- Cloud GPU Pricing — Real-time pricing from 12 cloud providers (Azure, RunPod, Lambda Labs, CoreWeave, Together AI, Vast.ai, etc.)
- GPU Specifications — Detailed specs for NVIDIA (H100, H200, B200, GB200), AMD (MI300X, MI325X), and Intel (Gaudi 3) data center accelerators
- Performance Benchmarks — FP16/FP32/INT8 throughput, memory bandwidth, and interconnect specs
- Cost Optimization — Price-per-TFLOP calculations and cost efficiency rankings across providers
## API Access

The AI Infrastructure Index offers a REST API for programmatic access to cloud GPU pricing data.

**Base URL:** https://gpu-pricing-api.alphaoneindex.workers.dev

### Endpoints
| Endpoint | Description |
|---|---|
| `GET /api/v1/pricing` | All cloud GPU pricing data |
| `GET /api/v1/pricing?provider=runpod` | Filter by provider |
| `GET /api/v1/pricing?gpu=H100` | Filter by GPU model |
| `GET /api/v1/gpu-specs` | GPU specifications |
| `GET /api/v1/gpu-specs?vendor=nvidia` | Filter specs by vendor |
### Quick Start

```python
import requests

# Get all cloud GPU pricing
response = requests.get("https://gpu-pricing-api.alphaoneindex.workers.dev/api/v1/pricing")
data = response.json()

# Filter by provider
runpod = requests.get("https://gpu-pricing-api.alphaoneindex.workers.dev/api/v1/pricing?provider=runpod")
print(runpod.json())
```
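For more robust scripts, filters can be passed via `params` and HTTP errors surfaced before parsing. A minimal sketch: the endpoint path and query parameters come from the endpoints table above, while the helper name and its defaults are illustrative, not part of the API.

```python
import requests

BASE = "https://gpu-pricing-api.alphaoneindex.workers.dev"

def get_pricing(provider=None, gpu=None, timeout=10):
    """Fetch pricing rows, optionally filtered by provider or GPU model.

    Illustrative helper: wraps GET /api/v1/pricing with the documented
    `provider` and `gpu` query parameters.
    """
    params = {}
    if provider:
        params["provider"] = provider
    if gpu:
        params["gpu"] = gpu
    resp = requests.get(f"{BASE}/api/v1/pricing", params=params, timeout=timeout)
    resp.raise_for_status()  # surface 4xx/5xx instead of parsing an error body
    return resp.json()
```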
### Python (Hugging Face Datasets)

```python
from datasets import load_dataset

# Load cloud pricing data
ds = load_dataset("alpha-one-index/ai-infra-index", split="cloud_pricing")
print(ds[0])

# Load GPU specifications
gpu_specs = load_dataset("alpha-one-index/ai-infra-index", split="gpu_specs")
print(gpu_specs[0])
```
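A loaded split converts cleanly to pandas for cross-provider comparison. The sketch below uses a few rows shaped like the cloud-pricing schema (the prices are placeholders, not real quotes; with the live dataset, `df` would come from `load_dataset(...).to_pandas()`) to find the cheapest listing per GPU model:

```python
import pandas as pd

# Placeholder rows in the cloud_pricing schema; real data comes from the dataset.
df = pd.DataFrame([
    {"provider": "RunPod",  "gpu_name": "H100", "price_per_hour_usd": 2.79},
    {"provider": "Lambda",  "gpu_name": "H100", "price_per_hour_usd": 2.49},
    {"provider": "RunPod",  "gpu_name": "A100", "price_per_hour_usd": 1.64},
    {"provider": "Vast.ai", "gpu_name": "A100", "price_per_hour_usd": 1.10},
])

# Sort by price, then keep the first (cheapest) row per GPU model
cheapest = (
    df.sort_values("price_per_hour_usd")
      .groupby("gpu_name", as_index=False)
      .first()
)
print(cheapest)
```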
## Data Fields

### Cloud Pricing Split

| Field | Type | Description |
|---|---|---|
| `provider` | string | Cloud provider name |
| `gpu_name` | string | GPU model name |
| `gpu_memory_gb` | string | GPU VRAM |
| `price_per_hour_usd` | float | Hourly price in USD |
### GPU Specs Split

| Field | Type | Description |
|---|---|---|
| `gpu_name` | string | GPU model name |
| `gpu_arch` | string | Architecture (Hopper, Ada, CDNA3, etc.) |
| `vram_gb` | int | Video memory in GB |
| `fp16_tflops` | float | FP16 performance in TFLOPS |
| `source` | string | Data source |
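Because both splits carry `gpu_name`, cost-efficiency metrics such as price-per-TFLOP can be derived by joining them. A sketch with placeholder numbers (real values come from the two splits):

```python
import pandas as pd

# Placeholder rows shaped like the two splits; values are illustrative only.
pricing = pd.DataFrame([
    {"gpu_name": "H100", "provider": "Lambda", "price_per_hour_usd": 2.49},
    {"gpu_name": "A100", "provider": "RunPod", "price_per_hour_usd": 1.64},
])
specs = pd.DataFrame([
    {"gpu_name": "H100", "fp16_tflops": 989.0},
    {"gpu_name": "A100", "fp16_tflops": 312.0},
])

# Join on gpu_name, then compute hourly cost per TFLOP of FP16 throughput
merged = pricing.merge(specs, on="gpu_name")
merged["usd_per_tflop_hour"] = merged["price_per_hour_usd"] / merged["fp16_tflops"]
print(merged.sort_values("usd_per_tflop_hour"))
```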
## Supported Providers
| Provider | GPUs Tracked |
|---|---|
| RunPod | H100, A100, A6000, RTX 4090 |
| Lambda Labs | H100, A100, A10 |
| CoreWeave | H100, H200, A100 |
| Together AI | H100, A100 |
| Vast.ai | H100, A100, RTX 4090, RTX 3090 |
| Azure | H100, A100, T4 |
| AWS | H100, A100, T4, Inferentia |
| GCP | H100, A100, T4, TPU v5 |
| Paperspace | H100, A100, A6000 |
| Fluidstack | H100, A100 |
| Tensordock | A100, A6000, RTX 4090 |
| Oblivus | H100, A100 |
## Use Cases
- MLOps Cost Planning — Compare GPU pricing across providers for training and inference workloads
- Hardware Selection — Choose optimal GPU based on performance-per-dollar metrics
- Market Research — Track cloud GPU pricing trends over time
- Academic Research — Reference data for AI infrastructure studies and papers
## Citation

If you use this dataset in your research, please cite:

```bibtex
@dataset{alpha_one_index_2025,
  title={AI Infrastructure Index: Cloud GPU Pricing and Hardware Specifications},
  author={Alpha One Index},
  year={2025},
  url={https://huggingface.co/datasets/alpha-one-index/ai-infra-index},
  note={Comprehensive open-source reference for AI hardware specifications and cloud GPU pricing}
}
```
## License
This dataset is released under the MIT License.
## Updates
This dataset is updated regularly with the latest cloud GPU pricing and hardware specifications. For real-time data, visit the live dashboard or use the API.