# BigDocs-7.5M
#### Training data for the paper: [BigDocs: An Open and Permissively-Licensed Dataset for Training Multimodal Models on Document and Code Tasks](https://huggingface.co/datasets/ServiceNow/BigDocs-Bench-Collections/)
🌐 [Homepage](https://bigdocs.github.io) | 📖 [arXiv](https://arxiv.org/pdf/2412.04626)
## Guide on Data Loading
Some parts of BigDocs-7.5M are distributed without their "image" column and instead have an "img_id" column. The file `get_bigdocs_75m.py`, part of this repository, provides tooling to substitute those images back in.
```python
from get_bigdocs_75m import get_bigdocs_75m
arxivocr = get_bigdocs_75m("ArxivOCR")
arxivtablecap = get_bigdocs_75m("ArxivTableCap")
cocotext = get_bigdocs_75m("COCOtext", user_local_path=".../train2014")
pubtables1m = get_bigdocs_75m("pubtables-1m", user_local_path=".../PubTables-1M-Detection/images")
textocr = get_bigdocs_75m("TextOCR", user_local_path=".../train")
tabfact = get_bigdocs_75m("TabFact", user_local_path=".../Table-Fact-Checking")
open4business = get_bigdocs_75m("Open4Business", user_local_path=".../Open4Business")
wikitq = get_bigdocs_75m("WikiTQ", user_local_path=".../WikiTableQuestions")
```
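For intuition, the substitution can be pictured as a per-row join: each `img_id` is resolved to an image stored locally, and the result is written back into an `image` column. The sketch below is illustrative only — the helper name and the file-naming scheme are assumptions; the actual resolution logic lives in `get_bigdocs_75m.py`:

```python
import os

def attach_image(row, image_dir):
    # Hypothetical mapping from "img_id" to a local file path; the real
    # per-dataset resolution logic is implemented in get_bigdocs_75m.py.
    row["image"] = os.path.join(image_dir, f"{row['img_id']}.jpg")
    return row

row = attach_image({"img_id": "abc123"}, "/data/train2014")
# row["image"] now points at the locally downloaded image file
```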
When specified, `user_local_path` must point to one of the third-party datasets listed below.
- COCOtext: http://images.cocodataset.org/zips/train2014.zip
- pubtables-1m: https://www.microsoft.com/en-us/research/publication/pubtables-1m
- TextOCR: https://dl.fbaipublicfiles.com/textvqa/images/train_val_images.zip
- TabFact: https://github.com/wenhuchen/Table-Fact-Checking
- Open4Business: https://github.com/amanpreet692/Open4Business
- WikiTQ: https://github.com/ppasupat/WikiTableQuestions
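Since a wrong `user_local_path` typically surfaces only once image loading begins, it can save time to fail fast on a missing directory before calling `get_bigdocs_75m`. A minimal sketch — the helper is ours, not part of the repository:

```python
import os

def require_dir(path):
    # Fail fast if the third-party dataset was not extracted at `path`.
    if not os.path.isdir(path):
        raise FileNotFoundError(f"Expected an extracted dataset directory at: {path}")
    return path

# e.g. get_bigdocs_75m("COCOtext", user_local_path=require_dir(".../train2014"))
```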
You may specify `num_proc` as you would for `datasets.map`. See the docstring in `get_bigdocs_75m.py` for more details.
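As an illustration, `num_proc` can be derived from the machine's core count — a common heuristic for `datasets.map`-style parallelism, not a project recommendation:

```python
import os

# One worker per available CPU core; fall back to 1 if undetectable.
num_proc = os.cpu_count() or 1
# e.g. arxivocr = get_bigdocs_75m("ArxivOCR", num_proc=num_proc)
```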
## Licensing
The part of this repository generated by us is Copyright ServiceNow 2024 and licensed under the [CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/) license.
Multiple datasets, documents, and tools were involved in the generation of BigDocs-Bench. We document these dependencies on a per-sample basis through the `query_info`, `annotation_info` and `image_info` fields, which respectively describe the provenance of the `query`, `annotations` and `image` fields of our datasets.