Update README.md
README.md

[Homepage](https://bigdocs.github.io) | [arXiv](https://arxiv.org/pdf/2412.04626)

## Guide on Data Loading
Some parts of BigDocs-7.5M are distributed without their "image" column, and instead have an "img_id" column. The file `get_bigdocs_75m.py`, part of this repository, provides tooling to substitute such images back in.

```python
from get_bigdocs_75m import get_bigdocs_75m

arxivocr = get_bigdocs_75m("ArxivOCR")
arxivtablecap = get_bigdocs_75m("ArxivTableCap")
cocotext = get_bigdocs_75m("COCOtext", user_local_path=".../train2014")
pubtables1m = get_bigdocs_75m("pubtables-1m", user_local_path=".../PubTables-1M-Detection/images")
textocr = get_bigdocs_75m("TextOCR", user_local_path=".../train")
tabfact = get_bigdocs_75m("TabFact", user_local_path=".../Table-Fact-Checking")
open4business = get_bigdocs_75m("Open4Business", user_local_path=".../Open4Business")
wikitq = get_bigdocs_75m("WikiTQ", user_local_path=".../WikiTableQuestions")
```
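
The loaders above presumably return Hugging Face `datasets` objects (the `num_proc` note further down points at `datasets.map`). Continuing from the snippet above, a quick sanity check under that assumption:

```python
# Assumes get_bigdocs_75m returns a datasets.Dataset whose "image"
# column has been restored from the local files.
example = cocotext[0]
print(example.keys())  # expect "image" to be present instead of "img_id"
```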

When specified, `user_local_path` must point to a local copy of the corresponding third-party dataset, obtained from the sources listed below (a download-and-load sketch follows the list).

- COCOtext: http://images.cocodataset.org/zips/train2014.zip
- pubtables-1m: https://www.microsoft.com/en-us/research/publication/pubtables-1m
- TextOCR: https://dl.fbaipublicfiles.com/textvqa/images/train_val_images.zip
- TabFact: https://github.com/wenhuchen/Table-Fact-Checking
- Open4Business: https://github.com/amanpreet692/Open4Business
- WikiTQ: https://github.com/ppasupat/WikiTableQuestions
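
For instance, to prepare the COCOtext subset end to end, one might do something like the following sketch. Only the archive URL and the `get_bigdocs_75m` call come from this README; the `downloads/` staging paths are hypothetical:

```python
import os
import urllib.request
import zipfile

from get_bigdocs_75m import get_bigdocs_75m

# Hypothetical staging directory; COCO train2014 is a large (~13 GB) download.
os.makedirs("downloads", exist_ok=True)
archive = "downloads/train2014.zip"
urllib.request.urlretrieve("http://images.cocodataset.org/zips/train2014.zip", archive)

# The archive unpacks to downloads/train2014/.
with zipfile.ZipFile(archive) as zf:
    zf.extractall("downloads")

# Point user_local_path at the extracted image folder to restore the images.
cocotext = get_bigdocs_75m("COCOtext", user_local_path="downloads/train2014")
```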

You may specify `num_proc` as you would for `datasets.map`. See the docstring in `get_bigdocs_75m.py` for more details.
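
For example, a short sketch assuming `num_proc` is simply forwarded to the underlying `datasets.map` call:

```python
from get_bigdocs_75m import get_bigdocs_75m

# Perform the image substitution in 8 worker processes, as with datasets.map.
textocr = get_bigdocs_75m("TextOCR", user_local_path=".../train", num_proc=8)
```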

## Licensing