		Update README.md
README.md CHANGED
    
@@ -35,7 +35,7 @@ Check out [chug](https://github.com/huggingface/chug), our optimized library for
 import chug
 task_cfg = chug.DataTaskDocReadCfg(page_sampling='all')
 data_cfg = chug.DataCfg(
-    source='pixparse/
+    source='pixparse/idl-wds',
     split='train',
     batch_size=None,
     format='hfids',
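For context, a self-contained sketch of the chug snippet after this change. Only the config lines above appear in the diff; the `num_workers` argument and the `chug.create_loader` call are assumptions based on chug's usual usage, so check chug's README for the exact entry point.

```python
import chug

task_cfg = chug.DataTaskDocReadCfg(page_sampling='all')
data_cfg = chug.DataCfg(
    source='pixparse/idl-wds',   # value introduced by this change
    split='train',
    batch_size=None,
    format='hfids',
    num_workers=0,               # assumption: single-process loading for a quick test
)

# Assumption: create_loader is the chug entry point that combines the two configs.
data_loader = chug.create_loader(data_cfg, task_cfg)
sample = next(iter(data_loader))
```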
@@ -56,7 +56,7 @@ This dataset can also be used with webdataset library or current releases of Hug
 Here is an example using the "streaming" parameter. We do recommend downloading the dataset to save bandwidth.
 
 ```python
-dataset = load_dataset('pixparse/
+dataset = load_dataset('pixparse/idl-wds', streaming=True)
 print(next(iter(dataset['train'])).keys())
 >> dict_keys(['__key__', '__url__', 'json', 'ocr', 'pdf', 'tif'])
 ```
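For reference, a runnable version of the streaming example after this change. Only the `load_dataset` call and the printed keys come from the diff; the import and the intermediate `sample` variable are added here for completeness.

```python
from datasets import load_dataset

# Stream samples on demand instead of downloading the full dataset first.
dataset = load_dataset('pixparse/idl-wds', streaming=True)

sample = next(iter(dataset['train']))
print(sample.keys())
# dict_keys(['__key__', '__url__', 'json', 'ocr', 'pdf', 'tif'])
```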
@@ -73,7 +73,7 @@ For faster download, you can directly use the `huggingface_hub` library. Make su
 
  #logging.set_verbosity_debug()
  hf = HfApi()
- hf.snapshot_download("pixparse/
+ hf.snapshot_download("pixparse/idl-wds", repo_type="dataset", local_dir_use_symlinks=False)
 
 ```
 
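And a self-contained sketch of the download snippet after this change. The `HfApi()` and `snapshot_download(...)` calls are taken from the diff; the import line and the `local_dir` argument are assumptions added for illustration.

```python
from huggingface_hub import HfApi, logging

# logging.set_verbosity_debug()  # uncomment to watch per-file download progress

hf = HfApi()
hf.snapshot_download(
    "pixparse/idl-wds",
    repo_type="dataset",
    local_dir="idl-wds",           # assumption: download into a plain local folder
    local_dir_use_symlinks=False,  # copy real files instead of cache symlinks
)
```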