jbinkowski committed · Commit 42d8a10 · 0 Parent(s): initial commit

.gitattributes ADDED
@@ -0,0 +1,57 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ckpt filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.lz4 filter=lfs diff=lfs merge=lfs -text
+ *.mlmodel filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.npy filter=lfs diff=lfs merge=lfs -text
+ *.npz filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pickle filter=lfs diff=lfs merge=lfs -text
+ *.pkl filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ *.safetensors filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tar filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.wasm filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zst filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
+ # Audio files - uncompressed
+ *.pcm filter=lfs diff=lfs merge=lfs -text
+ *.sam filter=lfs diff=lfs merge=lfs -text
+ *.raw filter=lfs diff=lfs merge=lfs -text
+ # Audio files - compressed
+ *.aac filter=lfs diff=lfs merge=lfs -text
+ *.flac filter=lfs diff=lfs merge=lfs -text
+ *.mp3 filter=lfs diff=lfs merge=lfs -text
+ *.ogg filter=lfs diff=lfs merge=lfs -text
+ *.wav filter=lfs diff=lfs merge=lfs -text
+ # Image files - uncompressed
+ *.bmp filter=lfs diff=lfs merge=lfs -text
+ *.gif filter=lfs diff=lfs merge=lfs -text
+ *.png filter=lfs diff=lfs merge=lfs -text
+ *.tiff filter=lfs diff=lfs merge=lfs -text
+ # Image files - compressed
+ *.jpg filter=lfs diff=lfs merge=lfs -text
+ *.jpeg filter=lfs diff=lfs merge=lfs -text
+ *.webp filter=lfs diff=lfs merge=lfs -text
+ data/judgement_graph.json filter=lfs diff=lfs merge=lfs -text
+ data/judgment_graph.json filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,141 @@
+ ---
+ language: pl
+ size_categories: 100K<n<1M
+ source_datasets:
+ - pl-court-raw
+ pretty_name: Polish Court Judgments Graph
+ viewer: false
+ tags:
+ - graph
+ - bipartite
+ - polish court
+ ---
+
+ # Polish Court Judgments Graph
+
+ ## Dataset description
+ We introduce a graph dataset of Polish Court Judgments, based primarily on [`JuDDGES/pl-court-raw`](https://huggingface.co/datasets/JuDDGES/pl-court-raw). Nodes represent either judgments or legal bases, and edges connect judgments to the legal bases they refer to; since edges run only between these two node types, the graph is bipartite. The graph was also cleaned of small disconnected components, leaving a single giant component. We provide the dataset in both `JSON` and `PyG` formats, each serving a different purpose: the graphs in the two formats are structurally identical, but their attributes differ.
+
+ The `JSON` format is intended for analysis and contains most of the attributes available in [`JuDDGES/pl-court-raw`](https://huggingface.co/datasets/JuDDGES/pl-court-raw). We excluded some less useful attributes and the text content, which can easily be retrieved from the raw dataset and added to the graph as needed.
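+
+ For instance, judgment text can be re-attached roughly as follows. This is only a sketch: it assumes the graph `g` has been loaded as shown in the Loading section below, and the split name and column names used here (`train`, `_id`, `text`) are assumptions that should be verified against the `pl-court-raw` schema:
+
+ ```python
+ from datasets import load_dataset
+
+ raw = load_dataset("JuDDGES/pl-court-raw", split="train")  # split name is an assumption
+ id_to_text = dict(zip(raw["_id"], raw["text"]))            # column names are assumptions
+
+ # Attach the text to the corresponding judgment nodes.
+ for node, attrs in g.nodes(data=True):
+     if attrs["node_type"] == "judgment":
+         g.nodes[node]["text"] = id_to_text.get(attrs["_id"])
+ ```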
+
+ The `PyG` format is designed for machine learning applications, such as link prediction on graphs, and is fully compatible with the [`PyTorch Geometric`](https://github.com/pyg-team/pytorch_geometric) framework.
+
+ In the following sections, we provide a more detailed explanation and use case examples for each format.
+
+ ## Dataset statistics
+
+ | feature | value |
+ |----------------------------|----------------------|
+ | #nodes | 369033 |
+ | #edges | 1131458 |
+ | #nodes (type=`judgment`) | 366212 |
+ | #nodes (type=`legal_base`) | 2819 |
+ | avg(degree) | 6.132015294025195 |
+
+ ![png](assets/degree_distribution.png)
+
+ ## `JSON` format
+
+ In the `JSON` format, node types are differentiated by the `node_type` attribute. Each `node_type` has its own set of additional attributes (see [`JuDDGES/pl-court-raw`](https://huggingface.co/datasets/JuDDGES/pl-court-raw) for a detailed description of each attribute):
+
+ | node_type | attributes |
+ |--------------|---------------------------------------------------------------------------------------------------------------------|
+ | `judgment` | `_id`,`chairman`,`court_name`,`date`,`department_name`,`judges`,`node_type`,`publisher`,`recorder`,`signature`,`type` |
+ | `legal_base` | `isap_id`,`node_type`,`title` |
+
+ ### Loading
+ The graph in `JSON` format is stored as node-link data and can be readily loaded with the `networkx` library:
+
+ ```python
+ import json
+ import networkx as nx
+ from huggingface_hub import hf_hub_download
+
+ DATA_DIR = "<your_local_data_directory>"
+ JSON_FILE = "data/judgment_graph.json"
+ hf_hub_download(repo_id="JuDDGES/pl-court-graph", repo_type="dataset", filename=JSON_FILE, local_dir=DATA_DIR)
+
+ with open(f"{DATA_DIR}/{JSON_FILE}") as file:
+     g_data = json.load(file)
+
+ g = nx.node_link_graph(g_data)
+ ```
+
+ ### Example usage
+ ```python
+ # TBD
+ ```
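+
+ In the meantime, a minimal sketch of typical operations on the loaded graph; it relies only on the `node_type` and `title` attributes documented above, so adapt it as needed:
+
+ ```python
+ from collections import Counter
+
+ # Count nodes per type; this should match the statistics table above.
+ type_counts = Counter(attrs["node_type"] for _, attrs in g.nodes(data=True))
+ print(type_counts)
+
+ # Average degree of the graph.
+ avg_degree = sum(d for _, d in g.degree()) / g.number_of_nodes()
+ print(f"avg(degree) = {avg_degree:.2f}")
+
+ # Legal bases referred to by an arbitrary judgment node.
+ some_judgment = next(n for n, attrs in g.nodes(data=True) if attrs["node_type"] == "judgment")
+ cited_titles = [g.nodes[nb].get("title") for nb in g.neighbors(some_judgment)]
+ print(cited_titles)
+ ```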
+
+ ## `PyG` format
+
+ The `PyTorch Geometric` format includes embeddings of the judgment content, obtained with [sdadas/mmlw-roberta-large](https://huggingface.co/sdadas/mmlw-roberta-large), for judgment nodes, and one-hot-vector identifiers for legal-base nodes (note that for efficiency one can substitute them with random-noise identifiers, as in [(Abboud et al., 2021)](https://arxiv.org/abs/2010.01179)).
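+
+ A minimal sketch of that substitution is shown below; the tensors are standalone illustrations, and how the features are attached to the graph depends on the object returned by the loader in the next section:
+
+ ```python
+ import torch
+
+ num_legal_base = 2819  # number of legal-base nodes, see the statistics table above
+ feature_dim = 128      # free choice, much smaller than the one-hot dimension
+
+ # One-hot identifiers, as shipped in the PyG file: an identity matrix.
+ one_hot_ids = torch.eye(num_legal_base)
+
+ # Random-noise identifiers in the spirit of Abboud et al. (2021):
+ # cheaper to store, with expressive power preserved for GNNs.
+ random_ids = torch.randn(num_legal_base, feature_dim)
+ ```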
+
+ ### Loading
+ To load the graph as a PyTorch Geometric dataset, use the following code snippet:
+ ```python
+ import os
+
+ import torch
+ from torch_geometric.data import InMemoryDataset, download_url
+
+
+ class PlCourtGraphDataset(InMemoryDataset):
+     # Direct link to the raw PyG file stored in this repository.
+     URL = (
+         "https://huggingface.co/datasets/JuDDGES/pl-court-graph/resolve/main/"
+         "data/pyg_judgment_graph.pt?download=true"
+     )
+
+     def __init__(self, root_dir: str, transform=None, pre_transform=None):
+         super().__init__(root_dir, transform, pre_transform)
+         data_file, index_file = self.processed_paths
+         self.load(data_file)
+         self.judgment_idx_2_iid, self.legal_base_idx_2_isap_id = torch.load(index_file).values()
+
+     @property
+     def raw_file_names(self) -> str:
+         return "pyg_judgment_graph.pt"
+
+     @property
+     def processed_file_names(self) -> list[str]:
+         return ["processed_pyg_judgment_graph.pt", "index_map.pt"]
+
+     def download(self) -> None:
+         os.makedirs(self.root, exist_ok=True)
+         download_url(self.URL, self.raw_dir)
+
+     def process(self) -> None:
+         dataset = torch.load(self.raw_paths[0])
+         data = dataset["data"]
+
+         if self.pre_transform is not None:
+             data = self.pre_transform(data)
+
+         data_file, index_file = self.processed_paths
+         self.save([data], data_file)
+
+         # Keep the index mappings (graph index -> judgment id / ISAP id) next to the data.
+         torch.save(
+             {
+                 "judgment_idx_2_iid": dataset["judgment_idx_2_iid"],
+                 "legal_base_idx_2_isap_id": dataset["legal_base_idx_2_isap_id"],
+             },
+             index_file,
+         )
+
+     def __repr__(self) -> str:
+         return f"{self.__class__.__name__}({len(self)})"
+
+
+ ds = PlCourtGraphDataset(root_dir="data/datasets/pyg")
+ print(ds)
+ ```
+
+ ### Example usage
+ ```python
+ # TBD
+ ```
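+
+ In the meantime, a minimal sketch of preparing a link-prediction split; it assumes the dataset yields a standard homogeneous `Data` object with an `edge_index`, so adapt it if the stored object differs:
+
+ ```python
+ from torch_geometric.transforms import RandomLinkSplit
+
+ data = ds[0]  # the single graph stored in the dataset
+
+ # Split edges into train/val/test sets for link prediction.
+ transform = RandomLinkSplit(num_val=0.1, num_test=0.1, add_negative_train_samples=True)
+ train_data, val_data, test_data = transform(data)
+ ```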
assets/degree_distribution.png ADDED

Git LFS Details

  • SHA256: e2f3f4aad74ad47908fc0eefe72ceef65fef81e5e5443e91d58f9903491bd49c
  • Pointer size: 130 Bytes
  • Size of remote file: 25.3 kB
data/judgment_graph.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:08662835e73891d4d7c1e3b4aa35740588f87c9d9755c27f973634309c290c50
+ size 249745157
data/pyg_judgment_graph.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a24bd2be304d638a6d722b796ae8a40d49c27f98a27f2b3c6b57d75925328e93
+ size 1574015137