This view is limited to 50 files because it contains too many changes.

Files changed (50)
  1. .gitignore +1 -17
  2. .vscode/data/memo/tmp/Corpus-v1.1 +0 -1
  3. .vscode/settings.json +2 -2
  4. CHANGELOG.md +0 -175
  5. CONTRIBUTING.md +8 -69
  6. README.md +86 -469
  7. data/adl/adl.md +40 -82
  8. data/adl/adl.parquet +2 -2
  9. data/adl/descriptive_stats.json +0 -9
  10. data/adl/images/dist_document_length.png +0 -3
  11. data/ai-aktindsigt/ai-aktindsigt.md +0 -85
  12. data/ai-aktindsigt/create.py +0 -64
  13. data/ai-aktindsigt/descriptive_stats.json +0 -9
  14. data/ai-aktindsigt/images/dist_document_length.png +0 -3
  15. data/botxt/botxt.md +40 -77
  16. data/botxt/botxt.parquet +2 -2
  17. data/botxt/descriptive_stats.json +0 -9
  18. data/botxt/images/dist_document_length.png +0 -3
  19. data/cellar/cellar.md +0 -77
  20. data/cellar/cellar.parquet +0 -3
  21. data/cellar/create.py +0 -60
  22. data/cellar/descriptive_stats.json +0 -9
  23. data/cellar/images/dist_document_length.png +0 -3
  24. data/dannet/dannet.md +63 -89
  25. data/dannet/dannet.parquet +2 -2
  26. data/dannet/descriptive_stats.json +0 -9
  27. data/dannet/images/dist_document_length.png +0 -3
  28. data/danske-taler/create.py +0 -314
  29. data/danske-taler/danske-taler.log +0 -167
  30. data/danske-taler/danske-taler.md +0 -135
  31. data/danske-taler/danske-taler.parquet +0 -3
  32. data/danske-taler/descriptive_stats.json +0 -9
  33. data/danske-taler/images/dist_document_length.png +0 -3
  34. data/depbank/depbank.md +33 -97
  35. data/depbank/depbank.parquet +2 -2
  36. data/depbank/descriptive_stats.json +0 -9
  37. data/depbank/images/dist_document_length.png +0 -3
  38. data/domsdatabasen/create.py +0 -344
  39. data/domsdatabasen/descriptive_stats.json +0 -9
  40. data/domsdatabasen/domsdatabasen.md +0 -119
  41. data/domsdatabasen/domsdatabasen.parquet +0 -3
  42. data/domsdatabasen/images/dist_document_length.png +0 -3
  43. data/enevaeldens_nyheder/create.py +0 -96
  44. data/enevaeldens_nyheder/descriptive_stats.json +0 -9
  45. data/enevaeldens_nyheder/enevaeldens_nyheder.log +0 -9
  46. data/enevaeldens_nyheder/enevaeldens_nyheder.md +0 -172
  47. data/enevaeldens_nyheder/enevaeldens_nyheder.parquet +0 -3
  48. data/enevaeldens_nyheder/images/coverage-of-newspapers.jpeg +0 -3
  49. data/enevaeldens_nyheder/images/dist_document_length.png +0 -3
  50. data/enevaeldens_nyheder/images/distribution-pr-year.jpeg +0 -3
.gitignore CHANGED
@@ -5,21 +5,5 @@ __pycache__/*
  # cSpell
  cspell.json
 
- # debugfile
- .vscode/launch.json
-
- # tmp files
- tmp.py
- tmp.png
-
- # MacOS
- .DS_Store
-
  # tmp files
- tmp.py
-
- ## to allow temporary data drops without pushing it to the hub
- data/*/tmp/*
-
- ## node_modules
- **/node_modules/
+ tmp.py

.vscode/data/memo/tmp/Corpus-v1.1 DELETED
@@ -1 +0,0 @@
- Subproject commit 7205897f1f3ee65e296072f3e96d49488e54e8ce

.vscode/settings.json CHANGED
@@ -1,7 +1,7 @@
  {
  "python.testing.pytestArgs": [
- "src/tests"
+ "."
  ],
  "python.testing.unittestEnabled": false,
- "python.testing.pytestEnabled": true,
+ "python.testing.pytestEnabled": true
  }
CHANGELOG.md DELETED
@@ -1,175 +0,0 @@
1
-
2
- # Changelog
3
-
4
- All notable changes to this project will be documented in this file.
5
-
6
- The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
7
-
8
- ## [v1.2.12] - 2025-09-16
9
-
10
- ### Added
11
-
12
- - Added dataset: historical-danish-handwriting
13
-
14
- ## [v1.2.11] - 2025-09-02
15
-
16
- ### Changed
17
-
18
- - Updated Contributing.md to include the activation of the environment
19
-
20
- ### Added
21
-
22
- - Added dataset: wiki-comments
23
-
24
- ## [v1.2.10] - 2025-08-18
25
-
26
- ### Changed
27
-
28
- - Updated the wiki, wikibooks, wikisource datasets.
29
- - Changed `wiki` to `wikipedia`
30
- - Fixed rounding error in average token count
31
- - Improved the speed of token counting
32
-
33
- ### Added
34
-
35
- - Added `create.py` for wiki, wikibooks, wikisource.
36
-
37
- ## [v1.2.9] - 2025-08-05
38
-
39
- ### Docs
40
-
41
- - Average document length now uses tokens instead of characters
42
- Added visualization for checking document length in sub-datasets
43
- - Changes to `*/descriptive_stats.json`:
44
- - The object no longer includes revision.
45
- - Now include character-level metrics along with minimum and maximum length. Removed average document length as it is computable from existing metrics.
46
- - Removed per-dataset histograms from the main readme. The goal is to avoid loading the entire dataset when updating the readme. This should make it easier for contributors.
47
- - Simplifying PR workflow in `contributing.md`
48
-
49
- ### CI
50
- - Fixes bug causing `make update-descriptive-stats` to fail when not having a linear commit history. The script now skips a dataset update based on revision, but only if the `descriptive_stats.json` file does not exist. To ensure that the main readme is always up to date, we change the make command always to update it.
51
-
52
- ## [v1.2.8] - 2025-08-05
53
-
54
- ### Added
55
-
56
- - Added dataset: Enevældens Nyheder Online (`enevaeldens_nyheder`). This brings us to >5B tokens!
57
-
58
- ## [v1.2.7] - 2025-07-22
59
-
60
- ### Added
61
-
62
- - Added dataset: Grundtvigs Works (`grundtvig`)
63
- - Added bias and risk section to the README
64
-
65
- ## [v1.2.6] - 2025-07-21
66
-
67
- ### Added
68
-
69
- Added two tables to get an overview of data by license and domain
70
-
71
- ### Changed
72
-
73
- - Dataset overview table now appears in a drop down menu
74
-
75
- ## [v1.2.5] - 2025-07-08
76
-
77
- ### Added
78
-
79
- - Added the `domsdatabasen` dataset.
80
-
81
- ## [v1.2.4] - 2025-07-08
82
-
83
- ### Added
84
-
85
- - Add a plot for tokens over time to see how the dataset develops
86
- - Minor documentation improvements in the main readme
87
-
88
- ### Changed
89
-
90
- Renamed `scrape_hovedstaden` to `health_hovedstaden` to avoid confusion with its pretty name
91
-
92
- ## [v1.2.3] - 2025-06-30
93
-
94
- ### Added
95
-
96
- - Added a `create.py` script for the `retsinformationdk` dataset.
97
- - Resulted in a boost in tokens and documents
98
-
99
- ### Changed
100
-
101
- - Did a full stats update on datasets, resulting in minor changes in a few datasheets
102
-
103
- ## [v1.2.2] - 2025-06-26
104
-
105
- ### Added
106
-
107
- - Added the new `scrape_hovedstaden` dataset.
108
- - Added a new domain type `Medical`.
109
-
110
- ## [v1.2.1] - 2025-06-24
111
-
112
- ### Fixed
113
-
114
- - Updated the danske-taler dataset. This version fixes a problem where the texts from the API contains no newlines, and where there should have been newline there is now space between words and punctuation.
115
-
116
- ## [v1.2.0] - 2025-06-23
117
-
118
- ### Fixed
119
-
120
- - Updated the memo dataset, this second version fixed previous [issues](https://huggingface.co/datasets/danish-foundation-models/danish-dynaword/discussions/67) with the download and processing of the Danish Memo which cut off the text leading to notably smaller documents.
121
-
122
- ## [v1.1.1] - 2025-06-16
123
-
124
- ### Added
125
-
126
- Added tests to ensure that 1-token documents don't appear in the data. This filtered out 0 documents in total.
127
-
128
- ## [v1.1.0] - 2025-04-29
129
-
130
- ### Added
131
-
132
- - Added multiple quality controls
133
- Removed all empty strings
134
- Removed duplicates within datasets
135
- - Restructured datasets
136
- - Removed columns from the dataset to make the structure more lightweight, these include domain, metadata, and license. These have been moved to the individual datasheets. It is still possible to filter for license by using the dataset name
137
- - Added column for number of tokens
138
- - For developers
139
- Restructured CI codebase substantially
140
- Added `DataSheet` to make CI more convenient
141
- - factored out plots and tables
142
-
143
- ### Docs
144
-
145
- - Sorted overview table
146
- - Minor changes to dataset documentation
147
-
148
-
149
- ## [v1.0.12] - 2025-05-08
150
-
151
- ### Added
152
-
153
- - Added new datasets
154
- - Norwegian Colossal Corpus (newspapers) (~191.08K tokens)
155
- - Norwegian Colossal Corpus (books) (~531.97M tokens)
156
- - Norwegian Colossal Corpus (maalfrid) (~29.26M tokens)
157
- - Norwegian Colossal Corpus (parliament) (~338.87M tokens)
158
-
159
- ## [v1.0.11] - 2025-03-29
160
-
161
- ### Added
162
-
163
- - Added new datasets (more than 1B tokens 🎉)
164
- - AI Aktindsigt
165
- - Cellar
166
- - Danske Taler
167
- - Miljøportalen
168
- - EUR-Lex SUM
169
- - Finansministeriets Udgivelser
170
-
171
- ### Docs
172
-
173
- - Sorted main table in readme
174
- - Added Changelog
175
- - Minor changes to dataset documentation
 
 
 
 
 
CONTRIBUTING.md CHANGED
@@ -3,9 +3,8 @@
3
  A Hugging Face datasets repository is a Git repository like any other. You can simply download it like so:
4
 
5
  ```bash
6
- git clone https://huggingface.co/datasets/danish-foundation-models/danish-dynaword
7
- cd danish-dynaword
8
- git lfs pull # download large files to ensure that tests work
9
  ```
10
 
11
  You can then work with the dataset locally like so:
@@ -13,27 +12,13 @@ You can the work with the dataset locally like so:
13
  ```py
14
  from datasets import load_dataset
15
 
16
- name = "../." # instead of "danish-foundation-models/danish-dynaword"
17
  dataset = load_dataset("../.", split="train")
18
  # make transformations here
19
  ```
20
 
21
  > Note: While the dataset is local, Hugging Face still uses a cache, so you might need to reset it after making changes to see that they take effect. You can do this by deleting the cached files, which you can locate using `dataset.cache_files`.
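For illustration, a minimal sketch of clearing that cache via `dataset.cache_files` (the file layout and dictionary keys are the `datasets` library's, so verify against your installed version):

```py
# Hedged sketch: delete the cached Arrow files so the next load_dataset()
# call re-reads the local parquet files. dataset.cache_files is a list of
# dicts, each with a "filename" entry pointing at a cached file.
import os
from datasets import load_dataset

dataset = load_dataset("../.", split="train")
for cache_file in dataset.cache_files:
    path = cache_file["filename"]
    if os.path.exists(path):
        os.remove(path)
```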
22
 
23
- ## Adding a new dataset
24
-
25
- To add a new dataset you will have to create a folder under `data/{dataset_name}/`, which should look as follows:
26
-
27
- ```
28
- data/dataset_name
29
- |- dataset_name.md
30
- |- dataset_name.parquet
31
- |- create.py # optional
32
- ```
33
-
34
- The create.py is an optional Python script that allows you to recreate the dataset from the source. This typically lets us reproduce the
35
- dataset with fixes or update it to the latest version using an API. A hedged sketch of such a script follows below.
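For illustration only, a minimal sketch of what such a `create.py` might look like (the dataset name, records, and source below are hypothetical placeholders, not a prescribed format):

```py
# Hedged sketch of a create.py: build the parquet file expected at
# data/{dataset_name}/{dataset_name}.parquet from some source.
# Column names mirror the fields described in the main README.
from pathlib import Path

import pandas as pd


def main() -> None:
    # Hypothetical records; a real script would download files or query an API here.
    records = [
        {
            "id": "example-dataset_0001",
            "text": "Example document text ...",
            "source": "example-dataset",
            "added": "2025-01-01",
            "created": "2020-01-01, 2020-12-31",
        }
    ]
    df = pd.DataFrame(records)
    df.to_parquet(Path(__file__).parent / "example-dataset.parquet")


if __name__ == "__main__":
    main()
```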
36
-
37
  ## Installing dependencies
38
 
39
  This repo comes with a few dependencies you need to install to make this run. It uses a [makefile](https://opensource.com/article/18/8/what-how-makefile) to run commands and a [uv](https://docs.astral.sh/uv/) for package management. Once you have uv installed you can install the dependencies using:
@@ -42,12 +27,6 @@ This repo comes with a few dependencies you need to install to make this run. It
42
  make install
43
  ```
44
 
45
- Now you can activate the environment with:
46
-
47
- ```
48
- source .venv/bin/activate
49
- ```
50
-
51
  ## Running dataset tests
52
 
53
  This dataset is special as it comes with a test suite, e.g. testing that the ids are unique and that the format is consistent. You can run the suite using
@@ -63,55 +42,15 @@ Creating a PR on Huggingface is a bit different from creating one on Github.
63
  1) Go to the community tab on Hugging Face, press *new pull request*, and choose *on your machine*. Specify the title of your PR. Then you can simply:
64
 
65
  ```bash
66
- git checkout -b {new branch name}
67
- # make your changes here
68
-
69
- # push to hub
70
- # you might need to first login:
71
- # huggingface-cli login
72
- git push origin HEAD:refs/pr/{PR NUMBER}
73
- ```
74
- Where HEAD refers to the current branch.
75
-
76
- Before you make the PR, be sure that you have completed the checklist below.
77
-
78
- ### Making changes to an existing PR
79
-
80
- As a contributor you might need to develop on an existing branch. To do so:
81
- ```bash
82
- # fetch and checkout existing branch:
83
  git fetch origin refs/pr/{PR NUMBER}:pr/{PR NUMBER}
84
  git checkout pr/{PR NUMBER}
85
- # make your changes here
86
-
87
- # push changes
88
  ```
89
 
90
- ### Checklist
91
 
92
- - [ ] I have run the test suite using `make test` and all tests pass
93
- - [ ] I have added/changed a dataset:
94
- - [ ] I have updated descriptive statistics using `make update-descriptive-statistics`
95
- - [ ] I have bumped the version using `make bump-version`
96
- - [ ] If I have added a `create.py` script I have added the [script dependencies](https://docs.astral.sh/uv/guides/scripts/#declaring-script-dependencies) required to run that script.
97
- - [ ] I have updated the CHANGELOG.md if appropriate
98
-
99
-
100
- ### Examples of Previous PRs
101
  To see example PR you can see the following:
102
 
103
- - [Restructuring columns in the dataset](https://huggingface.co/datasets/danish-foundation-models/danish-dynaword/discussions/11)
104
- - [Adding a new dataset](https://huggingface.co/datasets/danish-foundation-models/danish-dynaword/discussions/15)
105
- - Updated [dataset description and metadata](https://huggingface.co/datasets/danish-foundation-models/danish-dynaword/discussions/20)
106
-
107
- ## Frequently asked questions
108
-
109
- ### Do you accept synthetic datasets
110
-
111
- Yes, we generally accept synthetic datasets, since they are likely to be a promising research direction for low- to mid-resource languages.
112
- However, you should be aware that synthetic datasets will probably require a more detailed examination and description.
113
- We will for instance examine the quality of the synthetic subset and whether the model used for the creation permits resharing of the synthetic data under permissible licenses.
114
-
115
- ### Do you accept non-Danish data
116
-
117
- Generally, this repository is intended for Danish text, though quite broadly defined. For instance, we do accept data containing [code-switching](https://www.google.com/search?client=safari&rls=en&q=code+switching&ie=UTF-8&oe=UTF-8) and historical Danish text.
 
3
  A Hugging Face datasets repository is a Git repository like any other. You can simply download it like so:
4
 
5
  ```bash
6
+ git clone https://huggingface.co/datasets/danish-foundation-models/danish-gigaword-2
7
+ cd danish-gigaword-2
 
8
  ```
9
 
10
  You can then work with the dataset locally like so:
 
12
  ```py
13
  from datasets import load_dataset
14
 
15
+ name = "../." # instead of "danish-foundation-models/danish-gigaword-2"
16
  dataset = load_dataset("../.", split="train")
17
  # make transformations here
18
  ```
19
 
20
  > Note: While the dataset is local, Hugging Face still uses a cache, so you might need to reset it after making changes to see that they take effect. You can do this by deleting the cached files, which you can locate using `dataset.cache_files`.
21
 
 
 
 
 
22
  ## Installing dependencies
23
 
24
  This repo comes with a few dependencies you need to install to make this run. It uses a [makefile](https://opensource.com/article/18/8/what-how-makefile) to run commands and a [uv](https://docs.astral.sh/uv/) for package management. Once you have uv installed you can install the dependencies using:
 
27
  make install
28
  ```
29
 
 
 
 
 
 
 
30
  ## Running dataset tests
31
 
32
  This dataset is special as it comes with a test suite, e.g. testing that the ids are unique and that the format is consistent. You can run the suite using
 
42
  1) Go to the community tab on Hugging Face, press *new pull request*, and choose *on your machine*. Specify the title of your PR. Then you can simply:
43
 
44
  ```bash
 
 
 
 
 
45
  git fetch origin refs/pr/{PR NUMBER}:pr/{PR NUMBER}
46
  git checkout pr/{PR NUMBER}
47
+ # make your changes here
48
+ # push to hub
49
+ git push origin pr/{PR NUMBER}:refs/pr/{PR NUMBER}
50
  ```
51
 
52
+ Before you make the PR, be sure that the tests have been run.
53
 
 
 
 
 
 
 
 
 
 
54
  To see example PR you can see the following:
55
 
56
+ - [Restructuring columns in the dataset](https://huggingface.co/datasets/danish-foundation-models/danish-gigaword-2/discussions/11)
 
 
 
 
README.md CHANGED
@@ -1,85 +1,10 @@
1
  ---
2
- annotations_creators:
3
- - no-annotation
4
- language_creators:
5
- - crowdsourced
6
- language:
7
- - da
8
- license: cc0-1.0
9
- multilinguality:
10
- - monolingual
11
- source_datasets:
12
- - original
13
- task_categories:
14
- - text-generation
15
- task_ids:
16
- - language-modeling
17
- tags:
18
- - text-corpus
19
- - continual-development
20
- - community-collaboration
21
- pretty_name: Danish Dynaword
22
  configs:
23
  - config_name: default
24
  data_files:
25
  - split: train
26
- path: data/*/*.parquet
27
- - config_name: ai-aktindsigt
28
- data_files:
29
- - split: train
30
- path: data/ai-aktindsigt/*.parquet
31
- - config_name: cellar
32
- data_files:
33
- - split: train
34
- path: data/cellar/*.parquet
35
- - config_name: enevaeldens_nyheder
36
- data_files:
37
- - split: train
38
- path: data/enevaeldens_nyheder/*.parquet
39
- - config_name: grundtvig
40
- data_files:
41
- - split: train
42
- path: data/grundtvig/*.parquet
43
- - config_name: danske-taler
44
- data_files:
45
- - split: train
46
- path: data/danske-taler/*.parquet
47
- - config_name: ncc_books
48
- data_files:
49
- - split: train
50
- path: data/ncc_books/*.parquet
51
- - config_name: ncc_newspaper
52
- data_files:
53
- - split: train
54
- path: data/ncc_newspaper/*.parquet
55
- - config_name: ncc_maalfrid
56
- data_files:
57
- - split: train
58
- path: data/ncc_maalfrid/*.parquet
59
- - config_name: ncc_parliament
60
- data_files:
61
- - split: train
62
- path: data/ncc_parliament/*.parquet
63
- - config_name: eur-lex-sum-da
64
- data_files:
65
- - split: train
66
- path: data/eur-lex-sum-da/*.parquet
67
- - config_name: miljoeportalen
68
- data_files:
69
- - split: train
70
- path: data/miljoeportalen/*.parquet
71
- - config_name: fm-udgivelser
72
- data_files:
73
- - split: train
74
- path: data/fm-udgivelser/*.parquet
75
- - config_name: memo
76
- data_files:
77
- - split: train
78
- path: data/memo/*.parquet
79
- - config_name: opensubtitles
80
- data_files:
81
- - split: train
82
- path: data/opensubtitles/*.parquet
83
  - config_name: retsinformationdk
84
  data_files:
85
  - split: train
@@ -152,14 +77,10 @@ configs:
152
  data_files:
153
  - split: train
154
  path: data/synne/*.parquet
155
- - config_name: wikipedia
156
- data_files:
157
- - split: train
158
- path: data/wikipedia/*.parquet
159
- - config_name: wiki-comments
160
  data_files:
161
  - split: train
162
- path: data/wiki-comments/*.parquet
163
  - config_name: nordjyllandnews
164
  data_files:
165
  - split: train
@@ -168,22 +89,21 @@ configs:
168
  data_files:
169
  - split: train
170
  path: data/relig/*.parquet
171
- - config_name: nota
172
- data_files:
173
- - split: train
174
- path: data/nota/*.parquet
175
- - config_name: health_hovedstaden
176
- data_files:
177
- - split: train
178
- path: data/health_hovedstaden/*.parquet
179
- - config_name: domsdatabasen
180
- data_files:
181
- - split: train
182
- path: data/domsdatabasen/*.parquet
183
- - config_name: historical-danish-handwriting
184
- data_files:
185
- - split: train
186
- path: data/historical-danish-handwriting/*.parquet
187
  language_bcp47:
188
  - da
189
  - da-bornholm
@@ -192,25 +112,17 @@ language_bcp47:
192
 
193
  <!--
194
  readme structure is inspired by:
195
- https://github.com/huggingface/datasets/blob/main/templates/README_guide.md
196
- -->
197
-
198
 
199
  # 🧨 Danish Dynaword
200
 
 
 
 
 
 
 
201
 
202
- <!-- START README TABLE -->
203
- | | |
204
- | ------------ | ----------------------------------------------------------------------------------------------------------------------------------------------------------- |
205
- | **Version** | 1.2.12 ([Changelog](/CHANGELOG.md)) |
206
- | **Language** | dan, dansk, Danish |
207
- | **License** | Openly Licensed, See the respective dataset |
208
- | **Models** | For models trained on this data see [danish-foundation-models](https://huggingface.co/danish-foundation-models) |
209
- | **Contact** | If you have questions about this project please create an issue [here](https://huggingface.co/datasets/danish-foundation-models/danish-dynaword/discussions) |
210
-
211
-
212
-
213
- <!-- END README TABLE -->
214
 
215
  ## Table of Contents
216
  - [🧨 Danish Dynaword](#-danish-dynaword)
@@ -218,9 +130,7 @@ https://github.com/huggingface/datasets/blob/main/templates/README_guide.md
218
  - [Dataset Description](#dataset-description)
219
  - [Dataset Summary](#dataset-summary)
220
  - [Loading the dataset](#loading-the-dataset)
221
- - [Languages](#languages)
222
- - [Domains](#domains)
223
- - [Licensing](#licensing)
224
  - [Dataset Structure](#dataset-structure)
225
  - [Data Instances](#data-instances)
226
  - [Data Fields](#data-fields)
@@ -229,30 +139,17 @@ https://github.com/huggingface/datasets/blob/main/templates/README_guide.md
229
  - [Curation Rationale](#curation-rationale)
230
  - [Annotations](#annotations)
231
  - [Source Data](#source-data)
232
- - [Data Collection and Processing](#data-collection-and-processing)
233
- - [Dataset Statistics](#dataset-statistics)
234
  - [Contributing to the dataset](#contributing-to-the-dataset)
235
- - [Citation Information](#citation-information)
236
- - [License information](#license-information)
237
- - [Personal and Sensitive Information](#personal-and-sensitive-information)
238
- - [Bias, Risks, and Limitations](#bias-risks-and-limitations)
239
- - [Notice and takedown policy](#notice-and-takedown-policy)
240
 
241
  ## Dataset Description
242
 
243
- <!-- START-DESC-STATS -->
244
- - **Number of samples**: 5.61M
245
- - **Number of tokens (Llama 3)**: 5.89B
246
- - **Average document length in tokens (min, max)**: 1.05K (2, 9.81M)
247
- <!-- END-DESC-STATS -->
248
-
249
 
250
  ### Dataset Summary
251
 
252
- The Danish dynaword is a collection of Danish free-form text datasets from various domains. All of the datasets in Danish Dynaword are openly licensed
253
- and deemed permissible for training large language models.
254
 
255
- Danish Dynaword is continually developed, which means that the dataset will actively be updated as new datasets become available. If you would like to contribute a dataset see the [contribute section](#contributing-to-the-dataset).
256
 
257
  ### Loading the dataset
258
 
@@ -283,154 +180,14 @@ You can also load a single subset at a time:
283
  ds = load_dataset(name, revision="{desired revision}")
284
  ```
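For context, a hedged sketch of the loading patterns this section refers to (the subset name below is only an example; any name from the overview table works):

```py
# Hedged sketch: load the full corpus, a single subset, or a pinned revision.
from datasets import load_dataset

name = "danish-foundation-models/danish-dynaword"

full = load_dataset(name, split="train")                          # all subsets combined
subset = load_dataset(name, "retsinformationdk", split="train")   # a single subset
pinned = load_dataset(name, split="train", revision="main")       # pin a specific revision
```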
285
 
286
- ### Languages
287
  This dataset includes the following languages:
288
 
289
- - Danish (dan-Latn) as well as the dialects Bornholmsk (dan-Latn-bornholm) and Sønderjysk (dan-Latn-synnejyl)
290
-
291
- In addition, it likely contains small amounts of English due to code-switching, and Norwegian due to the historical relation between the two languages and to language misclassifications caused by their similarity.
292
-
293
- Language is denoted using [BCP-47](https://en.wikipedia.org/wiki/IETF_language_tag), using the language code ISO 639-3 and the script code ISO 15924. The third element denotes the regional variant.
294
-
295
-
296
- ### Domains
297
-
298
- This dynaword consists of data from various domains (e.g., legal, books, social media). The following table and figure give an overview of the relative distributions of these domains. To see a full overview of the sources, check out the [source data section](#source-data).
299
-
300
- <div style="display: flex; gap: 20px; align-items: flex-start;">
301
-
302
- <div style="flex: 1;">
303
-
304
-
305
- <!-- START-DOMAIN TABLE -->
306
- | Domain | Sources | N. Tokens |
307
- |:-------------|:---------------------------------------------------------------------------------------------------------|:------------|
308
- | Legal | [cellar], [eur-lex-sum-da], [fm-udgivelser], [retsinformationdk], [skat], [retspraksis], [domsdatabasen] | 2.32B |
309
- | News | [enevaeldens_nyheder], [ncc_newspaper], [tv2r], [nordjyllandnews] | 1.09B |
310
- | Books | [grundtvig], [ncc_books], [memo], [adl], [wikibooks], [jvj], [gutenberg], [relig] | 733.92M |
311
- | Conversation | [danske-taler], [opensubtitles], [ep], [ft], [spont], [naat] | 497.09M |
312
- | Social Media | [hest] | 389.32M |
313
- | Other | [ncc_parliament], [dannet], [depbank], [synne], [historical-danish-handwriting] | 345.79M |
314
- | Web | [ai-aktindsigt], [ncc_maalfrid], [miljoeportalen] | 295.87M |
315
- | Encyclopedic | [wikisource], [wikipedia], [wiki-comments] | 185.75M |
316
- | Medical | [health_hovedstaden] | 27.07M |
317
- | Readaloud | [nota] | 7.30M |
318
- | Dialect | [botxt] | 847.97K |
319
- | **Total** | | 5.89B |
320
-
321
- [ai-aktindsigt]: data/ai-aktindsigt/ai-aktindsigt.md
322
- [cellar]: data/cellar/cellar.md
323
- [enevaeldens_nyheder]: data/enevaeldens_nyheder/enevaeldens_nyheder.md
324
- [grundtvig]: data/grundtvig/grundtvig.md
325
- [danske-taler]: data/danske-taler/danske-taler.md
326
- [ncc_books]: data/ncc_books/ncc_books.md
327
- [ncc_newspaper]: data/ncc_newspaper/ncc_newspaper.md
328
- [ncc_maalfrid]: data/ncc_maalfrid/ncc_maalfrid.md
329
- [ncc_parliament]: data/ncc_parliament/ncc_parliament.md
330
- [eur-lex-sum-da]: data/eur-lex-sum-da/eur-lex-sum-da.md
331
- [miljoeportalen]: data/miljoeportalen/miljoeportalen.md
332
- [fm-udgivelser]: data/fm-udgivelser/fm-udgivelser.md
333
- [memo]: data/memo/memo.md
334
- [opensubtitles]: data/opensubtitles/opensubtitles.md
335
- [retsinformationdk]: data/retsinformationdk/retsinformationdk.md
336
- [ep]: data/ep/ep.md
337
- [ft]: data/ft/ft.md
338
- [wikisource]: data/wikisource/wikisource.md
339
- [spont]: data/spont/spont.md
340
- [tv2r]: data/tv2r/tv2r.md
341
- [adl]: data/adl/adl.md
342
- [hest]: data/hest/hest.md
343
- [skat]: data/skat/skat.md
344
- [dannet]: data/dannet/dannet.md
345
- [retspraksis]: data/retspraksis/retspraksis.md
346
- [wikibooks]: data/wikibooks/wikibooks.md
347
- [jvj]: data/jvj/jvj.md
348
- [gutenberg]: data/gutenberg/gutenberg.md
349
- [botxt]: data/botxt/botxt.md
350
- [depbank]: data/depbank/depbank.md
351
- [naat]: data/naat/naat.md
352
- [synne]: data/synne/synne.md
353
- [wikipedia]: data/wikipedia/wikipedia.md
354
- [wiki-comments]: data/wiki-comments/wiki-comments.md
355
- [nordjyllandnews]: data/nordjyllandnews/nordjyllandnews.md
356
- [relig]: data/relig/relig.md
357
- [nota]: data/nota/nota.md
358
- [health_hovedstaden]: data/health_hovedstaden/health_hovedstaden.md
359
- [domsdatabasen]: data/domsdatabasen/domsdatabasen.md
360
- [historical-danish-handwriting]: data/historical-danish-handwriting/historical-danish-handwriting.md
361
- <!-- END-DOMAIN TABLE -->
362
-
363
- </div>
364
-
365
- <div style="flex: 1;">
366
-
367
- <p align="center">
368
- <img src="./images/domain_distribution.png" width="400" style="margin-right: 10px;" />
369
- </p>
370
-
371
- </div>
372
-
373
- </div>
374
-
375
-
376
- ### Licensing
377
-
378
- The following gives an overview of the licensing in the Dynaword. To get the exact license of the individual datasets check out the [overview table](#source-data).
379
- These licenses are applied to the constituent data, i.e., the text. The collection of datasets (metadata, quality control, etc.) is licensed under [CC-0](https://creativecommons.org/publicdomain/zero/1.0/legalcode.en).
380
-
381
- <!-- START-LICENSE TABLE -->
382
- | License | Sources | N. Tokens |
383
- |:--------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------|
384
- | CC-BY-SA 4.0 | [cellar], [enevaeldens_nyheder], [eur-lex-sum-da], [fm-udgivelser], [memo], [tv2r], [jvj], [depbank] | 2.41B |
385
- | CC-0 | [grundtvig], [danske-taler], [ncc_books], [ncc_newspaper], [miljoeportalen], [opensubtitles], [ep], [ft], [wikisource], [spont], [adl], [hest], [skat], [retspraksis], [wikibooks], [botxt], [naat], [synne], [wikipedia], [wiki-comments], [nordjyllandnews], [relig], [nota], [health_hovedstaden] | 2.06B |
386
- | Other (No attribution required) | [retsinformationdk], [domsdatabasen] | 904.61M |
387
- | Other (Attribution required) | [ai-aktindsigt], [ncc_maalfrid], [ncc_parliament], [dannet], [gutenberg] | 515.61M |
388
- | CC-BY 4.0 | [historical-danish-handwriting] | 5.20M |
389
- | **Total** | | 5.89B |
390
-
391
- [ai-aktindsigt]: data/ai-aktindsigt/ai-aktindsigt.md
392
- [cellar]: data/cellar/cellar.md
393
- [enevaeldens_nyheder]: data/enevaeldens_nyheder/enevaeldens_nyheder.md
394
- [grundtvig]: data/grundtvig/grundtvig.md
395
- [danske-taler]: data/danske-taler/danske-taler.md
396
- [ncc_books]: data/ncc_books/ncc_books.md
397
- [ncc_newspaper]: data/ncc_newspaper/ncc_newspaper.md
398
- [ncc_maalfrid]: data/ncc_maalfrid/ncc_maalfrid.md
399
- [ncc_parliament]: data/ncc_parliament/ncc_parliament.md
400
- [eur-lex-sum-da]: data/eur-lex-sum-da/eur-lex-sum-da.md
401
- [miljoeportalen]: data/miljoeportalen/miljoeportalen.md
402
- [fm-udgivelser]: data/fm-udgivelser/fm-udgivelser.md
403
- [memo]: data/memo/memo.md
404
- [opensubtitles]: data/opensubtitles/opensubtitles.md
405
- [retsinformationdk]: data/retsinformationdk/retsinformationdk.md
406
- [ep]: data/ep/ep.md
407
- [ft]: data/ft/ft.md
408
- [wikisource]: data/wikisource/wikisource.md
409
- [spont]: data/spont/spont.md
410
- [tv2r]: data/tv2r/tv2r.md
411
- [adl]: data/adl/adl.md
412
- [hest]: data/hest/hest.md
413
- [skat]: data/skat/skat.md
414
- [dannet]: data/dannet/dannet.md
415
- [retspraksis]: data/retspraksis/retspraksis.md
416
- [wikibooks]: data/wikibooks/wikibooks.md
417
- [jvj]: data/jvj/jvj.md
418
- [gutenberg]: data/gutenberg/gutenberg.md
419
- [botxt]: data/botxt/botxt.md
420
- [depbank]: data/depbank/depbank.md
421
- [naat]: data/naat/naat.md
422
- [synne]: data/synne/synne.md
423
- [wikipedia]: data/wikipedia/wikipedia.md
424
- [wiki-comments]: data/wiki-comments/wiki-comments.md
425
- [nordjyllandnews]: data/nordjyllandnews/nordjyllandnews.md
426
- [relig]: data/relig/relig.md
427
- [nota]: data/nota/nota.md
428
- [health_hovedstaden]: data/health_hovedstaden/health_hovedstaden.md
429
- [domsdatabasen]: data/domsdatabasen/domsdatabasen.md
430
- [historical-danish-handwriting]: data/historical-danish-handwriting/historical-danish-handwriting.md
431
- <!-- END-LICENSE TABLE -->
432
-
433
 
 
434
 
435
  ## Dataset Structure
436
 
@@ -440,15 +197,16 @@ The dataset contains text from different sources which are thoroughly defined in
440
 
441
  Each entry in the dataset consists of a single text with associated metadata
442
 
443
- <!-- START-SAMPLE -->
444
  ```py
445
  {
446
- "id": "digibok_2009033103031",
447
- "text": "P. FR. RIST. OLAF RYES SAGA. OPTEGNELSER, DAGBØGER OG BREVE. DET NORDISKE FORLAG. Denne Bog søger at[...]",
448
- "source": "ncc_books",
449
- "added": "2025-05-08",
450
- "created": "1899-01-01, 1899-12-31",
451
- "token_count": 192301
 
 
452
  }
453
  ```
454
 
@@ -456,13 +214,16 @@ Each entry in the dataset consists of a single text with associated metadata
456
 
457
  An entry in the dataset consists of the following fields:
458
 
459
- - `id` (`str`): A unique identifier for each document.
460
  - `text` (`str`): The content of the document.
461
  - `source` (`str`): The source of the document (see [Source Data](#source-data)).
 
462
  - `added` (`str`): A date for when the document was added to this collection.
463
  - `created` (`str`): A date range for when the document was originally created.
464
- - `token_count` (`int`): The number of tokens in the sample computed using the Llama 3 8B tokenizer.
465
- <!-- END-SAMPLE -->
 
 
 
466
 
467
  ### Data Splits
468
 
@@ -472,9 +233,7 @@ The entire corpus is provided in the `train` split.
472
 
473
  ### Curation Rationale
474
 
475
- These datasets were collected and curated with the intention of making openly licensed Danish data available. While this was collected with the intention of developing language models, it is likely to have multiple other uses, such as examining language development and differences across domains.
476
-
477
-
478
 
479
  ### Annotations
480
 
@@ -482,186 +241,44 @@ This data generally contains no annotation besides the metadata attached to each
482
 
483
  ### Source Data
484
 
485
-
486
- Below follows a brief overview of the sources in the corpus along with their individual license. To get more information about the individual dataset click the hyperlink in the table.
487
-
488
- <details>
489
- <summary><b>Overview Table (click to unfold)</b></summary>
490
-
491
- You can learn more about each dataset by pressing the link in the first column.
492
-
493
- <!-- START-MAIN TABLE -->
494
- | Source | Description | Domain | N. Tokens | License |
495
- |:--------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-------------|:------------|:-----------------------|
496
- | [cellar] | The official digital repository for European Union legal documents and open data | Legal | 1.15B | [CC-BY-SA 4.0] |
497
- | [enevaeldens_nyheder] | High quality OCR'd texts from Danish and Norwegian newspapers during the period of constitutional absolutism in Denmark (1660–1849) | News | 1.03B | [CC-BY-SA 4.0] |
498
- | [retsinformationdk] | [retsinformation.dk](https://www.retsinformation.dk) (legal-information.dk) the official legal information system of Denmark | Legal | 818.25M | [Danish Copyright Law] |
499
- | [ncc_books] | Danish books extracted from the [Norwegian Colossal Corpus](https://huggingface.co/datasets/NbAiLab/NCC) derived from OCR | Books | 531.97M | [CC-0] |
500
- | [hest] | Samples from the Danish debate forum www.heste-nettet.dk | Social Media | 389.32M | [CC-0] |
501
- | [ncc_parliament] | Collections from the Norwegian parliament in Danish. Extracted from the [Norwegian Colossal Corpus](https://huggingface.co/datasets/NbAiLab/NCC) derived from ocr | Other | 338.87M | [NLOD 2.0] |
502
- | [opensubtitles] | Danish subsection of [OpenSubtitles](https://opus.nlpl.eu/OpenSubtitles/corpus/version/OpenSubtitles) | Conversation | 271.60M | [CC-0] |
503
- | [wikipedia] | The Danish subsection of [wikipedia](https://en.wikipedia.org/wiki/Main_Page) | Encyclopedic | 173.33M | [CC-0] |
504
- | [ai-aktindsigt] | Multiple web scrapes from municipality websites collected as a part of the [AI-aktindsigt](https://ai-aktindsigt.dk) project | Web | 139.23M | [Apache 2.0] |
505
- | [miljoeportalen] | Data from [Danmarks Miljøportalen](https://www.miljoeportal.dk/om-danmarks-miljoeportal/) (Denmark's Environment Portal) | Web | 127.38M | [CC-0] |
506
- | [skat] | Skat is the Danish tax authority. This dataset contains content from its website skat.dk | Legal | 122.11M | [CC-0] |
507
- | [ft] | Records from all meetings of The Danish parliament (Folketinget) in the parliament hall | Conversation | 114.09M | [CC-0] |
508
- | [memo] | The MeMo corpus comprising almost all Danish novels from the period 1870-1899, known as the Modern Breakthrough | Books | 113.74M | [CC-BY-SA 4.0] |
509
- | [ep] | The Danish subsection of [Europarl](https://aclanthology.org/2005.mtsummit-papers.11/) | Conversation | 100.84M | [CC-0] |
510
- | [domsdatabasen] | [Domsdatabasen.dk](https://domsdatabasen.dk/) is a public database containing selected judgments from the Danish courts | Legal | 86.35M | [Danish Copyright Law] |
511
- | [adl] | Danish literature from 1700-2023 from the [Archive for Danish Literature](https://tekster.kb.dk/text?editorial=no&f%5Bsubcollection_ssi%5D%5B%5D=adl&match=one&search_field=Alt) (ADL) | Books | 58.49M | [CC-0] |
512
- | [retspraksis] | Case law or judical practice in Denmark derived from [Retspraksis](https://da.wikipedia.org/wiki/Retspraksis) | Legal | 56.26M | [CC-0] |
513
- | [fm-udgivelser] | The official publication series of the Danish Ministry of Finance containing economic analyses, budget proposals, and fiscal policy documents | Legal | 50.34M | [CC-BY-SA 4.0] |
514
- | [nordjyllandnews] | Articles from the Danish Newspaper [TV2 Nord](https://www.tv2nord.dk) | News | 37.90M | [CC-0] |
515
- | [eur-lex-sum-da] | The Danish subsection of EUR-lex SUM consisting of EU legislation paired with professionally written summaries | Legal | 31.37M | [CC-BY-SA 4.0] |
516
- | [ncc_maalfrid] | Danish content from Norwegian institutions websites | Web | 29.26M | [NLOD 2.0] |
517
- | [health_hovedstaden] | Guidelines and informational documents for healthcare professionals from the Capital Region | Medical | 27.07M | [CC-0] |
518
- | [tv2r] | Contemporary Danish newswire articles published between 2010 and 2019 | News | 21.67M | [CC-BY-SA 4.0] |
519
- | [grundtvig] | The complete collection of [Grundtvig](https://en.wikipedia.org/wiki/N._F._S._Grundtvig) (1783-1872) one of Denmark’s most influential figures | Books | 10.53M | [CC-0] |
520
- | [danske-taler] | Danish Speeches from [dansketaler.dk](https://www.dansketaler.dk) | Conversation | 8.72M | [CC-0] |
521
- | [wikibooks] | The Danish Subsection of [Wikibooks](https://www.wikibooks.org) | Books | 7.63M | [CC-0] |
522
- | [nota] | The text only part of the [Nota lyd- og tekstdata](https://sprogteknologi.dk/dataset/nota-lyd-og-tekstdata) dataset | Readaloud | 7.30M | [CC-0] |
523
- | [gutenberg] | The Danish subsection from Project [Gutenberg](https://www.gutenberg.org) | Books | 6.76M | [Gutenberg] |
524
- | [wikisource] | The Danish subsection of [Wikisource](https://en.wikisource.org/wiki/Main_Page) | Encyclopedic | 6.28M | [CC-0] |
525
- | [wiki-comments] | Text from the comments sections of the Danish Wikipedia | Encyclopedic | 6.14M | [CC-0] |
526
- | [historical-danish-handwriting] | Minutes from City and Parish Council meetings between 1841 and 1939 from [The Historical Danish handwriting dataset](https://huggingface.co/datasets/aarhus-city-archives/historical-danish-handwriting) | Other | 5.20M | [CC-BY 4.0] |
527
- | [jvj] | The works of the Danish author and poet, [Johannes V. Jensen](https://da.wikipedia.org/wiki/Johannes_V._Jensen) | Books | 3.55M | [CC-BY-SA 4.0] |
528
- | [spont] | Conversational samples collected as a part of research projects at Aarhus University | Conversation | 1.56M | [CC-0] |
529
- | [dannet] | [DanNet](https://cst.ku.dk/projekter/dannet) is a Danish WordNet | Other | 1.48M | [DanNet 1.0] |
530
- | [relig] | Danish religious text from the 1700-2022 | Books | 1.24M | [CC-0] |
531
- | [ncc_newspaper] | OCR'd Newspapers derived from [NCC](https://huggingface.co/datasets/NbAiLab/NCC) | News | 1.05M | [CC-0] |
532
- | [botxt] | The Bornholmsk Ordbog Dictionary Project | Dialect | 847.97K | [CC-0] |
533
- | [naat] | Danish speeches from 1930-2022 | Conversation | 286.68K | [CC-0] |
534
- | [depbank] | The Danish subsection of the [Universal Dependencies Treebank](https://github.com/UniversalDependencies/UD_Danish-DDT) | Other | 185.45K | [CC-BY-SA 4.0] |
535
- | [synne] | Dataset collected from [synnejysk forening's website](https://www.synnejysk.dk), covering the Danish dialect sønderjysk | Other | 52.02K | [CC-0] |
536
- | **Total** | | | 5.89B | |
537
-
538
- [ai-aktindsigt]: data/ai-aktindsigt/ai-aktindsigt.md
539
- [cellar]: data/cellar/cellar.md
540
- [enevaeldens_nyheder]: data/enevaeldens_nyheder/enevaeldens_nyheder.md
541
- [grundtvig]: data/grundtvig/grundtvig.md
542
- [danske-taler]: data/danske-taler/danske-taler.md
543
- [ncc_books]: data/ncc_books/ncc_books.md
544
- [ncc_newspaper]: data/ncc_newspaper/ncc_newspaper.md
545
- [ncc_maalfrid]: data/ncc_maalfrid/ncc_maalfrid.md
546
- [ncc_parliament]: data/ncc_parliament/ncc_parliament.md
547
- [eur-lex-sum-da]: data/eur-lex-sum-da/eur-lex-sum-da.md
548
- [miljoeportalen]: data/miljoeportalen/miljoeportalen.md
549
- [fm-udgivelser]: data/fm-udgivelser/fm-udgivelser.md
550
- [memo]: data/memo/memo.md
551
- [opensubtitles]: data/opensubtitles/opensubtitles.md
552
- [retsinformationdk]: data/retsinformationdk/retsinformationdk.md
553
- [ep]: data/ep/ep.md
554
- [ft]: data/ft/ft.md
555
- [wikisource]: data/wikisource/wikisource.md
556
- [spont]: data/spont/spont.md
557
- [tv2r]: data/tv2r/tv2r.md
558
- [adl]: data/adl/adl.md
559
- [hest]: data/hest/hest.md
560
- [skat]: data/skat/skat.md
561
- [dannet]: data/dannet/dannet.md
562
- [retspraksis]: data/retspraksis/retspraksis.md
563
- [wikibooks]: data/wikibooks/wikibooks.md
564
- [jvj]: data/jvj/jvj.md
565
- [gutenberg]: data/gutenberg/gutenberg.md
566
- [botxt]: data/botxt/botxt.md
567
- [depbank]: data/depbank/depbank.md
568
- [naat]: data/naat/naat.md
569
- [synne]: data/synne/synne.md
570
- [wikipedia]: data/wikipedia/wikipedia.md
571
- [wiki-comments]: data/wiki-comments/wiki-comments.md
572
- [nordjyllandnews]: data/nordjyllandnews/nordjyllandnews.md
573
- [relig]: data/relig/relig.md
574
- [nota]: data/nota/nota.md
575
- [health_hovedstaden]: data/health_hovedstaden/health_hovedstaden.md
576
- [domsdatabasen]: data/domsdatabasen/domsdatabasen.md
577
- [historical-danish-handwriting]: data/historical-danish-handwriting/historical-danish-handwriting.md
578
-
579
-
580
- [CC-0]: https://creativecommons.org/publicdomain/zero/1.0/legalcode.en
581
- [CC-BY-SA 4.0]: https://creativecommons.org/licenses/by-sa/4.0/deed.en
582
- [CC-BY 4.0]: https://creativecommons.org/licenses/by/4.0/deed.en
583
- [Apache 2.0]: https://www.apache.org/licenses/LICENSE-2.0
584
- [NLOD 2.0]: ./data/ncc_maalfrid/ncc_maalfrid.md#license-information
585
- [NLOD 2.0]: ./data/ncc_parliament/ncc_parliament.md#license-information
586
- [Danish Copyright Law]: ./data/retsinformationdk/retsinformationdk.md#license-information
587
- [DanNet 1.0]: ./data/dannet/dannet.md#license-information
588
- [Gutenberg]: ./data/gutenberg/gutenberg.md#license-information
589
- [Danish Copyright Law]: ./data/domsdatabasen/domsdatabasen.md#license-information
590
- <!-- END-MAIN TABLE -->
591
-
592
- </details>
593
-
594
-
595
- ### Data Collection and Processing
596
-
597
- Danish Dynaword is continually developed, which means that the dataset will actively be updated as new datasets become available. This means that the size of Dynaword increases over time as seen in the following plot:
598
-
599
- <p align="center">
600
- <img src="./images/tokens_over_time.svg" width="600" style="margin-right: 10px;" />
601
- </p>
602
-
603
- The data collection and processing vary depending on the dataset and are documented in the individual datasheets, which are linked in the table above. Where possible, the collection is documented both in the datasheet and in a reproducible script (`data/{dataset}/create.py`).
604
-
605
- In addition to dataset-specific processing, we also run a series of automated checks covering formatting (e.g. correctly formatted columns and unique IDs), quality (e.g. duplicate and empty-string detection), and datasheet documentation. These checks are there to ensure a high quality of documentation and a minimal level of quality. To allow for the development of novel cleaning methodologies, we do not provide more extensive cleaning.
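As an illustration of the kind of duplicate and empty-string detection mentioned above (not the repository's actual test code, which lives in its CI suite), a hedged sketch:

```py
# Hedged sketch: flag empty and duplicated texts in one subset of the corpus.
from datasets import load_dataset

ds = load_dataset("danish-foundation-models/danish-dynaword", "dannet", split="train")

texts = ds["text"]
empty_indices = [i for i, text in enumerate(texts) if not text.strip()]
n_duplicates = len(texts) - len(set(texts))

assert not empty_indices, f"found empty documents at indices {empty_indices[:10]}"
assert n_duplicates == 0, f"found {n_duplicates} duplicated documents"
```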
606
-
607
- ### Dataset Statistics
608
- The following plot(s) are intended to give an overview of document length in the various sources.
609
-
610
- <p align="center">
611
- <img src="./images/dataset_size_plot.svg" width="600" style="margin-right: 10px;" />
612
- </p>
613
-
614
-
615
 
616
  ### Contributing to the dataset
617
 
618
- We welcome contributions to the dataset, including new sources, improved data filtering, and other enhancements. To get started on contributing, please see [the contribution guidelines](CONTRIBUTING.md)
619
 
620
- ## Citation Information
621
-
622
- If you use this work, please cite the [scientific article](https://arxiv.org/abs/2508.02271), we recommend citing the following:
623
-
624
- > Enevoldsen, K.C., Jensen, K.N., Kostkan, J., Szabó, B.I., Kardos, M., Vad, K., Heinsen, J., Núñez, A.B., Barmina, G., Nielsen, J., Larsen, R., Vahlstrup, P.B., Dalum, P.M., Elliott, D., Galke, L., Schneider-Kamp, P., & Nielbo, K.L. (2025). Dynaword: From One-shot to Continuously Developed Datasets.
625
-
626
-
627
- ```
628
- @article{enevoldsen2025dynaword,
629
- title={Dynaword: From One-shot to Continuously Developed Datasets},
630
- author={Enevoldsen, Kenneth and Jensen, Kristian N{\o}rgaard and Kostkan, Jan and Szab{\'o}, Bal{\'a}zs and Kardos, M{\'a}rton and Vad, Kirten and N{\'u}{\~n}ez, Andrea Blasi and Barmina, Gianluca and Nielsen, Jacob and Larsen, Rasmus and others},
631
- journal={arXiv preprint arXiv:2508.02271},
632
- year={2025}
633
- }
634
- ```
635
-
636
- Additionally, we recommend citing the relevant source datasets as well. See the individual datasheets for more information.
637
-
638
- ## License information
639
-
640
- The license for each constituent dataset is supplied in the [Source data](#source-data) table. This license is applied to the constituent data, i.e., the text. The collection of datasets (metadata, quality control, etc.) is licensed under [CC-0](https://creativecommons.org/publicdomain/zero/1.0/legalcode.en).
641
-
642
- ### Personal and Sensitive Information
643
-
644
- As far as we are aware, the dataset does not contain information identifying sexual orientation, political beliefs, religion, or health connected with an utterer ID. In case such information is present in the data, we have removed utterer information from social media content.
645
-
646
- ### Bias, Risks, and Limitations
647
-
648
- Certain works in this collection are historical works and thus reflect the linguistic, cultural, and ideological norms of their time.
649
- As such, it includes perspectives, assumptions, and biases characteristic of the period. For instance, the works of N.F.S. Grundtvig (`grundtvig`) were known for nationalistic views and critical stances toward specific groups, such as Germans, which may be considered offensive or exclusionary by contemporary standards.
650
-
651
-
652
- ### Notice and takedown policy
653
- We redistribute files shared with us under a license permitting such redistribution. If you have concerns about the licensing of these files, please [contact us](https://huggingface.co/datasets/danish-foundation-models/danish-dynaword/discussions/new). If you consider that the data contains material that infringes your copyright, please:
654
- - Clearly identify yourself with detailed contact information such as an address, a telephone number, or an email address at which you can be contacted.
655
- - Clearly reference the original work claimed to be infringed
656
- - Clearly identify the material claimed to be infringing and information reasonably sufficient to allow us to locate the material.
657
- You can contact us through this channel.
658
- We will comply with legitimate requests by removing the affected sources from the next release of the corpus
659
-
660
- ---
661
 
662
- <h3 style="display: flex; align-items: center;">
663
- <a href="https://www.foundationmodels.dk">
664
- <img src="./docs/icon.png" width="30" style="margin-right: 10px;" />
665
- </a>
666
- A&nbsp;<a href=https://www.foundationmodels.dk>Danish Foundation Models</a>&nbsp;dataset
667
- </h3>
 
1
  ---
2
+ license: other
 
 
 
 
 
3
  configs:
4
  - config_name: default
5
  data_files:
6
  - split: train
7
+ path: 'data/*/*.parquet'
 
 
 
 
 
 
 
8
  - config_name: retsinformationdk
9
  data_files:
10
  - split: train
 
77
  data_files:
78
  - split: train
79
  path: data/synne/*.parquet
80
+ - config_name: wiki
 
 
 
 
81
  data_files:
82
  - split: train
83
+ path: data/wiki/*.parquet
84
  - config_name: nordjyllandnews
85
  data_files:
86
  - split: train
 
89
  data_files:
90
  - split: train
91
  path: data/relig/*.parquet
92
+ annotations_creators:
93
+ - no-annotation
94
+ language_creators:
95
+ - crowdsourced
96
+ language:
97
+ - da
98
+ multilinguality:
99
+ - monolingual
100
+ source_datasets:
101
+ - original
102
+ task_categories:
103
+ - text-generation
104
+ task_ids:
105
+ - language-modeling
106
+ pretty_name: Danish Dynaword
 
107
  language_bcp47:
108
  - da
109
  - da-bornholm
 
112
 
113
  <!--
114
  readme structure is inspired by:
115
+ https://github.com/huggingface/datasets/blob/main/templates/README_guide.md -->
 
 
116
 
117
  # 🧨 Danish Dynaword
118
 
119
+ | | |
120
+ | ------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------- |
121
+ | **Language** | dan, dansk, Danish |
122
+ | **License** | Permissible, See the respective dataset |
123
+ | **Models** | For models trained on this data see [danish-foundation-models](https://huggingface.co/danish-foundation-models) |
124
+ | **Contact** | If you have questions about this project please create an issue [here](https://huggingface.co/datasets/danish-foundation-models/danish-gigaword-2/discussions) |
125
 
 
 
 
 
126
 
127
  ## Table of Contents
128
  - [🧨 Danish Dynaword](#-danish-dynaword)
 
130
  - [Dataset Description](#dataset-description)
131
  - [Dataset Summary](#dataset-summary)
132
  - [Loading the dataset](#loading-the-dataset)
133
+ - [Languages:](#languages)
 
 
134
  - [Dataset Structure](#dataset-structure)
135
  - [Data Instances](#data-instances)
136
  - [Data Fields](#data-fields)
 
139
  - [Curation Rationale](#curation-rationale)
140
  - [Annotations](#annotations)
141
  - [Source Data](#source-data)
142
+ - [Additional Information](#additional-information)
 
143
  - [Contributing to the dataset](#contributing-to-the-dataset)
144
+ - [Citation Information](#citation-information)
 
 
 
 
145
 
146
  ## Dataset Description
147
 
 
 
 
 
 
 
148
 
149
  ### Dataset Summary
150
 
151
+ Danish Dynaword is a continually developed collection of Danish free-form text datasets from various domains. It is intended to be continually updated with new data sources. If you would like to contribute a dataset, see the [contribute section](#contributing-to-the-dataset).
 
152
 
 
153
 
154
  ### Loading the dataset
155
 
 
180
  ds = load_dataset(name, revision="{desired revision}")
181
  ```
182
 
183
+ ### Languages:
184
  This dataset includes the following languages:
185
 
186
+ - dan-Latn
187
+ - dan-Latn-bornholm
188
+ - dan-Latn-synnejyl
 
 
 
 
 
 
 
 
189
 
190
+ Language is denoted using [BCP-47](https://en.wikipedia.org/wiki/IETF_language_tag), using the language code ISO 639-3 and the script code ISO 15924. The last element denotes the regional variant.
191
 
192
  ## Dataset Structure
193
 
 
197
 
198
  Each entry in the dataset consists of a single text with associated metadata
199
 
 
200
  ```py
201
  {
202
+ "text": "SAMLEDE VÆRKER\n\nJEPPE AAKJÆR GYLDENDALSKE BOGHANDEL...",
203
+ "source": "adl",
204
+ "id": "adl_aakjaer06val",
205
+ "added": "2020-09-14",
206
+ "created": "1700-01-01, 2022-01-01",
207
+ "license": "Creative Commons Legal Code\n\nCC0 1.0 Universal",
208
+ "domain": "Wiki & Books",
209
+ "metadata": {"source-pretty": "Archive for Danish Literature"},
210
  }
211
  ```
212
 
 
214
 
215
  An entry in the dataset consists of the following fields:
216
 
 
217
  - `text` (`str`): The content of the document.
218
  - `source` (`str`): The source of the document (see [Source Data](#source-data)).
219
+ - `id` (`str`): A unique identifier for each document.
220
  - `added` (`str`): A date for when the document was added to this collection.
221
  - `created` (`str`): A date range for when the document was originally created.
222
+ - `license` (`str`): The license of the document. The licenses vary according to the source.
223
+ - `domain` (`str`): The domain of the source.
224
+ - `metadata/source-pretty` (`str`): The long-form version of the short-form source name.
225
+ - `metadata/*`: Potentially additional metadata.
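To make the field layout concrete, a hedged sketch of reading these fields from a loaded split (only fields listed above are accessed; the repository id is an example):

```py
# Hedged sketch: inspect per-document fields and count documents per source.
from collections import Counter

from datasets import load_dataset

ds = load_dataset("danish-foundation-models/danish-dynaword", split="train")

sample = ds[0]
print(sample["id"], sample["source"], sample["added"], sample["created"])

docs_per_source = Counter(ds["source"])
print(docs_per_source.most_common(5))
```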
226
+
227
 
228
  ### Data Splits
229
 
 
233
 
234
  ### Curation Rationale
235
 
236
+ These datasets were collected and curated with the intention of making large quantities of Danish text data available. While this was collected with the intention of developing language models, it is likely to have multiple other uses, such as examining language development and differences across domains.
 
 
237
 
238
  ### Annotations
239
 
 
241
 
242
  ### Source Data
243
 
244
+ Below follows a brief overview of the sources in the corpus along with their individual license.
245
+
246
+ | Source | License |
247
+ | ----------------- | -------------------------------------------------------- |
248
+ | adl | Creative Commons Legal Code 1.0 Universal |
249
+ | botxt | Creative Commons Legal Code 1.0 Universal |
250
+ | dannet | [dannet license] |
251
+ | depbank | Attribution-ShareAlike 4.0 International |
252
+ | ep | Creative Commons Legal Code 1.0 Universal |
253
+ | ft | Creative Commons Legal Code 1.0 Universal |
254
+ | gutenberg | [gutenberg license] |
255
+ | hest | Creative Commons Legal Code 1.0 Universal |
256
+ | jvj | Attribution-ShareAlike 4.0 International |
257
+ | naat | Creative Commons Legal Code 1.0 Universal |
258
+ | relig | Creative Commons Legal Code 1.0 Universal |
259
+ | retsinformationdk | [Other (Danish Law)] |
260
+ | retspraksis | Creative Commons Legal Code 1.0 Universal |
261
+ | skat | Creative Commons Legal Code 1.0 Universal |
262
+ | spont | Creative Commons Legal Code 1.0 Universal |
263
+ | synne | Creative Commons Legal Code 1.0 Universal |
264
+ | tv2r | [Custom, Creative Commons Attribution 4.0 International] |
265
+ | wiki | Creative Commons Legal Code 1.0 Universal |
266
+ | wikibooks | Creative Commons Legal Code 1.0 Universal |
267
+ | wikisource | Creative Commons Legal Code 1.0 Universal |
268
+
269
+ [Custom, Creative Commons Attribution 4.0 International]: https://huggingface.co/datasets/danish-foundation-models/danish-gigaword-2/blob/main/data/tv2r/tv2r.md#license-information
270
+ [gutenberg license]: https://www.gutenberg.org/policy/license.html
271
+ [dannet license]: https://cst.ku.dk/projekter/dannet/license.txt
272
+ [Other (Danish Law)]: https://huggingface.co/datasets/danish-foundation-models/danish-gigaword-2/blob/main/data/retsinformationdk/retsinformationdk.md#license-information
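As an illustration of working with the per-source licenses listed above, a hedged sketch that keeps only CC0-licensed documents via the per-document `license` field described in the Data Fields section (adjust the match string to your needs):

```py
# Hedged sketch: filter the corpus down to documents whose license text mentions CC0.
from datasets import load_dataset

ds = load_dataset("danish-foundation-models/danish-gigaword-2", split="train")
cc0_only = ds.filter(lambda example: "CC0 1.0 Universal" in example["license"])
print(f"kept {len(cc0_only)} of {len(ds)} documents")
```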
273
+
274
+
275
+
276
+ ## Additional Information
 
 
 
 
 
 
277
 
278
  ### Contributing to the dataset
279
 
280
+ We welcome contributions to the dataset such as new sources, better data filtering and so on. To get started on contributing please see [the contribution guidelines](CONTRIBUTING.md)
281
 
282
+ ### Citation Information
 
 
 
 
 
 
283
 
284
+ This version expands upon existing dataset sources such as the [Danish Gigaword](https://huggingface.co/danish-foundation-models). We recommend that you cite the source of the dataset when using these datasets.
 
 
 
 
 
data/adl/adl.md CHANGED
@@ -1,99 +1,57 @@
1
  ---
2
- pretty_name: Archive for Danish Literature
3
  language:
4
- - da
5
  license: cc0-1.0
6
- license_name: CC-0
7
  size_categories:
8
- - 1-10k
9
  task_categories:
10
- - text-generation
11
- - fill-mask
12
  task_ids:
13
- - language-modeling
14
- source_datasets:
15
- - danish-foundation-models/danish-gigaword
16
- domains:
17
- - Books
18
  ---
19
-
20
- # Dataset Card for Archive for Danish Literature
21
-
22
  ## Dataset Description
23
-
24
- <!-- START-SHORT DESCRIPTION -->
25
- Danish literature from 1700-2023 from the [Archive for Danish Literature](https://tekster.kb.dk/text?editorial=no&f%5Bsubcollection_ssi%5D%5B%5D=adl&match=one&search_field=Alt) (ADL).
26
- <!-- END-SHORT DESCRIPTION -->
27
-
28
- Archive for Danish Literature (ADL) is a literary-historical collection of selected parts of older Danish literature, from the Middle Ages up to the mid-20th century.
29
- It provides access to both the texts and introductory material on most of the authors. ADL is a resource for research, teaching, and broad dissemination of older Danish
30
- literature. Currently, ADL contains works by 78 authors. The texts are reproduced from standard printed editions. All texts are searchable, and many can also be viewed as facsimiles (photographs of the original edition)
31
- on the Danish Royal Library's [website](https://tekster.kb.dk/text?editorial=no&f%5Bsubcollection_ssi%5D%5B%5D=adl&match=one&search_field=Alt).
32
-
33
- See also dataset [entry](https://sprogteknologi.dk/dataset/public-adl-text-sources) on sprogteknologi.dk and an [API](https://rawgit.com/Det-Kongelige-Bibliotek/access-digital-objects/master/form-demos/adl-form.html).
34
-
35
- <!-- START-DESC-STATS -->
36
- - **Number of samples**: 498
37
- - **Number of tokens (Llama 3)**: 58.49M
38
- - **Average document length in tokens (min, max)**: 117.46K (53, 662.14K)
39
- <!-- END-DESC-STATS -->
40
-
41
-
42
-
43
- ## Dataset Structure
44
  An example from the dataset looks as follows.
45
-
46
-
47
- <!-- START-SAMPLE -->
48
- ```py
49
  {
50
- "id": "adl_aakjaer06val",
51
- "text": "SAMLEDE VÆRKER\n\nJEPPE AAKJÆR GYLDENDALSKE BOGHANDEL - NORDISK FORLAG KJØBENHAVN OG\nKRISTIANIA 1919 0[...]",
52
- "source": "adl",
53
- "added": "2020-09-14",
54
- "created": "1700-01-01, 2022-01-01",
55
- "token_count": 439908
 
 
 
 
 
 
 
 
56
  }
57
  ```
58
 
59
- ### Data Fields
60
-
61
- An entry in the dataset consists of the following fields:
62
 
63
- - `id` (`str`): An unique identifier for each document.
64
- - `text`(`str`): The content of the document.
65
- - `source` (`str`): The source of the document (see [Source Data](#source-data)).
66
- - `added` (`str`): An date for when the document was added to this collection.
67
- - `created` (`str`): An date range for when the document was originally created.
68
- - `token_count` (`int`): The number of tokens in the sample computed using the Llama 8B tokenizer
69
- <!-- END-SAMPLE -->
70
 
 
 
 
 
 
71
 
72
-
73
- ### Dataset Statistics
74
-
75
- <!-- START-DATASET PLOTS -->
76
- <p align="center">
77
- <img src="./images/dist_document_length.png" width="600" style="margin-right: 10px;" />
78
  </p>
79
- <!-- END-DATASET PLOTS -->
80
-
81
-
82
- ## Additional Information
83
-
84
-
85
- ### Citation Information
86
-
87
- This dataset was initially published as part of the [Danish gigaword](https://huggingface.co/danish-foundation-models). We recommend that you cite and reference it if you use this dataset:
88
-
89
- > Derczynski, L., Ciosici, M. R., et al. (2021). The Danish Gigaword Corpus. In Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa 2021).
90
-
91
- ```bash
92
- @inproceedings{dagw,
93
- title = {{The Danish Gigaword Corpus}},
94
- author = {Leon Derczynski and Manuel R. Ciosici and Rebekah Baglini and Morten H. Christiansen and Jacob Aarup Dalsgaard and Riccardo Fusaroli and Peter Juel Henrichsen and Rasmus Hvingelby and Andreas Kirkedal and Alex Speed Kjeldsen and Claus Ladefoged and Finn Årup Nielsen and Jens Madsen and Malte Lau Petersen and Jonathan Hvithamar Rystrøm and Daniel Varab},
95
- year = 2021,
96
- booktitle = {Proceedings of the 23rd Nordic Conference on Computational Linguistics},
97
- publisher = {NEALT}
98
- }
99
- ```
 
1
  ---
2
+ pretty_name: Archive for Danish Literature
3
  language:
4
+ - da
5
  license: cc0-1.0
6
+ license_name: Creative Commons Zero v1.0 Universal
7
  size_categories:
8
+ - 1-10k
9
  task_categories:
10
+ - text-generation
11
+ - fill-mask
12
  task_ids:
13
+ - language-modeling
 
 
 
 
14
  ---
15
+ # Dataset Card for Archive for Danish Literature
 
 
16
  ## Dataset Description
17
+ - **Number of records:** 498
18
+ - **Languages:** Danish
19
+ ## Dataset Structure
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
20
  An example from the dataset looks as follows.
21
+ ```yaml
 
 
 
22
  {
23
+ 'text': 'SAMLEDE VÆRKER
24
+
25
+ JEPPE AAKJÆR GYLDENDALSKE BOGHANDE',
26
+ 'source': 'adl',
27
+ 'id': 'adl_aakjaer06val',
28
+ 'added': '2020-09-14',
29
+ 'created': '1700-01-01, 2022-01-01',
30
+ 'metadata': {
31
+ 'domain': 'Wiki & Books',
32
+ 'license': 'Creative Commons Legal Code
33
+
34
+ CC0 1.0 Universal',
35
+ 'source-pretty': ' Archive for Danish Literature'
36
+ }
37
  }
38
  ```
39
 
40
+ ## Data Fields
 
 
41
 
42
+ - **id**: source-specific identifier.
43
+ - **text**: textual content of the document.
44
+ - **source**: source of the data.
45
+ - **added**: timestamp for when the document was added to this collection.
46
+ - **created**: timestamp for when the original document was created (best-guess if not available).
47
+ - **metadata**: source-specific metadata.
 
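+ A minimal sketch of loading this subset with the `datasets` library, reading directly from the parquet file at its path in this repository:
+
+ ```py
+ from datasets import load_dataset
+
+ # Load the ADL subset directly from the parquet file in this repository.
+ adl = load_dataset("parquet", data_files="data/adl/adl.parquet", split="train")
+
+ sample = adl[0]
+ print(sample["id"], sample["source"], sample["created"])
+ print(sample["text"][:200])  # first 200 characters of the document text
+ ```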
48
 
49
+ ## License Information
50
+ <details>
51
+ <summary>Creative Commons Zero v1.0 Universal</summary>
52
+ <p>
53
+ Creative Commons Legal Code
54
 
55
+ CC0 1.0 Universal
 
 
 
 
 
56
  </p>
57
+ </details>
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
data/adl/adl.parquet CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:7511f1ff1da6a3c04148ca5bd0395d9e2e702520b0c0bf3c8774428b5dc27f7f
3
- size 106403262
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:5af9444529d92c37f35161829c652f8b928f9f1dfb5836065f320d1e1d698818
3
+ size 106401744
data/adl/descriptive_stats.json DELETED
@@ -1,9 +0,0 @@
1
- {
2
- "number_of_samples": 498,
3
- "number_of_tokens": 58493311,
4
- "min_length_tokens": 53,
5
- "max_length_tokens": 662143,
6
- "number_of_characters": 161816257,
7
- "min_length_characters": 136,
8
- "max_length_characters": 1879004
9
- }
 
 
 
 
 
 
 
 
 
 
data/adl/images/dist_document_length.png DELETED

Git LFS Details

  • SHA256: c720774f1c72e77402153edfa8f3390872bae88638dc3bfe9f2551815994f8eb
  • Pointer size: 131 Bytes
  • Size of remote file: 539 kB
data/ai-aktindsigt/ai-aktindsigt.md DELETED
@@ -1,85 +0,0 @@
1
- ---
2
- pretty_name: AI Aktindsigt
3
- language:
4
- - da
5
- license: apache-2.0
6
- license_name: Apache 2.0
7
- task_categories:
8
- - text-generation
9
- - fill-mask
10
- task_ids:
11
- - language-modeling
12
- domains:
13
- - Web
14
- source_datasets:
15
- - AI-aktindsigt/Skrabet_kommunale_hjemmesider
16
- ---
17
-
18
- # Dataset Card for AI Aktindsigt
19
-
20
- <!-- START-SHORT DESCRIPTION -->
21
- Multiple web scrapes from municipality websites collected as a part of the [AI-aktindsigt](https://ai-aktindsigt.dk) project.
22
- <!-- END-SHORT DESCRIPTION -->
23
-
24
- The dataset consists of multiple scrapes of municipal websites compiled in connection with the work on the [AI-aktindsigt](https://ai-aktindsigt.dk) project. The scrape is made across different domains from several different municipalities.
25
-
26
- ## Dataset Description
27
-
28
-
29
- <!-- START-DESC-STATS -->
30
- - **Number of samples**: 200.91K
31
- - **Number of tokens (Llama 3)**: 139.23M
32
- - **Average document length in tokens (min, max)**: 693.0064405666105 (9, 152.60K)
33
- <!-- END-DESC-STATS -->
34
-
35
-
36
- ## Dataset Structure
37
- An example from the dataset looks as follows.
38
-
39
-
40
- <!-- START-SAMPLE -->
41
- ```py
42
- {
43
- "id": "ai-aktindsigt_0",
44
- "text": "Vallensbæk Stationstorv 100 2665 Vallensbæk Strand Telefon: +45 4797 4000",
45
- "source": "ai-aktindsigt",
46
- "added": "2025-03-24",
47
- "created": "2010-01-01, 2024-03-18",
48
- "token_count": 29
49
- }
50
- ```
51
-
52
- ### Data Fields
53
-
54
- An entry in the dataset consists of the following fields:
55
-
56
- - `id` (`str`): An unique identifier for each document.
57
- - `text`(`str`): The content of the document.
58
- - `source` (`str`): The source of the document (see [Source Data](#source-data)).
59
- - `added` (`str`): An date for when the document was added to this collection.
60
- - `created` (`str`): An date range for when the document was originally created.
61
- - `token_count` (`int`): The number of tokens in the sample computed using the Llama 8B tokenizer
62
- <!-- END-SAMPLE -->
63
-
64
-
65
- ### Dataset Statistics
66
-
67
- <!-- START-DATASET PLOTS -->
68
- <p align="center">
69
- <img src="./images/dist_document_length.png" width="600" style="margin-right: 10px;" />
70
- </p>
71
- <!-- END-DATASET PLOTS -->
72
-
73
-
74
-
75
- ## Additional Information
76
-
77
-
78
-
79
- ### Sourced data
80
- This dataset is derived from [`AI-aktindsigt/Skrabet_kommunale_hjemmesider`](https://huggingface.co/datasets/AI-aktindsigt/Skrabet_kommunale_hjemmesider/tree/main
81
- )
82
-
83
- ### Citation Information
84
-
85
- No citation is applicable for this work. We recommend citing the huggingface repository.
 
 
data/ai-aktindsigt/create.py DELETED
@@ -1,64 +0,0 @@
1
- # /// script
2
- # requires-python = ">=3.12"
3
- # dependencies = [
4
- # "datasets>=3.2.0",
5
- # ]
6
- # ///
7
- """
8
- This script is used to create the data for the AI-aktindsigt project.
9
-
10
- This derived the data from a .json.gz file.
11
- """
12
-
13
- from pathlib import Path
14
- from typing import cast
15
-
16
- from datasets import Dataset, load_dataset
17
-
18
- source = "ai-aktindsigt"
19
-
20
-
21
- def convert_sample(example):
22
- # {'text': 'Vallensbæk Stationstorv 100 2665 Vallensbæk Strand Telefon: +45 4797 4000',
23
- # 'id': '0_03fe7662f6d37df0ffbf5013907414f935350db9931043891a95ed830965a507a7bcb4df93741429bdfa4958cf25f6c273aa73146f2be80948f767eb5fa04645',
24
- # 'source': 'AI-aktindsigt',
25
- # 'added': '2024-04-16T12:35:52.000Z',
26
- # 'metadata': {'url': 'https://vallensbaek.dk/', 'kommune': 'vallensbaek', 'sentence': 1,
27
- # 'ppl_score': [634.6341],
28
- # 'sha512': '03fe7662f6d37df0ffbf5013907414f935350db9931043891a95ed830965a507a7bcb4df93741429bdfa4958cf25f6c273aa73146f2be80948f767eb5fa04645'}
29
- # }
30
-
31
- new_example = dict(
32
- text_new=example["text"],
33
- source=source,
34
- domain="Web",
35
- license="Apache-2.0",
36
- added="2025-03-24",
37
- created="2010-01-01, 2024-03-18", # Start date is approximate guess end date is the date of the last update
38
- metadata={"source-pretty": "AI Aktindsigt"},
39
- )
40
-
41
- return new_example
42
-
43
-
44
- def main():
45
- data_path = Path(
46
- "/work/dfm-data/pre-training/ai_aktindsigt/documents/ai_aktindsigt.jsonl.gz"
47
- )
48
- ds = load_dataset("json", data_files=data_path.as_posix(), split="train")
49
-
50
- ds = cast(Dataset, ds)
51
-
52
- ds = ds.map(convert_sample, remove_columns=ds.column_names)
53
- ds = ds.rename_columns({"text_new": "text"})
54
- ds = ds.add_column("id", [f"{source}_{i}" for i in range(len(ds))]) # type: ignore
55
- ds = ds.select_columns(
56
- ["text", "source", "id", "added", "created", "license", "domain", "metadata"]
57
- )
58
-
59
- save_path = Path(__file__).parent / f"{source}.parquet"
60
- ds.to_parquet(save_path)
61
-
62
-
63
- if __name__ == "__main__":
64
- main()
 
data/ai-aktindsigt/descriptive_stats.json DELETED
@@ -1,9 +0,0 @@
1
- {
2
- "number_of_samples": 200914,
3
- "number_of_tokens": 139234696,
4
- "min_length_tokens": 9,
5
- "max_length_tokens": 152599,
6
- "number_of_characters": 408005923,
7
- "min_length_characters": 29,
8
- "max_length_characters": 406832
9
- }
 
 
 
 
 
 
 
 
 
 
data/ai-aktindsigt/images/dist_document_length.png DELETED

Git LFS Details

  • SHA256: 32d7c50d2b47fd31198d4fd28ead503c423562c8a4cdc317c45271785b3a6393
  • Pointer size: 131 Bytes
  • Size of remote file: 562 kB
data/botxt/botxt.md CHANGED
@@ -1,94 +1,57 @@
1
  ---
2
- pretty_name: Bornholmsk
3
  language:
4
- - da
5
  license: cc0-1.0
6
- license_name: CC-0
7
  size_categories:
8
- - 1-10k
9
  task_categories:
10
- - text-generation
11
- - fill-mask
12
  task_ids:
13
- - language-modeling
14
- domains:
15
- - Dialect
16
- - Web
17
- source_datasets:
18
- - danish-foundation-models/danish-gigaword
19
  ---
20
-
21
- # Dataset Card for Bornholmsk
22
-
23
  ## Dataset Description
24
-
25
- <!-- START-SHORT DESCRIPTION -->
26
- The Bornholmsk Ordbog Dictionary Project
27
- <!-- END-SHORT DESCRIPTION -->
28
-
29
- Fictional texts of various kinds written in Bornholmsk, the dialect spoken on the Danish island of Bornholm (The language code for Bornholmsk under IETF BCP-47 is da-bornholm), have been digitized (OCR’ed and proofread) by volunteers working within the recently resumed Bornholmsk Ordbog dictionary project (Kjeldsen, 2019). Most of the material included is written by Otto J. Lund in the period 1930-48 (novels, short stories, and poems). The Bornholmsk subcorpus, which in its present state amounts to circa 400 K words, also includes folk stories published by J. P. Kuhre in 1938, and by K. M. Kofoed in 1935, fictional letters by various authors published in the 1930s, as well as poems by Alfred Jensen published in 1948 and various other texts from the same period. The non-standardized orthography varies considerably from source to source. The Bornholmsk part of the Danish Gigaword is a significantly extended dataset, well beyond that studied in earlier NLP work on the dialect [(Derczynski and Kjeldsen, 2019)](https://aclanthology.org/W19-6138/).
30
-
31
-
32
- <!-- START-DESC-STATS -->
33
- - **Number of samples**: 106
34
- - **Number of tokens (Llama 3)**: 847.97K
35
- - **Average document length in tokens (min, max)**: 8.00K (407, 83.79K)
36
- <!-- END-DESC-STATS -->
37
-
38
-
39
-
40
- ## Dataset Structure
41
  An example from the dataset looks as follows.
42
-
43
-
44
- <!-- START-SAMPLE -->
45
- ```py
46
  {
47
- "id": "botxt_0000040",
48
- "text": "Ræua-Lârs\n\nRæua-Lârs å hans Konna, Stina, bode uda i Torpabakkana. Hanj hed nok æjla Lârs\nNielsen, m[...]",
49
- "source": "botxt",
50
- "added": "2024-05-16",
51
- "created": "2000-01-01, 2022-01-01",
52
- "token_count": 7229
 
 
 
 
 
 
 
 
53
  }
54
  ```
55
 
56
- ### Data Fields
57
 
58
- An entry in the dataset consists of the following fields:
 
 
 
 
 
59
 
60
- - `id` (`str`): An unique identifier for each document.
61
- - `text`(`str`): The content of the document.
62
- - `source` (`str`): The source of the document (see [Source Data](#source-data)).
63
- - `added` (`str`): An date for when the document was added to this collection.
64
- - `created` (`str`): An date range for when the document was originally created.
65
- - `token_count` (`int`): The number of tokens in the sample computed using the Llama 8B tokenizer
66
- <!-- END-SAMPLE -->
67
 
68
- ### Dataset Statistics
69
-
70
- <!-- START-DATASET PLOTS -->
71
- <p align="center">
72
- <img src="./images/dist_document_length.png" width="600" style="margin-right: 10px;" />
73
  </p>
74
- <!-- END-DATASET PLOTS -->
75
-
76
-
77
- ## Additional Information
78
-
79
-
80
- ### Citation Information
81
-
82
- This dataset was initially published as part of the [Danish gigaword](https://huggingface.co/danish-foundation-models). We recommend that you cite and reference it if you use this dataset:
83
-
84
- > Derczynski, L., Ciosici, M. R., et al. (2021). The Danish Gigaword Corpus. In Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa 2021).
85
-
86
- ```bash
87
- @inproceedings{dagw,
88
- title = {{The Danish Gigaword Corpus}},
89
- author = {Leon Derczynski and Manuel R. Ciosici and Rebekah Baglini and Morten H. Christiansen and Jacob Aarup Dalsgaard and Riccardo Fusaroli and Peter Juel Henrichsen and Rasmus Hvingelby and Andreas Kirkedal and Alex Speed Kjeldsen and Claus Ladefoged and Finn Årup Nielsen and Jens Madsen and Malte Lau Petersen and Jonathan Hvithamar Rystrøm and Daniel Varab},
90
- year = 2021,
91
- booktitle = {Proceedings of the 23rd Nordic Conference on Computational Linguistics},
92
- publisher = {NEALT}
93
- }
94
- ```
 
1
  ---
2
+ pretty_name: Bornholmsk (Danish dialect)
3
  language:
4
+ - da
5
  license: cc0-1.0
6
+ license_name: Creative Commons Zero v1.0 Universal
7
  size_categories:
8
+ - 1-10k
9
  task_categories:
10
+ - text-generation
11
+ - fill-mask
12
  task_ids:
13
+ - language-modeling
 
 
 
 
 
14
  ---
15
+ # Dataset Card for Bornholmsk (Danish dialect)
 
 
16
  ## Dataset Description
17
+ - **Number of records:** 106
18
+ - **Languages:** Danish
19
+ ## Dataset Structure
 
 
 
 
 
 
 
 
 
 
 
 
 
 
20
  An example from the dataset looks as follows.
21
+ ```yaml
 
 
 
22
  {
23
+ 'text': 'Ræua-Lârs
24
+
25
+ Ræua-Lârs å hans Konna, Stina, bode uda',
26
+ 'source': 'botxt',
27
+ 'id': 'botxt_0000040',
28
+ 'added': '2024-05-16',
29
+ 'created': '2000-01-01, 2022-01-01',
30
+ 'metadata': {
31
+ 'domain': 'Other',
32
+ 'license': 'Creative Commons Legal Code
33
+
34
+ CC0 1.0 Universal',
35
+ 'source-pretty': 'Bornholmsk (Danish dialect)'
36
+ }
37
  }
38
  ```
39
 
40
+ ## Data Fields
41
 
42
+ - **id**: source-specific identifier.
43
+ - **text**: textual content of the document.
44
+ - **source**: source of the data.
45
+ - **added**: timestamp for when the document was added to this collection.
46
+ - **created**: timestamp for when the original document was created (best-guess if not available).
47
+ - **metadata**: source-specific metadata.
48
 
49
+ ## License Information
50
+ <details>
51
+ <summary>Creative Commons Zero v1.0 Universal</summary>
52
+ <p>
53
+ Creative Commons Legal Code
 
 
54
 
55
+ CC0 1.0 Universal
 
 
 
 
56
  </p>
57
+ </details>
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
data/botxt/botxt.parquet CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:9948ed3d6cfd26c57086eacee83097f7abb8f8b95ae1639b5e17b1025ebdfb5e
3
- size 1343525
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:ec89c1dd57f1987dc6fe059a33a1d16b41b8c87439673a381f9671497f65b017
3
+ size 1344033
data/botxt/descriptive_stats.json DELETED
@@ -1,9 +0,0 @@
1
- {
2
- "number_of_samples": 106,
3
- "number_of_tokens": 847973,
4
- "min_length_tokens": 407,
5
- "max_length_tokens": 83792,
6
- "number_of_characters": 2011076,
7
- "min_length_characters": 845,
8
- "max_length_characters": 202015
9
- }
 
 
 
 
 
 
 
 
 
 
data/botxt/images/dist_document_length.png DELETED

Git LFS Details

  • SHA256: 92930c918e4b6bfbc3a5a1173e3af056d2f93c7d8c0a5cb02ee8604fbea14c41
  • Pointer size: 131 Bytes
  • Size of remote file: 541 kB
data/cellar/cellar.md DELETED
@@ -1,77 +0,0 @@
1
- ---
2
- pretty_name: Cellar
3
- language:
4
- - da
5
- license: cc-by-sa-4.0
6
- license_name: CC-BY-SA 4.0
7
- task_categories:
8
- - text-generation
9
- - fill-mask
10
- task_ids:
11
- - language-modeling
12
- domains:
13
- - Legal
14
- ---
15
-
16
- # Dataset Card for Cellar
17
-
18
- <!-- START-SHORT DESCRIPTION -->
19
- The official digital repository for European Union legal documents and open data.
20
- <!-- END-SHORT DESCRIPTION -->
21
-
22
- The EU Dataset Cellar serves as the central access point for all official EU publications, legislation, and open data resources. Maintained by the Publications Office of the European Union, this comprehensive digital archive contains millions of documents in multiple languages, including regulations, directives, decisions, treaties, case law, and preparatory acts dating back decades. The repository employs standardized metadata and unique identifiers to organize its vast collection, making it an essential resource for researchers, legal professionals, policymakers, and citizens seeking authoritative information on EU law and policy. The Cellar's linked data architecture also enables sophisticated search capabilities and integration with other information systems across the European Union's digital landscape.
23
-
24
-
25
- ## Dataset Description
26
-
27
- <!-- START-DESC-STATS -->
28
- - **Number of samples**: 63.40K
29
- - **Number of tokens (Llama 3)**: 1.15B
30
- - **Average document length in tokens (min, max)**: 18.17K (7, 2.60M)
31
- <!-- END-DESC-STATS -->
32
-
33
-
34
- ## Dataset Structure
35
- An example from the dataset looks as follows.
36
-
37
-
38
- <!-- START-SAMPLE -->
39
- ```py
40
- {
41
- "id": "cellar_0",
42
- "text": "\n\n\n\n© Европейски съюз, 2017 г.\n\nВъзпроизвеждането е разрешено при позоваване на оригинала.\n\n© Unión [...]",
43
- "source": "cellar",
44
- "added": "2025-03-25",
45
- "created": "2024-01-01, 2026-01-01",
46
- "token_count": 87018
47
- }
48
- ```
49
-
50
- ### Data Fields
51
-
52
- An entry in the dataset consists of the following fields:
53
-
54
- - `id` (`str`): An unique identifier for each document.
55
- - `text`(`str`): The content of the document.
56
- - `source` (`str`): The source of the document (see [Source Data](#source-data)).
57
- - `added` (`str`): An date for when the document was added to this collection.
58
- - `created` (`str`): An date range for when the document was originally created.
59
- - `token_count` (`int`): The number of tokens in the sample computed using the Llama 8B tokenizer
60
- <!-- END-SAMPLE -->
61
-
62
-
63
- ### Dataset Statistics
64
-
65
- <!-- START-DATASET PLOTS -->
66
- <p align="center">
67
- <img src="./images/dist_document_length.png" width="600" style="margin-right: 10px;" />
68
- </p>
69
- <!-- END-DATASET PLOTS -->
70
-
71
-
72
-
73
- ## Additional Information
74
-
75
- ### Citation Information
76
-
77
- No citation is applicable for this work.
 
data/cellar/cellar.parquet DELETED
@@ -1,3 +0,0 @@
1
- version https://git-lfs.github.com/spec/v1
2
- oid sha256:6162a90362e286ebc66a8344f39c3fbc835dec85f3e1d51318b7b39181ef4709
3
- size 1426079196
 
 
 
 
data/cellar/create.py DELETED
@@ -1,60 +0,0 @@
1
- # /// script
2
- # requires-python = ">=3.12"
3
- # dependencies = [
4
- # "datasets>=3.2.0",
5
- # ]
6
- # ///
7
-
8
- from pathlib import Path
9
- from typing import cast
10
- from datasets import Dataset, load_dataset, concatenate_datasets
11
-
12
- source = "cellar"
13
-
14
-
15
- def convert_sample(example):
16
- new_example = dict(
17
- text_new=example["text"],
18
- source=source,
19
- domain="Legal",
20
- license="cc-by-sa-4.0",
21
- added="2025-03-25",
22
- created="2024-01-01, 2026-01-01", # Scrape happened within these years - data likely written earlier
23
- metadata={"source-pretty": "Cellar"},
24
- )
25
-
26
- return new_example
27
-
28
-
29
- def main():
30
- data_path = Path("/work/dfm-data/pre-training/cellar/documents")
31
- data_paths = [p.as_posix() for p in data_path.glob("DAN*.jsonl.gz")]
32
- dfs = []
33
- for i, path in enumerate(data_paths):
34
- print(i, path.split("/")[-1])
35
- try:
36
- ds = load_dataset(
37
- "json", data_files=path, split="train"
38
- ) # a few datasets fail to load
39
- dfs.append(ds)
40
- print("\tSuccess")
41
- except Exception:
42
- print("\tFail")
43
-
44
- ds = concatenate_datasets(dsets=dfs)
45
-
46
- ds = cast(Dataset, ds)
47
-
48
- ds = ds.map(convert_sample, remove_columns=ds.column_names)
49
- ds = ds.rename_columns({"text_new": "text"})
50
- ds = ds.add_column("id", [f"{source}_{i}" for i in range(len(ds))]) # type: ignore
51
- ds = ds.select_columns(
52
- ["text", "source", "id", "added", "created", "license", "domain", "metadata"]
53
- )
54
-
55
- save_path = Path(__file__).parent / f"{source}.parquet"
56
- ds.to_parquet(save_path)
57
-
58
-
59
- if __name__ == "__main__":
60
- main()
 
data/cellar/descriptive_stats.json DELETED
@@ -1,9 +0,0 @@
1
- {
2
- "number_of_samples": 63399,
3
- "number_of_tokens": 1152074881,
4
- "min_length_tokens": 7,
5
- "max_length_tokens": 2599840,
6
- "number_of_characters": 3866568270,
7
- "min_length_characters": 14,
8
- "max_length_characters": 37287484
9
- }
 
 
 
 
 
 
 
 
 
 
data/cellar/images/dist_document_length.png DELETED

Git LFS Details

  • SHA256: c47baf8bd18b1c625e4c5f5b58daa6b7004d25ce54b943a9fefc011260566c93
  • Pointer size: 131 Bytes
  • Size of remote file: 574 kB
data/dannet/dannet.md CHANGED
@@ -1,81 +1,84 @@
1
  ---
2
- pretty_name: DanNet
3
  language:
4
- - da
5
- license: other
6
- license_name: DanNet 1.0
7
  size_categories:
8
- - 10k-100k
9
  task_categories:
10
- - text-generation
11
- - fill-mask
12
  task_ids:
13
- - language-modeling
14
- source_datasets:
15
- - danish-foundation-models/danish-gigaword
16
- domains:
17
- - Other
18
  ---
19
-
20
- # Dataset Card for DanNet
21
-
22
- <!-- START-SHORT DESCRIPTION -->
23
- [DanNet](https://cst.ku.dk/projekter/dannet) is a Danish WordNet.
24
- <!-- END-SHORT DESCRIPTION -->
25
-
26
-
27
- A WordNet is a lexico-semantic network which show the meaning and the relation between words through named connections. It can be considered a machine-readable dictionary.
28
-
29
-
30
  ## Dataset Description
31
-
32
-
33
- <!-- START-DESC-STATS -->
34
- - **Number of samples**: 47.60K
35
- - **Number of tokens (Llama 3)**: 1.48M
36
- - **Average document length in tokens (min, max)**: 31.079364745919374 (2, 106)
37
- <!-- END-DESC-STATS -->
38
-
39
-
40
-
41
- ## Dataset Structure
42
  An example from the dataset looks as follows.
 
 
 
 
 
 
 
 
 
 
43
 
 
 
 
44
 
45
- <!-- START-SAMPLE -->
46
- ```py
47
- {
48
- "id": "dannet_46506",
49
- "text": "Når fodboldholdet fra 1. division i Ikast spiller hjemmekampe, lyder råbet ud over Ikast Stadion: We[...]",
50
- "source": "dannet",
51
- "added": "2020-09-24",
52
- "created": "2000-01-01, 2022-01-01",
53
- "token_count": 50
54
- }
55
- ```
56
 
57
- ### Data Fields
58
 
59
- An entry in the dataset consists of the following fields:
 
 
 
 
60
 
61
- - `id` (`str`): An unique identifier for each document.
62
- - `text`(`str`): The content of the document.
63
- - `source` (`str`): The source of the document (see [Source Data](#source-data)).
64
- - `added` (`str`): An date for when the document was added to this collection.
65
- - `created` (`str`): An date range for when the document was originally created.
66
- - `token_count` (`int`): The number of tokens in the sample computed using the Llama 8B tokenizer
67
- <!-- END-SAMPLE -->
68
 
 
 
 
 
 
 
 
69
 
70
- ### Dataset Statistics
 
 
 
 
 
71
 
72
- <!-- START-DATASET PLOTS -->
73
- <p align="center">
74
- <img src="./images/dist_document_length.png" width="600" style="margin-right: 10px;" />
75
- </p>
76
- <!-- END-DATASET PLOTS -->
77
 
 
78
 
 
 
 
 
 
 
79
 
80
  ## License Information
81
  <details>
@@ -122,32 +125,3 @@ LICENSEE agrees to preserve same.
122
  DanNet 2.1 Copyright 2009-12 by University of Copenhagen and Society for Danish
123
  </p>
124
  </details>
125
-
126
-
127
-
128
- ## Additional Information
129
-
130
- <!-- TODO:
131
- Add issue on:
132
-
133
- Potential improvements for dannet
134
-
135
- I imagine that there is a lot of information in DanNet
136
- that could be used to create training datasets for LLMs (more than what is already present)
137
- -->
138
-
139
- ### Citation Information
140
-
141
- This dataset was initially published as part of the [Danish gigaword](https://huggingface.co/danish-foundation-models). We recommend that you cite and reference it if you use this dataset:
142
-
143
- > Derczynski, L., Ciosici, M. R., et al. (2021). The Danish Gigaword Corpus. In Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa 2021).
144
-
145
- ```bash
146
- @inproceedings{dagw,
147
- title = {{The Danish Gigaword Corpus}},
148
- author = {Leon Derczynski and Manuel R. Ciosici and Rebekah Baglini and Morten H. Christiansen and Jacob Aarup Dalsgaard and Riccardo Fusaroli and Peter Juel Henrichsen and Rasmus Hvingelby and Andreas Kirkedal and Alex Speed Kjeldsen and Claus Ladefoged and Finn Årup Nielsen and Jens Madsen and Malte Lau Petersen and Jonathan Hvithamar Rystrøm and Daniel Varab},
149
- year = 2021,
150
- booktitle = {Proceedings of the 23rd Nordic Conference on Computational Linguistics},
151
- publisher = {NEALT}
152
- }
153
- ```
 
1
  ---
2
+ pretty_name: DanNet (Danish WordNet)
3
  language:
4
+ - da
5
+ license: DanNet 1.0 License
6
+ license_name: DanNet 1.0 License
7
  size_categories:
8
+ - 10k-100k
9
  task_categories:
10
+ - text-generation
11
+ - fill-mask
12
  task_ids:
13
+ - language-modeling
 
 
 
 
14
  ---
15
+ # Dataset Card for DanNet (Danish WordNet)
 
 
 
 
 
 
 
 
 
 
16
  ## Dataset Description
17
+ - **Number of records:** 49040
18
+ - **Languages:** Danish
19
+ ## Dataset Structure
 
 
 
 
 
 
 
 
20
  An example from the dataset looks as follows.
21
+ ```yaml
22
+ {
23
+ 'text': 'Når fodboldholdet fra 1. division i Ikast spiller ',
24
+ 'source': 'dannet',
25
+ 'id': 'dannet_46506',
26
+ 'added': '2020-09-24',
27
+ 'created': '2000-01-01, 2022-01-01',
28
+ 'metadata': {
29
+ 'domain': 'dannet',
30
+ 'license': 'Commercial Use of DanNet
31
 
32
+ DanNet may be used in commercial applications in accordance with the following
33
+ license agreement. An attorney representing the commercial interest should
34
+ review this DanNet license with respect to the intended use.
35
 
36
+ DanNet 1.0 License
 
 
 
 
 
 
 
 
 
 
37
 
38
+ DanNet Release 2.1
39
 
40
+ This software and database is being provided to you, the LICENSEE, by University
41
+ of Copenhagen and Society for Danish Language and Literature under the following
42
+ license. By obtaining, using and/or copying this software and database, you
43
+ agree that you have read, understood, and will comply with these terms and
44
+ conditions.
45
 
46
+ Permission to use, copy, modify and distribute this software and database and
47
+ its documentation for any purpose and without fee or royalty is hereby granted,
48
+ provided that you agree to comply with the following copyright notice and
49
+ statements, including the disclaimer, and that the same appear on ALL copies of
50
+ the software, database and documentation, including modifications that you make
51
+ for internal use or for distribution.
 
52
 
53
+ THIS SOFTWARE AND DATABASE IS PROVIDED "AS IS" AND UNIVERSITY OF COPENHAGEN and
54
+ SOCIETY FOR DANISH LANGUAGE AND LITERATURE MAKE NO REPRESENTATIONS OR
55
+ WARRANTIES, EXPRESS OR IMPLIED. BY WAY OF EXAMPLE, BUT NOT LIMITATION,
56
+ UNIVERSITY OF COPENHAGEN AND SOCIETY FOR DANISH LANGUAGE AND LITERATURE MAKE NO
57
+ REPRESENTATIONS OR WARRANTIES OF MERCHANTABILITY OR FITNESS FOR ANY PARTICULAR
58
+ PURPOSE OR THAT THE USE OF THE LICENSED SOFTWARE, DATABASE OR DOCUMENTATION WILL
59
+ NOT INFRINGE ANY THIRD PARTY PATENTS, COPYRIGHTS, TRADEMARKS OR OTHER RIGHTS.
60
 
61
+ The names of University of Copenhagen and Society for Danish Language and
62
+ Literature may not be used in advertising or publicity pertaining to
63
+ distribution of the software and/or database. Title to copyright in this
64
+ software, database and any associated documentation shall at all times remain
65
+ with University of Copenhagen and Society for Danish Language and Literature and
66
+ LICENSEE agrees to preserve same.
67
 
68
+ DanNet 2.1 Copyright 2009-12 by University of Copenhagen and Society for Danish',
69
+ 'source-pretty': 'DanNet (Danish WordNet)'
70
+ }
71
+ }
72
+ ```
73
 
74
+ ## Data Fields
75
 
76
+ - **id**: source-specific identifier.
77
+ - **text**: textual content of the document.
78
+ - **source**: source of the data.
79
+ - **added**: timestamp for when the document was added to this collection.
80
+ - **created**: timestamp for when the original document was created (best-guess if not available).
81
+ - **metadata**: source-specific metadata.
82
 
83
  ## License Information
84
  <details>
 
125
  DanNet 2.1 Copyright 2009-12 by University of Copenhagen and Society for Danish
126
  </p>
127
  </details>
 
data/dannet/dannet.parquet CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:2ce98e55703f16406d9b3591297c7b860fa770c9ae55c4795bb7a50921619e43
3
- size 3918876
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b9006617e35f568e7b7e4dacc87c4a490cf0a9170bd4e91488de77e00d3fb38c
3
+ size 4487008
data/dannet/descriptive_stats.json DELETED
@@ -1,9 +0,0 @@
1
- {
2
- "number_of_samples": 47603,
3
- "number_of_tokens": 1479471,
4
- "min_length_tokens": 2,
5
- "max_length_tokens": 106,
6
- "number_of_characters": 4326120,
7
- "min_length_characters": 2,
8
- "max_length_characters": 340
9
- }
 
 
 
 
 
 
 
 
 
 
data/dannet/images/dist_document_length.png DELETED

Git LFS Details

  • SHA256: e41fb3761b6eeee9baea4ebd2c24d548dfbf9b8a9a445677f67f2596b0de2330
  • Pointer size: 131 Bytes
  • Size of remote file: 553 kB
data/danske-taler/create.py DELETED
@@ -1,314 +0,0 @@
1
- # /// script
2
- # requires-python = ">=3.12"
3
- # dependencies = [
4
- # "beautifulsoup4==4.13.3",
5
- # "datasets>=3.0.0",
6
- # "transformers",
7
- # "dynaword"
8
- # ]
9
- # [tool.uv.sources]
10
- # dynaword = { git = "https://huggingface.co/datasets/danish-foundation-models/danish-dynaword", rev = "00e7f2aee7f7ad2da423419f77ecbb9c0536de0d" }
11
- # ///
12
- """
13
- Danske Taler API Downloader
14
- This script downloads speeches/articles from the Danske Taler API: https://www.dansketaler.dk/api/v1
15
-
16
- It saves it into the following structure:
17
-
18
- ```
19
- {
20
- "text": "Lav et referat af nedenstående tekst:\n\nTekst:\nOpdatering: Manden er nu fundet af Nordjyllands Politi[...]",
21
- "source": "nordjyllandnews",
22
- "id": "nordjyllandnews_0",
23
- "added": "2024-12-16",
24
- "created": "2000-01-01, 2024-01-01",
25
- "license": "Creative Commons Legal Code\n\nCC0 1.0 Universal",
26
- "domain": "News",
27
- "metadata": {
28
- "source-pretty": "Nordjylland News"
29
- }
30
- }
31
- ```
32
-
33
- Note: To run this script, you need to set `GIT_LFS_SKIP_SMUDGE=1` to be able to install dynaword:
34
-
35
- ```bash
36
- GIT_LFS_SKIP_SMUDGE=1 uv run data/memo/create.py
37
- ```
38
-
39
- This second version fixed previous issues with the download and processing of the Danish Memo repository:
40
- https://huggingface.co/datasets/danish-foundation-models/danish-dynaword/discussions/67
41
- """
42
-
43
- import logging
44
- import time
45
- from datetime import date
46
- from pathlib import Path
47
- from typing import Any
48
-
49
- from datasets import Dataset
50
- import pandas as pd
51
- import requests
52
- from bs4 import BeautifulSoup, NavigableString
53
- from tqdm import tqdm
54
-
55
- from dynaword.process_dataset import (
56
- add_token_count,
57
- ensure_column_order,
58
- remove_duplicate_text,
59
- remove_empty_texts,
60
- )
61
-
62
- logger = logging.getLogger(__name__)
63
-
64
- # Configuration
65
- API_BASE_URL = "https://www.dansketaler.dk/api/v1"
66
-
67
- KNOWN_HTML_TAGS = {
68
- "html",
69
- "head",
70
- "body",
71
- "title",
72
- "meta",
73
- "link",
74
- "script",
75
- "style",
76
- "div",
77
- "span",
78
- "p",
79
- "a",
80
- "ul",
81
- "ol",
82
- "li",
83
- "table",
84
- "tr",
85
- "td",
86
- "th",
87
- "img",
88
- "h1",
89
- "h2",
90
- "h3",
91
- "h4",
92
- "h5",
93
- "h6",
94
- "strong",
95
- "em",
96
- "br",
97
- "hr",
98
- "form",
99
- "input",
100
- "button",
101
- "label",
102
- "select",
103
- "option",
104
- "textarea",
105
- "iframe",
106
- "nav",
107
- "footer",
108
- "header",
109
- "main",
110
- "section",
111
- "article",
112
- }
113
-
114
-
115
- def contains_html_tags(text):
116
- soup = BeautifulSoup(str(text), "html.parser")
117
- return any(tag.name in KNOWN_HTML_TAGS for tag in soup.find_all())
118
-
119
-
120
- def get_all_speeches() -> list[dict[str, Any]]:
121
- # fetch first page, notably the total number of pages
122
- url = f"{API_BASE_URL}/speeches?per_page=50"
123
- response = requests.get(url)
124
- response.raise_for_status()
125
- speeches = response.json()
126
- meta = speeches["meta"]
127
- total_pages = meta["total_pages"]
128
-
129
- # fetch all pages
130
- all_speeches = []
131
- for page in range(1, total_pages + 1):
132
- url = f"{API_BASE_URL}/speeches?per_page=50&page={page}"
133
- response = requests.get(url)
134
- response.raise_for_status()
135
- speeches = response.json()
136
- all_speeches.extend(speeches["speeches"])
137
-
138
- return all_speeches
139
-
140
-
141
- def fetch_speech_content(
142
- url: str, max_retries: int = 3, backoff_factor: float = 0.5
143
- ) -> tuple[str | None, str]:
144
- """
145
- Fetches the license div from the page with retry logic.
146
-
147
- Args:
148
- url: The URL to fetch the license div from
149
- max_retries: Maximum number of retry attempts
150
- backoff_factor: Factor to determine exponential backoff time between retries
151
-
152
- Returns:
153
- The text content of the license div if found, None otherwise
154
- """
155
- retries = 0
156
-
157
- while retries <= max_retries:
158
- try:
159
- response = requests.get(url, timeout=10)
160
- response.raise_for_status()
161
-
162
- soup = BeautifulSoup(response.text, "html.parser")
163
- license_div = soup.find("div", class_="speech-copyright")
164
- speech_div = soup.find("div", class_="speech-article-content")
165
- speech = ""
166
- if speech_div:
167
- # Iterate over the children of the found div
168
- for child_div in speech_div.children: # type: ignore
169
- if child_div.name == "div": # type: ignore
170
- current_paragraph = []
171
- for content in child_div.contents: # type: ignore
172
- if isinstance(content, NavigableString):
173
- # Append text content
174
- current_paragraph.append(str(content).strip())
175
- elif content.name == "br":
176
- # If a <br> is encountered, join and print the current paragraph, then reset
177
- if current_paragraph:
178
- speech += "".join(current_paragraph)
179
- speech += "\n" # Add a newline for paragraph break
180
- current_paragraph = []
181
- # Print any remaining text in the current_paragraph list
182
- if current_paragraph:
183
- speech += "".join(current_paragraph)
184
- speech += "\n" # Add a newline for paragraph break
185
-
186
- return (license_div.text if license_div else None, speech)
187
-
188
- except (requests.RequestException, AttributeError) as e:
189
- retries += 1
190
-
191
- if retries > max_retries:
192
- logger.info(
193
- f"Failed to fetch license after {max_retries} attempts: {str(e)}"
194
- )
195
- return (None, "")
196
-
197
- # Calculate backoff time using exponential backoff
198
- wait_time = backoff_factor * (2 ** (retries - 1))
199
- logger.info(
200
- f"Attempt {retries} failed. Retrying in {wait_time:.2f} seconds..."
201
- )
202
- time.sleep(wait_time)
203
-
204
- return (None, "")
205
-
206
-
207
- def convert_to_license(license_information: str | None) -> str | None:
208
- """checks if "Materialet er fri af ophavsret" is in the page"""
209
-
210
- if license_information and (
211
- ("Materialet er fri af ophavsret" in license_information)
212
- or ("Materialet er fri af ophvasret" in license_information)
213
- or ("Ophavsretten er bortfaldet" in license_information)
214
- or ("Manuskriptet er fri af ophavsret" in license_information)
215
- or ("Offentlig " == license_information)
216
- ):
217
- return "cc0"
218
-
219
- return license_information
220
-
221
-
222
- def convert_to_row(speech_meta: dict[str, Any]) -> dict[str, Any]:
223
- speech_id = speech_meta["id"]
224
-
225
- date_of_speech = speech_meta["date"]["iso_date"]
226
- date_of_speech_start = f"{date_of_speech}"
227
- date_of_speech_end = f"{date_of_speech}"
228
-
229
- (license_information, speech) = fetch_speech_content(speech_meta["url"])
230
-
231
- row = {
232
- "id": f"danske-taler_{speech_id}",
233
- "text": speech,
234
- "source": "danske-taler",
235
- # current date
236
- "added": date.today().isoformat(),
237
- "created": f"{date_of_speech_start}, {date_of_speech_end}",
238
- "license_information": license_information,
239
- "domain": "Spoken",
240
- "metadata": {"source-pretty": "Danske Taler"},
241
- }
242
-
243
- return row
244
-
245
-
246
- def download_speeches() -> pd.DataFrame:
247
- logger.info("Fetching all speeches from Danske Taler API")
248
- speeches = get_all_speeches()
249
- logger.info(f"Found {len(speeches)} speeches")
250
-
251
- rows = []
252
- for speech in tqdm(speeches):
253
- row = convert_to_row(speech)
254
- rows.append(row)
255
-
256
- logger.info(f"Saving {len(rows)} speeches to dataset")
257
- df = pd.DataFrame(rows)
258
- return df
259
-
260
-
261
- def main():
262
- save_path = Path(__file__).parent / "danske-taler.parquet"
263
- save_path_all = Path(__file__).parent / "tmp" / "danske-taler-all.parquet"
264
- save_path_all.parent.mkdir(parents=False, exist_ok=True)
265
-
266
- if save_path_all.exists():
267
- logger.info(f"Loading dataset from {save_path_all}")
268
- df = pd.read_parquet(save_path_all)
269
- else:
270
- logger.info(f"Downloading speeches and saving to {save_path_all}")
271
- df = download_speeches()
272
- df.to_parquet(save_path_all)
273
-
274
- licenses = [convert_to_license(license) for license in df["license_information"]]
275
- df["license"] = licenses
276
-
277
- uniques_licenses = set(df["license"].tolist())
278
- logger.info("Unique licenses:")
279
- for license in uniques_licenses:
280
- logger.info(f"\t{license}")
281
-
282
- # remove documents without a cc0 license
283
- len_df = len(df)
284
- df = df[df["license"] == "cc0"]
285
- logger.info(f"Removed {len_df - len(df)} documents without a cc0 license")
286
-
287
- dataset = Dataset.from_pandas(df, preserve_index=False)
288
-
289
- dataset = remove_empty_texts(dataset) # remove rows with empty text
290
- dataset = remove_duplicate_text(dataset) # remove rows with duplicate text
291
- dataset = add_token_count(dataset)
292
- dataset = ensure_column_order(dataset)
293
-
294
- assert len(set(dataset["id"])) == len(dataset), "IDs are not unique"
295
- assert len(set(dataset["text"])) == len(dataset), "Texts are not unique"
296
- assert len(set(df["license"])) == 1, "Multiple licenses found"
297
-
298
- # check for html tags in text
299
- assert not df["text"].apply(contains_html_tags).any(), "HTML tags found in text"
300
-
301
- dataset.to_parquet(save_path)
302
-
303
-
304
- if __name__ == "__main__":
305
- log_path = Path(__file__).parent / "danske-taler.log"
306
- logging.basicConfig(
307
- level=logging.INFO,
308
- format="%(asctime)s - %(levelname)s - %(message)s",
309
- handlers=[
310
- logging.StreamHandler(),
311
- logging.FileHandler(log_path),
312
- ],
313
- )
314
- main()
 
data/danske-taler/danske-taler.log DELETED
@@ -1,167 +0,0 @@
1
- 2025-03-29 14:14:08,846 - INFO - Downloading speeches and saving to /work/githubs/tmp/danish-dynaword/data/danske-taler/tmp/danske-taler-all.parquet
2
- 2025-03-29 14:14:08,847 - INFO - Fetching all speeches from Danske Taler API
3
- 2025-03-29 14:15:19,326 - INFO - Found 4725 speeches
4
- 13%|██████████▏ | 597/4725 [01:22<11:15, 6.11it/s]Attempt 1 failed. Retrying in 0.50 seconds...
5
- Attempt 2 failed. Retrying in 1.00 seconds...
6
- Attempt 3 failed. Retrying in 2.00 seconds...
7
- Failed to fetch license after 3 attempts: 500 Server Error: Internal Server Error for url: https://www.dansketaler.dk/tale/niels-hoejlund-pedersens-translokationstale-2020
8
- 17%|██████████████ | 818/4725 [01:57<09:00, 7.23it/s]Attempt 1 failed. Retrying in 0.50 seconds...
9
- Attempt 2 failed. Retrying in 1.00 seconds...
10
- Attempt 3 failed. Retrying in 2.00 seconds...
11
- Failed to fetch license after 3 attempts: 500 Server Error: Internal Server Error for url: https://www.dansketaler.dk/tale/katrine-lykke-pedersens-tale-til-unge-om-haab-i-en-coronatid
12
- 17%|█████████████▋ | 820/4725 [02:01<1:05:16, 1.00s/it]Attempt 1 failed. Retrying in 0.50 seconds...
13
- Attempt 2 failed. Retrying in 1.00 seconds...
14
- Attempt 3 failed. Retrying in 2.00 seconds...
15
- Failed to fetch license after 3 attempts: 500 Server Error: Internal Server Error for url: https://www.dansketaler.dk/tale/anastacia-halkens-tale-til-unge-om-haab-i-en-coronatid
16
- 18%|██████████████▏ | 828/4725 [02:07<17:53, 3.63it/s]Attempt 1 failed. Retrying in 0.50 seconds...
17
- Attempt 2 failed. Retrying in 1.00 seconds...
18
- Attempt 3 failed. Retrying in 2.00 seconds...
19
- Failed to fetch license after 3 attempts: 500 Server Error: Internal Server Error for url: https://www.dansketaler.dk/tale/thomas-vinterbergs-tale-ved-modtagelsen-af-oscar-prisen
20
- 22%|█████████████████▋ | 1042/4725 [02:41<10:04, 6.09it/s]Attempt 1 failed. Retrying in 0.50 seconds...
21
- Attempt 2 failed. Retrying in 1.00 seconds...
22
- Attempt 3 failed. Retrying in 2.00 seconds...
23
- Failed to fetch license after 3 attempts: 500 Server Error: Internal Server Error for url: https://www.dansketaler.dk/tale/pernille-vermunds-tale-ved-folketingets-aabningsdebat-2021
24
- 22%|█████████████████▉ | 1059/4725 [02:48<08:22, 7.30it/s]Attempt 1 failed. Retrying in 0.50 seconds...
25
- Attempt 2 failed. Retrying in 1.00 seconds...
26
- Attempt 3 failed. Retrying in 2.00 seconds...
27
- Failed to fetch license after 3 attempts: 500 Server Error: Internal Server Error for url: https://www.dansketaler.dk/tale/pernille-vermunds-tale-ved-nye-borgerliges-aarsmoede-2021
28
- 22%|█████████████████▌ | 1061/4725 [02:52<1:01:08, 1.00s/it]Attempt 1 failed. Retrying in 0.50 seconds...
29
- Attempt 2 failed. Retrying in 1.00 seconds...
30
- Attempt 3 failed. Retrying in 2.00 seconds...
31
- Failed to fetch license after 3 attempts: 500 Server Error: Internal Server Error for url: https://www.dansketaler.dk/tale/mette-thiesens-tale-ved-nye-borgerliges-aarsmoede-2021
32
- 22%|█████████████████▌ | 1062/4725 [02:57<2:00:22, 1.97s/it]Attempt 1 failed. Retrying in 0.50 seconds...
33
- Attempt 2 failed. Retrying in 1.00 seconds...
34
- Attempt 3 failed. Retrying in 2.00 seconds...
35
- Failed to fetch license after 3 attempts: 500 Server Error: Internal Server Error for url: https://www.dansketaler.dk/tale/peter-seier-christensens-tale-ved-nye-borgerliges-aarsmoede-2021
36
- 34%|███████████████████████████▍ | 1617/4725 [04:25<07:09, 7.24it/s]Attempt 1 failed. Retrying in 0.50 seconds...
37
- Attempt 2 failed. Retrying in 1.00 seconds...
38
- Attempt 3 failed. Retrying in 2.00 seconds...
39
- Failed to fetch license after 3 attempts: 500 Server Error: Internal Server Error for url: https://www.dansketaler.dk/tale/silke-ena-svares-tale-ved-demonstrationen-for-born-og-unge
40
- 100%|████████████████████████████████████████████████████████████████████████████████| 4725/4725 [12:43<00:00, 6.19it/s]
41
- 2025-03-29 14:28:02,454 - INFO - Saving 4725 speeches to dataset
42
- 2025-03-29 14:28:03,330 - INFO - Unique licenses:
43
- 2025-03-29 14:28:03,331 - INFO - None
44
- 2025-03-29 14:28:03,331 - INFO - Materialet er beskyttet af ophavsret
45
- 2025-03-29 14:28:03,331 - INFO - cc0
46
- 2025-03-29 14:28:03,331 - INFO - Materialet er beskyttet af ophavsret, da talen ikke er holdt i offentligheden.
47
- 2025-03-29 14:28:03,331 - INFO - Materialet er omfattet af ophavsret
48
- 2025-03-29 14:28:03,331 - INFO - Manuskript taget fra ft.dk. med tilladelse fra udgiver.
49
- 2025-03-29 14:28:03,331 - INFO - Materialet et beskyttet af ophavsret
50
- 2025-03-29 14:28:03,331 - INFO - Manuskript taget fra ft.dk med tilladelse fra udgiver.
51
- 2025-03-29 14:28:03,331 - INFO - Materialet er beskyttet af ophavsret
52
- 2025-03-29 14:28:03,331 - INFO - Materialet er beskyttet af ophavsret
53
- 2025-03-29 14:28:03,461 - INFO - Removed 2063 documents without a cc0 license
54
- 2025-03-29 14:28:03,541 - INFO - Removed 0 duplicate ids
55
- 2025-03-29 14:28:03,549 - INFO - Removed 2 rows with empty text
56
- 2025-03-29 14:28:03,631 - INFO - Removed 2 rows with duplicate text
57
- Creating parquet from Arrow format: 100%|██████████████████████████████████████████████████| 3/3 [00:00<00:00, 11.33ba/s]
58
- 2025-06-24 13:03:05,424 - INFO - Found 5103 speeches
59
- 2025-06-24 13:04:19,375 - INFO - Attempt 1 failed. Retrying in 0.50 seconds...
60
- 2025-06-24 13:04:29,734 - INFO - Attempt 1 failed. Retrying in 0.50 seconds...
61
- 2025-06-24 13:04:30,613 - INFO - Attempt 2 failed. Retrying in 1.00 seconds...
62
- 2025-06-24 13:04:31,856 - INFO - Attempt 3 failed. Retrying in 2.00 seconds...
63
- 2025-06-24 13:04:34,098 - INFO - Failed to fetch license after 3 attempts: 500 Server Error: Internal Server Error for url: https://www.dansketaler.dk/tale/niels-hoejlund-pedersens-translokationstale-2020
64
- 2025-06-24 13:05:10,223 - INFO - Attempt 1 failed. Retrying in 0.50 seconds...
65
- 2025-06-24 13:05:11,113 - INFO - Attempt 2 failed. Retrying in 1.00 seconds...
66
- 2025-06-24 13:05:12,575 - INFO - Attempt 3 failed. Retrying in 2.00 seconds...
67
- 2025-06-24 13:05:14,814 - INFO - Failed to fetch license after 3 attempts: 500 Server Error: Internal Server Error for url: https://www.dansketaler.dk/tale/katrine-lykke-pedersens-tale-til-unge-om-haab-i-en-coronatid
68
- 2025-06-24 13:05:15,208 - INFO - Attempt 1 failed. Retrying in 0.50 seconds...
69
- 2025-06-24 13:05:15,922 - INFO - Attempt 2 failed. Retrying in 1.00 seconds...
70
- 2025-06-24 13:05:17,117 - INFO - Attempt 3 failed. Retrying in 2.00 seconds...
71
- 2025-06-24 13:05:19,583 - INFO - Failed to fetch license after 3 attempts: 500 Server Error: Internal Server Error for url: https://www.dansketaler.dk/tale/anastacia-halkens-tale-til-unge-om-haab-i-en-coronatid
72
- 2025-06-24 13:05:20,875 - INFO - Attempt 1 failed. Retrying in 0.50 seconds...
73
- 2025-06-24 13:05:21,619 - INFO - Attempt 2 failed. Retrying in 1.00 seconds...
74
- 2025-06-24 13:05:22,844 - INFO - Attempt 3 failed. Retrying in 2.00 seconds...
75
- 2025-06-24 13:05:25,074 - INFO - Failed to fetch license after 3 attempts: 500 Server Error: Internal Server Error for url: https://www.dansketaler.dk/tale/thomas-vinterbergs-tale-ved-modtagelsen-af-oscar-prisen
76
- 2025-06-24 13:06:01,599 - INFO - Attempt 1 failed. Retrying in 0.50 seconds...
77
- 2025-06-24 13:06:02,313 - INFO - Attempt 2 failed. Retrying in 1.00 seconds...
78
- 2025-06-24 13:06:03,588 - INFO - Attempt 3 failed. Retrying in 2.00 seconds...
79
- 2025-06-24 13:06:05,817 - INFO - Failed to fetch license after 3 attempts: 500 Server Error: Internal Server Error for url: https://www.dansketaler.dk/tale/pernille-vermunds-tale-ved-folketingets-aabningsdebat-2021
80
- 2025-06-24 13:06:08,990 - INFO - Attempt 1 failed. Retrying in 0.50 seconds...
81
- 2025-06-24 13:06:09,675 - INFO - Attempt 2 failed. Retrying in 1.00 seconds...
82
- 2025-06-24 13:06:10,912 - INFO - Attempt 3 failed. Retrying in 2.00 seconds...
83
- 2025-06-24 13:06:13,120 - INFO - Failed to fetch license after 3 attempts: 500 Server Error: Internal Server Error for url: https://www.dansketaler.dk/tale/pernille-vermunds-tale-ved-nye-borgerliges-aarsmoede-2021
84
- 2025-06-24 13:06:13,512 - INFO - Attempt 1 failed. Retrying in 0.50 seconds...
85
- 2025-06-24 13:06:14,230 - INFO - Attempt 2 failed. Retrying in 1.00 seconds...
86
- 2025-06-24 13:06:15,462 - INFO - Attempt 3 failed. Retrying in 2.00 seconds...
87
- 2025-06-24 13:06:17,720 - INFO - Failed to fetch license after 3 attempts: 500 Server Error: Internal Server Error for url: https://www.dansketaler.dk/tale/mette-thiesens-tale-ved-nye-borgerliges-aarsmoede-2021
88
- 2025-06-24 13:06:17,920 - INFO - Attempt 1 failed. Retrying in 0.50 seconds...
89
- 2025-06-24 13:06:18,656 - INFO - Attempt 2 failed. Retrying in 1.00 seconds...
90
- 2025-06-24 13:06:19,902 - INFO - Attempt 3 failed. Retrying in 2.00 seconds...
91
- 2025-06-24 13:06:22,132 - INFO - Failed to fetch license after 3 attempts: 500 Server Error: Internal Server Error for url: https://www.dansketaler.dk/tale/peter-seier-christensens-tale-ved-nye-borgerliges-aarsmoede-2021
92
- 2025-06-24 13:07:56,628 - INFO - Attempt 1 failed. Retrying in 0.50 seconds...
93
- 2025-06-24 13:07:57,353 - INFO - Attempt 2 failed. Retrying in 1.00 seconds...
94
- 2025-06-24 13:07:58,586 - INFO - Attempt 3 failed. Retrying in 2.00 seconds...
95
- 2025-06-24 13:08:00,850 - INFO - Failed to fetch license after 3 attempts: 500 Server Error: Internal Server Error for url: https://www.dansketaler.dk/tale/silke-ena-svares-tale-ved-demonstrationen-for-born-og-unge
96
- 2025-06-24 13:19:38,142 - INFO - Saving 5103 speeches to dataset
97
- 2025-06-24 13:19:38,322 - INFO - Unique licenses:
98
- 2025-06-24 13:19:38,322 - INFO - None
99
- 2025-06-24 13:19:38,322 - INFO - cc0
100
- 2025-06-24 13:19:38,322 - INFO - Manuskript taget fra ft.dk. med tilladelse fra udgiver.
101
- 2025-06-24 13:19:38,322 - INFO - Manuskript tilsendt af taler og udgivet af Danske Taler med tilladelse fra taler.
102
- 2025-06-24 13:19:38,322 - INFO - Materialet er beskyttet af ophavsret, da talen ikke er holdt i offentligheden.
103
- 2025-06-24 13:19:38,322 - INFO - Materialet er beskyttet af ophavsret
104
- 2025-06-24 13:19:38,322 - INFO - Materialet er beskyttet af ophavsret
105
- 2025-06-24 13:19:38,322 - INFO - Materialet et beskyttet af ophavsret
106
- 2025-06-24 13:19:38,322 - INFO - Manuskript taget fra ft.dk med tilladelse fra udgiver.
107
- 2025-06-24 13:19:38,322 - INFO - Materialet er beskyttet af ophavsret
108
- 2025-06-24 13:19:38,322 - INFO - Materialet er omfattet af ophavsret
109
- 2025-06-24 13:19:38,325 - INFO - Removed 2188 documents without a cc0 license
110
- 2025-06-24 13:19:38,326 - INFO - Removed 0 duplicate ids
111
- 2025-06-24 13:19:38,332 - INFO - Removed 1 rows with empty text
112
- 2025-06-24 13:19:38,345 - INFO - Removed 2 rows with duplicate text2025-06-24 14:44:36,089 - INFO - Downloading speeches and saving to /Users/kristianjensen/Documents/danish-dynaword/data/danske-taler/tmp/danske-taler-all.parquet
113
- 2025-06-24 14:44:36,089 - INFO - Fetching all speeches from Danske Taler API
114
- 2025-06-24 14:45:43,887 - INFO - Found 5107 speeches
115
- 2025-06-24 14:46:53,929 - INFO - Attempt 1 failed. Retrying in 0.50 seconds...
116
- 2025-06-24 14:46:54,627 - INFO - Attempt 2 failed. Retrying in 1.00 seconds...
117
- 2025-06-24 14:46:55,824 - INFO - Attempt 3 failed. Retrying in 2.00 seconds...
118
- 2025-06-24 14:46:58,015 - INFO - Failed to fetch license after 3 attempts: 500 Server Error: Internal Server Error for url: https://www.dansketaler.dk/tale/niels-hoejlund-pedersens-translokationstale-2020
119
- 2025-06-24 14:47:34,505 - INFO - Attempt 1 failed. Retrying in 0.50 seconds...
120
- 2025-06-24 14:47:35,215 - INFO - Attempt 2 failed. Retrying in 1.00 seconds...
121
- 2025-06-24 14:47:36,514 - INFO - Attempt 3 failed. Retrying in 2.00 seconds...
122
- 2025-06-24 14:47:38,725 - INFO - Failed to fetch license after 3 attempts: 500 Server Error: Internal Server Error for url: https://www.dansketaler.dk/tale/katrine-lykke-pedersens-tale-til-unge-om-haab-i-en-coronatid
123
- 2025-06-24 14:47:39,093 - INFO - Attempt 1 failed. Retrying in 0.50 seconds...
124
- 2025-06-24 14:47:39,798 - INFO - Attempt 2 failed. Retrying in 1.00 seconds...
125
- 2025-06-24 14:47:41,013 - INFO - Attempt 3 failed. Retrying in 2.00 seconds...
126
- 2025-06-24 14:47:43,253 - INFO - Failed to fetch license after 3 attempts: 500 Server Error: Internal Server Error for url: https://www.dansketaler.dk/tale/anastacia-halkens-tale-til-unge-om-haab-i-en-coronatid
127
- 2025-06-24 14:47:44,528 - INFO - Attempt 1 failed. Retrying in 0.50 seconds...
128
- 2025-06-24 14:47:45,272 - INFO - Attempt 2 failed. Retrying in 1.00 seconds...
129
- 2025-06-24 14:47:46,492 - INFO - Attempt 3 failed. Retrying in 2.00 seconds...
130
- 2025-06-24 14:47:48,691 - INFO - Failed to fetch license after 3 attempts: 500 Server Error: Internal Server Error for url: https://www.dansketaler.dk/tale/thomas-vinterbergs-tale-ved-modtagelsen-af-oscar-prisen
131
- 2025-06-24 14:48:26,340 - INFO - Attempt 1 failed. Retrying in 0.50 seconds...
132
- 2025-06-24 14:48:27,037 - INFO - Attempt 2 failed. Retrying in 1.00 seconds...
133
- 2025-06-24 14:48:28,248 - INFO - Attempt 3 failed. Retrying in 2.00 seconds...
134
- 2025-06-24 14:48:30,496 - INFO - Failed to fetch license after 3 attempts: 500 Server Error: Internal Server Error for url: https://www.dansketaler.dk/tale/pernille-vermunds-tale-ved-folketingets-aabningsdebat-2021
135
- 2025-06-24 14:48:33,382 - INFO - Attempt 1 failed. Retrying in 0.50 seconds...
136
- 2025-06-24 14:48:34,125 - INFO - Attempt 2 failed. Retrying in 1.00 seconds...
137
- 2025-06-24 14:48:35,339 - INFO - Attempt 3 failed. Retrying in 2.00 seconds...
138
- 2025-06-24 14:48:37,570 - INFO - Failed to fetch license after 3 attempts: 500 Server Error: Internal Server Error for url: https://www.dansketaler.dk/tale/pernille-vermunds-tale-ved-nye-borgerliges-aarsmoede-2021
139
- 2025-06-24 14:48:37,940 - INFO - Attempt 1 failed. Retrying in 0.50 seconds...
140
- 2025-06-24 14:48:38,663 - INFO - Attempt 2 failed. Retrying in 1.00 seconds...
141
- 2025-06-24 14:48:39,884 - INFO - Attempt 3 failed. Retrying in 2.00 seconds...
142
- 2025-06-24 14:48:42,101 - INFO - Failed to fetch license after 3 attempts: 500 Server Error: Internal Server Error for url: https://www.dansketaler.dk/tale/mette-thiesens-tale-ved-nye-borgerliges-aarsmoede-2021
143
- 2025-06-24 14:48:42,357 - INFO - Attempt 1 failed. Retrying in 0.50 seconds...
144
- 2025-06-24 14:48:43,097 - INFO - Attempt 2 failed. Retrying in 1.00 seconds...
145
- 2025-06-24 14:48:44,340 - INFO - Attempt 3 failed. Retrying in 2.00 seconds...
146
- 2025-06-24 14:48:46,560 - INFO - Failed to fetch license after 3 attempts: 500 Server Error: Internal Server Error for url: https://www.dansketaler.dk/tale/peter-seier-christensens-tale-ved-nye-borgerliges-aarsmoede-2021
147
- 2025-06-24 14:50:22,691 - INFO - Attempt 1 failed. Retrying in 0.50 seconds...
148
- 2025-06-24 14:50:23,446 - INFO - Attempt 2 failed. Retrying in 1.00 seconds...
149
- 2025-06-24 14:50:24,662 - INFO - Attempt 3 failed. Retrying in 2.00 seconds...
150
- 2025-06-24 14:50:26,911 - INFO - Failed to fetch license after 3 attempts: 500 Server Error: Internal Server Error for url: https://www.dansketaler.dk/tale/silke-ena-svares-tale-ved-demonstrationen-for-born-og-unge
151
- 2025-06-24 15:02:20,338 - INFO - Saving 5107 speeches to dataset
152
- 2025-06-24 15:02:20,503 - INFO - Unique licenses:
153
- 2025-06-24 15:02:20,503 - INFO - None
154
- 2025-06-24 15:02:20,503 - INFO - cc0
155
- 2025-06-24 15:02:20,503 - INFO - Materialet et beskyttet af ophavsret
156
- 2025-06-24 15:02:20,503 - INFO - Materialet er beskyttet af ophavsret
157
- 2025-06-24 15:02:20,503 - INFO - Materialet er omfattet af ophavsret
158
- 2025-06-24 15:02:20,503 - INFO - Manuskript taget fra ft.dk. med tilladelse fra udgiver.
159
- 2025-06-24 15:02:20,503 - INFO - Materialet er beskyttet af ophavsret
160
- 2025-06-24 15:02:20,503 - INFO - Manuskript taget fra ft.dk med tilladelse fra udgiver.
161
- 2025-06-24 15:02:20,503 - INFO - Materialet er beskyttet af ophavsret
162
- 2025-06-24 15:02:20,503 - INFO - Materialet er beskyttet af ophavsret, da talen ikke er holdt i offentligheden.
163
- 2025-06-24 15:02:20,503 - INFO - Manuskript tilsendt af taler og udgivet af Danske Taler med tilladelse fra taler.
164
- 2025-06-24 15:02:20,506 - INFO - Removed 2191 documents without a cc0 license
165
- 2025-06-24 15:02:20,508 - INFO - Removed 0 duplicate ids
166
- 2025-06-24 15:02:20,516 - INFO - Removed 2 rows with empty text
167
- 2025-06-24 15:02:20,529 - INFO - Removed 2 rows with duplicate text
data/danske-taler/danske-taler.md DELETED
@@ -1,135 +0,0 @@
1
- ---
2
- pretty_name: Danske Taler
3
- language:
4
- - da
5
- license: cc0-1.0
6
- license_name: CC-0
7
- task_categories:
8
- - text-generation
9
- - fill-mask
10
- task_ids:
11
- - language-modeling
12
- domains:
13
- - Conversation
14
- - Speeches
15
- - Spoken
16
- ---
17
-
18
- # Dataset Card for Danske Taler
19
-
20
- <!-- START-SHORT DESCRIPTION -->
21
- Danish Speeches from [dansketaler.dk](https://www.dansketaler.dk).
22
- <!-- END-SHORT DESCRIPTION -->
23
-
24
-
25
- The database dansketaler.dk is managed by Danske Taler, an independent institution that, in addition to managing the database, carries out cultural
26
- and democratic projects based on speeches.
27
- Danske Taler states that its goals are to preserve our cultural heritage and to promote active citizenship and democratic confidence through its work.
28
- Additionally, Danske Taler provides data to a number of online resources, including: lex.dk, sprogteknologi.dk, and ordnet.dk.
29
-
30
- The goal of the dataset is to collect historical and timely speeches and make them available to the public.
31
-
32
- Learn more about Danske Taler by reading their [about us](https://www.dansketaler.dk/om-os) page.
33
-
34
- > NOTE: Danske Taler also collects [sermons](https://www.dansketaler.dk/praedikener), but these are not included in this dataset.
35
-
36
- ## Dataset Description
37
-
38
-
39
- <!-- START-DESC-STATS -->
40
- - **Number of samples**: 2.91K
41
- - **Number of tokens (Llama 3)**: 8.72M
42
- - **Average document length in tokens (min, max)**: 3.00K (129, 53.40K)
43
- <!-- END-DESC-STATS -->
44
-
45
-
46
- ## Dataset Structure
47
- An example from the dataset looks as follows.
48
-
49
-
50
- <!-- START-SAMPLE -->
51
- ```py
52
- {
53
- "id": "danske-taler_281",
54
- "text": "Tyske landsmænd og -kvinder !\nSyv år er kort tid, en brøkdel af en enkel menneskelig normaltilværels[...]",
55
- "source": "danske-taler",
56
- "added": "2025-06-24",
57
- "created": "1940-01-30, 1940-01-30",
58
- "token_count": 3020
59
- }
60
- ```
61
-
62
- ### Data Fields
63
-
64
- An entry in the dataset consists of the following fields:
65
-
66
- - `id` (`str`): A unique identifier for each document.
67
- - `text` (`str`): The content of the document.
68
- - `source` (`str`): The source of the document (see [Source Data](#source-data)).
69
- - `added` (`str`): The date when the document was added to this collection.
70
- - `created` (`str`): A date range for when the document was originally created.
71
- - `token_count` (`int`): The number of tokens in the sample, computed using the Llama 3 8B tokenizer.
72
- <!-- END-SAMPLE -->
73
-
74
-
75
- ### Dataset Statistics
76
-
77
- <!-- START-DATASET PLOTS -->
78
- <p align="center">
79
- <img src="./images/dist_document_length.png" width="600" style="margin-right: 10px;" />
80
- </p>
81
- <!-- END-DATASET PLOTS -->
82
-
83
-
84
-
85
- ## Additional Information
86
-
87
-
88
- ### Dataset Collection Process
89
-
90
- This dataset was collected using the publicly available [API](https://www.dansketaler.dk/api/v1).
91
-
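The collection script is not reproduced here, but the log earlier in this diff records the flow: fetch all speeches from the API, then fetch each speech's license page with three retry attempts and a doubling backoff (0.5 s, 1 s, 2 s). A minimal sketch of that flow, assuming a paginated JSON endpoint and illustrative field names (`speeches`, `page`):

```py
import time
import requests

API_BASE = "https://www.dansketaler.dk/api/v1"

def fetch_with_retry(url: str, retries: int = 3) -> requests.Response:
    """Retry with the 0.5 s / 1 s / 2 s backoff visible in the collection log."""
    delay = 0.5
    for attempt in range(1, retries + 1):
        try:
            response = requests.get(url, timeout=30)
            response.raise_for_status()
            return response
        except requests.RequestException:
            if attempt == retries:
                raise
            print(f"Attempt {attempt} failed. Retrying in {delay:.2f} seconds...")
            time.sleep(delay)
            delay *= 2
    raise RuntimeError("unreachable")

def fetch_all_speeches() -> list[dict]:
    """Page through the API until an empty page is returned."""
    speeches: list[dict] = []
    page = 1
    while True:
        # The endpoint path and response field names are assumptions, not the confirmed schema.
        data = fetch_with_retry(f"{API_BASE}/speeches?page={page}").json()
        items = data.get("speeches", [])
        if not items:
            return speeches
        speeches.extend(items)
        page += 1
```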
92
- ### Quality Assurance
93
- We check for and remove exact duplicates, empty texts, and duplicate ids after the initial download. We additionally check whether the articles contain any HTML.
94
-
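A hedged sketch of those checks; the authoritative logic lived in `create.py`, so the field names and the HTML heuristic below are illustrative:

```py
import re

def quality_filter(rows: list[dict]) -> list[dict]:
    """Drop empty texts, duplicate ids, and exact duplicate texts; flag HTML remnants."""
    seen_ids: set[str] = set()
    seen_texts: set[str] = set()
    kept = []
    for row in rows:
        text = row["text"].strip()
        if not text:
            continue  # drop rows with empty text
        if row["id"] in seen_ids or text in seen_texts:
            continue  # drop duplicate ids and exact duplicate texts
        if re.search(r"</?[a-zA-Z][^>]*>", text):
            print(f"HTML-like markup found in {row['id']}")  # flag for inspection
        seen_ids.add(row["id"])
        seen_texts.add(text)
        kept.append(row)
    return kept
```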
95
- ## Opportunities for Improvement
96
-
97
- The dataset could be updated to include the latest available speeches.
98
-
99
- We consider the quality of the current collection high with a low chance of
100
- incorrect formatting,
101
- spelling errors,
102
- empty documents or
103
- misformatted segments.
104
- This assessment stems from the quality assurance checks, the source of the documents, and subjective inspection.
105
-
106
- ### License Information
107
- Since the license information isn't available through the API, we collect this data directly from the webpage of each article under the header
108
- "Ophavsret".
109
-
110
- For speeches where it is noted that *"Materialet er fri af ophavsret"* (The material is in the public domain) or similar, we assign a `cc0` license.
111
-
112
- Such an example can be seen here:
113
-
114
- > **Ophavsret**
115
- >
116
- > Materialet er fri af ophavsret. Taler, som er holdt i offentligheden, er ikke omfattet af ophavsret (Jf. ophavsretslovens § 26 og 32).
117
- > Det betyder, at når en tale er indgået i Danske Talers database, kan den bruges af tredjeparter, fx til undervisning eller forskning.
118
- >
119
- > *source: [Ursula von der Leyens tale om europæisk forsvar og sikkerhed på Hærens Officersskole](https://www.dansketaler.dk/tale/tale-om-europaeisk-forsvar-og-sikkerhed-pa-haerens-officersskole)*
120
-
121
- Speeches without this mention are removed. One such example:
122
-
123
- > **Ophavsret**
124
- >
125
- > Materialet er beskyttet af ophavsret
126
- >
127
- > *Source: [Christina Egelunds tale ved Aarhus Universitets årsfest](https://www.dansketaler.dk/tale/christina-egelunds-tale-ved-aarhus-universitets-arsfest)*
128
-
129
- We manually checked the unique set of license descriptions to see if any were open licenses that weren't included in the current criteria.
130
-
131
- For specific filtering criteria see the `create.py` script.
132
-
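A hedged sketch of that assignment, using the two license headers quoted above; the exact matching criteria are those in the removed `create.py`:

```py
def assign_license(ophavsret_text: str | None) -> str | None:
    """Map the text under the "Ophavsret" header of a speech page to a license tag."""
    if ophavsret_text and ophavsret_text.strip().startswith("Materialet er fri af ophavsret"):
        return "cc0"
    return None  # copyrighted or unknown; such speeches are removed

# Illustrative usage with the two headers quoted above:
assert assign_license("Materialet er fri af ophavsret. Taler, som er holdt i offentligheden, ...") == "cc0"
assert assign_license("Materialet er beskyttet af ophavsret") is None
```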
133
- ### Citation Information
134
-
135
- No citation is applicable for this work. We recommend citing the Hugging Face repository.
data/danske-taler/danske-taler.parquet DELETED
@@ -1,3 +0,0 @@
1
- version https://git-lfs.github.com/spec/v1
2
- oid sha256:d007e606854f868febcf61a513302f7299ff35222fe9de487d17b9baaaedf248
3
- size 16089529
 
 
 
 
data/danske-taler/descriptive_stats.json DELETED
@@ -1,9 +0,0 @@
1
- {
2
- "number_of_samples": 2912,
3
- "number_of_tokens": 8723951,
4
- "min_length_tokens": 129,
5
- "max_length_tokens": 53401,
6
- "number_of_characters": 26616908,
7
- "min_length_characters": 388,
8
- "max_length_characters": 155429
9
- }
data/danske-taler/images/dist_document_length.png DELETED

Git LFS Details

  • SHA256: 8a6cc3946783f2d8e4725e50acc17b4ffbc84c38bb521253a5c2dca9087aa34d
  • Pointer size: 131 Bytes
  • Size of remote file: 553 kB
data/depbank/depbank.md CHANGED
@@ -1,115 +1,51 @@
1
  ---
2
  pretty_name: Danish Dependency Treebank
3
  language:
4
- - da
5
  license: cc-by-sa-4.0
6
- license_name: CC-BY-SA 4.0
7
  size_categories:
8
- - 1-10k
9
  task_categories:
10
- - text-generation
11
- - fill-mask
12
  task_ids:
13
- - language-modeling
14
- source_datasets:
15
- - danish-foundation-models/danish-gigaword
16
- domains:
17
- - Other
18
  ---
19
-
20
  # Dataset Card for Danish Dependency Treebank
21
-
22
- <!-- START-SHORT DESCRIPTION -->
23
- The Danish subsection of the [Universal Dependencies Treebank](https://github.com/UniversalDependencies/UD_Danish-DDT).
24
- <!-- END-SHORT DESCRIPTION -->
25
-
26
-
27
- The Danish UD treebank has been converted from the Danish Dependency Treebank (Buch-Kromann, 2003) into Universal Dependencies (UD). It consists of 5,512 sentences (100k words). The Danish source texts and the Danish part-of-speech tags were created by the PAROLE-DK project (Keson 1998) by the Danish Society for Language and Literature.
28
-
29
- While the dataset was initially intended as a richly annotated resource, this corpus only uses the raw text.
30
-
31
  ## Dataset Description
32
-
33
-
34
- <!-- START-DESC-STATS -->
35
- - **Number of samples**: 536
36
- - **Number of tokens (Llama 3)**: 185.45K
37
- - **Average document length in tokens (min, max)**: 345.99626865671644 (261, 517)
38
- <!-- END-DESC-STATS -->
39
-
40
-
41
-
42
- ## Dataset Structure
43
  An example from the dataset looks as follows.
44
-
45
-
46
- <!-- START-SAMPLE -->
47
- ```py
48
  {
49
- "id": "depbank_0375",
50
- "text": "\nH.L. Hansen var en usædvanmlig og frodig personlighed. Han skabte \nglæde og munterhed omkring sig o[...]",
51
- "source": "depbank",
52
- "added": "2024-05-16",
53
- "created": "2000-01-01, 2022-01-01",
54
- "token_count": 389
 
 
 
 
55
  }
56
  ```
57
 
58
- ### Data Fields
59
-
60
- An entry in the dataset consists of the following fields:
61
 
62
- - `id` (`str`): An unique identifier for each document.
63
- - `text`(`str`): The content of the document.
64
- - `source` (`str`): The source of the document (see [Source Data](#source-data)).
65
- - `added` (`str`): An date for when the document was added to this collection.
66
- - `created` (`str`): An date range for when the document was originally created.
67
- - `token_count` (`int`): The number of tokens in the sample computed using the Llama 8B tokenizer
68
- <!-- END-SAMPLE -->
69
 
70
-
71
- ### Dataset Statistics
72
-
73
- <!-- START-DATASET PLOTS -->
74
- <p align="center">
75
- <img src="./images/dist_document_length.png" width="600" style="margin-right: 10px;" />
76
  </p>
77
- <!-- END-DATASET PLOTS -->
78
-
79
-
80
-
81
- ## Additional Information
82
-
83
- <!-- TODO:
84
- Add issue on:
85
-
86
- Potential improvements for depbank:
87
- 1) Pull directly from depbank
88
- 2) Compute texts into documents (seems like that is already done)
89
- 3) Add synthetic data instruction dataset
90
- - NER: What are the following names in this sentence
91
- - json output, html annotation, list at the end
92
- - POS:
93
- - Extract all POS-tags from the following sentence
94
- - Find all NOUNS in the following text
95
- - What POS tag does the ..
96
- - Tokenization:
97
- - split the following text into tokens
98
- - ...
99
- -->
100
-
101
- ### Citation Information
102
-
103
- This dataset was initially published as part of the [Danish gigaword](https://huggingface.co/danish-foundation-models). We recommend that you cite and reference it if you use this dataset:
104
-
105
- > Derczynski, L., Ciosici, M. R., et al. (2021). The Danish Gigaword Corpus. In Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa 2021).
106
-
107
- ```bibtex
108
- @inproceedings{dagw,
109
- title = {{The Danish Gigaword Corpus}},
110
- author = {Leon Derczynski and Manuel R. Ciosici and Rebekah Baglini and Morten H. Christiansen and Jacob Aarup Dalsgaard and Riccardo Fusaroli and Peter Juel Henrichsen and Rasmus Hvingelby and Andreas Kirkedal and Alex Speed Kjeldsen and Claus Ladefoged and Finn Årup Nielsen and Jens Madsen and Malte Lau Petersen and Jonathan Hvithamar Rystrøm and Daniel Varab},
111
- year = 2021,
112
- booktitle = {Proceedings of the 23rd Nordic Conference on Computational Linguistics},
113
- publisher = {NEALT}
114
- }
115
- ```
 
1
  ---
2
  pretty_name: Danish Dependency Treebank
3
  language:
4
+ - da
5
  license: cc-by-sa-4.0
6
+ license_name: Creative Commons Attribution Share Alike 4.0
7
  size_categories:
8
+ - 1-10k
9
  task_categories:
10
+ - text-generation
11
+ - fill-mask
12
  task_ids:
13
+ - language-modeling
 
 
 
 
14
  ---
 
15
  # Dataset Card for Danish Dependency Treebank
 
 
 
 
 
 
 
 
 
 
16
  ## Dataset Description
17
+ - **Number of records:** 536
18
+ - **Languages:** Danish
19
+ ## Dataset Structure
 
 
 
 
 
 
 
 
20
  An example from the dataset looks as follows.
21
+ ```py
 
 
 
22
  {
23
+ 'text': 'H.L. Hansen var en usædvanmlig og frodig personlig',
24
+ 'source': 'depbank',
25
+ 'id': 'depbank_0375',
26
+ 'added': '2024-05-16',
27
+ 'created': '2000-01-01, 2022-01-01',
28
+ 'metadata': {
29
+ 'domain': 'Other',
30
+ 'license': 'Attribution-ShareAlike 4.0 International',
31
+ 'source-pretty': 'Danish Dependency Treebank'
32
+ }
33
  }
34
  ```
35
 
36
+ ## Data Fields
 
 
37
 
38
+ - **id**: source-specific identifier.
39
+ - **text**: textual content of the document.
40
+ - **source**: source of the data.
41
+ - **added**: timestamp when the data was added to this collection.
42
+ - **created**": timestamp when original document was created (best-guess if not available)
43
+ - **metadata**: source-specific metadata.
 
44
 
45
+ ## License Information
46
+ <details>
47
+ <summary>Creative Commons Attribution Share Alike 4.0</summary>
48
+ <p>
49
+ Attribution-ShareAlike 4.0 International
 
50
  </p>
51
+ </details>
data/depbank/depbank.parquet CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:86febe315dae1089432da27d7b0c96a9a9bc0920d030563a35680416ac231e6f
3
- size 392289
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:3d4172e2ab4d7256ca5b76ad45b4d7326616e6679642056fdef20c5e3a8b1c62
3
+ size 392216
data/depbank/descriptive_stats.json DELETED
@@ -1,9 +0,0 @@
1
- {
2
- "number_of_samples": 536,
3
- "number_of_tokens": 185454,
4
- "min_length_tokens": 261,
5
- "max_length_tokens": 517,
6
- "number_of_characters": 546130,
7
- "min_length_characters": 773,
8
- "max_length_characters": 1398
9
- }
data/depbank/images/dist_document_length.png DELETED

Git LFS Details

  • SHA256: d61b39a37be40d593e91cca7127f8ee3c3a3a1dcbad52609ac61e4c7ae59a798
  • Pointer size: 131 Bytes
  • Size of remote file: 539 kB
data/domsdatabasen/create.py DELETED
@@ -1,344 +0,0 @@
1
- # /// script
2
- # requires-python = ">=3.12"
3
- # dependencies = [
4
- # "datasets",
5
- # "dynaword",
6
- # "marker-pdf",
7
- # "requests",
8
- # "torch",
9
- # ]
10
- #
11
- # [tool.uv.sources]
12
- # dynaword = { git = "https://huggingface.co/datasets/danish-foundation-models/danish-dynaword" }
13
- # ///
14
-
15
- """
16
- Script for downloading and processing the Domsdatabasen.dk site.
17
-
18
- Note: To run this script, you need to set `GIT_LFS_SKIP_SMUDGE=1` to be able to install dynaword:
19
-
20
- ```bash
21
- GIT_LFS_SKIP_SMUDGE=1 uv run data/domsdatabasen/create.py
22
- ```
23
-
24
- Note: This script is designed to be run using a GPU.
25
- """
26
-
27
- import atexit
28
- import logging
29
- import os
30
- import csv
31
- import time
32
- from typing import cast
33
-
34
- import torch
35
-
36
- import gc
37
- import requests
38
- import torch.multiprocessing as mp
39
- from pathlib import Path
40
- from datetime import date, datetime
41
-
42
- from datasets import Dataset, concatenate_datasets
43
- from marker.converters.pdf import PdfConverter
44
- from marker.models import create_model_dict
45
- from marker.output import text_from_rendered
46
-
47
- from dynaword.process_dataset import (
48
- add_token_count,
49
- ensure_column_order,
50
- remove_duplicate_text,
51
- remove_empty_texts,
52
- )
53
-
54
- logger = logging.getLogger(__name__)
55
-
56
- # ----------------- Config ------------------
57
-
58
- PDF_DIR = Path(__file__).parent / "pdfs"
59
- LOG_FILE = Path(__file__).parent / "progress_log.csv"
60
- PARQUET_FILE = Path(__file__).parent / "domsdatabasen.parquet"
61
- MAX_WORKERS = 10
62
- RETRY_COUNT = 3
63
- RETRY_DELAY = 2
64
-
65
- # ----------------- Headers ------------------
66
-
67
- HEADERS = {
68
- "Accept": "application/json, text/plain, */*",
69
- "Accept-Encoding": "gzip, deflate, br, zstd",
70
- "Accept-Language": "en-GB,en-US;q=0.9,en;q=0.8",
71
- "Connection": "keep-alive",
72
- "Content-Type": "application/json",
73
- }
74
-
75
-
76
- def init_csv():
77
- if not LOG_FILE.exists():
78
- with open(LOG_FILE, "w", newline="", encoding="utf-8") as f:
79
- writer = csv.DictWriter(
80
- f,
81
- fieldnames=["document_id", "pdf_downloaded", "text_extracted", "error"],
82
- )
83
- writer.writeheader()
84
-
85
-
86
- def append_log(document_id: str, pdf: bool, text: bool, error: str = ""):
87
- with open(LOG_FILE, "a", newline="", encoding="utf-8") as f:
88
- writer = csv.DictWriter(
89
- f, fieldnames=["document_id", "pdf_downloaded", "text_extracted", "error"]
90
- )
91
- writer.writerow(
92
- {
93
- "document_id": document_id,
94
- "pdf_downloaded": int(pdf),
95
- "text_extracted": int(text),
96
- "error": error,
97
- }
98
- )
99
-
100
-
101
- def load_existing_ids() -> set:
102
- if not PARQUET_FILE.exists():
103
- return set()
104
- ds = Dataset.from_parquet(str(PARQUET_FILE))
105
- ds = cast(Dataset, ds)
106
- return set(ds["id"])
107
-
108
-
109
- # ----------------- Retry Helpers ------------------
110
-
111
-
112
- def retry(func, *args, retries=RETRY_COUNT, delay=RETRY_DELAY, **kwargs):
113
- for attempt in range(retries):
114
- try:
115
- return func(*args, **kwargs)
116
- except Exception as e:
117
- logger.warning(f"⚠️ Retry {attempt + 1}/{retries} failed: {e}")
118
- time.sleep(delay)
119
- raise RuntimeError(f"❌ All retries failed for {func.__name__}({args})")
120
-
121
-
122
- # ----------------- PDF Download ------------------
123
-
124
-
125
- def download_pdf(document: dict) -> Path | None:
126
- document_id = document["id"]
127
- out_path = PDF_DIR / f"document_{document_id}.pdf"
128
- if out_path.exists():
129
- logger.info(f"⏭️ Skipped PDF (exists): {document_id}")
130
- return out_path
131
-
132
- url = f"https://domsdatabasen.dk/webapi/api/Case/document/download/{document_id}"
133
- try:
134
- response = retry(requests.get, url, headers=HEADERS)
135
- if response.status_code == 200:
136
- with open(out_path, "wb") as f:
137
- f.write(response.content)
138
- logger.info(f"✅ Downloaded PDF: {document_id}")
139
- append_log(document_id, pdf=True, text=False)
140
- return out_path
141
- else:
142
- raise RuntimeError(f"Download failed: {response.status_code}")
143
- except Exception as e:
144
- append_log(document_id, pdf=False, text=False, error=str(e))
145
- return None
146
-
147
-
148
- # ----------------- Parallel Extract Text ------------------
149
-
150
-
151
- def worker_init():
152
- model_dict = create_model_dict()
153
-
154
- global model_refs
155
- model_refs = model_dict
156
-
157
- # Ensure we clean up the model references on exit
158
- atexit.register(worker_exit)
159
-
160
-
161
- def worker_exit():
162
- global model_refs
163
- try:
164
- del model_refs
165
- except Exception:
166
- pass
167
-
168
-
169
- def process_document(document: dict) -> dict | None:
170
- # from marker.output import text_from_rendered
171
- # from marker.converters.pdf import PdfConverter
172
-
173
- torch.set_num_threads(2)
174
-
175
- document_id = document["id"]
176
- verdict_date = document.get("verdictDateTime")
177
- pdf_path = PDF_DIR / f"document_{document_id}.pdf"
178
-
179
- if not pdf_path.exists():
180
- url = (
181
- f"https://domsdatabasen.dk/webapi/api/Case/document/download/{document_id}"
182
- )
183
- try:
184
- response = retry(requests.get, url, headers=HEADERS)
185
- if response.status_code == 200:
186
- with open(pdf_path, "wb") as f:
187
- f.write(response.content)
188
- logger.info(f"✅ Downloaded PDF: {document_id}")
189
- else:
190
- raise RuntimeError(f"Download failed: {response.status_code}")
191
- except Exception as e:
192
- append_log(document_id, pdf=False, text=False, error=str(e))
193
- return None
194
-
195
- config = {"pdftext_workers": 1, "extract_images": False, "disable_tqdm": True}
196
-
197
- try:
198
- converter = PdfConverter(artifact_dict=model_refs, config=config)
199
- rendered = retry(converter, str(pdf_path))
200
- text, _, _ = text_from_rendered(rendered)
201
- logger.info(f"🖍️ Extracted text: {document_id}")
202
- append_log(document_id, pdf=True, text=True)
203
-
204
- del rendered
205
- del converter
206
-
207
- return {
208
- "id": document_id,
209
- "text": text,
210
- "source": "Domsdatabasen",
211
- "created": format_created(verdict_date),
212
- "added": date.today().isoformat(),
213
- "metadata": {},
214
- }
215
- except Exception as e:
216
- append_log(document_id, pdf=True, text=False, error=str(e))
217
- return None
218
- finally:
219
- gc.collect()
220
-
221
-
222
- # ----------------- Page Fetching ------------------
223
-
224
-
225
- def fetch_case_page(page_num: int) -> tuple[list[dict], int]:
226
- url = f"https://domsdatabasen.dk/webapi/api/Case/advanced?sorting=VerdictDateDesc&page={page_num}&pageSize=100"
227
- response = retry(requests.post, url, headers=HEADERS, json={})
228
- data = response.json()
229
-
230
- document_entries = []
231
- for case in data.get("cases", []):
232
- for doc in case.get("documents", []):
233
- document_entries.append(
234
- {
235
- "id": doc["id"],
236
- "verdictDateTime": doc.get("verdictDateTime"),
237
- }
238
- )
239
-
240
- return document_entries, data.get("pageCount", 1)
241
-
242
-
243
- # ----------------- Utilities ------------------
244
-
245
-
246
- def format_created(verdict_date: str | None) -> str:
247
- if verdict_date:
248
- try:
249
- dt = datetime.fromisoformat(verdict_date)
250
- formatted = dt.date().isoformat()
251
- return f"{formatted}, {formatted}"
252
- except Exception:
253
- pass
254
- today = date.today().isoformat()
255
- return f"{today}, {today}"
256
-
257
-
258
- # ----------------- Main Loop ------------------
259
-
260
-
261
- def main():
262
- PDF_DIR.mkdir(exist_ok=True)
263
- init_csv()
264
-
265
- all_records = []
266
- page_num = 1
267
- _, total_pages = fetch_case_page(1)
268
- logger.info(f"📄 Total pages: {total_pages}")
269
-
270
- existing_ids = load_existing_ids()
271
- logger.info(f"🔄 Resuming with {len(existing_ids)} already processed IDs")
272
-
273
- while page_num <= total_pages:
274
- logger.info(f"\n🔎 Fetching page {page_num}/{total_pages}")
275
-
276
- try:
277
- doc_infos, _ = fetch_case_page(page_num)
278
- except Exception as e:
279
- logger.warning(f"❌ Failed to fetch page {page_num}: {e}")
280
- page_num += 1
281
- continue
282
-
283
- doc_infos = [doc for doc in doc_infos if doc["id"] not in existing_ids]
284
-
285
- # Extract text in parallel using multiprocessing
286
- with mp.Pool(
287
- processes=MAX_WORKERS, initializer=worker_init, maxtasksperchild=10
288
- ) as pool:
289
- results = pool.map(process_document, doc_infos)
290
-
291
- all_records.extend([r for r in results if r])
292
-
293
- if all_records:
294
- ds_new = Dataset.from_list(all_records)
295
-
296
- if PARQUET_FILE.exists():
297
- ds_old = Dataset.from_parquet(str(PARQUET_FILE))
298
- ds_old = cast(Dataset, ds_old)
299
- ds_combined = concatenate_datasets([ds_old, ds_new])
300
- else:
301
- ds_combined = ds_new
302
-
303
- ds_combined.to_parquet(str(PARQUET_FILE))
304
- logger.info(f"📦 Appended {len(all_records)} records to {PARQUET_FILE}")
305
- existing_ids.update([r["id"] for r in all_records])
306
- all_records.clear()
307
-
308
- page_num += 1
309
-
310
- ds = Dataset.from_parquet(str(PARQUET_FILE))
311
- ds = cast(Dataset, ds)
312
- ds = remove_empty_texts(ds)
313
- ds = remove_duplicate_text(ds)
314
- ds = add_token_count(ds)
315
- ds = ensure_column_order(ds)
316
-
317
- ds.to_parquet(str(PARQUET_FILE))
318
-
319
-
320
- if __name__ == "__main__":
321
- # Ensure threads don't contend
322
- os.environ["MKL_DYNAMIC"] = "FALSE"
323
- os.environ["OMP_DYNAMIC"] = "FALSE"
324
- os.environ["OMP_NUM_THREADS"] = "2" # Avoid OpenMP issues with multiprocessing
325
- os.environ["OPENBLAS_NUM_THREADS"] = "2"
326
- os.environ["MKL_NUM_THREADS"] = "2"
327
- os.environ["GRPC_VERBOSITY"] = "ERROR"
328
- os.environ["GLOG_minloglevel"] = "2"
329
- os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = (
330
- "1" # Transformers uses .isin for a simple op, which is not supported on MPS
331
- )
332
- os.environ["IN_STREAMLIT"] = "true" # Avoid multiprocessing inside surya
333
-
334
- mp.set_start_method("spawn", force=True)
335
- log_path = Path(__file__).parent / "domsdatabasen.log"
336
- logging.basicConfig(
337
- level=logging.INFO,
338
- format="%(asctime)s - %(levelname)s - %(message)s",
339
- handlers=[
340
- logging.StreamHandler(),
341
- logging.FileHandler(log_path),
342
- ],
343
- )
344
- main()
data/domsdatabasen/descriptive_stats.json DELETED
@@ -1,9 +0,0 @@
1
- {
2
- "number_of_samples": 8468,
3
- "number_of_tokens": 86353024,
4
- "min_length_tokens": 15,
5
- "max_length_tokens": 1008826,
6
- "number_of_characters": 256036077,
7
- "min_length_characters": 35,
8
- "max_length_characters": 3021437
9
- }
data/domsdatabasen/domsdatabasen.md DELETED
@@ -1,119 +0,0 @@
1
- ---
2
- pretty_name: Domsdatabasen.dk
3
- language:
4
- - da
5
- license: other
6
- license_name: Danish Copyright Law
7
- size_categories:
8
- - 10k-100k
9
- task_categories:
10
- - text-generation
11
- - fill-mask
12
- task_ids:
13
- - language-modeling
14
- source_datasets:
15
- - danish-foundation-models/danish-gigaword
16
- domains:
17
- - Legal
18
- ---
19
-
20
- # Dataset Card for Domsdatabasen.dk
21
-
22
- <!-- START-SHORT DESCRIPTION -->
23
- [Domsdatabasen.dk](https://domsdatabasen.dk/) is a public database containing selected judgments from the Danish courts.
24
- <!-- END-SHORT DESCRIPTION -->
25
-
26
- Launched in early 2022, the platform aims to increase transparency and public insight into the workings of the judiciary in Denmark. It is accessible to everyone – legal professionals, citizens, companies, and public authorities interested in Danish case law.
27
-
28
- ## Dataset Description
29
-
30
- ### Purpose and Scope
31
- The main goal of the database is to support the principle of openness in the administration of justice. It offers users access to selected civil and criminal decisions, with an initial focus on rulings from the higher courts, such as:
32
-
33
- - The Supreme Court (Højesteret)
34
- - The High Courts (Landsretterne)
35
- - The Maritime and Commercial Court (Sø- og Handelsretten)
36
-
37
- Some rulings from the district courts (byretterne) are also included, particularly when they are part of a case string that has been appealed.
38
- Over time, the database will expand in coverage and volume, especially as the court system transitions to new digital case management systems.
39
-
40
- ### Pseudonymization and Data Protection
41
- All published rulings are pseudonymized to protect the privacy of individuals involved, in accordance with the EU General Data Protection Regulation (GDPR), the Danish Data Protection Act, and rules from the Danish Data Protection Agency.
42
-
43
- Pseudonymization involves replacing personally identifiable information (e.g., names, CPR numbers) with general terms such as “the accused”, “witness 1”, etc. Additional data such as addresses or health-related details may be redacted or pseudonymized based on a case-specific evaluation.
44
-
45
- Some roles and names are not pseudonymized, including:
46
-
47
- - Judges from higher courts
48
- - Legal representatives (lawyers)
49
- - Author names in cited legal literature (unless directly involved in the case)
50
- - Names in EU court decisions
51
-
52
- Businesses involved in cases are typically not pseudonymized unless their name reveals personal information or constitutes a trade secret.
53
-
54
- ### Access and Development
55
- Domsdatabasen is continuously being developed. As digitization progresses and technical workflows improve, the number of published decisions is expected to grow. The judgments are published as full case strings, including decisions at multiple judicial levels, providing context and legal reasoning throughout the appeal process.
56
-
57
-
58
- <!-- START-DESC-STATS -->
59
- - **Number of samples**: 8.47K
60
- - **Number of tokens (Llama 3)**: 86.35M
61
- - **Average document length in tokens (min, max)**: 10.20K (15, 1.01M)
62
- <!-- END-DESC-STATS -->
63
-
64
-
65
- ## Dataset Structure
66
- An example from the dataset looks as follows.
67
-
68
-
69
- <!-- START-SAMPLE -->
70
- ```py
71
- {
72
- "id": "11389",
73
- "text": "## **Ikke grundlag for varetægtsfængsling af hensyn til retshåndhævelsen**\n\nDer var ikke særligt bes[...]",
74
- "source": "Domsdatabasen",
75
- "added": "2025-07-04",
76
- "created": "2025-07-04, 2025-07-04",
77
- "token_count": 796
78
- }
79
- ```
80
-
81
- ### Data Fields
82
-
83
- An entry in the dataset consists of the following fields:
84
-
85
- - `id` (`str`): An unique identifier for each document.
86
- - `text`(`str`): The content of the document.
87
- - `source` (`str`): The source of the document (see [Source Data](#source-data)).
88
- - `added` (`str`): An date for when the document was added to this collection.
89
- - `created` (`str`): An date range for when the document was originally created.
90
- - `token_count` (`int`): The number of tokens in the sample computed using the Llama 8B tokenizer
91
- <!-- END-SAMPLE -->
92
-
93
-
94
- ## License Information
95
- <details>
96
- <summary>Danish Copyright Law</summary>
97
- <p>
98
- Danish Copyright law at https://www.retsinformation.dk/forms/r0710.aspx?id=164796 states
99
-
100
- § 9. Love, administrative forskrifter, retsafgørelser og lignende offentlige aktstykker er ikke genstand for ophavsret.
101
-
102
- Stk. 2. Bestemmelsen i stk. 1 gælder ikke for værker, der fremtræder som selvstændige bidrag i de i stk. 1 nævnte aktstykker. Sådanne værker må dog gengives i forbindelse med aktstykket. Retten til videre udnyttelse afhænger af de i øvrigt gældende regler.
103
-
104
- </p>
105
- </details>
106
-
107
-
108
- ### Dataset Statistics
109
-
110
- <!-- START-DATASET PLOTS -->
111
- <p align="center">
112
- <img src="./images/dist_document_length.png" width="600" style="margin-right: 10px;" />
113
- </p>
114
- <!-- END-DATASET PLOTS -->
115
-
116
-
117
- ## Additional Information
118
-
119
- **Extraction of text:** The documents downloaded from [domsdatabasen.dk](https://www.domsdatabasen.dk/) are PDFs. To extract the text from them, the `create.py` script uses the [marker-pdf](https://github.com/datalab-to/marker/tree/master) package.
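Condensed from the `create.py` shown earlier in this diff, the extraction step looks roughly like this (the file path is illustrative):

```py
from marker.converters.pdf import PdfConverter
from marker.models import create_model_dict
from marker.output import text_from_rendered

# Build the marker-pdf model artifacts once, then convert a downloaded judgment PDF to text.
converter = PdfConverter(
    artifact_dict=create_model_dict(),
    config={"extract_images": False, "disable_tqdm": True},
)
rendered = converter("pdfs/document_11389.pdf")  # illustrative path
text, _, _ = text_from_rendered(rendered)
```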
data/domsdatabasen/domsdatabasen.parquet DELETED
@@ -1,3 +0,0 @@
1
- version https://git-lfs.github.com/spec/v1
2
- oid sha256:132f593c951564e56c262520116bd02eea193f10443b9d12305e130dde16ee99
3
- size 123195077
 
 
 
 
data/domsdatabasen/images/dist_document_length.png DELETED

Git LFS Details

  • SHA256: 47efb3cce555370325986d99b3b1f9c817e54b72eff8f9fde0d3c887bfa59af3
  • Pointer size: 131 Bytes
  • Size of remote file: 559 kB
data/enevaeldens_nyheder/create.py DELETED
@@ -1,96 +0,0 @@
1
- # /// script
2
- # requires-python = ">=3.12"
3
- # dependencies = [
4
- # "datasets",
5
- # "dynaword",
6
- # ]
7
- #
8
- # [tool.uv.sources]
9
- # dynaword = { git = "https://huggingface.co/datasets/danish-foundation-models/danish-dynaword" }
10
- # ///
11
-
12
- """
13
- Script for downloading and processing the dataset
14
-
15
- Note: To run this script, you need to set `GIT_LFS_SKIP_SMUDGE=1` to be able to install dynaword:
16
-
17
- ```bash
18
- GIT_LFS_SKIP_SMUDGE=1 uv run data/enevaeldens_nyheder/create.py
19
- ```
20
- """
21
-
22
- import logging
23
- from datetime import date
24
- from pathlib import Path
25
- from typing import Any, cast
26
-
27
- from datasets import Dataset, load_dataset
28
-
29
- from dynaword.process_dataset import (
30
- add_token_count,
31
- ensure_column_order,
32
- remove_duplicate_text,
33
- remove_empty_texts,
34
- )
35
-
36
- logger = logging.getLogger(__name__)
37
-
38
- SOURCE = "enevaeldens_nyheder"
39
-
40
-
41
- def reformat_samples(example: dict[str, Any]) -> dict[str, Any]:
42
- creation_date = example["date"]
43
- # Reformatting the date to YYYY-MM-DD format
44
- start = creation_date
45
- end = creation_date
46
- return {
47
- "id": f"{SOURCE}_{example['id']}",
48
- "text": example["text"],
49
- "source": SOURCE,
50
- "added": date.today().strftime("%Y-%m-%d"),
51
- "created": f"{start}, {end}",
52
- }
53
-
54
-
55
- def main():
56
- dataset = load_dataset(
57
- "JohanHeinsen/ENO",
58
- split="train",
59
- revision="009f45ef63a1a41705781840807eb620f380d17d",
60
- )
61
- dataset = cast(Dataset, dataset)
62
-
63
- logger.info("Removing 1 word texts")
64
- len_ds = len(dataset)
65
- dataset = dataset.filter(
66
- lambda x: len(x["text"].split()) >= 2
67
- ) # require at least 2 words in the text
68
- logger.info(f"Filtered {len_ds - len(dataset)} 1 word examples")
69
-
70
- logger.info("Filtering out texts with predicted word acuracy < 0.7")
71
- dataset = dataset.filter(lambda x: x["pwa"] >= 0.7)
72
- logger.info(f"Filtered {len_ds - len(dataset)} low accuracy examples")
73
-
74
- dataset = dataset.map(reformat_samples)
75
-
76
- dataset = remove_empty_texts(dataset) # remove rows with empty text
77
- dataset = remove_duplicate_text(dataset) # remove rows with duplicate text
78
- dataset = add_token_count(dataset)
79
- dataset = ensure_column_order(dataset)
80
-
81
- dataset.to_parquet(
82
- Path(__file__).parent / f"{SOURCE}.parquet",
83
- )
84
-
85
-
86
- if __name__ == "__main__":
87
- log_path = Path(__file__).parent / f"{SOURCE}.log"
88
- logging.basicConfig(
89
- level=logging.INFO,
90
- format="%(asctime)s - %(levelname)s - %(message)s",
91
- handlers=[
92
- logging.StreamHandler(),
93
- logging.FileHandler(log_path),
94
- ],
95
- )
96
- main()
data/enevaeldens_nyheder/descriptive_stats.json DELETED
@@ -1,9 +0,0 @@
1
- {
2
- "number_of_samples": 4593228,
3
- "number_of_tokens": 1034308344,
4
- "min_length_tokens": 3,
5
- "max_length_tokens": 37294,
6
- "number_of_characters": 2889445364,
7
- "min_length_characters": 4,
8
- "max_length_characters": 111182
9
- }
data/enevaeldens_nyheder/enevaeldens_nyheder.log DELETED
@@ -1,9 +0,0 @@
1
- 2025-08-05 13:09:29,533 - INFO - Removing 1 word texts
2
- 2025-08-05 13:10:14,475 - INFO - Filtered 42635 1 word examples
3
- 2025-08-05 13:10:14,475 - INFO - Filtering out texts with predicted word acuracy < 0.7
4
- 2025-08-05 13:11:24,300 - INFO - Filtered 76655 low accuracy examples
5
- 2025-08-05 13:15:33,389 - INFO - Removing empty texts
6
- 2025-08-05 13:15:50,876 - INFO - Filtered 0 empty examples
7
- 2025-08-05 13:15:50,876 - INFO - Removing duplicate texts
8
- 2025-08-05 13:19:48,194 - INFO - Filtered 161196 duplicate examples
9
- 2025-08-05 13:32:46,967 - INFO - Ensuring columns are in the correct order and are present
data/enevaeldens_nyheder/enevaeldens_nyheder.md DELETED
@@ -1,172 +0,0 @@
1
- ---
2
- pretty_name: "Enev\xE6ldens Nyheder Online"
3
- language:
4
- - da
5
- license: cc-by-sa-4.0
6
- license_name: CC-BY-SA 4.0
7
- task_categories:
8
- - text-generation
9
- - fill-mask
10
- task_ids:
11
- - language-modeling
12
- domains:
13
- - News
14
- source_datasets:
15
- - JohanHeinsen/ENO
16
- ---
17
-
18
- # Dataset Card for Enevældens Nyheder Online
19
-
20
- ![](images/header_img.jpeg)
21
- <!-- START-SHORT DESCRIPTION -->
22
- High quality OCR'd texts from Danish and Norwegian newspapers during the period of constitutional absolutism in Denmark (1660–1849).
23
- <!-- END-SHORT DESCRIPTION -->
24
-
25
-
26
- During the eighteenth century, newspapers became a ubiquitous medium. They informed a relatively large reading public about everything from high politics to the mundanities of local markets.
27
- The dataset was created by re-processing over 550.000 digital images scanned from microfilm and held in the Danish Royal Library's collection. They had initially been OCR-processed, but the results were generally unreadable. ENO reprocessed the images using tailored pylaia models in Transkribus. The OCR-quality is generally high, despite the difficult state of the original images.
28
- The newspaper editions have been segmented into individual texts using a model designed by the project team. Such texts are the base entity of the dataset. They include mainly two genres: news items and advertisements.
29
-
30
- ## Dataset Description
31
-
32
-
33
- <!-- START-DESC-STATS -->
34
- - **Number of samples**: 4.59M
35
- - **Number of tokens (Llama 3)**: 1.03B
36
- - **Average document length in tokens (min, max)**: 225.1811458085686 (3, 37.29K)
37
- <!-- END-DESC-STATS -->
38
-
39
-
40
- * **Curated by**: Johan Heinsen and Camilla Bøgeskov, Historisk Datalaboratorium, Aalborg University. With assistance from Sofus Landor Dam, Anders Birkemose, Kamilla Matthiassen and Louise Karoline Sort.
41
- * **Funded by**: MASSHINE, Aalborg University.
42
-
43
-
44
- The dataset contains a wide range of newspapers. The total distribution can be studied here. They cover most of Denmark and include the three oldest newspapers of Norway, which run until the separation of the Danish-Norwegian conglomerate in 1814. This dataset represents version 0.9 (updated 5th of August 2025).
45
-
46
-
47
- ### Dataset Sources
48
-
49
- The sources of the dataset can be studied in more detail at the [project website](https://hislab.quarto.pub/eno/).
50
- Most of the original image material is available in [LOAR](https://loar.kb.dk/handle/1902/7803) – a data repository of the Danish Royal Library. The Norwegian material was downloaded via the API of Nettbiblioteket. The scans of Nyeste Skilderie af Kjøbenhavn were taken from the Internet Archive repository of [Niels Jensen](https://archive.org/details/@uforbederlig). The scans for Politivennen stem from [Københavns Biblioteker](https://bibliotek.kk.dk/din/bag-om-kobenhavn/politivennen). Some early newspapers come from recent scans made available to the project by the Danish Royal Library. These are not yet available online.
51
-
52
- ## Uses
53
-
54
- This dataset represents an effort to enable analysis of Denmark-Norway in the seventeenth, eighteenth, and nineteenth centuries. The data can be used to study and model sentiments, political and cultural currents, and the minutiae of urban life.
55
-
56
- In addition, this dataset is part of Danish Dynaword, a collection of datasets intended for training language models, thereby integrating Danish cultural heritage into the next generation of digital technologies.
57
-
58
-
59
-
60
- ## Dataset Structure
61
- An example from the dataset looks as follows.
62
-
63
-
64
- <!-- START-SAMPLE -->
65
- ```py
66
- {
67
- "id": "enevaeldens_nyheder_aalborg1767_1767-01-02_1000001",
68
- "text": "Et Menneske er skabt ey for sig selv allene: Hvert Lem paa Legemet det heele tiene maae, En Stolpes [...]",
69
- "source": "enevaeldens_nyheder",
70
- "added": "2025-08-05",
71
- "created": "1767-01-02, 1767-01-02",
72
- "token_count": 2377
73
- }
74
- ```
75
-
76
- ### Data Fields
77
-
78
- An entry in the dataset consists of the following fields:
79
-
80
- - `id` (`str`): An unique identifier for each document.
81
- - `text`(`str`): The content of the document.
82
- - `source` (`str`): The source of the document (see [Source Data](#source-data)).
83
- - `added` (`str`): An date for when the document was added to this collection.
84
- - `created` (`str`): An date range for when the document was originally created.
85
- - `token_count` (`int`): The number of tokens in the sample computed using the Llama 8B tokenizer
86
- <!-- END-SAMPLE -->
87
-
88
-
89
-
90
- ## Dataset Creation
91
-
92
- ### Curation Rationale
93
-
94
- The newspapers in the dataset generally represent the longest-running newspaper series in the Danish and Norwegian libraries. We prioritised long-running newspapers to enable historical analysis of changes over time. As historians, this was our initial ambition: to allow us to get quality serial text data.
95
- We also prioritised geographical diversity, representing different regions of Denmark-Norway. Of course, this varies over time, as newspapers were most common in Copenhagen until the late eighteenth century.
96
- Since the newspapers of Denmark's Caribbean colonies were primarily in English, they are not included. The text recognition model designed for the project struggles with English text.
97
- Besides long-running series, we also included a few smaller newspaper series, mainly with an eye towards diversity of subject matter. These include Politivennen, which was concerned with very local news from Copenhagen and carried a lot of reader contributions, offering a unique insight into urban sentiments at the time. A similar inclusion was made with Jyllandsposten (of 1838), which was defined by a somewhat radical rural horizon.
98
-
99
- As a rule of thumb, publications have been digitised in total, as they exist in their respective collections.
100
- This means that they sometimes include appendices and sometimes do not, depending on whether these exist. Holes in the dataset mirror holes in the archival collections.
101
- The one exception to this rule is the newspaper Københavns Adresseavis. This advertisement paper has survived continuously from its inception in 1759, but from 1804 onwards, it is only included here with samples every fifth year.
102
- The reason for sampling is a combination of the massive extent of this advertisement paper and the poor condition of the digital images available for this specific period. Combined this meant that the results of the text recognition process were not entirely satisfying relative to the resources necessary for the effort. Therefore, we decided to prioritize other publications that would yield better quality text.
103
-
104
- Most publications contain title page marginalia (date, title, etc.). Because these were set with large ornamental types, they are typically recognised with much less accuracy than the regular text. We are currently working on implementing a step in the workflow to identify and filter out these elements.
105
-
106
- ### Data Collection and Processing
107
-
108
- The text recognition model used to create the dataset is available via [Transkribus](https://app.transkribus.org/models/public/text/danish-newspapers-1750-1850). A description of the text segmentation process can be found [here](https://hislab.quarto.pub/eno/dokumentation.html). Besides segmentation into separate news items / advertisements, no further processing of the text has taken place. We are currently experimenting with automated error correction using decoder-models.
109
-
110
- For Danish Dynaword we apply additional filtering including:
111
-
112
- - 1) Removing 1 word documents (using a whitespace split)
113
- - 2) Removing document with a PWA < 0.7
114
-
115
- PWA is defined as:
116
-
117
- > A predicted word accuracy [PWA] based on a dictionary consisting of words from literary works, personal names and place names from the census of 1787, and a manually curated list of common words that are present in the material, but not represented in canonical literature. This is an estimate. In general we advise that you filter the dataset on this variable in case of using the material for language modelling. This will also filter out texts in other languages than Danish.
118
- >
119
- > source: [JohanHeinsen/ENO](https://huggingface.co/datasets/JohanHeinsen/ENO#dataset-structure)
120
-
121
- Below you see 10 examples of documents (truncated to 200 characters) filtered out due to the PWA filtering:
122
-
123
- ```
124
- ['Under Staders Segl. nespil.',
125
- 'Frisk Selter=, Permonter=, Bitter, og FachingerVand bekommes paa Løveapotheket.',
126
- 'Søesyglinsk, Christoph. Auf Anordning der Liquidations=Commission, den ten August 1834. (Ges.) Mitglied der Commission, Regierungsrath: Pestof. Stellvertretender Secretair. Gabriel Ostrowski.',
127
- 'J de Reussiske Koge: Bordelummer Seil.',
128
- 'Scriptores historiae Byzantinae vird bei uns un entgeltlich ansgegeben. Anch sind bei und fortige Bogen dieses Werkes in den verschiedenen Ansgeden auf Druck., Schreibe und Velinpapier niedergelegt, z',
129
- 'Gammel Conjac. Potten.',
130
- 'NOTIFICATION. Von der 5ten Classe, der 7ten Königl. allen privilegitten Kopenhagner Lotteren, deren Ziehung den 17ten Passati geendiget worden, werden die Gewinne den 8ten hujus und følgende Werkeltag',
131
- 'Jm Verlag des Unterzeichneten har die Presse verlassen: Uever dis religiøse Bestimmung der Jugend, in einigen Predigten von K. C. von Gehren. Jn dieser Samlung sind følgende Gegenstande behandelt: 1) ',
132
- "ditoyens fortund, ) vous qui, loin des combats, Pouves jouir en pair dans vos heureur ClimatsDes trefors annuel d'unne moisson fertileDont il plait aux saisons de couronner votre ile, Vous, diseje, a ",
133
- 'AVERTISSEMENTS. Ausser der am Seelandischen Langericht geschehene Proclamation, wird auch hiedurch zu dreien mahlen kund gethan, das die Theilungs Berichtigung nach dem menland Johann Georg Kanneworff']
134
- ```
135
-
136
- ### Dataset Statistics
137
-
138
- <!-- START-DATASET PLOTS -->
139
- <p align="center">
140
- <img src="./images/dist_document_length.png" width="600" style="margin-right: 10px;" />
141
- </p>
142
- <!-- END-DATASET PLOTS -->
143
-
144
- The coverage of the newspapers included can be seen here:
145
-
146
- ![](images/coverage-of-newspapers.jpeg)
147
-
148
- The distribution of texts pr. year is as follows:
149
-
150
- ![](images/distribution-pr-year.jpeg)
151
-
152
-
153
- ## Personal and Sensitive Information
154
-
155
- Due to the historical nature of the data, ENO contains no personal or sensitive information.
156
-
157
- ## Bias, Risks, and Limitations
158
-
159
- The data reflects the time of its initial creation. This means that it mirrors and describes a deeply hierarchical society that was structured by deep-seated biases and forms of discrimination that are alien even to some of the worst among the living today. For example, the material contains racist language in describing contemporary phenomena such as the Transatlantic slave trade and the persecution of Jewish diasporas. Use cases which might relay or perpetuate such sentiments should be aware of these risks. It is a historical text corpora, warts and all.
160
-
161
- Please also note that, although the newspapers are all in Danish, they do contain intermittent passages in German and Latin.
162
-
163
- Some advertisements were reprinted verbatim. The dataset, therefore, includes occasional duplicate texts.
164
-
165
-
166
- ### License Information
167
-
168
- The dataset is licensed under CC BY-SA 4.0. Please note that this license only pertains to the digitised text and dataset curation, not the original images. The original images of all material stemming from The Danish Royal Library, Nettbiblioteket, Københavns Biblioteker as well as the scans of Nyeste Skilderie af Kiøbenhavn made available by Niels Jensen are all in the public domain.
169
-
170
- ## More Information
171
-
172
- For questions related to the dataset, curation, and annotation, please contact Johan Heinsen, Aalborg University <[email protected]>
data/enevaeldens_nyheder/enevaeldens_nyheder.parquet DELETED
@@ -1,3 +0,0 @@
1
- version https://git-lfs.github.com/spec/v1
2
- oid sha256:8f0ccbf865189f37c9735e001219ef85da11ea3b5849621993a995f138c7f51d
3
- size 1856788258
 
 
 
 
data/enevaeldens_nyheder/images/coverage-of-newspapers.jpeg DELETED

Git LFS Details

  • SHA256: 66d18149a4d2050eaef38ae8c8b6ee101bebcdffa124d1accde5414198a4b198
  • Pointer size: 132 Bytes
  • Size of remote file: 1.08 MB
data/enevaeldens_nyheder/images/dist_document_length.png DELETED

Git LFS Details

  • SHA256: af6e89096c36f019d387dcb9b8a249f6a1aad6008fc1f31f94cdb83572ef2cd0
  • Pointer size: 131 Bytes
  • Size of remote file: 579 kB
data/enevaeldens_nyheder/images/distribution-pr-year.jpeg DELETED

Git LFS Details

  • SHA256: c27f03153532cf2b52db3a541fa1406495696dcfe13b56519d685d2d7ab6f101
  • Pointer size: 131 Bytes
  • Size of remote file: 530 kB