---
language:
  - fr
tags:
  - france
  - public-sector
  - embeddings
  - directory
  - open-data
  - government
  - etalab
pretty_name: French State Administrations Directory
size_categories:
  - 1K<n<10K
license: etalab-2.0
configs:
  - config_name: latest
    data_files: data/state-administrations-directory-latest/*.parquet
    default: true
---

# 🇫🇷 French State Administrations Directory Dataset

This dataset is a processed and embedded version of the public dataset *Référentiel de l’organisation administrative de l’État* (French State Administrations Directory), published by the DILA (Direction de l'information légale et administrative) on data.gouv.fr. The same information is available on the official directory website Service-Public.fr: https://lannuaire.service-public.fr/

The dataset provides semantic-ready, structured, and chunked data on French state entities, including organizational details, missions, contact information, and hierarchical links. Each text chunk is vectorized with the BAAI/bge-m3 embedding model to enable semantic search and retrieval tasks.


## 🗂️ Dataset Contents

The dataset is provided in Parquet format and contains the following columns:

| Column Name | Type | Description |
|---|---|---|
| `chunk_id` | str | Unique source-based identifier of the chunk. |
| `doc_id` | str | Document identifier. Identical to `chunk_id`, since each document contains a single chunk. |
| `chunk_xxh64` | str | XXH64 hash of the `chunk_text` value. |
| `types` | str | Type(s) of administrative entity. |
| `name` | str | Name of the organization or service. |
| `mission_description` | str | Description of the entity's mission. |
| `addresses` | list[dict] | List of address objects (street, postal code, city, etc.). |
| `phone_numbers` | list[str] | List of telephone numbers. |
| `mails` | list[str] | List of contact email addresses. |
| `urls` | list[str] | List of related URLs. |
| `social_medias` | list[str] | Social media accounts. |
| `mobile_applications` | list[str] | Related mobile applications. |
| `opening_hours` | str | Opening hours. |
| `contact_forms` | list[str] | Contact form URLs. |
| `additional_information` | str | Additional information. |
| `modification_date` | str | Last update date. |
| `siret` | str | SIRET number. |
| `siren` | str | SIREN number. |
| `people_in_charge` | list[dict] | List of responsible persons. |
| `organizational_chart` | list[str] | Organization chart references. |
| `hierarchy` | list[dict] | Links to parent or child entities. |
| `directory_url` | str | Source URL on the official state directory website. |
| `chunk_text` | str | Textual content of the administrative chunk. |
| `embeddings_bge-m3` | str (stringified list) | Embeddings of `chunk_text` computed with BAAI/bge-m3, stored as a JSON array string. |
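
For a quick look at the schema and the nested list columns, here is a minimal sketch that loads the default `latest` config and prints one record (the printed values depend on the row):

```python
from datasets import load_dataset

# Load the default "latest" config; the data lands in a "train" split.
ds = load_dataset("AgentPublic/state-administrations-directory", split="train")
row = ds[0]

print(row["name"], "|", row["types"])
for address in row["addresses"]:  # list[dict]: adresse, code_postal, commune, ...
    print(address)
```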

## 🛠️ Data Processing Methodology

### 📥 1. Field Extraction

The following fields were extracted and/or transformed from the original JSON:

- **Basic fields**: `chunk_id`, `doc_id`, `name`, `types`, `mission_description`, `additional_information`, `siret`, `siren`, `directory_url`, and `modification_date` are extracted directly from the source JSON attributes.
- **Structured lists**:
  - `addresses`: list of dictionaries with `adresse`, `code_postal`, `commune`, `pays`, `longitude`, and `latitude`.
  - `phone_numbers`, `mails`, `urls`, `social_medias`, `mobile_applications`, `contact_forms`: derived from their respective source fields, with formatting applied.
- **People and structure**:
  - `people_in_charge`: list of dictionaries describing staff members or leadership (title, name, rank, etc.).
  - `organizational_chart`, `hierarchy`: structural information about the administration.
- **Other fields**:
  - `opening_hours`: built by a custom function that parses the declared time slots into readable strings.
  - `chunk_xxh64`: the XXH64 hash of the `chunk_text` value, useful for detecting whether `chunk_text` has changed from one version to the next (see the sketch below).
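
As a minimal sketch of that versioning check, assuming the hash is stored as a lowercase hexadecimal digest (the exact encoding is an assumption), you could recompute it with the `xxhash` package:

```python
import xxhash

def chunk_changed(chunk_text: str, stored_hash: str) -> bool:
    """Recompute the XXH64 digest of chunk_text and compare it
    with the stored chunk_xxh64 value."""
    return xxhash.xxh64(chunk_text.encode("utf-8")).hexdigest() != stored_hash
```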

### ✂️ 2. Generation of chunk_text

A synthetic text field called `chunk_text` was created to summarize the key aspects of each administrative body. This field is designed for semantic search and embedding generation. It includes:

- The entity's name (`name`)
- Its mission statement, if available (`mission_description`)
- Key responsible individuals, formatted using role, title, name, and rank (`people_in_charge`)

No further splitting was needed, since each document fits within a single chunk.
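
The exact template is defined in the project's code; as an illustration only, a hypothetical assembly function could look like this (field order and separators are assumptions):

```python
def build_chunk_text(record: dict) -> str:
    # Hypothetical reconstruction: the real template lives in the
    # MediaTech repository; field order and separators are assumptions.
    parts = [record["name"]]
    if record.get("mission_description"):
        parts.append(record["mission_description"])
    for person in record.get("people_in_charge", []):
        # Join whichever of role/title/name/rank are present.
        details = [person.get(key) for key in ("role", "title", "name", "rank")]
        parts.append(" ".join(d for d in details if d))
    return "\n".join(parts)
```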

### 🧠 3. Embeddings Generation

Each `chunk_text` was embedded using the BAAI/bge-m3 model. The resulting embedding vector is stored in the `embeddings_bge-m3` column as a string, but it can easily be parsed back into a `list[float]` or a NumPy array.
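
To run semantic search against these vectors, queries must be encoded with the same model. A minimal sketch using the sentence-transformers library (one of several ways to run BAAI/bge-m3; FlagEmbedding is another):

```python
from sentence_transformers import SentenceTransformer

# Load BAAI/bge-m3; the model weights are downloaded on first use.
model = SentenceTransformer("BAAI/bge-m3")

# Encode a query into the same dense vector space as the dataset embeddings.
query_vector = model.encode("Ministère chargé de l'éducation nationale")
```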

#### 📌 Embeddings Notice

⚠️ The `embeddings_bge-m3` column is stored as a stringified list (e.g., `"[-0.03062629,-0.017049594,...]"`). To use it as a vector, you need to parse it back into a list of floats or a NumPy array. For example, to load the dataset into a dataframe using the datasets library:

```python
import json

import pandas as pd
from datasets import load_dataset

# pyarrow must be installed in your environment: pip install pyarrow

dataset = load_dataset("AgentPublic/state-administrations-directory")
df = pd.DataFrame(dataset["train"])

# Parse the stringified embeddings back into lists of floats.
df["embeddings_bge-m3"] = df["embeddings_bge-m3"].apply(json.loads)
```

Otherwise, if you have already downloaded all the Parquet files from the `data/state-administrations-directory-latest/` folder:

```python
import json

import pandas as pd

# pyarrow must be installed in your environment: pip install pyarrow

# Assumes all the Parquet files are located in this folder.
df = pd.read_parquet(path="state-administrations-directory-latest/")

# Parse the stringified embeddings back into lists of floats.
df["embeddings_bge-m3"] = df["embeddings_bge-m3"].apply(json.loads)
```

You can then use the dataframe as you wish, for example by inserting its rows into the vector database of your choice, or by running a simple in-memory similarity search as sketched below.
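
A minimal sketch of such a search, assuming `df` was loaded as above and the query is encoded with the same BAAI/bge-m3 model (see the embeddings section):

```python
import numpy as np

# Stack the parsed embeddings into an (n_rows, dim) matrix.
matrix = np.vstack(df["embeddings_bge-m3"].to_numpy())

def top_k(query_vector, k=5):
    """Return the k rows whose chunk_text is most similar to the
    query vector, ranked by cosine similarity."""
    scores = matrix @ query_vector
    scores = scores / (np.linalg.norm(matrix, axis=1) * np.linalg.norm(query_vector))
    return df.iloc[np.argsort(-scores)[:k]][["name", "directory_url", "chunk_text"]]
```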

## 🐱 GitHub repository

The MediaTech project is open source! You are free to contribute, or to explore the complete code used to build this dataset, in the GitHub repository.

## 📚 Source & License

### 🔗 Source

*Référentiel de l’organisation administrative de l’État*, published by the DILA (Direction de l'information légale et administrative) on data.gouv.fr.
### 📄 License

Open License (Etalab): this dataset is publicly available and may be reused under the conditions of the Etalab open license (`etalab-2.0`).