Cybersecurity Wiki Slices (Parquet)
Cybersecurity Wiki Slices is a curated collection of English Wikipedia pages covering cybersecurity foundations, systems, protocols, tools, and defensive/attack techniques—consolidated into Parquet for fast streaming with the datasets library, with original JSONL preserved under raw/.
Token count: ~24.73M tokens.
License: "TanDev Proprietary License — All Rights Reserved"
⚠️ Ethical use & attribution required. Content originates from Wikipedia and is licensed under CC BY‑SA 4.0. Derivatives must preserve attribution and ShareAlike terms. See Licensing & Attribution below.
What’s in this release (Parquet)
- Primary delivery = Parquet shards under `data/wiki/.../train-*.parquet` (Hub-managed) for efficient loading.
- Raw JSONL retained at `raw/wiki.jsonl[.gz|.zst]` for transparency and reproducibility.
- Named config: `wiki` (single config; no `all` config).
Scope & Coverage
This corpus targets practical security knowledge across the stack:
- Foundations & Methodologies — computer security, pentesting, vulnerability assessment, OSINT.
- Web & API Security — OWASP classes (XSS/SQLi/CSRF/SSRF/etc.), authentication & authorization, CORS/CSP, browser security.
- Networks & Protocols — TCP/IP layers, TLS/SSH/QUIC/HTTP/2–3, VPNs, wireless & Bluetooth, MITM classes.
- Operating Systems & Internals — Windows internals, PowerShell, WMI/DCOM, Active Directory; Linux subsystems (SELinux, AppArmor, namespaces, cgroups).
- Cloud & Containers — AWS/Azure/GCP primitives, IAM, logging/audit, Kubernetes/containers & hardening.
- Privilege Escalation, Lateral Movement, Persistence, Credential Access — AD/Windows/Linux concepts and primitives.
- Malware & C2 (conceptual) — families, botnets/C2, ransomware; evasion/obfuscation (non‑instructional, encyclopedic POV).
- Reverse Engineering & Exploit Dev (conceptual) — RE tooling, memory safety classes, fuzzing.
- Crypto & Passwords — algorithms, PKI, KDFs, steganography (encyclopedic coverage).
- DFIR / Threat Intel / Logging — IR workflows, indicators, STIX/TAXII, SIEM/log management, IDS/IPS/honeypots.
- ICS/SCADA, IoT, Mobile, Automotive — protocols and security considerations.
- Supply Chain & DevSecOps — SBOM, signing, CI/CD security; risk/governance and baselines.
The dataset focuses on encyclopedic text. It does not include exploit POCs or instructional “how‑to hack” guides.
Repository Layout
```
/                         # dataset root (this card lives here as README.md)
raw/
  wiki.jsonl              # original JSONL export (optionally .gz/.zst)
data/
  wiki/
    default/1.0.0/train/
      train-00000-of-XXXXX.parquet
      train-00001-of-XXXXX.parquet
      ...
```
Each JSONL line (in raw/) or Parquet row (in data/wiki/...) is a single record.
Schema
Core fields
- `text` (string) — cleaned encyclopedic text (normalized whitespace; multi-paragraph).
- `meta` (object) — metadata with at least:
  - `source` ("wikipedia")
  - `title` (string) — page title
  - `pageid` (int) — Wikipedia page ID
  - `rev_id` (int) — revision ID used for extraction
  - `source_url` (string) — canonical page URL
  - `license` ("CC BY-SA 4.0")
  - `license_url` (string) — https://creativecommons.org/licenses/by-sa/4.0/
  - `license_class` ("sharealike")
  - `attribution` (string) — attribution sentence for downstream use
  - `topic` (string) — high-level slice label (e.g., `protocols`, `active_directory`, `kubernetes_security`, …)
  - `redpajama_set_name` ("WikiSecuritySlices")
Downstream readers should tolerate missing optional keys and nulls.
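For orientation, a single record might look like the following. All values below are invented for illustration and follow the schema above; they are not an actual row from the dataset:

```python
# Illustrative record shape — every value here is invented for demonstration.
record = {
    "text": "Transport Layer Security (TLS) is a cryptographic protocol...",
    "meta": {
        "source": "wikipedia",
        "title": "Transport Layer Security",
        "pageid": 123456,          # invented page ID
        "rev_id": 987654321,       # invented revision ID
        "source_url": "https://en.wikipedia.org/wiki/Transport_Layer_Security",
        "license": "CC BY-SA 4.0",
        "license_url": "https://creativecommons.org/licenses/by-sa/4.0/",
        "license_class": "sharealike",
        "attribution": "Text from Wikipedia, licensed under CC BY-SA 4.0.",
        "topic": "protocols",
        "redpajama_set_name": "WikiSecuritySlices",
    },
}
```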
Loading Examples
1) Load the Parquet config (recommended)

```python
from datasets import load_dataset

REPO = "tandevllc/cybersecurity-wiki-slices"
wiki = load_dataset(REPO, name="wiki", split="train")
print(len(wiki), wiki.column_names)
```
2) Direct JSONL loading from raw/ (explicit)

```python
from datasets import load_dataset

# The json builder takes no repo_id argument; point data_files at the
# file on the Hub via an hf:// path instead.
wiki = load_dataset(
    "json",
    data_files="hf://datasets/tandevllc/cybersecurity-wiki-slices/raw/wiki.jsonl",
    split="train",
)
```
3) Filtering by topic or domain

```python
# Keep only networking/protocol pages
net = wiki.filter(
    lambda r: (r.get("meta") or {}).get("topic")
    in {"networks", "network_protocols", "protocols"}
)

# Example: filter by title keyword (tolerating missing or null titles)
tls_pages = wiki.filter(
    lambda r: "TLS" in ((r.get("meta") or {}).get("title") or "")
)
```
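Beyond filtering, it can help to tally how records distribute across slice labels. A minimal sketch over plain record dicts, bucketing missing labels as `"unknown"` (the sample records below are invented):

```python
from collections import Counter


def topic_counts(records):
    """Count records per meta.topic, bucketing missing labels as 'unknown'."""
    return Counter(
        (r.get("meta") or {}).get("topic") or "unknown"
        for r in records
    )


# Invented sample records for demonstration
sample = [
    {"text": "...", "meta": {"topic": "protocols"}},
    {"text": "...", "meta": {"topic": "protocols"}},
    {"text": "...", "meta": None},
]
print(topic_counts(sample))  # Counter({'protocols': 2, 'unknown': 1})
```

The same generator expression works unchanged on a streamed `datasets` split, since it only iterates rows.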
Cleaning & Quality (High‑Level)
- Encyclopedic text blocks only (intro + sections), no wikitext/markup; boilerplate minimized.
- Normalization of line breaks and whitespace; removal of control characters.
- Minimum‑length guardrails to avoid stubs; duplicates avoided via pageid/revision tracking.
- Topical coverage is intentionally broad; some overlap across topics may remain for completeness.
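The dedup and minimum-length guardrails above can be sketched as follows. This is an illustration of the idea, not the actual cleaning pipeline used to build the dataset, and the 200-character threshold is an assumption:

```python
def dedupe_by_page(records, min_chars=200):
    """Keep the first record per (pageid, rev_id) and drop short stubs.

    A sketch of the guardrails described above; the real pipeline's
    thresholds and keys may differ.
    """
    seen = set()
    for r in records:
        meta = r.get("meta") or {}
        key = (meta.get("pageid"), meta.get("rev_id"))
        if key in seen:
            continue  # duplicate page/revision already emitted
        if len(r.get("text") or "") < min_chars:
            continue  # minimum-length guardrail against stubs
        seen.add(key)
        yield r
```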
Intended Uses
- Pretraining / Continued Pretraining for models needing strong general security background.
- RAG/Retrieval over security concepts, protocols, tools, and platform internals.
- Evaluation: long‑form QA, summarization, topical recall across security domains.
- Education: safe, encyclopedic overviews for bootcamps and courseware.
Not a vulnerability feed; pair with CVE/NVD datasets for up‑to‑date vulnerability intel.
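For RAG pipelines, the ShareAlike attribution can travel with each retrieved chunk. A minimal sketch: prefer the record's own `meta.attribution` when present, otherwise assemble a fallback sentence (the fallback wording here is an assumption, not a string from the dataset):

```python
def attribution_line(meta):
    """Return an attribution string for a record's meta dict."""
    meta = meta or {}
    if meta.get("attribution"):
        return meta["attribution"]  # precomputed sentence from the dataset
    title = meta.get("title") or "Unknown title"
    url = meta.get("source_url") or ""
    # Assumed fallback format, not defined by the dataset itself
    return f'"{title}" from Wikipedia ({url}), licensed under CC BY-SA 4.0.'


# Hypothetical meta dict for demonstration
meta = {
    "title": "Kerberos (protocol)",
    "source_url": "https://en.wikipedia.org/wiki/Kerberos_(protocol)",
}
print(attribution_line(meta))
```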
Limitations & Caveats
- License & ShareAlike — CC BY‑SA 4.0 requires attribution and sharing derivatives under the same license.
- Temporal Drift — Wikipedia pages evolve; this snapshot reflects a specific revision set.
- Extraction Noise — Edge cases may include navigation remnants or short stubs.
- Topic Labels — `meta.topic` is a helpful slice label, not a strict taxonomy.
Citation
```bibtex
@dataset{tandevllc_2025_cybersecurity_wiki_slices,
  author = {Gupta, Smridh},
  title  = {Cybersecurity Wiki Slices},
  year   = {2025},
  url    = {https://huggingface.co/datasets/tandevllc/cybersecurity-wiki-slices}
}
```
Maintainer
Smridh Gupta — [email protected]