🏟️ Long Code Arena (Commit message generation)

This is the benchmark for the Commit message generation task as part of the 🏟️ Long Code Arena benchmark.

The dataset is a manually curated subset of the Python test set from the 🤗 CommitChronicle dataset, tailored for larger commits.

All the repositories are published under permissive licenses (MIT, Apache-2.0, and BSD-3-Clause). The datapoints can be removed upon request.

How-to

from datasets import load_dataset

dataset = load_dataset("JetBrains-Research/lca-cmg", split="test")

Note that all the data we have is considered to be in the test split.

Note. Working with the Git repositories under the repos directory is not supported via 🤗 Datasets. See the Git Repositories section for more details.

About

Overview

In total, there are 163 commits from 34 repositories. For length statistics, refer to the notebook in our repository.

Dataset Structure

The dataset contains two kinds of data: data about each commit (under the commitchronicle-py-long folder) and compressed Git repositories (under the repos folder).

Commits

Each example has the following fields:

  • repo: commit repository
  • hash: commit hash
  • date: commit date
  • license: commit repository's license
  • message: commit message
  • mods: list of file modifications from the commit

Each file modification has the following fields:

  • change_type: type of change to the current file; one of ADD, COPY, RENAME, DELETE, MODIFY, or UNKNOWN
  • old_path: path to the file before the change (might be empty)
  • new_path: path to the file after the change (might be empty)
  • diff: git diff for the current file

Data point example:

{'hash': 'b76ed0db81b3123ede5dc5e5f1bddf36336f3722',
 'repo': 'apache/libcloud',
 'date': '05.03.2022 17:52:34',
 'license': 'Apache License 2.0',
 'message': 'Add tests which verify that all OpenStack driver can be instantiated\nwith all the supported auth versions.\nNOTE: Those tests will fail right now due to the regressions being\nintroduced recently which breaks auth for some versions.',
 'mods': [{'change_type': 'MODIFY',
           'new_path': 'libcloud/test/compute/test_openstack.py',
           'old_path': 'libcloud/test/compute/test_openstack.py',
           'diff': '@@ -39,6 +39,7 @@ from libcloud.utils.py3 import u\n<...>'}]}
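
For feeding a model, it is often convenient to flatten mods into a single diff string. A minimal sketch (the exact formatting here is our choice, not something prescribed by the benchmark):

from datasets import load_dataset

dataset = load_dataset("JetBrains-Research/lca-cmg", split="test")

def mods_to_diff(mods):
    # Concatenate per-file modifications into one unified-diff-like string.
    # old_path/new_path might be empty for added or deleted files.
    return "\n".join(
        f"--- {mod['old_path']}\n+++ {mod['new_path']}\n{mod['diff']}"
        for mod in mods
    )

example = dataset[0]
print(example["repo"], example["hash"])
print(mods_to_diff(example["mods"]))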

Git Repositories

The compressed Git repositories for all the commits in this benchmark are stored under the repos directory.

Working with the Git repositories under the repos directory is not supported directly via 🤗 Datasets. You can use the huggingface_hub package to download the repositories; sample code is provided below:

import os
import tarfile
from huggingface_hub import list_repo_tree, hf_hub_download


data_dir = "..."  # replace with a path to where you want to store repositories locally

for repo_file in list_repo_tree("JetBrains-Research/lca-commit-message-generation", "repos", repo_type="dataset"):
    file_path = hf_hub_download(
        repo_id="JetBrains-Research/lca-commit-message-generation",
        filename=repo_file.path,
        repo_type="dataset",
        local_dir=data_dir,
    )

    with tarfile.open(file_path, "r:gz") as tar:
        tar.extractall(path=os.path.join(data_dir, "extracted_repos"))

For convenience, we also provide a full list of files in paths.json.

After you download and extract the repositories, you can work with each repository either via Git or via Python libraries like GitPython or PyDriller.
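
For example, a minimal PyDriller sketch for inspecting one of the benchmark commits in an extracted repository (assuming PyDriller 2.x; the local path and hash are placeholders):

from pydriller import Repository

repo_path = "..."  # replace with a path to an extracted repository
commit_hash = "..."  # replace with a hash from the dataset

# Traverse the single commit with the given hash and inspect its modifications.
for commit in Repository(repo_path, single=commit_hash).traverse_commits():
    print(commit.msg)
    for mod in commit.modified_files:
        print(mod.change_type.name, mod.old_path, "->", mod.new_path)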

Extra: longer context

Full Files

To facilitate further research, we additionally provide the full contents of the modified files before and after each commit in the full_files dataset config. The full split provides the whole files, while the remaining splits truncate each file given the maximum allowed number of tokens n. The files are truncated uniformly, essentially limiting each file to max_num_tokens // num_files tokens; for example, under a budget of n tokens, a commit touching four files keeps at most n // 4 tokens per file. We use the DeepSeek-V3 tokenizer to count tokens.

from datasets import load_dataset

dataset = load_dataset("JetBrains-Research/lca-commit-message-generation", 
                       "full_files",
                       split="16k"  # should be one of: '4k', '8k', '16k', '32k', '64k', 'full'
                       )

Each example has the following fields:

  • repo: commit repository
  • hash: commit hash
  • mods: commit modification (combined into a single diff)
  • files: a list of dictionaries, where each corresponds to a specific file changed in the commit and has the following keys:
    • old_path: file path before the commit
    • old_contents: file contents before the commit
    • new_path: file path after the commit
    • new_contents: file contents after the commit
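
A minimal sketch of reading the before/after contents, assuming the field names listed above:

from datasets import load_dataset

dataset = load_dataset("JetBrains-Research/lca-commit-message-generation",
                       "full_files",
                       split="16k")

example = dataset[0]
for file in example["files"]:
    # old_contents/new_contents hold the (possibly truncated) file texts.
    print(file["old_path"], "->", file["new_path"])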

Retrieval

To facilitate further research, we additionally provide context for each commit, as retrieved by a BM25 retriever, in the retrieval_bm25 dataset config. For each commit, we run BM25 over all .py files in the corresponding repository at its state before the commit (excluding the files changed in that commit). We retrieve up to 50 files most relevant to the commit diff, and then, given the maximum allowed number of tokens n, add files until the total context length (including the diff), counted with the DeepSeek-V3 tokenizer, exceeds n, possibly truncating the last included file.

To access these, run the following:

from datasets import load_dataset

dataset = load_dataset("JetBrains-Research/lca-commit-message-generation", 
                       "retrieval_bm25",
                       split="16k"  # should be one of: '4k', '8k', '16k', '32k', '64k'
                       )

Each example has the following fields:

  • repo: commit repository
  • hash: commit hash
  • mods: commit modification (combined into a single diff)
  • context: context retrieved for the current commit; a list of dictionaries, where each corresponds to a specific file and has the following keys:
    • source: file path
    • content: file content

🏷️ Extra: commit labels

To facilitate further research, we additionally provide the manual labels for all the 858 commits that passed the initial filtering. The final version of the dataset described above consists of the commits labeled either 4 or 5.

How-to

from datasets import load_dataset

dataset = load_dataset("JetBrains-Research/lca-commit-message-generation", "labels", split="test")

Note that all the data we have is considered to be in the test split.

About

Dataset Structure

Each example has the following fields:

  • repo: commit repository
  • hash: commit hash
  • date: commit date
  • license: commit repository's license
  • message: commit message
  • label: label of the current commit as a target for the CMG task
  • comment: comment for the label of the current commit (optional, might be empty)

Labels are on a 1–5 scale, where:

  • 1 – strong no
  • 2 – weak no
  • 3 – unsure
  • 4 – weak yes
  • 5 – strong yes

Data point example:

{'hash': '1559a4c686ddc2947fc3606e1c4279062cc9480f',
 'repo': 'appscale/gts',
 'date': '15.07.2018 21:00:39',
 'license': 'Apache License 2.0',
 'message': 'Add auto_id_policy and logs_path flags\n\nThese changes were introduced in the 1.7.5 SDK.',
 'label': 1,
 'comment': 'no way to know the version'}
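
Since the final benchmark keeps only commits labeled 4 or 5, that selection can be reproduced from this config; a minimal sketch:

from datasets import load_dataset

labels = load_dataset("JetBrains-Research/lca-commit-message-generation", "labels", split="test")

# Keep only the commits rated "weak yes" (4) or "strong yes" (5).
selected = labels.filter(lambda example: example["label"] >= 4)
print(len(selected))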

Citing

@article{bogomolov2024long,
  title={Long Code Arena: a Set of Benchmarks for Long-Context Code Models},
  author={Bogomolov, Egor and Eliseeva, Aleksandra and Galimzyanov, Timur and Glukhov, Evgeniy and Shapkin, Anton and Tigina, Maria and Golubev, Yaroslav and Kovrigin, Alexander and van Deursen, Arie and Izadi, Maliheh and Bryksin, Timofey},
  journal={arXiv preprint arXiv:2406.11612},
  year={2024}
}

You can find the paper at https://arxiv.org/abs/2406.11612.
