Dataset Preview
The full dataset viewer is not available; only a preview of the rows is shown below.
The dataset generation failed because of a cast error
Error code:   DatasetGenerationCastError
Exception:    DatasetGenerationCastError
Message:      An error occurred while generating the dataset

All the data files must have the same columns, but at some point there are 1 new columns ({'hint'})

This happened while the json dataset builder was generating data using

hf://datasets/DEVAI-benchmark/DEVAI/instances/03_Text_Classification_NaiveBayes_20Newsgroups_ML.json (at revision a35b69ea9d737ec5bbaa0081fd78d9232392d34f)

Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations)
Traceback:    Traceback (most recent call last):
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1869, in _prepare_split_single
                  writer.write_table(table)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 580, in write_table
                  pa_table = table_cast(pa_table, self._schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2292, in table_cast
                  return cast_table_to_schema(table, schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2240, in cast_table_to_schema
                  raise CastError(
              datasets.table.CastError: Couldn't cast
              name: string
              query: string
              tags: list<item: string>
                child 0, item: string
              requirements: list<item: struct<requirement_id: int64, prerequisites: list<item: int64>, criteria: string, category: string, satisfied: null>>
                child 0, item: struct<requirement_id: int64, prerequisites: list<item: int64>, criteria: string, category: string, satisfied: null>
                    child 0, requirement_id: int64
                    child 1, prerequisites: list<item: int64>
                        child 0, item: int64
                    child 2, criteria: string
                    child 3, category: string
                    child 4, satisfied: null
              preferences: list<item: struct<preference_id: int64, criteria: string, satisfied: null>>
                child 0, item: struct<preference_id: int64, criteria: string, satisfied: null>
                    child 0, preference_id: int64
                    child 1, criteria: string
                    child 2, satisfied: null
              is_kaggle_api_needed: bool
              is_training_needed: bool
              is_web_navigation_needed: bool
              hint: string
              to
              {'name': Value(dtype='string', id=None), 'query': Value(dtype='string', id=None), 'tags': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'requirements': [{'requirement_id': Value(dtype='int64', id=None), 'prerequisites': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None), 'criteria': Value(dtype='string', id=None), 'category': Value(dtype='string', id=None), 'satisfied': Value(dtype='null', id=None)}], 'preferences': [{'preference_id': Value(dtype='int64', id=None), 'criteria': Value(dtype='string', id=None), 'satisfied': Value(dtype='null', id=None)}], 'is_kaggle_api_needed': Value(dtype='bool', id=None), 'is_training_needed': Value(dtype='bool', id=None), 'is_web_navigation_needed': Value(dtype='bool', id=None)}
              because column names don't match
              
              During handling of the above exception, another exception occurred:
              
              Traceback (most recent call last):
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1392, in compute_config_parquet_and_info_response
                  parquet_operations = convert_to_parquet(builder)
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1041, in convert_to_parquet
                  builder.download_and_prepare(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 924, in download_and_prepare
                  self._download_and_prepare(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 999, in _download_and_prepare
                  self._prepare_split(split_generator, **prepare_split_kwargs)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1740, in _prepare_split
                  for job_id, done, content in self._prepare_split_single(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1871, in _prepare_split_single
                  raise DatasetGenerationCastError.from_cast_error(
              datasets.exceptions.DatasetGenerationCastError: An error occurred while generating the dataset
              
              All the data files must have the same columns, but at some point there are 1 new columns ({'hint'})
              
              This happened while the json dataset builder was generating data using
              
              hf://datasets/DEVAI-benchmark/DEVAI/instances/03_Text_Classification_NaiveBayes_20Newsgroups_ML.json (at revision a35b69ea9d737ec5bbaa0081fd78d9232392d34f)
              
              Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations)
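The first fix the message suggests (making all data files share the same columns) can be sketched in Python. This is a hypothetical helper, not part of the dataset repo: `normalize_instances` and the stand-in files below are assumptions for illustration. It back-fills any missing optional column (here `hint`) as JSON null in every instance file:

```python
import json
import tempfile
from pathlib import Path

def normalize_instances(instance_dir, optional_keys=("hint",)):
    """Add any missing optional keys (as JSON null) to each instance file."""
    for path in sorted(Path(instance_dir).glob("*.json")):
        record = json.loads(path.read_text())
        for key in optional_keys:
            record.setdefault(key, None)  # None serializes as JSON null
        path.write_text(json.dumps(record, indent=2))

# Demo on throwaway stand-in files (not the real DEVAI instances).
tmp = Path(tempfile.mkdtemp())
(tmp / "a.json").write_text(json.dumps({"name": "x", "query": "q"}))
(tmp / "b.json").write_text(json.dumps({"name": "y", "query": "q", "hint": "h"}))
normalize_instances(tmp)
key_sets = {frozenset(json.loads(p.read_text())) for p in tmp.glob("*.json")}
print(len(key_sets))  # 1 -> every file now exposes the same columns
```

Running such a script over a local checkout of `instances/` before pushing would let the builder cast every file to one schema; alternatively, the manual-configuration docs linked above describe splitting mismatched files into separate configs.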


Columns (name: type):
name: string
query: string
tags: sequence
requirements: list
preferences: list
is_kaggle_api_needed: bool
is_training_needed: bool
is_web_navigation_needed: bool
hint: string

name: 01_Image_Classification_ResNet18_Fashion_MNIST_DL
query: Hey! Could you help me set up a system to classify images from the Fashion-MNIST dataset using the ResNet-18 model in PyTorch? The Fashion-MNIST dataset should be loaded in `src/data_loader.py`. I'd like the system to show the training progress with the tqdm library in the training loop in `src/train.py` and to perform...
tags: [ "Classification", "Computer Vision", "Supervised Learning" ]
requirements: [ { "requirement_id": 0, "prerequisites": [], "criteria": "The \"Fashion-MNIST\" dataset is loaded in `src/data_loader.py`.", "category": "Dataset or Environment", "satisfied": null }, { "requirement_id": 1, "prerequisites": [ 0 ], "criteria": "Data augmentation is perfor...
preferences: [ { "preference_id": 0, "criteria": "Code should be written in a clear, understandable and maintainable style with appropriate comments.", "satisfied": null }, { "preference_id": 1, "criteria": "The training process should be efficient.", "satisfied": null }, { "preference_id": 2, ...
is_kaggle_api_needed: false
is_training_needed: true
is_web_navigation_needed: false
hint: null

name: 02_Maze_Solver_Q_Learning_Gridworld_RL
query: Can you help me create a system to solve maze-style Gridworld tasks using the Q-learning algorithm? The system should use numpy to make the core calculations more efficient and matplotlib for visualizations. The Q-learning algorithm should be implemented in `src/train.py`, and the aptly-named Gridworld environment shou...
tags: [ "Reinforcement Learning" ]
requirements: [ { "requirement_id": 0, "prerequisites": [], "criteria": "The \"Q-learning\" algorithm is used in `src/train.py`.", "category": "Machine Learning Method", "satisfied": null }, { "requirement_id": 1, "prerequisites": [], "criteria": "The \"Gridworld\" environment is defined in `src...
preferences: [ { "preference_id": 0, "criteria": "Some real-time progress or feedback during the training process should be displayed.", "satisfied": null }, { "preference_id": 1, "criteria": "The code should be written in a way that's easy to modify or extend later on.", "satisfied": null } ]
is_kaggle_api_needed: false
is_training_needed: true
is_web_navigation_needed: false
hint: null

name: 03_Text_Classification_NaiveBayes_20Newsgroups_ML
query: Please implement a Naive Bayes classifier for the 20 Newsgroups dataset and save it in a file called `src/model.py`. The dataset should loaded in `src/data_loader.py`. The program should handle data preprocessing, including removing stop words, punctuation, and special characters. Show the improvement of your classifie...
tags: [ "Classification", "Natural Language Processing", "Supervised Learning" ]
requirements: [ { "requirement_id": 0, "prerequisites": [], "criteria": "The \"20 Newsgroups\" dataset is used in `src/data_loader.py`.", "category": "Dataset or Environment", "satisfied": null }, { "requirement_id": 1, "prerequisites": [ 0 ], "criteria": "Data preprocessing is perform...
preferences: [ { "preference_id": 0, "criteria": "The model should be straightforward to interpret.", "satisfied": null }, { "preference_id": 1, "criteria": "The final report should be structured clearly for easy review.", "satisfied": null } ]
is_kaggle_api_needed: false
is_training_needed: true
is_web_navigation_needed: false
hint: In the query, there is a missing word "be" after the word "should" in "The dataset should loaded in `src/data_loader.py`".

name: 04_Text_Generation_GPT2_Prompts_DL
query: Please build a text generation system by automatically downloading a pre-trained GPT-2 model which you then cache in `models/saved_models/`. The system should receive prompts through loading the current contents of a text file named `data/prompt.txt` which, for demonstration purposes, should contain only the text "who ...
tags: [ "Generative Models", "Natural Language Processing" ]
requirements: [ { "requirement_id": 0, "prerequisites": [], "criteria": "A pre-trained \"GPT-2\" model is downloaded and cached in `models/saved_models/`.", "category": "Machine Learning Method", "satisfied": null }, { "requirement_id": 1, "prerequisites": [], "criteria": "Prompts are read from ...
preferences: [ { "preference_id": 0, "criteria": "The system should handle different input prompts efficiently and correct for minor typos.", "satisfied": null }, { "preference_id": 1, "criteria": "The system should operate efficiently, with minimal latency during text generation.", "satisfied": null ...
is_kaggle_api_needed: false
is_training_needed: false
is_web_navigation_needed: false
hint: There is only one prompt to read. However, requirment 1 says "Prompts".

name: 05_Game_Simulation_DQN_CartPole_v1_RL
query: Create a reliable system to train a DQN agent using PyTorch in an OpenAI Gym environment such as CartPole-v1. Implement DQN using PyTorch in `src/model.py`. The environment should be instantiated in the main file, `src/main.py`. Handle any dimension mismatches between DQN and the environment (like would happen if the G...
tags: [ "Reinforcement Learning" ]
requirements: [ { "requirement_id": 0, "prerequisites": [], "criteria": "The \"DQN\" algorithm is implemented using PyTorch and saved in `src/model.py`.", "category": "Machine Learning Method", "satisfied": null }, { "requirement_id": 1, "prerequisites": [], "criteria": "An \"OpenAI Gym\" enviro...
preferences: [ { "preference_id": 0, "criteria": "The system should handle dimension mismatches, logging the issues for easy debugging.", "satisfied": null }, { "preference_id": 1, "criteria": "The return over episode curve has key milestones annotated.", "satisfied": null } ]
is_kaggle_api_needed: false
is_training_needed: true
is_web_navigation_needed: false
hint: null

name: 06_Sentiment_Analysis_SVM_Sentiment140_ML
query: Please help me build a system for sentiment analysis on tweets using the Sentiment140 dataset available from Hugging Face. Load the Sentiment140 dataset and, when loading the data, clean it by removing stop words, punctuation, and special characters, all in `src/data_loader.py`. Use Word2Vec or GloVe for text vectoriza...
tags: [ "Natural Language Processing", "Supervised Learning" ]
requirements: [ { "requirement_id": 0, "prerequisites": [], "criteria": "The \"Sentiment140\" dataset, available from \"Hugging Face,\" is obtained in `src/data_loader.py`.", "category": "Dataset or Environment", "satisfied": null }, { "requirement_id": 1, "prerequisites": [ 0 ], "crit...
preferences: [ { "preference_id": 0, "criteria": "The dataset download process should be reliable, with clear error handling.", "satisfied": null }, { "preference_id": 1, "criteria": "The final accuracy report should be straightforward and easy to interpret.", "satisfied": null } ]
is_kaggle_api_needed: false
is_training_needed: true
is_web_navigation_needed: false
hint: null

name: 07_Image_Super_Resolution_SRCNN_Set5_DL
query: Hi, I need to create a project for image super-resolution using the SRCNN model with the Set5 dataset (available from `https://huggingface.co/datasets/eugenesiow/Set5`). Load the dataset in `src/data_loader.py`. When loading the data, include image preprocessing steps such as resizing and normalization, all in `src/dat...
tags: [ "Computer Vision", "Generative Models" ]
requirements: [ { "requirement_id": 0, "prerequisites": [], "criteria": "The \"Set5\" dataset (available from \"Hugging Face\") is loaded in `src/data_loader.py`.", "category": "Dataset or Environment", "satisfied": null }, { "requirement_id": 1, "prerequisites": [ 0 ], "criteria": "Im...
preferences: [ { "preference_id": 0, "criteria": "The project should generate high-quality, clear super-resolution images with detailed comparisons.", "satisfied": null }, { "preference_id": 1, "criteria": "Well-organized output images, highlighting key improvements, should be included.", "satisfied": ...
is_kaggle_api_needed: false
is_training_needed: true
is_web_navigation_needed: true
hint: null

name: 08_Robot_Control_PPO_PyBullet_RL
query: I am seeking to implement a project which explores robotic arm control via reinforcement learning in the PyBullet simulation environment with the PPO algorithm. The PyBullet simulator should be imported and a related robotics environment should be loaded in `src/env.py`. The PPO algorithm should be implemented in `src/...
tags: [ "Reinforcement Learning" ]
requirements: [ { "requirement_id": 0, "prerequisites": [], "criteria": "The \"PyBullet\" simulator is used in `src/env.py`.", "category": "Dataset or Environment", "satisfied": null }, { "requirement_id": 1, "prerequisites": [], "criteria": "The \"PPO\" algorithm is used in `src/train.py`.", ...
preferences: [ { "preference_id": 0, "criteria": "The system should effectively handle potential issues with loading URDF files in PyBullet, providing clear error messages or logging for debugging.", "satisfied": null } ]
is_kaggle_api_needed: false
is_training_needed: true
is_web_navigation_needed: false
hint: null

name: 09_Recommendation_System_NCF_MovieLens_ML
query: Help me develop a system to recommend movies based on user ratings from the MovieLens dataset using a Neural Collaborative Filtering (NCF) approach. First, load the dataset and split it into training and testing sets in `src/data_loader.py`. Next, implement the NCF approach and a matrix factorization baseline in `src/m...
tags: [ "Recommender Systems", "Supervised Learning" ]
requirements: [ { "requirement_id": 0, "prerequisites": [], "criteria": "The \"Neural Collaborative Filtering (NCF)\" algorithm is implemented in `src/model.py`.", "category": "Machine Learning Method", "satisfied": null }, { "requirement_id": 1, "prerequisites": [], "criteria": "The \"MovieLens...
preferences: [ { "preference_id": 0, "criteria": "Robust path handling is implemented to deal with things like missing directories.", "satisfied": null }, { "preference_id": 1, "criteria": "The top 10 recommendations should be clear and relevant to the sample user's preferences.", "satisfied": null }...
is_kaggle_api_needed: false
is_training_needed: true
is_web_navigation_needed: false
hint: null

name: 10_Face_Recognition_FaceNet_LFW_DL
query: Help me create a PyTorch face recognition project using the FaceNet model with the LFW dataset. Load the dataset in `src/model.py`. Get the model from Hugging Face (you can find it at https://huggingface.co/py-feat/facenet) and save it in `models/saved_models/`. Ensure the data is preprocessed to ensure the standardiza...
tags: [ "Computer Vision", "Supervised Learning" ]
requirements: [ { "requirement_id": 0, "prerequisites": [], "criteria": "The \"LFW\" (Labeled Faces in the Wild) dataset is loaded in `src/data_loader.py`.", "category": "Dataset or Environment", "satisfied": null }, { "requirement_id": 1, "prerequisites": [ 0 ], "criteria": "Data alig...
preferences: [ { "preference_id": 0, "criteria": "The model loading process should be smooth, with clear handling of any issues if the model fails to load.", "satisfied": null }, { "preference_id": 1, "criteria": "Embedding visualizations should be clear and effectively highlight distinct facial features."...
is_kaggle_api_needed: false
is_training_needed: false
is_web_navigation_needed: true
hint: The page https://huggingface.co/py-feat/facenet provides guidance on how to use FaceNet; however, Hugging Face does not currently offer a model entry for direct use.

name: 11_House_Price_Prediction_LinearRegression_BostonHousing_ML
query: Hi. Set up a house price prediction project using a Linear Regression model on the Boston Housing dataset. Load the dataset using `from datasets import load_dataset` and `ds = load_dataset("~/mrseba/boston_house_price")` in `src/data_loader.py`. Ensure feature scaling and data standardization are performed in `src/data...
tags: [ "Financial Analysis", "Regression", "Supervised Learning" ]
requirements: [ { "requirement_id": 0, "prerequisites": [], "criteria": "The \"Boston Housing\" dataset is utilized using `from datasets import load_dataset` and `ds = load_dataset(\"mrseba/boston_house_price\")` in `src/data_loader.py`.", "category": "Dataset or Environment", "satisfied": null }, { "re...
preferences: [ { "preference_id": 0, "criteria": "The dataset should load smoothly using the provided `load_dataset` code, and other methods should be tried if issues arise.", "satisfied": null }, { "preference_id": 1, "criteria": "The visualizations should clearly demonstrate the model's accuracy by highl...
is_kaggle_api_needed: false
is_training_needed: true
is_web_navigation_needed: false
hint: `ds = load_dataset("~/mrseba/boston_house_price")` in the query is wrong, and it should be `ds = load_dataset("mrseba/boston_house_price")`. We leave it here to check the self-debugging skill of the agents.

name: 12_Spam_Detection_SVM_Enron_ML
query: Hello. I need you to build a project to detect spam emails using the Support Vector Machine (SVM) classifier on the Enron-Spam dataset. The project should preprocess the text by removing stop words and punctuation, employ TF-IDF features, perform hyperparameter tuning using GridSearchCV, and save the confusion matrix t...
tags: [ "Classification", "Natural Language Processing", "Supervised Learning" ]
requirements: [ { "requirement_id": 0, "prerequisites": [], "criteria": "The \"Enron-Spam\" dataset is loaded in `src/data_loader.py`.", "category": "Dataset or Environment", "satisfied": null }, { "requirement_id": 1, "prerequisites": [ 0 ], "criteria": "Text preprocessing is performe...
preferences: [ { "preference_id": 0, "criteria": "The text preprocessing step should be optimized to handle a large number of emails efficiently.", "satisfied": null }, { "preference_id": 1, "criteria": "The classification report should be comprehensive.", "satisfied": null } ]
is_kaggle_api_needed: false
is_training_needed: true
is_web_navigation_needed: false
hint: null

name: 13_Style_Transfer_Perceptual_Loss_CustomImages_DL
query: Please create a PyTorch Perceptual Loss project for image style transfer (refer to this paper: https://arxiv.org/pdf/1603.08155). You can build the Perceptual Loss Network using VGG16 in `src/model.py`. The project should combine content and style images, allow smooth adjustment of style intensity by tuning the weights...
tags: [ "Computer Vision", "Generative Models" ]
requirements: [ { "requirement_id": 0, "prerequisites": [], "criteria": "A famous content image is inputted for testing, downloaded from [this link](https://upload.wikimedia.org/wikipedia/commons/thumb/e/ec/Mona_Lisa%2C_by_Leonardo_da_Vinci%2C_from_C2RMF_retouched.jpg/768px-Mona_Lisa%2C_by_Leonardo_da_Vinci%2C_from_C2R...
preferences: [ { "preference_id": 0, "criteria": "The style transfer process should allow for smooth adjustment of style intensity, making the stylized image visually appealing.", "satisfied": null }, { "preference_id": 1, "criteria": "The project should handle high-resolution images efficiently without ex...
is_kaggle_api_needed: false
is_training_needed: false
is_web_navigation_needed: false
hint: VGG16 was not originally designed for style transfer. However, the user's query states, 'Please create a PyTorch project for image style transfer using a pre-trained VGG16 model.' Ideally, a well-informed agent should create or find a model for style transfer networks that incorporate pre-trained VGG16, rather than sim...

name: 14_Customer_Churn_Prediction_LogisticRegression_Telco_ML
query: Help me develop a system to predict customer churn using the Telco Customer Churn dataset, potentially being downloaded from [this link](https://huggingface.co/datasets/scikit-learn/churn-prediction). Load the dataset in `src/data_loader.py`. The project should include feature engineering, such as feature selection and...
tags: [ "Classification", "Supervised Learning" ]
requirements: [ { "requirement_id": 0, "prerequisites": [], "criteria": "The \"Telco Customer Churn\" dataset is used, potentially being downloaded from [this link](https://huggingface.co/datasets/scikit-learn/churn-prediction). Load the dataset in `src/data_loader.py`.", "category": "Dataset or Environment", "...
preferences: [ { "preference_id": 0, "criteria": "The dataset should load smoothly, with proper error handling if issues arise during download.", "satisfied": null }, { "preference_id": 1, "criteria": "The feature engineering process should be thorough, ensuring that the most relevant features are selected...
is_kaggle_api_needed: false
is_training_needed: true
is_web_navigation_needed: true
hint: null

name: 15_Image_Captioning_ShowAndTell_Flickr8k_DL
query: This is my current PyTorch project: Develop an automatic image captioning system using the Show and Tell model. Here I found a repo can guide you: https://github.com/sgrvinod/a-PyTorch-Tutorial-to-Image-Captioning. Use the dataset Flickr8k dataset, downloading it from [this link](https://huggingface.co/datasets/jxie/fl...
tags: [ "Computer Vision", "Natural Language Processing" ]
requirements: [ { "requirement_id": 0, "prerequisites": [], "criteria": "The pre-trained \"Show and Tell\" model is used.", "category": "Machine Learning Method", "satisfied": null }, { "requirement_id": 1, "prerequisites": [], "criteria": "The \"Flickr8k\" dataset, potentially downloaded from [...
preferences: [ { "preference_id": 0, "criteria": "The dataset should load smoothly, with clear error handling if any issues arise during download.", "satisfied": null }, { "preference_id": 1, "criteria": "The attention mechanism should clearly highlight the image regions that contribute most to the generat...
is_kaggle_api_needed: false
is_training_needed: true
is_web_navigation_needed: true
hint: null

name: 16_Credit_Scoring_DecisionTree_GermanCredit_ML
query: Help me develop a system to predict credit scores using the German Credit dataset, which can be downloaded from [this link](https://archive.ics.uci.edu/dataset/144/statlog+german+credit+data). Load the dataset and preprocess it, including handling missing values and feature encoding, in `src/data_loader.py`. Use a Deci...
tags: [ "Classification", "Supervised Learning" ]
requirements: [ { "requirement_id": 0, "prerequisites": [], "criteria": "Load the \"German Credit\" dataset, potentially downloading it from [this link](https://archive.ics.uci.edu/dataset/144/statlog+german+credit+data) in the `src/data_loader.py` file.", "category": "Dataset or Environment", "satisfied": null...
preferences: [ { "preference_id": 0, "criteria": "The dataset should load smoothly, with clear error handling if any issues arise during download.", "satisfied": null }, { "preference_id": 1, "criteria": "The Markdown report should be well-organized, making it easy to review all the results and visualizati...
is_kaggle_api_needed: false
is_training_needed: true
is_web_navigation_needed: true
hint: null

name: 17_Heart_Disease_Prediction_XGBoost_UCI_ML
query: Create a project to predict heart disease using an XGBoost model with the UCI Heart Disease dataset, which can be downloaded from [this link](https://archive.ics.uci.edu/dataset/45/heart+disease). Load the dataset in `src/data_loader.py`. Implement feature selection and data standardization in `src/data_loader.py`. Use...
tags: [ "Classification", "Medical Analysis", "Supervised Learning" ]
requirements: [ { "requirement_id": 0, "prerequisites": [], "criteria": "The \"UCI Heart Disease\" dataset is used, potentially being downloaded from [this link](https://archive.ics.uci.edu/dataset/45/heart+disease). Load the dataset in `src/data_loader.py`.", "category": "Dataset or Environment", "satisfied": ...
preferences: [ { "preference_id": 0, "criteria": "The SHAP visualizations should be clear and highlight the most impactful features, making the results easy to interpret.", "satisfied": null } ]
is_kaggle_api_needed: false
is_training_needed: true
is_web_navigation_needed: true
hint: null

name: 18_Image_Enhancement_SRGAN_DIV2K_DL
query: I need to create a system for image enhancement using an SRGAN model (you can obtain a pre-trained SRGAN [here](https://github.com/tensorlayer/srgan)) with the DIV2K dataset, which can be downloaded from [this link](https://data.vision.ee.ethz.ch/cvl/DIV2K/). The dataset should be loaded in the `src/data_loader.py` fil...
tags: [ "Computer Vision", "Generative Models" ]
requirements: [ { "requirement_id": 0, "prerequisites": [], "criteria": "The \"DIV2K\" dataset is loaded in the `src/data_loader.py` file.", "category": "Dataset or Environment", "satisfied": null }, { "requirement_id": 1, "prerequisites": [], "criteria": "A pre-trained \"SRGAN\" model is saved ...
preferences: [ { "preference_id": 0, "criteria": "A diverse set of samples should be selected to showcase the model's performance across different types of images.", "satisfied": null }, { "preference_id": 1, "criteria": "The Markdown report should include a detailed comparison of the model's performance o...
is_kaggle_api_needed: false
is_training_needed: false
is_web_navigation_needed: true
hint: null

name: 19_Time_Series_Forecasting_Seq2Seq_LSTM_Rossmann_ML
query: Develop a sales forecasting system using a sequence-to-sequence model based on LSTM with the Rossmann Store Sales dataset, downloading it from Kaggle [here](https://www.kaggle.com/c/rossmann-store-sales/data) and loading it in `src/data_loader.py`. Split the data into training and testing sets and save them in `src/dat...
tags: [ "Supervised Learning", "Time Series Forecasting" ]
requirements: [ { "requirement_id": 0, "prerequisites": [], "criteria": "The \"Rossmann Store Sales\" dataset is used, potentially downloaded from Kaggle [this link](https://www.kaggle.com/c/rossmann-store-sales/data) and loaded in `src/data_loader.py`.", "category": "Dataset or Environment", "satisfied": null ...
preferences: [ { "preference_id": 0, "criteria": "The model should be tuned to capture seasonal trends in the sales data for more accurate forecasting.", "satisfied": null } ]
is_kaggle_api_needed: true
is_training_needed: true
is_web_navigation_needed: true
hint: null

name: 20_Car_Price_Prediction_RandomForest_CarPrices_ML
query: Can you help me create a car price prediction project using a Random Forest model with the Kaggle Car Prices dataset? Load the dataset and perform feature selection to identify important features in `src/data_loader.py`. Use cross-validation to evaluate the model in `src/train.py`. Save the R-squared score, Mean Square...
tags: [ "Financial Analysis", "Regression", "Supervised Learning" ]
requirements: [ { "requirement_id": 0, "prerequisites": [], "criteria": "The \"Kaggle Car Prices\" dataset is loaded in `src/data_loader.py`.", "category": "Dataset or Environment", "satisfied": null }, { "requirement_id": 1, "prerequisites": [ 0 ], "criteria": "Feature selection is im...
preferences: [ { "preference_id": 0, "criteria": "The feature selection process should be thorough, ensuring that only the most relevant features are used in the model.", "satisfied": null }, { "preference_id": 1, "criteria": "The Markdown report should provide clear insights into how the selected features...
is_kaggle_api_needed: true
is_training_needed: true
is_web_navigation_needed: false
hint: null

name: 21_Iris_Classification_SVM_Iris_ML
query: I request a project to classify iris species utilizing the Iris dataset with a Support Vector Machine (SVM) classifier implemented in `src/model.py`. The project should standardize the data in and perform feature selection in `src/data_loader.py`. It will document the classification accuracy and save it as `results/met...
tags: [ "Classification", "Supervised Learning" ]
requirements: [ { "requirement_id": 0, "prerequisites": [], "criteria": "The \"Iris\" dataset is used.", "category": "Dataset or Environment", "satisfied": null }, { "requirement_id": 1, "prerequisites": [ 0 ], "criteria": "Data is standardized to ensure feature values are within the s...
preferences: [ { "preference_id": 0, "criteria": "The Streamlit web page should be user-friendly, allowing users to easily explore different aspects of the model's performance.", "satisfied": null }, { "preference_id": 1, "criteria": "A brief model explanation should be included on the web page, helping us...
is_kaggle_api_needed: false
is_training_needed: true
is_web_navigation_needed: false
hint: null

name: 22_Sentiment_Analysis_LSTM_IMDb_DL
query: Could you help me set up a sentiment analysis project using an LSTM model and the IMDb dataset? Please implement data cleaning in `src/data_loader.py`, including the removal of stop words and punctuation. Use word embeddings to convert the text to a numerical format and save these embeddings under `models/saved_models/...
tags: [ "Natural Language Processing", "Supervised Learning" ]
requirements: [ { "requirement_id": 0, "prerequisites": [], "criteria": "The \"IMDb\" movie reviews dataset is used.", "category": "Dataset or Environment", "satisfied": null }, { "requirement_id": 1, "prerequisites": [ 0 ], "criteria": "Data cleaning is implemented in `src/data_loade...
preferences: [ { "preference_id": 0, "criteria": "The word embeddings should be pre-trained (e.g., Word2Vec or GloVe) to leverage existing semantic knowledge.", "satisfied": null }, { "preference_id": 1, "criteria": "The Jupyter Notebook should be well-documented, making it easy for others to understand th...
is_kaggle_api_needed: false
is_training_needed: true
is_web_navigation_needed: false
hint: null

name: 23_Wine_Quality_Prediction_DecisionTree_WineQuality_ML
query: Build a wine quality prediction system using a Decision Tree model with the Wine Quality dataset from UCI. Preprocess the data in `src/data_loader.py`, including handling missing values and feature scaling. Use cross-validation to evaluate the model in `src/train.py`. Implement the Decision Tree regression model in `sr...
tags: [ "Classification", "Supervised Learning" ]
requirements: [ { "requirement_id": 0, "prerequisites": [], "criteria": "The \"Wine Quality\" dataset from \"UCI\" is used.", "category": "Dataset or Environment", "satisfied": null }, { "requirement_id": 1, "prerequisites": [ 0 ], "criteria": "Data preprocessing is performed in `src/d...
preferences: [ { "preference_id": 0, "criteria": "The feature importance plot should clearly highlight the top influential features.", "satisfied": null }, { "preference_id": 1, "criteria": "The final PDF report should include a brief discussion on potential improvements of the model.", "satisfied": nu...
is_kaggle_api_needed: false
is_training_needed: true
is_web_navigation_needed: false
hint: null

name: 24_Diabetes_Prediction_LogisticRegression_PimaIndians_ML
query: Set up a diabetes prediction project using a Logistic Regression model and the Pima Indians Diabetes dataset. Perform feature scaling and data standardization in `src/data_loader.py`. Use cross-validation to evaluate the model in `src/train.py`, and save the accuracy score to `results/metrics/accuracy_score.txt`. Gener...
tags: [ "Classification", "Medical Analysis", "Supervised Learning" ]
requirements: [ { "requirement_id": 0, "prerequisites": [], "criteria": "The \"Pima Indians Diabetes\" dataset is used.", "category": "Dataset or Environment", "satisfied": null }, { "requirement_id": 1, "prerequisites": [ 0 ], "criteria": "Feature scaling and data standardization are ...
preferences: [ { "preference_id": 0, "criteria": "The dashboard should allow users to explore different aspects of the model's performance and understand which features contribute most to predictions.", "satisfied": null }, { "preference_id": 1, "criteria": "The dashboard should clearly show how the datase...
is_kaggle_api_needed: false
is_training_needed: true
is_web_navigation_needed: false
hint: null

name: 25_Speech_Emotion_Recognition_CNN_LSTM_RAVDESS_DL
query: I am seeking a speech emotion recognition project using a CNN-LSTM model with the RAVDESS dataset, which should be downloaded from Kaggle or [this Hugging Face link](https://huggingface.co/datasets/xbgoose/ravdess). The project should load the dataset and perform robust audio preprocessing (noise removal and normalizat...
tags: [ "Audio Processing", "Classification" ]
requirements: [ { "requirement_id": 0, "prerequisites": [], "criteria": "The \"RAVDESS\" dataset is loaded in `src/data_loader.py`, which is downloaded from Kaggle or [this Hugging Face link](https://huggingface.co/datasets/xbgoose/ravdess).", "category": "Dataset or Environment", "satisfied": null }, { ...
preferences: [ { "preference_id": 0, "criteria": "The audio preprocessing step should be robust, effectively reducing noise while preserving the integrity of the speech signals.", "satisfied": null }, { "preference_id": 1, "criteria": "The local API should be user-friendly, with clear instructions for uplo...
is_kaggle_api_needed: true
is_training_needed: true
is_web_navigation_needed: true
hint: null

26_Mushroom_Classification_RandomForest_Mushroom_ML
Develop a mushroom classification system using a Random Forest model on the UCI Mushroom dataset. Load the dataset in the `src/data_loader.py` file. Ensure that feature engineering, including feature encoding and feature selection, and missing data handling are completed in `src/data_loader.py` before training the mode...
[ "Classification", "Supervised Learning" ]
[ { "requirement_id": 0, "prerequisites": [], "criteria": "The \"UCI Mushroom\" dataset is loaded in the `src/data_loader.py` file.", "category": "Dataset or Environment", "satisfied": null }, { "requirement_id": 1, "prerequisites": [ 0 ], "criteria": "Feature engineering...
[ { "preference_id": 0, "criteria": "The feature importance visualization should clearly highlight the most influential features, making it easy to interpret.", "satisfied": null }, { "preference_id": 1, "criteria": "The Streamlit web page should provide an overview of the model's performance ...
false
true
false
null
27_Image_Generation_DCGAN_MNIST_DL
I need to create a system for image generation using a DCGAN model with the MNIST dataset. Load the MNIST dataset in `src/data_loader.py` and implement the DCGAN model in `src/model.py`. The system should ensure the use of the correct DCGAN architecture, save the generated images to `results/figures/`, monitor the mode...
[ "Computer Vision", "Generative Models" ]
[ { "requirement_id": 0, "prerequisites": [], "criteria": "The \"MNIST\" dataset is loaded in `src/data_loader.py`.", "category": "Dataset or Environment", "satisfied": null }, { "requirement_id": 1, "prerequisites": [], "criteria": "The \"DCGAN\" model, not a standard GAN, is impl...
[ { "preference_id": 0, "criteria": "The DCGAN model architecture should be clearly documented in the Notebook to avoid confusion with other GAN variants.", "satisfied": null }, { "preference_id": 1, "criteria": "The PDF report should be well-structured, with clear sections for model architect...
false
true
false
Saving figures is mentioned twice, i.e., once in requirement 2 and once in requirement 3.
28_Stock_Price_Prediction_LSTM_YahooFinance_ML
Could you help me build a stock price prediction system using an LSTM model and the Yahoo Finance dataset? Please clean the data, including handling missing values and outliers, and use a time window to convert the time series data to a supervised learning problem. The LSTM model should be implemented in `src/model.py`...
[ "Financial Analysis", "Supervised Learning", "Time Series Forecasting" ]
[ { "requirement_id": 0, "prerequisites": [], "criteria": "The \"LSTM\" model is implemented in `src/model.py`.", "category": "Machine Learning Method", "satisfied": null }, { "requirement_id": 1, "prerequisites": [], "criteria": "The \"Yahoo Finance\" dataset is loaded in `src/dat...
[]
false
true
false
null
29_Financial_Time_Series_Prediction_LSTM_ML
Could you help me set up a financial time series prediction system using an LSTM model with some real-world financial data, like stock prices or Bitcoin prices? First, we'll need to clean the data, taking care of any missing values and outliers in `src/data_loader.py`. Then, let's convert the time series data into ...
[ "Financial Analysis", "Supervised Learning", "Time Series Forecasting" ]
[ { "requirement_id": 0, "prerequisites": [], "criteria": "Some real-world financial time series data (e.g., \"stock prices\" or \"Bitcoin prices\") is loaded in `src/data_loader.py`.", "category": "Dataset or Environment", "satisfied": null }, { "requirement_id": 1, "prerequisites": [...
[ { "preference_id": 0, "criteria": "The \"Dash\" dashboard should allow users to interact with the prediction results, enabling exploration of different time frames and zooming into specific periods for detailed analysis.", "satisfied": null }, { "preference_id": 1, "criteria": "During develo...
false
true
false
null
30_Image_Segmentation_UNet_PascalVOC_DL
Could you help me set up an image segmentation project using the Pascal VOC dataset and a pre-trained U-Net model implemented in PyTorch? There is no need for additional training. Apply data augmentation (e.g., flipping and rotating images), use the Dice coefficient for evaluation, save the segmented images to `results...
[ "Computer Vision" ]
[ { "requirement_id": 0, "prerequisites": [], "criteria": "The \"Pascal VOC\" dataset is used in `src/data_loader.py`.", "category": "Dataset or Environment", "satisfied": null }, { "requirement_id": 1, "prerequisites": [ 0 ], "criteria": "Data augmentation, including fli...
[ { "preference_id": 0, "criteria": "The Jupyter Notebook should include well-documented code snippets explaining each step of the process.", "satisfied": null }, { "preference_id": 1, "criteria": "The GIF animation should clearly show the changes before and after segmentation over different i...
false
false
false
null
31_Cancer_Prediction_SVM_BreastCancer_ML
Could you help me create a project for breast cancer prediction using an SVM model with the Breast Cancer Wisconsin dataset? Load the dataset and perform feature selection to identify important features in `src/data_loader.py`. Implement the SVM classifier for cancer prediction in `src/model.py`. Use cross-validation t...
[ "Classification", "Medical Analysis", "Supervised Learning" ]
[ { "requirement_id": 0, "prerequisites": [], "criteria": "The \"Breast Cancer Wisconsin\" dataset is used.", "category": "Dataset or Environment", "satisfied": null }, { "requirement_id": 1, "prerequisites": [ 0 ], "criteria": "Feature selection is performed to identify ...
[ { "preference_id": 0, "criteria": "The feature selection process should be well-documented in the report, explaining why certain features were chosen.", "satisfied": null }, { "preference_id": 1, "criteria": "The heatmap should clearly distinguish between different performance metrics, such ...
false
true
false
null
32_Weather_Data_Analysis_LinearRegression_Weather_ML
Develop a weather data analysis system using a Linear Regression model on the Weather dataset from Kaggle. Load the dataset and perform feature engineering, including feature selection and generation, and handle missing data using mean imputation or interpolation in `src/data_loader.py`. Then, apply the Linear Regressio...
[ "Regression", "Supervised Learning" ]
[ { "requirement_id": 0, "prerequisites": [], "criteria": "The \"Kaggle Weather\" dataset is loaded in `src/data_loader.py`.", "category": "Dataset or Environment", "satisfied": null }, { "requirement_id": 1, "prerequisites": [ 0 ], "criteria": "Feature engineering, inclu...
[ { "preference_id": 0, "criteria": "The feature engineering process should be clearly documented in the report, explaining the rationale behind feature selection and generation.", "satisfied": null }, { "preference_id": 1, "criteria": "The report should include a discussion on the correlation...
true
true
false
null
33_Object_Detection_YOLOv3_COCO_DL
Help me develop an object detection system using the YOLOv3 model and the COCO dataset. Download the dataset and preprocess the images by resizing and normalization in `src/data_loader.py`. Implement the YOLOv3 model and use Non-Maximum Suppression (NMS) to refine the results in `src/model.py`. Save the detected object...
[ "Computer Vision" ]
[ { "requirement_id": 0, "prerequisites": [], "criteria": "The \"COCO\" dataset downloading is implemented in `src/data_loader.py`.", "category": "Dataset or Environment", "satisfied": null }, { "requirement_id": 1, "prerequisites": [ 0 ], "criteria": "Data preprocessing,...
[ { "preference_id": 0, "criteria": "The \"Streamlit\" web page should be user-friendly, allowing users to easily upload and view new images for detection.", "satisfied": null }, { "preference_id": 1, "criteria": "The performance evaluation includes mAP and inference time as metrics.", "sat...
false
true
false
null
34_Customer_Segmentation_KMeans_CustomerSegmentation_ML
I need to create a customer segmentation system using the K-means clustering algorithm with the Kaggle Customer Segmentation dataset. Start by standardizing the data in `src/data_loader.py`, then use the elbow method to determine the optimal number of clusters and save the elbow plot to `results/figures/elbow.jpg`. Imp...
[ "Unsupervised Learning" ]
[ { "requirement_id": 0, "prerequisites": [], "criteria": "The \"Kaggle Customer Segmentation\" dataset is used, including data loading and preparation in `src/data_loader.py`.", "category": "Dataset or Environment", "satisfied": null }, { "requirement_id": 1, "prerequisites": [ ...
[ { "preference_id": 0, "criteria": "The elbow plot clearly shows how the optimal number of clusters is determined.", "satisfied": null }, { "preference_id": 1, "criteria": " The system properly manages the launch and termination of the dashboard.", "satisfied": null } ]
true
true
false
null
35_Loan_Default_Prediction_RandomForest_LendingClub_ML
Can you help me build a loan default prediction system using a Random Forest classifier with the Lending Club Loan dataset? Start by loading the dataset, handling imbalanced data using oversampling or undersampling techniques, and performing feature selection to identify important features, all implemented in `src/data...
[ "Classification", "Supervised Learning" ]
[ { "requirement_id": 0, "prerequisites": [], "criteria": "The \"Lending Club Loan\" dataset is loaded in `src/data_loader.py`.", "category": "Dataset or Environment", "satisfied": null }, { "requirement_id": 1, "prerequisites": [ 0 ], "criteria": "Imbalanced data is hand...
[ { "preference_id": 0, "criteria": "The Markdown report is detailed.", "satisfied": null }, { "preference_id": 1, "criteria": "The Markdown report should include insights on model performance and suggestions for potential improvements.", "satisfied": null } ]
false
true
false
null
36_Music_Emotion_Classification_SVM_GTZAN_ML
Help me develop a project for music emotion classification using an SVM model with the GTZAN dataset. The project should include audio preprocessing using librosa for noise removal and normalization, MFCC feature extraction with 13 coefficients, and the use of a linear SVM classifier with hyperparameter tuning. The dat...
[ "Audio Processing", "Classification" ]
[ { "requirement_id": 0, "prerequisites": [], "criteria": "The \"GTZAN\" music emotion dataset is loaded in `src/data_loader.py`.", "category": "Dataset or Environment", "satisfied": null }, { "requirement_id": 1, "prerequisites": [ 0 ], "criteria": "Audio preprocessing, including n...
[ { "preference_id": 0, "criteria": "The \"Streamlit\" webpage should allow users to upload new audio files and view the classification results in real-time.", "satisfied": null }, { "preference_id": 1, "criteria": "The spectrogram visualizations should include options to adjust the frequency ...
false
true
false
null
37_Lane_Detection_ResNet50_TuSimple_DL
Develop a lane detection system. Start by importing the standard pre-trained ResNet-50 model from PyTorch in `src/model.py`. We'll work here with the TuSimple lane detection dataset as our test dataset, which should be loaded through `src/data_loader.py`. Then load and preprocess the dataset, including data augmentatio...
[ "Computer Vision" ]
[ { "requirement_id": 0, "prerequisites": [], "criteria": "The \"TuSimple\" lane detection dataset is loaded in `src/data_loader.py`.", "category": "Dataset or Environment", "satisfied": null }, { "requirement_id": 1, "prerequisites": [ 0 ], "criteria": "Data augmentation...
[ { "preference_id": 0, "criteria": "The report should include an analysis of the model's performance on challenging scenarios, such as curves or poor lighting conditions.", "satisfied": null }, { "preference_id": 1, "criteria": "The data augmentation steps should be well-documented, with exam...
false
true
false
null
38_Object_Tracking_Siamese_OTB50_DL
I need to create a system for object tracking using a Siamese network and the OTB50 dataset. The OTB50 dataset should be loaded in `src/data_loader.py`. The system should include data augmentation steps such as rotation and scaling, performed in `src/data_loader.py`. Implement the Siamese network in `src/model.py`. Hy...
[ "Computer Vision" ]
[ { "requirement_id": 0, "prerequisites": [], "criteria": "The \"OTB50\" dataset is loaded in `src/data_loader.py`.", "category": "Dataset or Environment", "satisfied": null }, { "requirement_id": 1, "prerequisites": [ 0 ], "criteria": "Data augmentation, such as rotation...
[ { "preference_id": 0, "criteria": "The tracking videos should be saved in high resolution and include annotations that highlight the tracked object.", "satisfied": null }, { "preference_id": 1, "criteria": "Ensure the system is capable of processing new video sequences with minimal modificat...
false
true
false
null
39_Drug_Response_Prediction_SVM_GDSC_ML
Develop a system to predict drug response using the GDSC dataset with a Support Vector Machine (SVM) regressor. Load the dataset and perform feature selection to identify key features in `src/data_loader.py`. Implement the SVM regressor in `src/model.py`. Use cross-validation to evaluate the model's performance in `src...
[ "Medical Analysis", "Regression", "Supervised Learning" ]
[ { "requirement_id": 0, "prerequisites": [], "criteria": "The \"GDSC\" drug response dataset is loaded in `src/data_loader.py`.", "category": "Dataset or Environment", "satisfied": null }, { "requirement_id": 1, "prerequisites": [ 0 ], "criteria": "Feature selection is p...
[ { "preference_id": 0, "criteria": "The report should emphasize how feature selection impacts the model's performance.", "satisfied": null }, { "preference_id": 1, "criteria": "The regression results visualization should clearly highlight the relationship between the selected features and the...
false
true
false
null
40_Text_Summarization_BART_CNNDailyMail_DL
Develop a system that performs text summarization using the BART model with the CNN/Daily Mail dataset. Start by loading and preparing the dataset in `src/data_loader.py`, then perform data preprocessing such as removing HTML tags and punctuation in `src/data_loader.py`. Import a pre-trained BART model for text ...
[ "Generative Models", "Natural Language Processing" ]
[ { "requirement_id": 0, "prerequisites": [], "criteria": "The \"CNN/Daily Mail\" news dataset is used, including loading and preparing the dataset in `src/data_loader.py`.", "category": "Dataset or Environment", "satisfied": null }, { "requirement_id": 1, "prerequisites": [ 0 ...
[ { "preference_id": 0, "criteria": "The interactive \"Streamlit\" webpage should allow users to input new text and generate summaries in real-time.", "satisfied": null }, { "preference_id": 1, "criteria": "The report should include a discussion on how different hyperparameter settings affecte...
false
false
false
null
41_Stock_Classification_KNN_YahooFinance_ML
Develop a stock classification system using a KNN model on the Yahoo Finance dataset. Your implementation should decide if a given stock will increase or decrease in price. Start by loading the dataset and performing feature engineering, including generating technical indicators and selecting the most relevant features...
[ "Classification", "Financial Analysis", "Supervised Learning" ]
[ { "requirement_id": 0, "prerequisites": [], "criteria": "The \"Yahoo Finance\" dataset is used, including data loading and preparation in `src/data_loader.py`.", "category": "Dataset or Environment", "satisfied": null }, { "requirement_id": 1, "prerequisites": [ 0 ], "c...
[ { "preference_id": 0, "criteria": "The Jupyter Notebook should include clear explanations of each step, including feature engineering and model evaluation.", "satisfied": null }, { "preference_id": 1, "criteria": "The correlation heatmap should highlight the most significant technical indica...
false
true
false
null
42_Medical_Image_Classification_DenseNet121_ChestXray_DL
Create a medical image classification system using a pre-trained DenseNet-121 model and the Kaggle Chest X-ray dataset. Start by loading and preprocessing the dataset and performing data augmentation (including rotation, translation, and scaling) in `src/data_loader.py`. Apply the DenseNet-121 model for classification,...
[ "Classification", "Computer Vision", "Medical Analysis", "Supervised Learning" ]
[ { "requirement_id": 0, "prerequisites": [], "criteria": "The \"Kaggle Chest X-ray\" dataset is used, with data loading and preprocessing implemented in `src/data_loader.py`.", "category": "Dataset or Environment", "satisfied": null }, { "requirement_id": 1, "prerequisites": [ 0...
[ { "preference_id": 0, "criteria": "The \"Markdown\" report should include a section explaining the impact of data augmentation on model performance.", "satisfied": null }, { "preference_id": 1, "criteria": "The \"Grad-CAM\" visualizations should clearly highlight the areas of the images that...
true
true
false
null
43_Social_Network_Analysis_GCN_Cora_ML
Hey! Could you help me create a social network analysis system using a GCN model with the Cora citation network dataset? First, let's load and preprocess the dataset, including normalization and denoising, in `src/data_loader.py`. Then, apply the GCN model to classify the nodes and tune the hyperparameters such as the ...
[ "Unsupervised Learning" ]
[ { "requirement_id": 0, "prerequisites": [], "criteria": "The \"Cora citation network\" dataset is loaded in `src/data_loader.py`.", "category": "Dataset or Environment", "satisfied": null }, { "requirement_id": 1, "prerequisites": [ 0 ], "criteria": "Data preprocessing ...
[ { "preference_id": 0, "criteria": "The interactive network graph should allow users to explore individual nodes and their classifications dynamically.", "satisfied": null }, { "preference_id": 1, "criteria": "The citation network visualization should clearly differentiate between different n...
false
true
false
null
44_Text_Classification_BERT_AGNews_DL
Hey! Could you help me build a text classification system using a pretrained BERT model on the AG News dataset? Start by loading and preprocessing the data in `src/data_loader.py` (including removing whatever noise you can and performing tokenization). Once that's done, please save the BERT model parameters under `mode...
[ "Classification", "Natural Language Processing", "Supervised Learning" ]
[ { "requirement_id": 0, "prerequisites": [], "criteria": "The \"AG News\" dataset is loaded in `src/data_loader.py`.", "category": "Dataset or Environment", "satisfied": null }, { "requirement_id": 1, "prerequisites": [ 0 ], "criteria": "Data preprocessing is performed i...
[ { "preference_id": 0, "criteria": "The Jupyter Notebook should explain how transfer learning was applied and its impact on model performance.", "satisfied": null }, { "preference_id": 1, "criteria": "The confusion matrix visualization should clearly differentiate between correctly and incorr...
false
true
false
null
45_Product_Recommendation_MatrixFactorization_AmazonReviews_ML
Could you help me set up a product recommendation system using a matrix factorization algorithm with the Electronics subset of the Amazon Reviews 2023 dataset? You should handle data loading and all the data preprocessing, including noise removal and normalization in `src/data_loader.py`. Apply a latent factor model to...
[ "Recommender Systems" ]
[ { "requirement_id": 0, "prerequisites": [], "criteria": "The \"Electronics\" subset of the \"Amazon Reviews 2023\" dataset is loaded in `src/data_loader.py`.", "category": "Dataset or Environment", "satisfied": null }, { "requirement_id": 1, "prerequisites": [ 0 ], "cri...
[ { "preference_id": 0, "criteria": "The impact of different preprocessing steps on recommendation accuracy should be discussed in the analysis report.", "satisfied": null }, { "preference_id": 1, "criteria": "The visualization should be interactive, allowing users to explore the recommendatio...
false
true
false
null
46_Speech_Recognition_DeepSpeech_LibriSpeech_DL
I'd like you to develop a speech recognition system for me using the DeepSpeech library and the LibriSpeech dataset. Could you implement data loading and audio preprocessing, including noise reduction and normalization, in `src/data_loader.py`? Tune the hyperparameters such as learning rate and batch size in `src/train.py`...
[ "Audio Processing" ]
[ { "requirement_id": 0, "prerequisites": [], "criteria": "\"LibriSpeech\" dataset is loaded in `src/data_loader.py`.", "category": "Dataset or Environment", "satisfied": null }, { "requirement_id": 1, "prerequisites": [ 0 ], "criteria": "Audio preprocessing, including no...
[ { "preference_id": 0, "criteria": "The installation process for the \"DeepSpeech\" library should be well-documented, with troubleshooting tips if the library fails to install. Refer to the [DeepSpeech documentation](https://deepspeech.readthedocs.io/en/r0.9/) for guidance.", "satisfied": null }, { ...
false
true
true
null
47_Network_Traffic_Analysis_KMeans_NetworkTraffic_ML
Develop a network traffic analysis system using the K-means clustering algorithm with the Network Intrusion dataset (CIC-IDS-2017) from Kaggle. Load the dataset and standardize the data to ensure feature values are within the same range in `src/data_loader.py`. Implement the K-means clustering algorithm in `src/model.p...
[ "Unsupervised Learning" ]
[ { "requirement_id": 0, "prerequisites": [], "criteria": "\"Network Intrusion dataset (CIC-IDS-2017)\" from Kaggle is loaded in `src/data_loader.py`.", "category": "Dataset or Environment", "satisfied": null }, { "requirement_id": 1, "prerequisites": [ 0 ], "criteria": "...
[ { "preference_id": 0, "criteria": "The dashboard should allow users to filter and drill down into specific clusters for detailed analysis.", "satisfied": null }, { "preference_id": 1, "criteria": "Visualizations should clearly distinguish between different clusters, making it easy to identif...
true
true
false
null
48_Stock_Trading_Simulation_PPO_HistoricalData_RL
Hey! I'm interested in developing a stock trading agent using the Proximal Policy Optimization (PPO) algorithm. The idea is to use historical market data for training and testing. A stock trading simulation environment should be implemented in `src/env.py`. The Proximal Policy Optimization (PPO) algorithm should be imp...
[ "Financial Analysis", "Reinforcement Learning" ]
[ { "requirement_id": 0, "prerequisites": [], "criteria": "A stock trading simulation environment is implemented in `src/env.py`.", "category": "Dataset or Environment", "satisfied": null }, { "requirement_id": 1, "prerequisites": [ 0 ], "criteria": "Historical market dat...
[ { "preference_id": 0, "criteria": "The profit curve visualization should highlight significant trades or events that impacted performance.", "satisfied": null }, { "preference_id": 1, "criteria": "The report should include insights on how parameter tuning affects the trading outcome.", "...
false
true
false
null
49_Explainable_AI_LIME_Titanic_ML
Hi there! I'm looking to create a project that explains model predictions using LIME, specifically with the Titanic survival prediction dataset. First, load the dataset in `src/data_loader.py`. Then, train a Random Forest classifier and save it under `models/saved_models/`. Finally, use LIME to explain the Random Forest...
[ "Classification" ]
[ { "requirement_id": 0, "prerequisites": [], "criteria": "The \"Titanic\" survival prediction dataset is loaded in `src/data_loader.py`.", "category": "Dataset or Environment", "satisfied": null }, { "requirement_id": 1, "prerequisites": [ 0 ], "criteria": "A \"Random Fo...
[ { "preference_id": 0, "criteria": "The explanation report should be written in a clear and accessible style, making it understandable even for those without a deep technical background.", "satisfied": null }, { "preference_id": 1, "criteria": "The feature importance plot should be visually i...
false
true
false
null
50_Math_Problem_Solving_Transformer_DeepMindMath_DL
Hi! I need help with a project that uses a Transformer model to solve math problems from the DeepMind Mathematics dataset. Please load the dataset and preprocess it in `src/data_loader.py`. The preprocessing should parse and standardize the math expressions in a syntactically consistent way so the model can easily p...
[ "Natural Language Processing" ]
[ { "requirement_id": 0, "prerequisites": [], "criteria": "DeepMind Mathematics dataset is loaded in `src/data_loader.py`.", "category": "Dataset or Environment", "satisfied": null }, { "requirement_id": 1, "prerequisites": [ 0 ], "criteria": "Data preprocessing is perfor...
[ { "preference_id": 0, "criteria": "The preprocessing step should ensure that the mathematical expressions are standardized in a way that makes them easily processed by the model.", "satisfied": null }, { "preference_id": 1, "criteria": "The interactive tool should be capable of handling a wi...
false
true
false
null
51_Devin_AI_Software_Engineer_Plants_Secret_Messages_in_Images
Hi! Please follow the instructions from the blog post [Hidden in Plain Sight](https://www.factsmachine.ai/p/hidden-in-plain-sight) to set up the script mentioned for generating images with hidden text in `src/visualize.py`. Ensure the generated images are of 1080p resolution and saved in `results/figures/`. Create cont...
[ "Computer Vision", "Generative Models", "Natural Language Processing" ]
[ { "requirement_id": 0, "prerequisites": [], "criteria": "The instructions from the blog post [Hidden in Plain Sight](https://www.factsmachine.ai/p/hidden-in-plain-sight) are followed to set up the script mentioned for generating images with hidden text in `src/visualize.py`.", "category": "Dataset o...
[ { "preference_id": 0, "criteria": "The system should be capable of learning and using unfamiliar technologies, adapting to new tools or platforms as required.", "satisfied": null }, { "preference_id": 1, "criteria": "After reviewing the blog post, ControlNet should be successfully run on Mod...
false
false
true
null
52_Devin_AI_Trains_an_AI
Can you finetune a 7B LLaMA model using `https://github.com/artidoro/qlora`? Follow the instructions in the repository to finetune the 7B LLaMA model and save it in `models/saved_models/`. Ensure the necessary environment and dependencies are set up as outlined in `src/env.py`. Download and prepare the datasets required ...
[ "Generative Models", "Natural Language Processing" ]
[ { "requirement_id": 0, "prerequisites": [], "criteria": "The repository at `https://github.com/artidoro/qlora` has been downloaded.", "category": "Machine Learning Method", "satisfied": null }, { "requirement_id": 1, "prerequisites": [ 0 ], "criteria": "The necessary en...
[ { "preference_id": 0, "criteria": "The finetuning process should include validation steps to monitor overfitting or other issues.", "satisfied": null }, { "preference_id": 1, "criteria": "A detailed report on the finetuning process, including any challenges faced and how they were overcome, ...
false
true
true
null
53_Devin_Upwork_Side_Hustle
Hello, I am looking to make inferences with the models in this repository `https://github.com/mahdi65/roadDamageDetection2020`. The system should perform inferences using the models from the repository and save the results in `models/saved_models/`. Sample data should be downloaded and prepared for testing the models i...
[ "Computer Vision" ]
[ { "requirement_id": 0, "prerequisites": [], "criteria": "The repository at `https://github.com/mahdi65/roadDamageDetection2020` is set up.", "category": "Machine Learning Method", "satisfied": null }, { "requirement_id": 1, "prerequisites": [ 0 ], "criteria": "Sample da...
[ { "preference_id": 0, "criteria": "The visualized images should be clear, with detections accurately highlighted for easy interpretation.", "satisfied": null }, { "preference_id": 1, "criteria": "The performance report should include a summary of detection accuracy and any issues encountered...
false
false
true
null
54_Mock_OpenAI_API_Response_Analyzer_App
I want to create an app that will enable me to analyze the different responses the OpenAI API may give for the same query. The frontend should be implemented in `src/frontend.py` and should contain a conversation between a user and an LLM as a list. Each list item should contain a text field where I can add a (potentia...
[ "Natural Language Processing", "Generative Models", "Other" ]
[ { "requirement_id": 0, "prerequisites": [], "criteria": "The frontend should be implemented in `src/frontend.py`, containing a list where the user can add large text messages and select whether the message is from the LLM or the user. When the app loads, the list should start with a single empty item.",...
[ { "preference_id": 0, "criteria": "The UI should maintain a clean and consistent style, using Tailwind for cohesive and easy-to-navigate design.", "satisfied": null }, { "preference_id": 1, "criteria": "Streaming responses from the API should be efficient, ensuring smooth real-time updates w...
false
false
false
null
55_SQLite_Database_Viewer_and_Analyzer_App
I want to create an app that enables users to view and analyze AI development data stored in an SQLite database. On the frontend (implemented in `src/frontend.py`), the user should either upload a new SQLite database, including AI model training logs or prompt-response data, by selecting a file from their device or sel...
[ "Other" ]
[ { "requirement_id": 0, "prerequisites": [], "criteria": "The frontend is implemented in `src/frontend.py` and allows users to upload a new SQLite database or select a previously cached one from `src/cache.py`. The chosen file should be saved and accessible for future use.", "category": "Human Comput...
[ { "preference_id": 0, "criteria": "The frontend interface should allow easy interaction with the database, ensuring users can smoothly navigate between apps, tasks, and steps.", "satisfied": null }, { "preference_id": 1, "criteria": "The system should efficiently handle large SQLite database...
false
false
false
null

GITHUB: https://github.com/metauto-ai/agent-as-a-judge

Current evaluation techniques are often inadequate for advanced agentic systems due to their focus on final outcomes and labor-intensive manual reviews. To overcome this limitation, we introduce the Agent-as-a-Judge framework.

As a proof-of-concept, we applied Agent-as-a-Judge to code generation tasks using DevAI, a benchmark consisting of 55 realistic AI development tasks with 365 hierarchical user requirements. The results demonstrate that Agent-as-a-Judge significantly outperforms traditional evaluation methods, delivering reliable reward signals for scalable self-improvement in agentic systems.

Check out the dataset on Hugging Face 🤗. See how to use this dataset in the guidelines.

DEVAI dataset

DEVAI is a benchmark of 55 realistic AI development tasks. It comes with extensive manual annotations, including a total of 365 hierarchical user requirements. The dataset provides rich reinforcement signals for better automated AI software development.

Here is an example of our tasks.
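The preview rows above truncate the JSON, but they reveal the shape of a task instance: a name, a natural-language query, tags, and a list of requirements linked by `prerequisites`. Here is a minimal sketch in Python — the field names follow the preview, while the values are abbreviated and hypothetical — together with a simple validity check on the prerequisite links:

```python
# Sketch of a single DEVAI task instance; field names follow the preview
# rows above, but the values here are abbreviated and hypothetical.
instance = {
    "name": "24_Diabetes_Prediction_LogisticRegression_PimaIndians_ML",
    "query": "Set up a diabetes prediction project ...",
    "tags": ["Classification", "Medical Analysis", "Supervised Learning"],
    "requirements": [
        {"requirement_id": 0, "prerequisites": [],
         "criteria": 'The "Pima Indians Diabetes" dataset is used.',
         "category": "Dataset or Environment", "satisfied": None},
        {"requirement_id": 1, "prerequisites": [0],
         "criteria": "Feature scaling and data standardization are performed.",
         "category": "Data preprocessing", "satisfied": None},
    ],
}

def prerequisites_are_valid(requirements):
    """Check that every prerequisite refers to an existing requirement_id."""
    ids = {r["requirement_id"] for r in requirements}
    return all(p in ids for r in requirements for p in r["prerequisites"])

print(prerequisites_are_valid(instance["requirements"]))  # → True
```

The `prerequisites` field is what makes the requirements hierarchical: requirement 1 can only be meaningfully judged once requirement 0 holds.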

We apply three state-of-the-art automated software development systems to DEVAI, namely MetaGPT, GPT-Pilot, and OpenHands. We suggest expanding the task queries with the constraints defined in constraints.json to guide the development systems' behavior and provide auxiliary information where needed. The table below shows preliminary statistics.

We perform a manual evaluation to judge if each requirement is satisfied by the solution provided by the aforementioned systems.
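Once each requirement carries a boolean `satisfied` flag from such an evaluation, a per-task score can be computed. A minimal sketch follows — the scoring rule here (simple fraction of satisfied requirements, with unjudged `None` entries counted as unsatisfied) is an assumption for illustration, not taken from the paper:

```python
def satisfaction_rate(requirements):
    """Fraction of requirements judged satisfied (None counts as unsatisfied)."""
    if not requirements:
        return 0.0
    met = sum(1 for r in requirements if r.get("satisfied") is True)
    return met / len(requirements)

# Hypothetical judgments for a four-requirement task.
reqs = [
    {"requirement_id": 0, "satisfied": True},
    {"requirement_id": 1, "satisfied": False},
    {"requirement_id": 2, "satisfied": True},
    {"requirement_id": 3, "satisfied": None},  # not yet judged
]
print(satisfaction_rate(reqs))  # → 0.5
```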

An automated evaluation program that can potentially replace manual evaluation is available in our GitHub release. Find more details in our paper.

If you use DEVAI to test your development system, we suggest providing the system with API keys for Kaggle and Hugging Face, as some DEVAI tasks require access to these platforms.
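A small pre-flight check can verify these credentials are present before launching tasks. The environment variable names below (`KAGGLE_USERNAME`, `KAGGLE_KEY`, `HF_TOKEN`) are the conventional ones for the two platforms, assumed here rather than specified by DEVAI:

```python
import os

# Assumed credential variable names; adjust to your setup if they differ.
REQUIRED_KEYS = ["KAGGLE_USERNAME", "KAGGLE_KEY", "HF_TOKEN"]

def missing_keys(env=os.environ, required=REQUIRED_KEYS):
    """Return the credential variables that are unset or empty."""
    return [k for k in required if not env.get(k)]

missing = missing_keys()
if missing:
    print("Missing credentials:", ", ".join(missing))
```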
