|
|
--- |
|
|
language: |
|
|
- en |
|
|
license: apache-2.0 |
|
|
task_categories: |
|
|
- question-answering |
|
|
tags: |
|
|
- agent |
|
|
- benchmark |
|
|
- tool-use |
|
|
- korean |
|
|
--- |
|
|
|
|
|
<p align="center"> |
|
|
<img src="banner.png" /> |
|
|
</p> |
|
|
|
|
|
# **🇰🇷 Ko-AgentBench v1**
|
|
|
|
|
**"Korean Agent Benchmark Project"** |
|
|
|
|
|
**English | [ํ๊ตญ์ด](README.md)** |
|
|
|
|
|
As AI agents become more sophisticated, it has become crucial to precisely measure their performance under conditions similar to real-world environments. However, most benchmarks are designed based on English-speaking environments, which limits their ability to reflect Korea's unique usage contexts. |
|
|
|
|
|
To address this issue, we have developed a high-quality agent benchmark specialized for the Korean real-world usage environment. |
|
|
|
|
|
# Ko-AgentBench Key Features ✨
|
|
**1. Step-by-step Task Design** |
|
|
|
|
|
We comprehensively assess agent capabilities across 7 levels, ranging from simple tool calls to long-context reasoning and robust error handling.
|
|
|
|
|
**2. 18 Korean-specific APIs and High-quality Scenarios Tailored to Real-life Environments** |
|
|
|
|
|
Built on APIs from services used in Korean daily life, such as Naver, Kakao, maps, and domestic websites, we have implemented realistic problem-solving scenarios close to domestic users' everyday lives, such as 'appointment booking' and 'blog review search'.
|
|
|
|
|
**3. Cache-based Iterative Evaluation and Robustness Testing** |
|
|
|
|
|
We address chronic problems of existing benchmarks, such as live API responses whose attributes change between runs.

By caching API responses and improving failed ones, we ensure benchmark consistency and reliability.

By evaluating error recognition and response strategies under intentionally injected error situations, we identify models that operate stably in real-world environments.
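As a rough illustration of the cache-based approach (a minimal sketch; the cache key, storage, and function names here are assumptions, not the benchmark's actual implementation, and the error-injection side is omitted):

```python
import hashlib
import json

# Hypothetical in-memory cache keyed by API name plus canonicalized arguments.
_CACHE = {}

def cached_call(api_name, args, live_call):
    """Replay a recorded response when one exists; otherwise call the live API
    once and record it, so repeated evaluation runs see identical tool outputs."""
    key = hashlib.sha256(
        json.dumps({"api": api_name, "args": args}, sort_keys=True).encode()
    ).hexdigest()
    if key not in _CACHE:
        _CACHE[key] = live_call(**args)
    return _CACHE[key]
```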
|
|
|
|
|
**4. Step-specific Precision Metrics** |
|
|
|
|
|
We evaluate each step of problem solving, including tool selection, parameter configuration, and data flow, allowing us to quantitatively identify each model's strengths and weaknesses.
|
|
|
|
|
## **Data Loading** |
|
|
|
|
|
```python
|
|
from datasets import load_dataset |
|
|
|
|
|
# Load specific level |
|
|
dataset = load_dataset("Hugging-Face-KREW/Ko-AgentBench", data_files="L1.json") |
|
|
|
|
|
# Or load all levels |
|
|
dataset = load_dataset("Hugging-Face-KREW/Ko-AgentBench", data_files="*.json") |
|
|
``` |
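Once loaded, individual examples can be inspected as below; the field names depend on the level file, so this snippet simply prints whatever keys are present (`train` is the default split name when loading from data files with `datasets`):

```python
# Peek at the first example of the default "train" split.
example = dataset["train"][0]
for key, value in example.items():
    print(f"{key}: {str(value)[:80]}")
```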
|
|
|
|
|
# Dataset Overview |
|
|
|
|
|
- Defines a task classification system for agent benchmark design

- Designed to evaluate agents' tool-calling capabilities step by step
|
|
|
|
|
## Dataset Scope |
|
|
|
|
|
- Evaluation Target: Open-weight sLLMs (with tool-calling support) and commercial APIs
|
|
- Evaluation Scope: Agent tool calling performance in single-turn and multi-turn conversation situations |
|
|
- Applied APIs: 18 Korean-specific open APIs |
|
|
|
|
|
|
|
|
# Task Levels |
|
|
|
|
|
## Single-Turn |
|
|
|
|
|
**L1. Single Tool Call** |
|
|
- Goal: Verify the most basic API calling capability |
|
|
- Description: Check if the given tool can be executed with correct parameters |
|
|
- Feature: Evaluate "accuracy only" by performing requests with specified API names or natural language requests as-is |
|
|
- Example: "Search for 'Rapid Current' using Naver Book API and tell me the price." |
|
|
- Example: "Tell me the price of the 'Rapid Current' book" |
|
|
|
|
|
**L2. Tool Selection** |
|
|
- Goal: Verify the ability to select the optimal API among multiple candidate tools |
|
|
- Description: Users make requests in natural language, and the model must select the most suitable tool from the given tool list |
|
|
- Feature: Evaluate accurate mapping from the input natural language to the right tool
|
|
- Example: "Check the price of the 'All Back English Middle 2-1 Cheonjae (Kim)' book." |
|
|
- Candidate tools: `hotel_booking_api`, `aladin_books_api` |
|
|
- The candidate tools must be unrelated to each other (a sample tool list is sketched below)
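A sketch of what a candidate tool list might look like, in a generic function-calling style (the descriptions and parameter schemas are assumptions for illustration):

```python
# Two unrelated candidate tools; the model must map the book-price request
# to aladin_books_api, not hotel_booking_api.
candidate_tools = [
    {
        "name": "hotel_booking_api",
        "description": "Search and book hotel rooms.",
        "parameters": {"city": "string", "check_in": "date", "nights": "integer"},
    },
    {
        "name": "aladin_books_api",
        "description": "Search Aladin for book prices and stock.",
        "parameters": {"query": "string"},
    },
]
```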
|
|
|
|
|
**L3. Sequential Tool Reasoning** |
|
|
- Goal: Verify planning and execution capabilities through multi-step reasoning |
|
|
- Description: Check if a correct pipeline can be constructed by connecting the results of one tool as input to another tool |
|
|
- Feature: Evaluate "planned chain-of-tools" rather than simple calls |
|
|
- Example: "Tell me when the Instax11 I bought from 11st Amazon will be delivered" |
|
|
- Candidate tools: `11st_order_api`, `customs_api`, `cj_delivery_api` |
|
|
- Tools must be callable sequentially (11st delivery number inquiry → customs clearance → courier company); see the sketch below
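A minimal sketch of the expected chain, using stub functions in place of `11st_order_api`, `customs_api`, and `cj_delivery_api` (all names, fields, and values below are placeholders):

```python
# Stubs standing in for the real APIs; each step feeds the next.
def lookup_11st_order(product: str) -> dict:
    return {"customs_no": "P1234567890"}             # placeholder value

def check_customs(number: str) -> dict:
    return {"invoice_no": "638912345678"}            # placeholder value

def track_cj_delivery(invoice: str) -> dict:
    return {"expected_delivery_date": "2025-01-15"}  # placeholder value

# The model must plan this chain itself: order -> customs -> courier.
order = lookup_11st_order(product="Instax11")
clearance = check_customs(number=order["customs_no"])
status = track_cj_delivery(invoice=clearance["invoice_no"])
print(status["expected_delivery_date"])
```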
|
|
|
|
|
**L4. Parallel Tool Reasoning** |
|
|
- Goal: Collect information in parallel and derive conclusions by synthesizing it |
|
|
- Description: Simultaneously call multiple independent tools, compare and analyze results, then produce final answers |
|
|
- Feature: Evaluate multi-source aggregation (information synthesis and comparison ability) |
|
|
- Example: "Check the stock of the 'Hanroro Grapefruit Apricot Club' book." |
|
|
- Candidate tools: `kyobo_books_api`, `aladin_books_api` |
|
|
- Expected answer: There are 12 books at Kyobo Book Centre and 18 books at Aladin, totaling 30 books. |
|
|
- The candidate tools must provide the same function so they can be called in parallel (sketched below).
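A sketch of the parallel pattern with stubbed tools (the stock figures mirror the expected answer above; everything else is a placeholder):

```python
from concurrent.futures import ThreadPoolExecutor

# Stubs standing in for kyobo_books_api and aladin_books_api.
def kyobo_stock(query: str) -> int:
    return 12

def aladin_stock(query: str) -> int:
    return 18

# Independent calls issued in parallel; the final answer synthesizes both.
with ThreadPoolExecutor() as pool:
    kyobo = pool.submit(kyobo_stock, "Hanroro Grapefruit Apricot Club")
    aladin = pool.submit(aladin_stock, "Hanroro Grapefruit Apricot Club")
total = kyobo.result() + aladin.result()
print(f"Kyobo: {kyobo.result()}, Aladin: {aladin.result()}, total: {total}")
```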
|
|
|
|
|
**L5. Error Handling and Robustness** |
|
|
- Goal: Verify coping ability in error situations |
|
|
- Description: Evaluate how various failure modes are handled, rather than simply whether a call failed
|
|
- **Sub-items:** |
|
|
- A. Requesting clarification
|
|
- Guide users to make clearer requests when information is insufficient |
|
|
- B. Hallucination prevention |
|
|
- Prohibit calling non-existent APIs |
|
|
- Prohibit "pretending to succeed" answers when failed |
|
|
- C. Fallback strategies
|
|
- Whether an alternative API with the same function can be used when a specific API fails
|
|
- Example: "When Naver Movie API call fails โ Report 'API call failed' or call Kakao Movie API as alternative" |
|
|
|
|
|
## Multi-Turn |
|
|
|
|
|
**L6. Efficient Tool Utilization** |
|
|
- Goal: Verify the ability to efficiently reuse previous tool results |
|
|
- Description: Re-calling APIs in every situation keeps results accurate but is inefficient in cost and latency; conversely, unconditionally reusing old information causes accuracy problems
|
|
- Feature: Evaluate whether the model makes reasonable choices between "re-call" and "reuse"
|
|
- Example: |
|
|
- User: "Compare Coupang and Naver prices." โ Result: Coupang 80, Naver 85 |
|
|
- User: "What was the Naver price?" |
|
|
- Correct answer: 85 (reuse past information; avoid an unnecessary re-call)
|
|
- Wrong answer: Calling the API again or answering "I don't know" (the re-call vs. reuse decision is sketched below)
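A sketch of the re-call vs. reuse decision, assuming earlier tool results are remembered with a timestamp (the structure and staleness threshold are assumptions):

```python
import time

# Results from earlier turns, remembered with a timestamp.
history = {
    ("coupang_price_api", "item"): {"value": 80, "ts": time.time()},
    ("naver_price_api", "item"): {"value": 85, "ts": time.time()},
}

def lookup(tool: str, item: str, max_age_s: float = 300.0):
    """Reuse a recent result instead of re-calling; signal a re-call when stale."""
    entry = history.get((tool, item))
    if entry and time.time() - entry["ts"] < max_age_s:
        return entry["value"]   # reuse: cheaper and still accurate
    return None                 # stale or missing: the agent should re-call

print(lookup("naver_price_api", "item"))  # -> 85, no extra API call
```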
|
|
|
|
|
**L7. Long-Context Reasoning** |
|
|
- Goal: Verify the ability to maintain long-term context in multi-turn conversations |
|
|
- Description: Remember information from several turns ago and correctly perform tool calling by connecting it with new questions |
|
|
- Example: |
|
|
- User's first question: "I'm going to travel to Jeju Island." |
|
|
- Later: "How's the weather?" โ Call weather API using Jeju Island context |
|
|
- (Additional turn) "If it rains, find places where I can buy an umbrella." → Utilize all previous Jeju Island + weather context (sketched below)
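A sketch of carrying earlier-turn context into a later tool call (slot and tool names are hypothetical):

```python
# Slots extracted from earlier turns.
context = {"destination": "Jeju Island"}

def build_weather_call(ctx: dict) -> dict:
    # "How's the weather?" alone is ambiguous; the location comes from turn 1.
    return {"tool": "weather_api", "arguments": {"location": ctx["destination"]}}

print(build_weather_call(context))
# -> {'tool': 'weather_api', 'arguments': {'location': 'Jeju Island'}}
```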
|
|
|
|
|
## Links |
|
|
More detailed information about Ko-AgentBench is available at the links below.
|
|
- [Live Leaderboard](https://huggingface.co/spaces/huggingface-KREW/Ko-AgentBench)

- [Dataset](https://huggingface.co/datasets/huggingface-KREW/Ko-AgentBench)

- [Github](https://github.com/Hugging-Face-KREW/Ko-AgentBench)
|
|
|
|
|
## Contact |
|
|
If you have any questions about the dataset and benchmark, please contact us! |
|
|
|
|
|
Hugging Face KREW is a Korean non-profit research organization that strives to deeply understand artificial intelligence through Hugging Face and contribute to open source. |
|
|
- ✍🏻 Blog: [KREW-blog](https://hugging-face-krew.github.io/)

- HuggingFace Community: [@huggingface-KREW](https://huggingface.co/huggingface-KREW)

- 💼 LinkedIn: [Hugging Face KREW](https://www.linkedin.com/company/hugging-face-krew/)
|
|
|