Grade Received: A
I did all the coding parts of task 2 and task 3.
Meta KDD Cup '24 CRAG: Comprehensive RAG Benchmark Starter Kit
This repository is the CRAG: Comprehensive RAG Benchmark submission template and starter kit! Clone the repository to compete now!
This repository contains:
- Documentation on how to submit your models to the leaderboard
- Best-practice guidelines and information on how we evaluate your model
- Starter code for you to get started!
- Competition Overview
- Dataset
- Tasks
- Evaluation Metrics
- Getting Started
- Frequently Asked Questions
- Important Links
Please find more details about the dataset in docs/dataset.md.
Please refer to local_evaluation.py for more details on how we will evaluate your submissions.
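To make the evaluation flow concrete, here is a rough sketch of the kind of loop a local evaluation script runs. The class, module, field, and path names below (UserModel, models/user_config, "query", "answer", the data file) are assumptions for illustration, and the real script scores answers with the competition's metrics rather than the simple exact match used here; local_evaluation.py remains the authoritative reference.

```python
# Illustrative sketch only; local_evaluation.py is the authoritative script,
# and the class/field names below (UserModel, "query", "answer") are assumptions.
import json

from models.user_config import UserModel  # hypothetical registration module


def load_examples(path):
    """Read one JSON object per line from a local data dump."""
    with open(path) as f:
        return [json.loads(line) for line in f]


def evaluate(data_path="example_data/dev_data.jsonl"):  # hypothetical path
    model = UserModel()
    examples = load_examples(data_path)
    batch_size = model.get_batch_size()

    exact_matches = 0
    for start in range(0, len(examples), batch_size):
        batch = examples[start:start + batch_size]
        # The real harness may structure the batch differently and scores answers
        # with the competition's metrics; exact match keeps this sketch short.
        predictions = model.batch_generate_answer(batch)
        for example, prediction in zip(batch, predictions):
            exact_matches += int(
                prediction.strip().lower() == example["answer"].strip().lower()
            )

    print(f"Exact-match accuracy: {exact_matches / len(examples):.3f}")


if __name__ == "__main__":
    evaluate()
```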
- Sign up to join the competition on the AIcrowd website.
- Fork this starter kit repository. You can use this link to create a fork.
- Clone your forked repo and start developing your model.
- Develop your model(s) following the template in the How to write your own model section.
- Submit your trained models to AIcrowd GitLab for evaluation (full instructions below). The automated pipeline will evaluate your submission on the public test set and report the metrics on the competition leaderboard.
Please follow models/README.md for instructions and examples on how to write your own models for this competition.
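As a starting point, the snippet below sketches the general shape such a model might take. The file name, method names, and batch fields shown here are assumptions chosen for illustration; treat models/README.md as the source of truth for the actual interface.

```python
# models/my_model.py (hypothetical file): a skeleton custom model.
# Method names and batch fields are assumptions; models/README.md defines the real interface.
from typing import Any, Dict, List


class MyRAGModel:
    """Minimal retrieval-augmented model skeleton for the CRAG tasks."""

    def get_batch_size(self) -> int:
        # Number of queries the evaluator should send per call.
        return 4

    def batch_generate_answer(self, batch: List[Dict[str, Any]]) -> List[str]:
        """Return one answer string per query in the batch."""
        answers = []
        for example in batch:
            # A real model would look at example["query"], retrieve evidence from
            # example["search_results"], and prompt an LLM; this skeleton returns
            # a fixed placeholder answer for every query.
            answers.append("i don't know")
        return answers
```

Once written, the model would be registered so that both local evaluation and the submission pipeline can import it, for example through a models/user_config.py module if the kit follows that common starter-kit pattern.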
- Add your SSH key to AIcrowd GitLab. You can add your SSH keys to your GitLab account by going to your profile settings here. If you do not have SSH keys, you will first need to generate one.
- Fork the repository. You can use this link to create a fork.
- Clone the repository:
  git clone git@gitlab.aicrowd.com:<YOUR-AICROWD-USERNAME>/meta-comphrehensive-rag-benchmark-starter-kit.git
  cd meta-comphrehensive-rag-benchmark-starter-kit
- Install competition-specific dependencies:
  pip install -r requirements.txt
- Write your own model as described in the How to write your own model section.
- Test your model locally using python local_evaluation.py.
- Accept the Challenge Rules on the main challenge page by clicking the Participate button. Also accept the Challenge Rules on the task-specific page (linked from the challenge page) that you want to submit to.
- Make a submission as described in the How to make a submission section.
Please follow the instructions in docs/submission.md to make your first submission. They also cover specifying your software runtime, structuring your code, and submitting to different tracks.
Note: Remember to accept the Challenge Rules on the challenge page and the task page before making your first submission.
You can find more details about the hardware and system configuration in docs/hardware-and-system-config.md.
In summary, we provide you with 4 x NVIDIA T4 GPUs.
We include three baselines for demonstration purposes, and you can read more about them in docs/baselines.md.
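If the kit follows the common starter-kit pattern of selecting the active model through a single configuration module, switching between a provided baseline and your own model might look roughly like this. The module and class names are placeholders; docs/baselines.md and models/README.md document the real ones.

```python
# models/user_config.py (hypothetical): choose which model gets evaluated and submitted.
# Baseline module/class names below are placeholders; see docs/baselines.md for the real ones.

# from models.some_rag_baseline import SomeRAGBaseline   # a provided baseline
from models.my_model import MyRAGModel                    # your own model

# local_evaluation.py and the submission pipeline would import UserModel from here,
# so swapping the model is a one-line change.
UserModel = MyRAGModel
```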
This starter kit can be used to submit to any of the tracks. You can find more information in docs/submission.md#submitting-to-different-tracks.
The dataset schema is described in docs/dataset.md.
If you want to use Croissant to view the data, please use docs/croissant.json.
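For a quick first look at the raw data before reading the schema docs, a small loader along these lines is usually enough, assuming the data ships as (possibly compressed) JSON Lines. The file path and field names here are assumptions; docs/dataset.md (or docs/croissant.json) gives the authoritative schema.

```python
# Illustrative sketch: peek at a locally downloaded data file.
# Path and field names are assumptions; docs/dataset.md has the real schema.
import bz2
import json


def iter_records(path):
    """Yield one JSON record per line from a (possibly bz2-compressed) JSON Lines file."""
    opener = bz2.open if path.endswith(".bz2") else open
    with opener(path, "rt") as f:
        for line in f:
            yield json.loads(line)


if __name__ == "__main__":
    for i, record in enumerate(iter_records("data/crag_sample.jsonl.bz2")):
        print(sorted(record.keys()))   # field names, e.g. query / answer / search_results
        if i == 2:                     # inspect just a few records
            break
```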
Best of Luck 🎉 🎉
- 💪 Challenge Page: https://www.aicrowd.com/challenges/meta-comprehensive-rag-benchmark-kdd-cup-2024
- 🗣 Discussion Forum: https://www.aicrowd.com/challenges/meta-comprehensive-rag-benchmark-kdd-cup-2024/discussion
- 🏆 Leaderboard: https://www.aicrowd.com/challenges/meta-comprehensive-rag-benchmark-kdd-cup-2024/leaderboards