Commit
Auto. Make Doomgrad HF Review on 27 January
actions-user committed Jan 27, 2025
1 parent 90454bf commit 217b746
Showing 7 changed files with 139 additions and 139 deletions.
8 changes: 4 additions & 4 deletions d/2025-01-27.html

Large diffs are not rendered by default.

110 changes: 55 additions & 55 deletions d/2025-01-27.json

Large diffs are not rendered by default.

8 changes: 4 additions & 4 deletions hf_papers.json
@@ -4,17 +4,17 @@
"en": "January 27",
"zh": "1月27日"
},
"time_utc": "2025-01-27 11:08",
"time_utc": "2025-01-27 12:18",
"weekday": 0,
"issue_id": 1880,
"issue_id": 1881,
"home_page_url": "https://huggingface.co/papers",
"papers": [
{
"id": "https://huggingface.co/papers/2501.14249",
"title": "Humanity's Last Exam",
"url": "https://huggingface.co/papers/2501.14249",
"abstract": "Benchmarks are important tools for tracking the rapid advancements in large language model (LLM) capabilities. However, benchmarks are not keeping pace in difficulty: LLMs now achieve over 90\\% accuracy on popular benchmarks like MMLU, limiting informed measurement of state-of-the-art LLM capabilities. In response, we introduce Humanity's Last Exam (HLE), a multi-modal benchmark at the frontier of human knowledge, designed to be the final closed-ended academic benchmark of its kind with broad subject coverage. HLE consists of 3,000 questions across dozens of subjects, including mathematics, humanities, and the natural sciences. HLE is developed globally by subject-matter experts and consists of multiple-choice and short-answer questions suitable for automated grading. Each question has a known solution that is unambiguous and easily verifiable, but cannot be quickly answered via internet retrieval. State-of-the-art LLMs demonstrate low accuracy and calibration on HLE, highlighting a significant gap between current LLM capabilities and the expert human frontier on closed-ended academic questions. To inform research and policymaking upon a clear understanding of model capabilities, we publicly release HLE at https://lastexam.ai.",
"score": 17,
"score": 18,
"issue_id": 1873,
"pub_date": "2025-01-24",
"pub_date_card": {
@@ -771,7 +771,7 @@
"title": "Chain-of-Retrieval Augmented Generation",
"url": "https://huggingface.co/papers/2501.14342",
"abstract": "This paper introduces an approach for training o1-like RAG models that retrieve and reason over relevant information step by step before generating the final answer. Conventional RAG methods usually perform a single retrieval step before the generation process, which limits their effectiveness in addressing complex queries due to imperfect retrieval results. In contrast, our proposed method, CoRAG (Chain-of-Retrieval Augmented Generation), allows the model to dynamically reformulate the query based on the evolving state. To train CoRAG effectively, we utilize rejection sampling to automatically generate intermediate retrieval chains, thereby augmenting existing RAG datasets that only provide the correct final answer. At test time, we propose various decoding strategies to scale the model's test-time compute by controlling the length and number of sampled retrieval chains. Experimental results across multiple benchmarks validate the efficacy of CoRAG, particularly in multi-hop question answering tasks, where we observe more than 10 points improvement in EM score compared to strong baselines. On the KILT benchmark, CoRAG establishes a new state-of-the-art performance across a diverse range of knowledge-intensive tasks. Furthermore, we offer comprehensive analyses to understand the scaling behavior of CoRAG, laying the groundwork for future research aimed at developing factual and grounded foundation models.",
"score": 9,
"score": 12,
"issue_id": 1873,
"pub_date": "2025-01-24",
"pub_date_card": {
8 changes: 4 additions & 4 deletions index.html

Large diffs are not rendered by default.

6 changes: 3 additions & 3 deletions log.txt
@@ -1,3 +1,3 @@
-[27.01.2025 11:08] Read previous papers.
-[27.01.2025 11:08] Generating top page (month).
-[27.01.2025 11:08] Writing top page (month).
+[27.01.2025 12:18] Read previous papers.
+[27.01.2025 12:18] Generating top page (month).
+[27.01.2025 12:18] Writing top page (month).