Upload rss.xml with huggingface_hub
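The commit title refers to the huggingface_hub upload API. Below is a minimal sketch of that step, assuming the feed file is regenerated locally and then pushed back to this Space; the repo_id is taken from the URLs in the diff, and the rest is an assumed invocation rather than the Space's actual code.

# Sketch: push the regenerated rss.xml back to the Space repo.
from huggingface_hub import HfApi

api = HfApi()  # authentication comes from the locally stored HF token
api.upload_file(
    path_or_fileobj="rss.xml",   # local feed file produced by the podcast job
    path_in_repo="rss.xml",      # destination path inside the Space repo
    repo_id="fdaudens/podcast-jobs",
    repo_type="space",
)

With no commit_message argument, upload_file falls back to its default message, "Upload rss.xml with huggingface_hub", which is consistent with this commit's title.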
rss.xml CHANGED
@@ -14,7 +14,17 @@
     <itunes:email>florent.daudens@hf.co</itunes:email>
   </itunes:owner>
   <itunes:image href="https://huggingface.co/spaces/fdaudens/podcast-jobs/resolve/main/images/cover3.png" />
-  <lastBuildDate>
+  <lastBuildDate>Fri, 13 Jun 2025 16:34:32 +0000</lastBuildDate>
+  <item>
+    <title>Revolutionizing Medical AI with the World's Largest Reasoning Dataset</title>
+    <description>We explore the creation and impact of ReasonMed, a groundbreaking 370K multi-agent generated dataset designed to advance medical reasoning capabilities in AI. The dataset features rigorously verified examples of medical reasoning paths and allows for the evaluation of best practices in training medical reasoning models.
+
+    <a href="https://huggingface.co/papers/2506.09513">[Read the paper on Hugging Face]</a></description>
+    <pubDate>Fri, 13 Jun 2025 16:34:32 +0000</pubDate>
+    <enclosure url="https://huggingface.co/spaces/fdaudens/podcast-jobs/resolve/main/podcasts/podcast-2025-06-13.wav" length="7179644" type="audio/wav" />
+    <guid>https://huggingface.co/spaces/fdaudens/podcast-jobs/resolve/main/podcasts/podcast-2025-06-13.wav</guid>
+    <itunes:explicit>false</itunes:explicit>
+  </item>
   <item>
     <title>Language Models' Hidden Biases: Who Does LLMs Favor in Global Politics</title>
     <description>This episode delves into the unnoticed geopolitical biases in popular Large Language Models (LLMs), analyzing how they prioritize national perspectives on contentious historical events, and challenges the idea that simple debiasing methods can effectively mitigate these biases.
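For reference, here is a minimal sketch of how an episode <item> like the one added in this hunk could be built and spliced into rss.xml with Python's standard library before the upload step. This is an assumed workflow rather than the Space's actual code; the element values are copied from the diff, and the description text is shortened.

# Sketch: append a new podcast episode to rss.xml and refresh <lastBuildDate>.
import xml.etree.ElementTree as ET
from datetime import datetime, timezone
from email.utils import format_datetime

ITUNES_NS = "http://www.itunes.com/dtds/podcast-1.0.dtd"
ET.register_namespace("itunes", ITUNES_NS)  # keep the itunes: prefix when re-serializing

tree = ET.parse("rss.xml")
channel = tree.getroot().find("channel")

# RFC 822 timestamp, e.g. "Fri, 13 Jun 2025 16:34:32 +0000", used for both
# <lastBuildDate> and the new item's <pubDate>.
now = format_datetime(datetime.now(timezone.utc))

last_build = channel.find("lastBuildDate")
if last_build is None:
    last_build = ET.SubElement(channel, "lastBuildDate")
last_build.text = now

audio_url = ("https://huggingface.co/spaces/fdaudens/podcast-jobs/"
             "resolve/main/podcasts/podcast-2025-06-13.wav")

item = ET.Element("item")
ET.SubElement(item, "title").text = (
    "Revolutionizing Medical AI with the World's Largest Reasoning Dataset"
)
ET.SubElement(item, "description").text = (
    "We explore the creation and impact of ReasonMed, a groundbreaking 370K "
    "multi-agent generated dataset designed to advance medical reasoning "
    "capabilities in AI."  # shortened; the full text also embeds the paper link
)
ET.SubElement(item, "pubDate").text = now
ET.SubElement(item, "enclosure", url=audio_url, length="7179644", type="audio/wav")
ET.SubElement(item, "guid").text = audio_url
ET.SubElement(item, f"{{{ITUNES_NS}}}explicit").text = "false"

# The diff inserts the new episode ahead of the existing <item> entries.
existing = channel.findall("item")
if existing:
    channel.insert(list(channel).index(existing[0]), item)
else:
    channel.append(item)

tree.write("rss.xml", encoding="utf-8", xml_declaration=True)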