MaoShen committed
Commit 2eb41d7 · verified · 1 Parent(s): 522a5f4

Upload folder using huggingface_hub

This view is limited to 50 files because the commit contains too many changes; see the raw diff for the full list.

Files changed (50)
  1. .dockerignore +24 -0
  2. .gitattributes +3 -0
  3. .github/ISSUE_TEMPLATE/bug_report.md +26 -0
  4. .github/ISSUE_TEMPLATE/custom.md +10 -0
  5. .github/ISSUE_TEMPLATE/feature_request.md +23 -0
  6. .github/workflows/build_documentation.yml +27 -0
  7. .github/workflows/build_pr_documentation.yml +22 -0
  8. .github/workflows/quality.yml +30 -0
  9. .github/workflows/tests.yml +119 -0
  10. .github/workflows/trufflehog.yml +18 -0
  11. .github/workflows/upload_pr_documentation.yml +16 -0
  12. .gitignore +153 -0
  13. .pre-commit-config.yaml +13 -0
  14. CODE_OF_CONDUCT.md +133 -0
  15. CONTRIBUTING.md +127 -0
  16. Dockerfile +29 -0
  17. LICENSE +201 -0
  18. Makefile +18 -0
  19. README.md +248 -12
  20. deploy_space_action.yaml +19 -0
  21. docs/README.md +274 -0
  22. docs/source/en/_config.py +14 -0
  23. docs/source/en/_toctree.yml +42 -0
  24. docs/source/en/ai_assistant_architecture.md +207 -0
  25. docs/source/en/conceptual_guides/intro_agents.mdx +118 -0
  26. docs/source/en/conceptual_guides/react.mdx +63 -0
  27. docs/source/en/examples/multiagents.mdx +189 -0
  28. docs/source/en/examples/rag.mdx +151 -0
  29. docs/source/en/examples/text_to_sql.mdx +212 -0
  30. docs/source/en/examples/web_browser.mdx +213 -0
  31. docs/source/en/guided_tour.mdx +434 -0
  32. docs/source/en/index.mdx +53 -0
  33. docs/source/en/reference/agents.mdx +69 -0
  34. docs/source/en/reference/models.mdx +169 -0
  35. docs/source/en/reference/tools.mdx +107 -0
  36. docs/source/en/tutorials/building_good_agents.mdx +277 -0
  37. docs/source/en/tutorials/inspect_runs.mdx +193 -0
  38. docs/source/en/tutorials/memory.mdx +148 -0
  39. docs/source/en/tutorials/secure_code_execution.mdx +317 -0
  40. docs/source/en/tutorials/tools.mdx +247 -0
  41. docs/source/hi/_config.py +14 -0
  42. docs/source/hi/_toctree.yml +36 -0
  43. docs/source/hi/conceptual_guides/intro_agents.mdx +115 -0
  44. docs/source/hi/conceptual_guides/react.mdx +44 -0
  45. docs/source/hi/examples/multiagents.mdx +199 -0
  46. docs/source/hi/examples/rag.mdx +156 -0
  47. docs/source/hi/examples/text_to_sql.mdx +203 -0
  48. docs/source/hi/guided_tour.mdx +360 -0
  49. docs/source/hi/index.mdx +54 -0
  50. docs/source/hi/reference/agents.mdx +166 -0
.dockerignore ADDED
@@ -0,0 +1,24 @@
+ .venv/
+ .git/
+ __pycache__/
+ *.pyc
+ *.pyo
+ *.pyd
+ .Python
+ build/
+ develop-eggs/
+ dist/
+ downloads/
+ eggs/
+ .eggs/
+ lib/
+ lib64/
+ parts/
+ sdist/
+ var/
+ wheels/
+ *.egg-info/
+ .installed.cfg
+ *.egg
+ tests/
+ build/
.gitattributes CHANGED
@@ -33,3 +33,6 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ examples/open_deep_research/analysis_report[[:space:]]-[[:space:]]副本.pdf filter=lfs diff=lfs merge=lfs -text
+ examples/open_deep_research/analysis_report.pdf filter=lfs diff=lfs merge=lfs -text
+ tests/fixtures/000000039769.png filter=lfs diff=lfs merge=lfs -text
.github/ISSUE_TEMPLATE/bug_report.md ADDED
@@ -0,0 +1,26 @@
+ ---
+ name: Bug report
+ about: The clearer your bug report, the faster it will be fixed!
+ title: "[BUG]"
+ labels: bug
+ assignees: ''
+
+ ---
+
+ **Describe the bug**
+ A clear and concise description of what the bug is.
+
+ **Code to reproduce the error**
+ The simplest code snippet that produces your bug.
+
+ **Error logs (if any)**
+ Provide error logs if there are any.
+
+ **Expected behavior**
+ A clear and concise description of what you expected to happen.
+
+ **Packages version:**
+ Run `pip freeze | grep smolagents` and paste it here.
+
+ **Additional context**
+ Add any other context about the problem here.
.github/ISSUE_TEMPLATE/custom.md ADDED
@@ -0,0 +1,10 @@
+ ---
+ name: Custom issue template
+ about: Describe this issue template's purpose here.
+ title: ''
+ labels: ''
+ assignees: ''
+
+ ---
+
+
.github/ISSUE_TEMPLATE/feature_request.md ADDED
@@ -0,0 +1,23 @@
+ ---
+ name: Feature request
+ about: Suggest an idea for this project
+ title: ''
+ labels: enhancement
+ assignees: ''
+
+ ---
+
+ **Is your feature request related to a problem? Please describe.**
+ A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
+
+ **Describe the solution you'd like**
+ A clear and concise description of what you want to happen.
+
+ **Is this not possible with the current options?**
+ Make sure to consider if what you're requesting can be done with current abstractions.
+
+ **Describe alternatives you've considered**
+ A clear and concise description of any alternative solutions or features you've considered.
+
+ **Additional context**
+ Add any other context or screenshots about the feature request here.
.github/workflows/build_documentation.yml ADDED
@@ -0,0 +1,27 @@
+ name: Build documentation
+
+ on:
+   push:
+     branches:
+       - main
+       - doc-builder*
+       - v*-release
+       - use_templates
+     paths:
+       - 'docs/source/**'
+       - 'assets/**'
+       - '.github/workflows/doc-build.yml'
+       - 'pyproject.toml'
+
+ jobs:
+   build:
+     uses: huggingface/doc-builder/.github/workflows/build_main_documentation.yml@main
+     with:
+       commit_sha: ${{ github.sha }}
+       package: smolagents
+       languages: en
+       notebook_folder: smolagents_doc
+       # additional_args: --not_python_module # use this arg if repository is documentation only
+     secrets:
+       token: ${{ secrets.HUGGINGFACE_PUSH }}
+       hf_token: ${{ secrets.HF_DOC_BUILD_PUSH }}
.github/workflows/build_pr_documentation.yml ADDED
@@ -0,0 +1,22 @@
+ name: Build PR Documentation
+
+ on:
+   pull_request:
+     paths:
+       - 'docs/source/**'
+       - 'assets/**'
+       - '.github/workflows/doc-pr-build.yml'
+
+ concurrency:
+   group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
+   cancel-in-progress: true
+
+ jobs:
+   build:
+     uses: huggingface/doc-builder/.github/workflows/build_pr_documentation.yml@main
+     with:
+       commit_sha: ${{ github.event.pull_request.head.sha }}
+       pr_number: ${{ github.event.number }}
+       package: smolagents
+       languages: en
+       # additional_args: --not_python_module # use this arg if repository is documentation only
.github/workflows/quality.yml ADDED
@@ -0,0 +1,30 @@
+ name: Quality Check
+
+ on: [pull_request]
+
+ jobs:
+   check_code_quality:
+     runs-on: ubuntu-latest
+     env:
+       UV_HTTP_TIMEOUT: 600  # max 10min to install deps
+
+     steps:
+       - uses: actions/checkout@v2
+       - name: Set up Python
+         uses: actions/setup-python@v2
+         with:
+           python-version: "3.12"
+
+       # Setup venv
+       - name: Setup venv + uv
+         run: |
+           pip install --upgrade uv
+           uv venv
+
+       - name: Install dependencies
+         run: uv pip install "smolagents[quality] @ ."
+
+       # Equivalent of "make quality" but step by step
+       - run: uv run ruff check examples src tests utils  # linter
+       - run: uv run ruff format --check examples src tests utils  # formatter
+       - run: uv run python utils/check_tests_in_ci.py
.github/workflows/tests.yml ADDED
@@ -0,0 +1,119 @@
+ name: Python tests
+
+ on: [pull_request]
+
+ jobs:
+   build-ubuntu:
+     runs-on: ubuntu-latest
+     env:
+       UV_HTTP_TIMEOUT: 600  # max 10min to install deps
+
+     strategy:
+       fail-fast: false
+       matrix:
+         python-version: ["3.10", "3.12"]
+
+     steps:
+       - uses: actions/checkout@v2
+       - name: Set up Python ${{ matrix.python-version }}
+         uses: actions/setup-python@v2
+         with:
+           python-version: ${{ matrix.python-version }}
+
+       # Setup venv
+       - name: Setup venv + uv
+         run: |
+           pip install --upgrade uv
+           uv venv
+
+       # Install dependencies
+       - name: Install dependencies
+         run: |
+           uv pip install "smolagents[test] @ ."
+
+       # Run all tests separately for individual feedback
+       # Use 'if: success() || failure()' so that all tests are run even if one failed
+       # See https://stackoverflow.com/a/62112985
+       - name: Import tests
+         run: |
+           uv run pytest ./tests/test_import.py
+         if: ${{ success() || failure() }}
+
+       - name: Agent tests
+         run: |
+           uv run pytest ./tests/test_agents.py
+         if: ${{ success() || failure() }}
+
+       - name: Default tools tests
+         run: |
+           uv run pytest ./tests/test_default_tools.py
+         if: ${{ success() || failure() }}
+
+       # - name: Docs tests  # Disabled for now (slow test + requires API keys)
+       #   run: |
+       #     uv run pytest ./tests/test_all_docs.py
+
+       - name: Final answer tests
+         run: |
+           uv run pytest ./tests/test_final_answer.py
+         if: ${{ success() || failure() }}
+
+       - name: Models tests
+         run: |
+           uv run pytest ./tests/test_models.py
+         if: ${{ success() || failure() }}
+
+       - name: Memory tests
+         run: |
+           uv run pytest ./tests/test_memory.py
+         if: ${{ success() || failure() }}
+
+       - name: Monitoring tests
+         run: |
+           uv run pytest ./tests/test_monitoring.py
+         if: ${{ success() || failure() }}
+
+       - name: Local Python executor tests
+         run: |
+           uv run pytest ./tests/test_local_python_executor.py
+         if: ${{ success() || failure() }}
+
+       - name: Remote executor tests
+         run: |
+           uv run pytest ./tests/test_remote_executors.py
+         if: ${{ success() || failure() }}
+
+       - name: Search tests
+         run: |
+           uv run pytest ./tests/test_search.py
+         if: ${{ success() || failure() }}
+
+       - name: Tools tests
+         run: |
+           uv run pytest ./tests/test_tools.py
+         if: ${{ success() || failure() }}
+
+       - name: Tool validation tests
+         run: |
+           uv run pytest ./tests/test_tool_validation.py
+         if: ${{ success() || failure() }}
+
+       - name: Types tests
+         run: |
+           uv run pytest ./tests/test_types.py
+         if: ${{ success() || failure() }}
+
+       - name: Utils tests
+         run: |
+           uv run pytest ./tests/test_utils.py
+         if: ${{ success() || failure() }}
+
+       - name: Gradio UI tests
+         run: |
+           uv run pytest ./tests/test_gradio_ui.py
+         if: ${{ success() || failure() }}
+
+       - name: Function type hints utils tests
+         run: |
+           uv run pytest ./tests/test_function_type_hints_utils.py
+         if: ${{ success() || failure() }}
.github/workflows/trufflehog.yml ADDED
@@ -0,0 +1,18 @@
+ on:
+   push:
+
+ name: Secret Leaks
+
+ permissions:
+   contents: read
+
+ jobs:
+   trufflehog:
+     runs-on: ubuntu-latest
+     steps:
+       - name: Checkout code
+         uses: actions/checkout@v4
+         with:
+           fetch-depth: 0
+       - name: Secret Scanning
+         uses: trufflesecurity/trufflehog@main
.github/workflows/upload_pr_documentation.yml ADDED
@@ -0,0 +1,16 @@
+ name: Upload PR Documentation
+
+ on:
+   workflow_run:
+     workflows: ["Build PR Documentation"]
+     types:
+       - completed
+
+ jobs:
+   build:
+     uses: huggingface/doc-builder/.github/workflows/upload_pr_documentation.yml@main
+     with:
+       package_name: smolagents
+     secrets:
+       hf_token: ${{ secrets.HF_DOC_BUILD_PUSH }}
+       comment_bot_token: ${{ secrets.COMMENT_BOT_TOKEN }}
.gitignore ADDED
@@ -0,0 +1,153 @@
+ # Logging
+ logs
+ tmp
+ wandb
+
+ # Data
+ data
+ outputs
+ data/
+
+ # Apple
+ .DS_Store
+
+ # VS Code
+ .vscode
+
+ # Byte-compiled / optimized / DLL files
+ __pycache__/
+ *.py[cod]
+ *$py.class
+
+ # C extensions
+ *.so
+
+ # Distribution / packaging
+ .Python
+ build/
+ develop-eggs/
+ dist/
+ downloads/
+ eggs/
+ .eggs/
+ lib/
+ lib64/
+ parts/
+ sdist/
+ var/
+ wheels/
+ share/python-wheels/
+ node_modules/
+ *.egg-info/
+ .installed.cfg
+ *.egg
+ MANIFEST
+
+ # PyInstaller
+ *.manifest
+ *.spec
+
+ # Installer logs
+ pip-log.txt
+ pip-delete-this-directory.txt
+
+ # Unit test / coverage reports
+ htmlcov/
+ .tox/
+ .nox/
+ .coverage
+ .coverage.*
+ .cache
+ nosetests.xml
+ coverage.xml
+ *.cover
+ *.py,cover
+ .hypothesis/
+ .pytest_cache/
+ cover/
+ uv.lock
+
+ # Translations
+ *.mo
+ *.pot
+
+ # Sphinx documentation
+ docs/_build/
+
+ # PyBuilder
+ .pybuilder/
+ target/
+
+ # Jupyter Notebook
+ .ipynb_checkpoints
+
+ # IPython
+ profile_default/
+ ipython_config.py
+
+ # pyenv
+ # .python-version
+
+ # pipenv
+ #Pipfile.lock
+
+ # UV
+ #uv.lock
+
+ # poetry
+ #poetry.lock
+
+ # pdm
+ .pdm.toml
+ .pdm-python
+ .pdm-build/
+
+ # PEP 582
+ __pypackages__/
+
+ # Celery stuff
+ celerybeat-schedule
+ celerybeat.pid
+
+ # SageMath parsed files
+ *.sage.py
+
+ # Environments
+ .env
+ **/.env
+ .venv
+ env/
+ venv/
+ ENV/
+ env.bak/
+ venv.bak/
+
+
+ # mkdocs documentation
+ /site
+
+ # mypy
+ .mypy_cache/
+ .dmypy.json
+ dmypy.json
+
+ # Pyre type checker
+ .pyre/
+
+ # pytype static type analyzer
+ .pytype/
+
+ # Cython debug symbols
+ cython_debug/
+
+ # PyCharm
+ .idea/
+
+ # Interpreter
+ interpreter_workspace/
+
+ # Archive
+ archive/
+ savedir/
+ output/
+ tool_output/
.pre-commit-config.yaml ADDED
@@ -0,0 +1,13 @@
+ repos:
+   - repo: https://github.com/astral-sh/ruff-pre-commit
+     rev: v0.2.1
+     hooks:
+       - id: ruff
+         args:
+           - --fix
+       - id: ruff-format
+   - repo: https://github.com/pre-commit/pre-commit-hooks
+     rev: v4.5.0
+     hooks:
+       - id: check-merge-conflict
+       - id: check-yaml
CODE_OF_CONDUCT.md ADDED
@@ -0,0 +1,133 @@
+
+ # Contributor Covenant Code of Conduct
+
+ ## Our Pledge
+
+ We as members, contributors, and leaders pledge to make participation in our
+ community a harassment-free experience for everyone, regardless of age, body
+ size, visible or invisible disability, ethnicity, sex characteristics, gender
+ identity and expression, level of experience, education, socio-economic status,
+ nationality, personal appearance, race, caste, color, religion, or sexual
+ identity and orientation.
+
+ We pledge to act and interact in ways that contribute to an open, welcoming,
+ diverse, inclusive, and healthy community.
+
+ ## Our Standards
+
+ Examples of behavior that contributes to a positive environment for our
+ community include:
+
+ * Demonstrating empathy and kindness toward other people
+ * Being respectful of differing opinions, viewpoints, and experiences
+ * Giving and gracefully accepting constructive feedback
+ * Accepting responsibility and apologizing to those affected by our mistakes,
+   and learning from the experience
+ * Focusing on what is best not just for us as individuals, but for the overall
+   community
+
+ Examples of unacceptable behavior include:
+
+ * The use of sexualized language or imagery, and sexual attention or advances of
+   any kind
+ * Trolling, insulting or derogatory comments, and personal or political attacks
+ * Public or private harassment
+ * Publishing others' private information, such as a physical or email address,
+   without their explicit permission
+ * Other conduct which could reasonably be considered inappropriate in a
+   professional setting
+
+ ## Enforcement Responsibilities
+
+ Community leaders are responsible for clarifying and enforcing our standards of
+ acceptable behavior and will take appropriate and fair corrective action in
+ response to any behavior that they deem inappropriate, threatening, offensive,
+ or harmful.
+
+ Community leaders have the right and responsibility to remove, edit, or reject
+ comments, commits, code, wiki edits, issues, and other contributions that are
+ not aligned to this Code of Conduct, and will communicate reasons for moderation
+ decisions when appropriate.
+
+ ## Scope
+
+ This Code of Conduct applies within all community spaces, and also applies when
+ an individual is officially representing the community in public spaces.
+ Examples of representing our community include using an official e-mail address,
+ posting via an official social media account, or acting as an appointed
+ representative at an online or offline event.
+
+ ## Enforcement
+
+ Instances of abusive, harassing, or otherwise unacceptable behavior may be
+ reported to the community leaders responsible for enforcement at
+ feedback@huggingface.co.
+ All complaints will be reviewed and investigated promptly and fairly.
+
+ All community leaders are obligated to respect the privacy and security of the
+ reporter of any incident.
+
+ ## Enforcement Guidelines
+
+ Community leaders will follow these Community Impact Guidelines in determining
+ the consequences for any action they deem in violation of this Code of Conduct:
+
+ ### 1. Correction
+
+ **Community Impact**: Use of inappropriate language or other behavior deemed
+ unprofessional or unwelcome in the community.
+
+ **Consequence**: A private, written warning from community leaders, providing
+ clarity around the nature of the violation and an explanation of why the
+ behavior was inappropriate. A public apology may be requested.
+
+ ### 2. Warning
+
+ **Community Impact**: A violation through a single incident or series of
+ actions.
+
+ **Consequence**: A warning with consequences for continued behavior. No
+ interaction with the people involved, including unsolicited interaction with
+ those enforcing the Code of Conduct, for a specified period of time. This
+ includes avoiding interactions in community spaces as well as external channels
+ like social media. Violating these terms may lead to a temporary or permanent
+ ban.
+
+ ### 3. Temporary Ban
+
+ **Community Impact**: A serious violation of community standards, including
+ sustained inappropriate behavior.
+
+ **Consequence**: A temporary ban from any sort of interaction or public
+ communication with the community for a specified period of time. No public or
+ private interaction with the people involved, including unsolicited interaction
+ with those enforcing the Code of Conduct, is allowed during this period.
+ Violating these terms may lead to a permanent ban.
+
+ ### 4. Permanent Ban
+
+ **Community Impact**: Demonstrating a pattern of violation of community
+ standards, including sustained inappropriate behavior, harassment of an
+ individual, or aggression toward or disparagement of classes of individuals.
+
+ **Consequence**: A permanent ban from any sort of public interaction within the
+ community.
+
+ ## Attribution
+
+ This Code of Conduct is adapted from the [Contributor Covenant][homepage],
+ version 2.1, available at
+ [https://www.contributor-covenant.org/version/2/1/code_of_conduct.html][v2.1].
+
+ Community Impact Guidelines were inspired by
+ [Mozilla's code of conduct enforcement ladder][Mozilla CoC].
+
+ For answers to common questions about this code of conduct, see the FAQ at
+ [https://www.contributor-covenant.org/faq][FAQ]. Translations are available at
+ [https://www.contributor-covenant.org/translations][translations].
+
+ [homepage]: https://www.contributor-covenant.org
+ [v2.1]: https://www.contributor-covenant.org/version/2/1/code_of_conduct.html
+ [Mozilla CoC]: https://github.com/mozilla/diversity
+ [FAQ]: https://www.contributor-covenant.org/faq
+ [translations]: https://www.contributor-covenant.org/translations
CONTRIBUTING.md ADDED
@@ -0,0 +1,127 @@
+ <!---
+ Copyright 2025 The HuggingFace Team. All rights reserved.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+     http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+ -->
+
+ # Contribute to smolagents
+
+ Everyone is welcome to contribute, and we value everybody's contribution. Code
+ contributions are not the only way to help the community. Answering questions, helping
+ others, and improving the documentation are also immensely valuable.
+
+ It also helps us if you spread the word! Reference the library in blog posts
+ about the awesome projects it made possible, shout out on Twitter every time it has
+ helped you, or simply ⭐️ the repository to say thank you.
+
+ However you choose to contribute, please be mindful and respect our
+ [code of conduct](https://github.com/huggingface/smolagents/blob/main/CODE_OF_CONDUCT.md).
+
+ **This guide was heavily inspired by the awesome [scikit-learn guide to contributing](https://github.com/scikit-learn/scikit-learn/blob/main/CONTRIBUTING.md).**
+
+ ## Ways to contribute
+
+ There are several ways you can contribute to smolagents.
+
+ * Submit issues related to bugs or desired new features.
+ * Contribute to the examples or to the documentation.
+ * Fix outstanding issues with the existing code.
+
+ > All contributions are equally valuable to the community. 🥰
+
+ ## Submitting a bug-related issue or feature request
+
+ At any moment, feel welcome to open an issue, citing your exact error traces and package versions if it's a bug.
+ It's often even better to open a PR with your proposed fixes/changes!
+
+ Do your best to follow these guidelines when submitting a bug-related issue or a feature
+ request. It will make it easier for us to come back to you quickly and with good
+ feedback.
+
+ ### Did you find a bug?
+
+ The smolagents library is robust and reliable thanks to users who report the problems they encounter.
+
+ Before you report an issue, we would really appreciate it if you could **make sure the bug was not
+ already reported** (use the search bar on GitHub under Issues). Your issue should also be related to bugs in the
+ library itself, and not your code.
+
+ Once you've confirmed the bug hasn't already been reported, please include the following information in your issue so
+ we can quickly resolve it:
+
+ * Your **OS type and version**, as well as your environment versions (versions of python and dependencies).
+ * A short, self-contained, code snippet that allows us to reproduce the bug.
+ * The *full* traceback if an exception is raised.
+ * Attach any other additional information, like screenshots, you think may help.
+
+ ### Do you want a new feature?
+
+ If there is a new feature you'd like to see in smolagents, please open an issue and describe:
+
+ 1. What is the *motivation* behind this feature? Is it related to a problem or frustration with the library? Is it
+    a feature related to something you need for a project? Is it something you worked on and think it could benefit
+    the community?
+
+    Whatever it is, we'd love to hear about it!
+
+ 2. Describe your requested feature in as much detail as possible. The more you can tell us about it, the better
+    we'll be able to help you.
+ 3. Provide a *code snippet* that demonstrates the feature's usage.
+ 4. If the feature is related to a paper, please include a link.
+
+ If your issue is well written we're already 80% of the way there by the time you create it.
+
+ ## Do you want to add documentation?
+
+ We're always looking for improvements to the documentation that make it more clear and accurate. Please let us know
+ how the documentation can be improved, such as typos and any content that is missing, unclear or inaccurate. We'll be
+ happy to make the changes or help you make a contribution if you're interested!
+
+ ## Fixing outstanding issues
+
+ If you notice an issue with the existing code and have a fix in mind, feel free to [start contributing](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/proposing-changes-to-your-work-with-pull-requests/creating-a-pull-request) and open
+ a Pull Request!
+
+ ### Making code changes
+
+ To install dev dependencies, run:
+ ```
+ pip install -e ".[dev]"
+ ```
+
+ When making changes to the codebase, check that they follow the repo's code quality requirements by running:
+ ```
+ make quality
+ ```
+
+ If the checks fail, you can run the formatter with:
+ ```
+ make style
+ ```
+
+ And commit the changes.
+
+ To run tests locally, run this command:
+ ```bash
+ make test
+ ```
+
+ ## I want to become a maintainer of the project. How do I get there?
+
+ smolagents is a project led and managed by Hugging Face. We are more than
+ happy to have motivated individuals from other organizations join us as maintainers with the goal of helping smolagents
+ make a dent in the world of Agents.
+
+ If you are such an individual (or organization), please reach out to us and let's collaborate.
Dockerfile ADDED
@@ -0,0 +1,29 @@
+ # Base Python image
+ FROM python:3.12-slim
+
+ # Set working directory
+ WORKDIR /app
+
+ # Install build dependencies
+ RUN apt-get update && apt-get install -y \
+     build-essential \
+     zlib1g-dev \
+     libjpeg-dev \
+     libpng-dev \
+     && rm -rf /var/lib/apt/lists/*
+
+ # Copy package files
+ COPY . /app/
+
+ # Install dependencies
+ RUN pip install --no-cache-dir -r requirements.txt
+
+ # Install the package
+ RUN pip install -e .
+
+ COPY server.py /app/server.py
+
+ # Expose the port your server will run on
+ EXPOSE 65432
+
+ CMD ["python", "/app/server.py"]
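The Dockerfile above copies a `server.py` into the image and exposes port 65432, but `server.py` itself is not part of this 50-file view. As a purely illustrative sketch (not the actual implementation), the contract the Dockerfile implies is a TCP server listening on that port:

```py
# Hypothetical sketch only: the real /app/server.py is not shown in this commit view.
# It illustrates the contract implied by the Dockerfile (a TCP server on port 65432).
import socket

HOST = "0.0.0.0"  # accept connections from outside the container
PORT = 65432      # matches `EXPOSE 65432` in the Dockerfile

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
    server.bind((HOST, PORT))
    server.listen()
    conn, addr = server.accept()
    with conn:
        while data := conn.recv(1024):
            conn.sendall(data)  # echo the payload back, as placeholder behavior
```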
LICENSE ADDED
@@ -0,0 +1,201 @@
+                                  Apache License
+                            Version 2.0, January 2004
+                         http://www.apache.org/licenses/
+
+    TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+    1. Definitions.
+
+       "License" shall mean the terms and conditions for use, reproduction,
+       and distribution as defined by Sections 1 through 9 of this document.
+
+       "Licensor" shall mean the copyright owner or entity authorized by
+       the copyright owner that is granting the License.
+
+       "Legal Entity" shall mean the union of the acting entity and all
+       other entities that control, are controlled by, or are under common
+       control with that entity. For the purposes of this definition,
+       "control" means (i) the power, direct or indirect, to cause the
+       direction or management of such entity, whether by contract or
+       otherwise, or (ii) ownership of fifty percent (50%) or more of the
+       outstanding shares, or (iii) beneficial ownership of such entity.
+
+       "You" (or "Your") shall mean an individual or Legal Entity
+       exercising permissions granted by this License.
+
+       "Source" form shall mean the preferred form for making modifications,
+       including but not limited to software source code, documentation
+       source, and configuration files.
+
+       "Object" form shall mean any form resulting from mechanical
+       transformation or translation of a Source form, including but
+       not limited to compiled object code, generated documentation,
+       and conversions to other media types.
+
+       "Work" shall mean the work of authorship, whether in Source or
+       Object form, made available under the License, as indicated by a
+       copyright notice that is included in or attached to the work
+       (an example is provided in the Appendix below).
+
+       "Derivative Works" shall mean any work, whether in Source or Object
+       form, that is based on (or derived from) the Work and for which the
+       editorial revisions, annotations, elaborations, or other modifications
+       represent, as a whole, an original work of authorship. For the purposes
+       of this License, Derivative Works shall not include works that remain
+       separable from, or merely link (or bind by name) to the interfaces of,
+       the Work and Derivative Works thereof.
+
+       "Contribution" shall mean any work of authorship, including
+       the original version of the Work and any modifications or additions
+       to that Work or Derivative Works thereof, that is intentionally
+       submitted to Licensor for inclusion in the Work by the copyright owner
+       or by an individual or Legal Entity authorized to submit on behalf of
+       the copyright owner. For the purposes of this definition, "submitted"
+       means any form of electronic, verbal, or written communication sent
+       to the Licensor or its representatives, including but not limited to
+       communication on electronic mailing lists, source code control systems,
+       and issue tracking systems that are managed by, or on behalf of, the
+       Licensor for the purpose of discussing and improving the Work, but
+       excluding communication that is conspicuously marked or otherwise
+       designated in writing by the copyright owner as "Not a Contribution."
+
+       "Contributor" shall mean Licensor and any individual or Legal Entity
+       on behalf of whom a Contribution has been received by Licensor and
+       subsequently incorporated within the Work.
+
+    2. Grant of Copyright License. Subject to the terms and conditions of
+       this License, each Contributor hereby grants to You a perpetual,
+       worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+       copyright license to reproduce, prepare Derivative Works of,
+       publicly display, publicly perform, sublicense, and distribute the
+       Work and such Derivative Works in Source or Object form.
+
+    3. Grant of Patent License. Subject to the terms and conditions of
+       this License, each Contributor hereby grants to You a perpetual,
+       worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+       (except as stated in this section) patent license to make, have made,
+       use, offer to sell, sell, import, and otherwise transfer the Work,
+       where such license applies only to those patent claims licensable
+       by such Contributor that are necessarily infringed by their
+       Contribution(s) alone or by combination of their Contribution(s)
+       with the Work to which such Contribution(s) was submitted. If You
+       institute patent litigation against any entity (including a
+       cross-claim or counterclaim in a lawsuit) alleging that the Work
+       or a Contribution incorporated within the Work constitutes direct
+       or contributory patent infringement, then any patent licenses
+       granted to You under this License for that Work shall terminate
+       as of the date such litigation is filed.
+
+    4. Redistribution. You may reproduce and distribute copies of the
+       Work or Derivative Works thereof in any medium, with or without
+       modifications, and in Source or Object form, provided that You
+       meet the following conditions:
+
+       (a) You must give any other recipients of the Work or
+           Derivative Works a copy of this License; and
+
+       (b) You must cause any modified files to carry prominent notices
+           stating that You changed the files; and
+
+       (c) You must retain, in the Source form of any Derivative Works
+           that You distribute, all copyright, patent, trademark, and
+           attribution notices from the Source form of the Work,
+           excluding those notices that do not pertain to any part of
+           the Derivative Works; and
+
+       (d) If the Work includes a "NOTICE" text file as part of its
+           distribution, then any Derivative Works that You distribute must
+           include a readable copy of the attribution notices contained
+           within such NOTICE file, excluding those notices that do not
+           pertain to any part of the Derivative Works, in at least one
+           of the following places: within a NOTICE text file distributed
+           as part of the Derivative Works; within the Source form or
+           documentation, if provided along with the Derivative Works; or,
+           within a display generated by the Derivative Works, if and
+           wherever such third-party notices normally appear. The contents
+           of the NOTICE file are for informational purposes only and
+           do not modify the License. You may add Your own attribution
+           notices within Derivative Works that You distribute, alongside
+           or as an addendum to the NOTICE text from the Work, provided
+           that such additional attribution notices cannot be construed
+           as modifying the License.
+
+       You may add Your own copyright statement to Your modifications and
+       may provide additional or different license terms and conditions
+       for use, reproduction, or distribution of Your modifications, or
+       for any such Derivative Works as a whole, provided Your use,
+       reproduction, and distribution of the Work otherwise complies with
+       the conditions stated in this License.
+
+    5. Submission of Contributions. Unless You explicitly state otherwise,
+       any Contribution intentionally submitted for inclusion in the Work
+       by You to the Licensor shall be under the terms and conditions of
+       this License, without any additional terms or conditions.
+       Notwithstanding the above, nothing herein shall supersede or modify
+       the terms of any separate license agreement you may have executed
+       with Licensor regarding such Contributions.
+
+    6. Trademarks. This License does not grant permission to use the trade
+       names, trademarks, service marks, or product names of the Licensor,
+       except as required for reasonable and customary use in describing the
+       origin of the Work and reproducing the content of the NOTICE file.
+
+    7. Disclaimer of Warranty. Unless required by applicable law or
+       agreed to in writing, Licensor provides the Work (and each
+       Contributor provides its Contributions) on an "AS IS" BASIS,
+       WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+       implied, including, without limitation, any warranties or conditions
+       of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+       PARTICULAR PURPOSE. You are solely responsible for determining the
+       appropriateness of using or redistributing the Work and assume any
+       risks associated with Your exercise of permissions under this License.
+
+    8. Limitation of Liability. In no event and under no legal theory,
+       whether in tort (including negligence), contract, or otherwise,
+       unless required by applicable law (such as deliberate and grossly
+       negligent acts) or agreed to in writing, shall any Contributor be
+       liable to You for damages, including any direct, indirect, special,
+       incidental, or consequential damages of any character arising as a
+       result of this License or out of the use or inability to use the
+       Work (including but not limited to damages for loss of goodwill,
+       work stoppage, computer failure or malfunction, or any and all
+       other commercial damages or losses), even if such Contributor
+       has been advised of the possibility of such damages.
+
+    9. Accepting Warranty or Additional Liability. While redistributing
+       the Work or Derivative Works thereof, You may choose to offer,
+       and charge a fee for, acceptance of support, warranty, indemnity,
+       or other liability obligations and/or rights consistent with this
+       License. However, in accepting such obligations, You may act only
+       on Your own behalf and on Your sole responsibility, not on behalf
+       of any other Contributor, and only if You agree to indemnify,
+       defend, and hold each Contributor harmless for any liability
+       incurred by, or claims asserted against, such Contributor by reason
+       of your accepting any such warranty or additional liability.
+
+    END OF TERMS AND CONDITIONS
+
+    APPENDIX: How to apply the Apache License to your work.
+
+       To apply the Apache License to your work, attach the following
+       boilerplate notice, with the fields enclosed by brackets "[]"
+       replaced with your own identifying information. (Don't include
+       the brackets!) The text should be enclosed in the appropriate
+       comment syntax for the file format. We also recommend that a
+       file or class name and description of purpose be included on the
+       same "printed page" as the copyright notice for easier
+       identification within third-party archives.
+
+    Copyright [yyyy] [name of copyright owner]
+
+    Licensed under the Apache License, Version 2.0 (the "License");
+    you may not use this file except in compliance with the License.
+    You may obtain a copy of the License at
+
+        http://www.apache.org/licenses/LICENSE-2.0
+
+    Unless required by applicable law or agreed to in writing, software
+    distributed under the License is distributed on an "AS IS" BASIS,
+    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+    See the License for the specific language governing permissions and
+    limitations under the License.
Makefile ADDED
@@ -0,0 +1,18 @@
+ .PHONY: quality style test docs utils
+
+ check_dirs := examples src tests utils
+
+ # Check code quality of the source code
+ quality:
+ 	ruff check $(check_dirs)
+ 	ruff format --check $(check_dirs)
+ 	python utils/check_tests_in_ci.py
+
+ # Format source code automatically
+ style:
+ 	ruff check $(check_dirs) --fix
+ 	ruff format $(check_dirs)
+
+ # Run smolagents tests
+ test:
+ 	pytest ./tests/
README.md CHANGED
@@ -1,12 +1,248 @@
- ---
- title: Moonshot DeepResearch
- emoji: 💬
- colorFrom: yellow
- colorTo: purple
- sdk: gradio
- sdk_version: 5.0.1
- app_file: app.py
- pinned: false
- ---
-
- An example chatbot using [Gradio](https://gradio.app), [`huggingface_hub`](https://huggingface.co/docs/huggingface_hub/v0.22.2/en/index), and the [Hugging Face Inference API](https://huggingface.co/docs/api-inference/index).
+ ---
+ title: Moonshot_DeepResearch
+ app_file: examples/open_deep_research/app.py
+ sdk: gradio
+ sdk_version: 5.20.1
+ ---
+ <!---
+ Copyright 2024 The HuggingFace Team. All rights reserved.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+     http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+ -->
+ <p align="center">
+     <!-- Uncomment when CircleCI is set up
+     <a href="https://circleci.com/gh/huggingface/accelerate"><img alt="Build" src="https://img.shields.io/circleci/build/github/huggingface/transformers/master"></a>
+     -->
+     <a href="https://github.com/huggingface/smolagents/blob/main/LICENSE"><img alt="License" src="https://img.shields.io/github/license/huggingface/smolagents.svg?color=blue"></a>
+     <a href="https://huggingface.co/docs/smolagents"><img alt="Documentation" src="https://img.shields.io/website/http/huggingface.co/docs/smolagents/index.html.svg?down_color=red&down_message=offline&up_message=online"></a>
+     <a href="https://github.com/huggingface/smolagents/releases"><img alt="GitHub release" src="https://img.shields.io/github/release/huggingface/smolagents.svg"></a>
+     <a href="https://github.com/huggingface/smolagents/blob/main/CODE_OF_CONDUCT.md"><img alt="Contributor Covenant" src="https://img.shields.io/badge/Contributor%20Covenant-v2.0%20adopted-ff69b4.svg"></a>
+ </p>
+
+ <h3 align="center">
+   <div style="display:flex;flex-direction:row;">
+     <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/smolagents/smolagents.png" alt="Hugging Face mascot as James Bond" width=400px>
+     <p>A smol library to build great agents!</p>
+   </div>
+ </h3>
+
+ `smolagents` is a library that enables you to run powerful agents in a few lines of code. It offers:
+
+ ✨ **Simplicity**: the logic for agents fits in ~1,000 lines of code (see [agents.py](https://github.com/huggingface/smolagents/blob/main/src/smolagents/agents.py)). We kept abstractions to their minimal shape above raw code!
+
+ 🧑‍💻 **First-class support for Code Agents**. Our [`CodeAgent`](https://huggingface.co/docs/smolagents/reference/agents#smolagents.CodeAgent) writes its actions in code (as opposed to "agents being used to write code"). To make it secure, we support executing in sandboxed environments via [E2B](https://e2b.dev/) or via Docker.
+
+ 🤗 **Hub integrations**: you can [share/pull tools to/from the Hub](https://huggingface.co/docs/smolagents/reference/tools#smolagents.Tool.from_hub), and more is to come!
+
+ 🌐 **Model-agnostic**: smolagents supports any LLM. It can be a local `transformers` or `ollama` model, one of [many providers on the Hub](https://huggingface.co/blog/inference-providers), or any model from OpenAI, Anthropic and many others via our [LiteLLM](https://www.litellm.ai/) integration.
+
+ 👁️ **Modality-agnostic**: Agents support text, vision, video, even audio inputs! See [this tutorial](https://huggingface.co/docs/smolagents/examples/web_browser) for vision.
+
+ 🛠️ **Tool-agnostic**: you can use tools from [LangChain](https://huggingface.co/docs/smolagents/reference/tools#smolagents.Tool.from_langchain) or [Anthropic's MCP](https://huggingface.co/docs/smolagents/reference/tools#smolagents.ToolCollection.from_mcp); you can even use a [Hub Space](https://huggingface.co/docs/smolagents/reference/tools#smolagents.Tool.from_space) as a tool.
+
+ Full documentation can be found [here](https://huggingface.co/docs/smolagents/index).
+
+ > [!NOTE]
+ > Check out our [launch blog post](https://huggingface.co/blog/smolagents) to learn more about `smolagents`!
+
+ ## Quick demo
+
+ First install the package.
+ ```bash
+ pip install smolagents
+ ```
+ Then define your agent, give it the tools it needs and run it!
+ ```py
+ from smolagents import CodeAgent, DuckDuckGoSearchTool, HfApiModel
+
+ model = HfApiModel()
+ agent = CodeAgent(tools=[DuckDuckGoSearchTool()], model=model)
+
+ agent.run("How many seconds would it take for a leopard at full speed to run through Pont des Arts?")
+ ```
+
+ https://github.com/user-attachments/assets/cd0226e2-7479-4102-aea0-57c22ca47884
+
+ You can even share your agent to the Hub:
+ ```py
+ agent.push_to_hub("m-ric/my_agent")
+
+ # agent.from_hub("m-ric/my_agent") to load an agent from the Hub
+ ```
+
+ Our library is LLM-agnostic: you could switch the example above to any inference provider.
+
+ <details>
+ <summary> <b>HfApiModel, gateway for 4 inference providers</b></summary>
+
+ ```py
+ from smolagents import HfApiModel
+
+ model = HfApiModel(
+     model_id="deepseek-ai/DeepSeek-R1",
+     provider="together",
+ )
+ ```
+ </details>
+ <details>
+ <summary> <b>LiteLLM to access 100+ LLMs</b></summary>
+
+ ```py
+ import os
+
+ from smolagents import LiteLLMModel
+
+ model = LiteLLMModel(
+     "anthropic/claude-3-5-sonnet-latest",
+     temperature=0.2,
+     api_key=os.environ["ANTHROPIC_API_KEY"]
+ )
+ ```
+ </details>
+ <details>
+ <summary> <b>OpenAI-compatible servers</b></summary>
+
+ ```py
+ import os
+
+ from smolagents import OpenAIServerModel
+
+ model = OpenAIServerModel(
+     model_id="deepseek-ai/DeepSeek-R1",
+     api_base="https://api.together.xyz/v1/",  # Leave this blank to query OpenAI servers.
+     api_key=os.environ["TOGETHER_API_KEY"],  # Switch to the API key for the server you're targeting.
+ )
+ ```
+ </details>
+ <details>
+ <summary> <b>Local `transformers` model</b></summary>
+
+ ```py
+ from smolagents import TransformersModel
+
+ model = TransformersModel(
+     model_id="Qwen/Qwen2.5-Coder-32B-Instruct",
+     max_new_tokens=4096,
+     device_map="auto"
+ )
+ ```
+ </details>
+ <details>
+ <summary> <b>Azure models</b></summary>
+
+ ```py
+ import os
+
+ from smolagents import AzureOpenAIServerModel
+
+ model = AzureOpenAIServerModel(
+     model_id=os.environ.get("AZURE_OPENAI_MODEL"),
+     azure_endpoint=os.environ.get("AZURE_OPENAI_ENDPOINT"),
+     api_key=os.environ.get("AZURE_OPENAI_API_KEY"),
+     api_version=os.environ.get("OPENAI_API_VERSION")
+ )
+ ```
+ </details>
+
+ ## CLI
+
+ You can run agents from the CLI using two commands: `smolagent` and `webagent`.
+
+ `smolagent` is a generalist command to run a multi-step `CodeAgent` that can be equipped with various tools.
+
+ ```bash
+ smolagent "Plan a trip to Tokyo, Kyoto and Osaka between Mar 28 and Apr 7." --model-type "HfApiModel" --model-id "Qwen/Qwen2.5-Coder-32B-Instruct" --imports "pandas numpy" --tools "web_search"
+ ```
+
+ Meanwhile `webagent` is a specific web-browsing agent using [helium](https://github.com/mherrmann/helium) (read more [here](https://github.com/huggingface/smolagents/blob/main/src/smolagents/vision_web_browser.py)).
+
+ For instance:
+ ```bash
+ webagent "go to xyz.com/men, get to sale section, click the first clothing item you see. Get the product details, and the price, return them. note that I'm shopping from France" --model-type "LiteLLMModel" --model-id "gpt-4o"
+ ```
+
+ ## How do Code agents work?
+
+ Our [`CodeAgent`](https://huggingface.co/docs/smolagents/reference/agents#smolagents.CodeAgent) works mostly like classical ReAct agents - the exception being that the LLM engine writes its actions as Python code snippets.
+
+ ```mermaid
+ flowchart TB
+     Task[User Task]
+     Memory[agent.memory]
+     Generate[Generate from agent.model]
+     Execute[Execute Code action - Tool calls are written as functions]
+     Answer[Return the argument given to 'final_answer']
+
+     Task -->|Add task to agent.memory| Memory
+
+     subgraph ReAct[ReAct loop]
+         Memory -->|Memory as chat messages| Generate
+         Generate -->|Parse output to extract code action| Execute
+         Execute -->|No call to 'final_answer' tool => Store execution logs in memory and keep running| Memory
+     end
+
+     Execute -->|Call to 'final_answer' tool| Answer
+
+     %% Styling
+     classDef default fill:#d4b702,stroke:#8b7701,color:#ffffff
+     classDef io fill:#4a5568,stroke:#2d3748,color:#ffffff
+
+     class Task,Answer io
+ ```
+
+ Since actions are Python code snippets, tool calls are performed as Python function calls. For instance, here is how the agent can perform web search over several websites in one single action:
+ ```py
+ requests_to_search = ["gulf of mexico america", "greenland denmark", "tariffs"]
+ for request in requests_to_search:
+     print(f"Here are the search results for {request}:", web_search(request))
+ ```
+
+ Writing actions as code snippets has been shown to work better than the current industry practice of letting the LLM output a dictionary of the tools it wants to call: it [uses 30% fewer steps](https://huggingface.co/papers/2402.01030) (thus 30% fewer LLM calls) and [reaches higher performance on difficult benchmarks](https://huggingface.co/papers/2411.01747). Head to [our high-level intro to agents](https://huggingface.co/docs/smolagents/conceptual_guides/intro_agents) to learn more.
+
+ In particular, since code execution can be a security concern (arbitrary code execution!), we provide options at runtime:
+ - a secure python interpreter to run code more safely in your environment (more secure than raw code execution but still risky)
+ - a sandboxed environment using [E2B](https://e2b.dev/) or Docker (removes the risk to your own system).
+
+ On top of this [`CodeAgent`](https://huggingface.co/docs/smolagents/reference/agents#smolagents.CodeAgent) class, we still support the standard [`ToolCallingAgent`](https://huggingface.co/docs/smolagents/reference/agents#smolagents.ToolCallingAgent) that writes actions as JSON/text blobs. But we recommend always using `CodeAgent`.
+
+ ## How smol is this library?
+
+ We strived to keep abstractions to a strict minimum: the main code in `agents.py` has <1,000 lines of code.
+ Still, we implement several types of agents: `CodeAgent` writes its actions as Python code snippets, and the more classic `ToolCallingAgent` leverages built-in tool calling methods. We also have multi-agent hierarchies, import from tool collections, remote code execution, vision models...
+
+ By the way, why use a framework at all? Well, because a big part of this stuff is non-trivial. For instance, the code agent has to keep a consistent format for code throughout its system prompt, its parser, and the execution. So our framework handles this complexity for you. But of course we still encourage you to hack into the source code and use only the bits that you need, to the exclusion of everything else!
+
+ ## How strong are open models for agentic workflows?
+
+ We've created [`CodeAgent`](https://huggingface.co/docs/smolagents/reference/agents#smolagents.CodeAgent) instances with some leading models, and compared them on [this benchmark](https://huggingface.co/datasets/m-ric/agents_medium_benchmark_2) that gathers questions from a few different benchmarks to propose a varied blend of challenges.
+
+ [Find the benchmarking code here](https://github.com/huggingface/smolagents/blob/main/examples/benchmark.ipynb) for more detail on the agentic setup used, and see a comparison of LLMs run as code agents versus as vanilla tool-calling agents (spoiler: code agents work better).
+
+ <p align="center">
+     <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/smolagents/benchmark_code_agents.jpeg" alt="benchmark of different models on agentic workflows. Open model DeepSeek-R1 beats closed-source models." width=60% max-width=500px>
+ </p>
+
+ This comparison shows that open-source models can now take on the best closed models!
+
+ ## Contribute
+
+ Everyone is welcome to contribute; get started with our [contribution guide](https://github.com/huggingface/smolagents/blob/main/CONTRIBUTING.md).
+
+ ## Cite smolagents
+
+ If you use `smolagents` in your publication, please cite it by using the following BibTeX entry.
+
+ ```bibtex
+ @Misc{smolagents,
+   title =        {`smolagents`: a smol library to build great agentic systems.},
+   author =       {Aymeric Roucher and Albert Villanova del Moral and Thomas Wolf and Leandro von Werra and Erik Kaunismäki},
+   howpublished = {\url{https://github.com/huggingface/smolagents}},
+   year =         {2025}
+ }
+ ```
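The README above notes that inside a `CodeAgent`, tool calls are plain Python function calls. As a minimal sketch of wiring in a custom tool with the `@tool` decorator (covered by the tools docs added in this commit; the weather function and its return value are invented for illustration):

```py
from smolagents import CodeAgent, HfApiModel, tool

@tool
def get_weather(city: str) -> str:
    """Returns a short weather report for a city.

    Args:
        city: Name of the city to look up.
    """
    # Illustrative stub: a real tool would query a weather API here.
    return f"The weather in {city} is sunny, 22°C."

# The agent can now call get_weather(...) directly inside its generated code actions.
agent = CodeAgent(tools=[get_weather], model=HfApiModel())
agent.run("Should I pack an umbrella for Paris this weekend?")
```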
deploy_space_action.yaml ADDED
@@ -0,0 +1,19 @@
+ name: Deploy to Hugging Face Spaces
+ on:
+   push:
+     branches: [main]
+
+ jobs:
+   deploy:
+     runs-on: ubuntu-latest
+     steps:
+       - uses: actions/checkout@v3
+       - name: Deploy to HF Spaces
+         uses: huggingface/huggingface-deploy-action@main
+         with:
+           space-name: ${{ env.SPACE_NAME }}
+           title: ${{ env.SPACE_TITLE }}
+           space-type: gradio
+           artifact-path: .
+           package-path: .
+           hf-token: ${{ secrets.HF_TOKEN }}
docs/README.md ADDED
@@ -0,0 +1,274 @@
1
+ <!---
2
+ Copyright 2024 The HuggingFace Team. All rights reserved.
3
+
4
+ Licensed under the Apache License, Version 2.0 (the "License");
5
+ you may not use this file except in compliance with the License.
6
+ You may obtain a copy of the License at
7
+
8
+ http://www.apache.org/licenses/LICENSE-2.0
9
+
10
+ Unless required by applicable law or agreed to in writing, software
11
+ distributed under the License is distributed on an "AS IS" BASIS,
12
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13
+ See the License for the specific language governing permissions and
14
+ limitations under the License.
15
+ -->
16
+
17
+ # Generating the documentation
18
+
19
+ To generate the documentation, you have to build it. Several packages are necessary to build the doc.
20
+
21
+ First, you need to install the project itself by running the following command at the root of the code repository:
22
+
23
+ ```bash
24
+ pip install -e .
25
+ ```
26
+
27
+ You also need to install 2 extra packages:
28
+
29
+ ```bash
30
+ # `hf-doc-builder` to build the docs
31
+ pip install git+https://github.com/huggingface/doc-builder@main
32
+ # `watchdog` for live reloads
33
+ pip install watchdog
34
+ ```
35
+
36
+ ---
37
+ **NOTE**
38
+
39
+ You only need to generate the documentation to inspect it locally (if you're planning changes and want to
40
+ check how they look before committing for instance). You don't have to commit the built documentation.
41
+
42
+ ---
43
+
44
+ ## Building the documentation
45
+
46
+ Once you have set up `doc-builder` and the additional packages with the pip install commands above,
47
+ you can generate the documentation by typing the following command:
48
+
49
+ ```bash
50
+ doc-builder build smolagents docs/source/en/ --build_dir ~/tmp/test-build
51
+ ```
52
+
53
+ You can adapt the `--build_dir` to set any temporary folder that you prefer. This command will create it and generate
54
+ the MDX files that will be rendered as the documentation on the main website. You can inspect them in your favorite
55
+ Markdown editor.
56
+
57
+ ## Previewing the documentation
58
+
59
+ To preview the docs, run the following command:
60
+
61
+ ```bash
62
+ doc-builder preview smolagents docs/source/en/
63
+ ```
64
+
65
+ The docs will be viewable at [http://localhost:5173](http://localhost:5173). You can also preview the docs once you
66
+ have opened a PR. You will see a bot add a comment to a link where the documentation with your changes lives.
67
+
68
+ ---
69
+ **NOTE**
70
+
71
+ The `preview` command only works with existing doc files. When you add a completely new file, you need to update
72
+ `_toctree.yml` and restart the `preview` command (`ctrl-c` to stop it, then call `doc-builder preview ...` again).
73
+
74
+ ---
75
+
76
+ ## Adding a new element to the navigation bar
77
+
78
+ Accepted files are Markdown (.md).
79
+
80
+ Create a file with its extension and put it in the source directory. You can then link it to the toc-tree by putting
81
+ the filename without the extension in the [`_toctree.yml`](https://github.com/huggingface/smolagents/blob/main/docs/source/_toctree.yml) file.
82
+
83
+ ## Renaming section headers and moving sections
84
+
85
+ It helps to keep the old links working when renaming the section header and/or moving sections from one document to another. This is because old links are likely to be used in issues, forums, and social media, and it makes for a much better user experience if users reading those months later can still easily navigate to the originally intended information.
86
+
87
+ Therefore, we simply keep a little map of moved sections at the end of the document where the original section was. The key is to preserve the original anchor.
88
+
89
+ So if you renamed a section from: "Section A" to "Section B", then you can add at the end of the file:
90
+
91
+ ```
92
+ Sections that were moved:
93
+
94
+ [ <a href="#section-b">Section A</a><a id="section-a"></a> ]
95
+ ```
96
+ and of course, if you moved it to another file, then:
97
+
98
+ ```
99
+ Sections that were moved:
100
+
101
+ [ <a href="../new-file#section-b">Section A</a><a id="section-a"></a> ]
102
+ ```
103
+
104
+ Use the relative style to link to the new file so that the versioned docs continue to work.
105
+
106
+ For an example of a rich moved section set please see the very end of [the transformers Trainer doc](https://github.com/huggingface/transformers/blob/main/docs/source/en/main_classes/trainer.md).
107
+
108
+
109
+ ## Writing Documentation - Specification
110
+
111
+ The `huggingface/smolagents` documentation follows the
112
+ [Google documentation](https://sphinxcontrib-napoleon.readthedocs.io/en/latest/example_google.html) style for docstrings,
113
+ although we can write them directly in Markdown.
114
+
115
+ ### Adding a new tutorial
116
+
117
+ Adding a new tutorial or section is done in two steps:
118
+
119
+ - Add a new Markdown (.md) file under `./source`.
120
+ - Link that file in `./source/_toctree.yml` on the correct toc-tree.
121
+
122
+ Make sure to put your new file under the proper section. If in doubt, feel free to ask in a GitHub issue or PR.
123
+
124
+ ### Translating
125
+
126
+ When translating, refer to the guide at [./TRANSLATING.md](https://github.com/huggingface/smolagents/blob/main/docs/TRANSLATING.md).
127
+
128
+ ### Writing source documentation
129
+
130
+ Values that should be put in `code` should be surrounded by backticks: \`like so\`. Note that argument names
131
+ and objects like True, None, or any strings should usually be put in `code`.
132
+
133
+ When mentioning a class, function, or method, it is recommended to use our syntax for internal links so that our tool
134
+ adds a link to its documentation with this syntax: \[\`XXXClass\`\] or \[\`function\`\]. This requires the class or
135
+ function to be in the main package.
136
+
137
+ If you want to create a link to some internal class or function, you need to
138
+ provide its path. For instance: \[\`utils.ModelOutput\`\]. This will be converted into a link with
139
+ `utils.ModelOutput` in the description. To get rid of the path and only keep the name of the object you are
140
+ linking to in the description, add a ~: \[\`~utils.ModelOutput\`\] will generate a link with `ModelOutput` in the description.
141
+
142
+ The same works for methods, so you can use either \[\`XXXClass.method\`\] or \[\`~XXXClass.method\`\].
143
+
144
+ #### Defining arguments in a method
145
+
146
+ Arguments should be defined with the `Args:` (or `Arguments:` or `Parameters:`) prefix, followed by a line return and
147
+ an indentation. The argument should be followed by its type, with its shape if it is a tensor, a colon, and its
148
+ description:
149
+
150
+ ```
151
+ Args:
152
+ n_layers (`int`): The number of layers of the model.
153
+ ```
154
+
155
+ If the description is too long to fit in one line, another indentation is necessary before writing the description
156
+ after the argument.
157
+
158
+ Here's an example showcasing everything so far:
159
+
160
+ ```
161
+ Args:
162
+ input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`):
163
+ Indices of input sequence tokens in the vocabulary.
164
+
165
+ Indices can be obtained using [`AlbertTokenizer`]. See [`~PreTrainedTokenizer.encode`] and
166
+ [`~PreTrainedTokenizer.__call__`] for details.
167
+
168
+ [What are input IDs?](../glossary#input-ids)
169
+ ```
170
+
171
+ For optional arguments or arguments with defaults we follow the following syntax: imagine we have a function with the
172
+ following signature:
173
+
174
+ ```
175
+ def my_function(x: str = None, a: float = 1):
176
+ ```
177
+
178
+ then its documentation should look like this:
179
+
180
+ ```
181
+ Args:
182
+ x (`str`, *optional*):
183
+ This argument controls ...
184
+ a (`float`, *optional*, defaults to 1):
185
+ This argument is used to ...
186
+ ```
187
+
188
+ Note that we always omit the "defaults to \`None\`" when None is the default for any argument. Also note that even
189
+ if the first line describing your argument type and its default gets long, you can't break it on several lines. You can
190
+ however write as many lines as you want in the indented description (see the example above with `input_ids`).
191
+
192
+ #### Writing a multi-line code block
193
+
194
+ Multi-line code blocks can be useful for displaying examples. They are done between two lines of three backticks as usual in Markdown:
195
+
196
+
197
+ ````
198
+ ```
199
+ # first line of code
200
+ # second line
201
+ # etc
202
+ ```
203
+ ````
204
+
205
+ #### Writing a return block
206
+
207
+ The return block should be introduced with the `Returns:` prefix, followed by a line return and an indentation.
208
+ The first line should be the type of the return, followed by a line return. No need to indent further for the elements
209
+ building the return.
210
+
211
+ Here's an example of a single value return:
212
+
213
+ ```
214
+ Returns:
215
+ `List[int]`: A list of integers in the range [0, 1] --- 1 for a special token, 0 for a sequence token.
216
+ ```
217
+
218
+ Here's an example of a tuple return, comprising several objects:
219
+
220
+ ```
221
+ Returns:
222
+ `tuple(torch.FloatTensor)` comprising various elements depending on the configuration ([`BertConfig`]) and inputs:
223
+ - **loss** (*optional*, returned when `masked_lm_labels` is provided) `torch.FloatTensor` of shape `(1,)` --
224
+ Total loss is the sum of the masked language modeling loss and the next sequence prediction (classification) loss.
225
+ - **prediction_scores** (`torch.FloatTensor` of shape `(batch_size, sequence_length, config.vocab_size)`) --
226
+ Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
227
+ ```
228
+
229
+ #### Adding an image
230
+
231
+ Because the repository is growing rapidly, it is important to make sure that no files that would significantly weigh down the repository are added. This includes images, videos, and other non-text files. We prefer to leverage an hf.co-hosted `dataset` like
232
+ the ones hosted on [`hf-internal-testing`](https://huggingface.co/hf-internal-testing) in which to place these files and reference
233
+ them by URL. We recommend putting them in the following dataset: [huggingface/documentation-images](https://huggingface.co/datasets/huggingface/documentation-images).
234
+ If you are an external contributor, feel free to add the images to your PR and ask a Hugging Face member to migrate them
235
+ to this dataset.
236
+
237
+ #### Writing documentation examples
238
+
239
+ The syntax for Example docstrings can look as follows:
240
+
241
+ ```
242
+ Example:
243
+
244
+ ```python
245
+ >>> from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
246
+ >>> from datasets import load_dataset
247
+ >>> import torch
248
+
249
+ >>> dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
250
+ >>> dataset = dataset.sort("id")
251
+ >>> sampling_rate = dataset.features["audio"].sampling_rate
252
+
253
+ >>> processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
254
+ >>> model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")
255
+
256
+ >>> # audio file is decoded on the fly
257
+ >>> inputs = processor(dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt")
258
+ >>> with torch.no_grad():
259
+ ... logits = model(**inputs).logits
260
+ >>> predicted_ids = torch.argmax(logits, dim=-1)
261
+
262
+ >>> # transcribe speech
263
+ >>> transcription = processor.batch_decode(predicted_ids)
264
+ >>> transcription[0]
265
+ 'MISTER QUILTER IS THE APOSTLE OF THE MIDDLE CLASSES AND WE ARE GLAD TO WELCOME HIS GOSPEL'
266
+ ```
267
+ ```
268
+
269
+ The docstring should give a minimal, clear example of how the respective model
270
+ is to be used in inference and also include the expected (ideally sensible)
271
+ output.
272
+ Often, readers will try out the example before even going through the function
273
+ or class definitions. Therefore, it is of utmost importance that the example
274
+ works as expected.
docs/source/en/_config.py ADDED
@@ -0,0 +1,14 @@
1
+ # docstyle-ignore
2
+ INSTALL_CONTENT = """
3
+ # Installation
4
+ ! pip install smolagents
5
+ # To install from source instead of the last release, comment the command above and uncomment the following one.
6
+ # ! pip install git+https://github.com/huggingface/smolagents.git
7
+ """
8
+
9
+ notebook_first_cells = [{"type": "code", "content": INSTALL_CONTENT}]
10
+ black_avoid_patterns = {
11
+ "{processor_class}": "FakeProcessorClass",
12
+ "{model_class}": "FakeModelClass",
13
+ "{object_class}": "FakeObjectClass",
14
+ }
docs/source/en/_toctree.yml ADDED
@@ -0,0 +1,42 @@
1
+ - title: Get started
2
+ sections:
3
+ - local: index
4
+ title: 🤗 Agents
5
+ - local: guided_tour
6
+ title: Guided tour
7
+ - title: Tutorials
8
+ sections:
9
+ - local: tutorials/building_good_agents
10
+ title: ✨ Building good agents
11
+ - local: tutorials/inspect_runs
12
+ title: 📊 Inspect your agent runs using telemetry
13
+ - local: tutorials/tools
14
+ title: 🛠️ Tools - in-depth guide
15
+ - local: tutorials/secure_code_execution
16
+ title: 🛡️ Secure code execution
17
+ - local: tutorials/memory
18
+ title: 📚 Manage your agent's memory
19
+ - title: Conceptual guides
20
+ sections:
21
+ - local: conceptual_guides/intro_agents
22
+ title: 🤖 An introduction to agentic systems
23
+ - local: conceptual_guides/react
24
+ title: 🤔 How do Multi-step agents work?
25
+ - title: Examples
26
+ sections:
27
+ - local: examples/text_to_sql
28
+ title: Self-correcting Text-to-SQL
29
+ - local: examples/rag
30
+ title: Master your knowledge base with agentic RAG
31
+ - local: examples/multiagents
32
+ title: Orchestrate a multi-agent system
33
+ - local: examples/web_browser
34
+ title: Build a web browser agent using vision models
35
+ - title: Reference
36
+ sections:
37
+ - local: reference/agents
38
+ title: Agent-related objects
39
+ - local: reference/models
40
+ title: Model-related objects
41
+ - local: reference/tools
42
+ title: Tool-related objects
docs/source/en/ai_assistant_architecture.md ADDED
@@ -0,0 +1,207 @@
1
+ # AI Assistant Architecture and Design Documentation
2
+
3
+ ## Overview
4
+
5
+ This document provides a comprehensive overview of the AI Assistant architecture, designed to help future AI systems understand the core concepts, components, and interactions within the system.
6
+
7
+ ## Core Components
8
+
9
+ ### 1. Agent System
10
+
11
+ #### 1.1 Agent Types
12
+ - **Base Agent**: Foundation for all agent implementations
13
+ - **Task-Specific Agents**: Specialized agents for particular domains
14
+ - **Multi-Agent System**: Collaborative network of agents working together
15
+
16
+ #### 1.2 Agent Capabilities
17
+ - Natural language understanding and generation
18
+ - Context management and memory systems
19
+ - Tool usage and integration
20
+ - Decision making and planning
21
+ - Self-improvement and learning
22
+
23
+ ### 2. Tool System
24
+
25
+ #### 2.1 Tool Categories
26
+ - **File Operations**: Create, read, update, delete operations
27
+ - **Code Analysis**: Static analysis, dependency tracking
28
+ - **Command Execution**: Safe command running in controlled environments
29
+ - **Search Operations**: Content and pattern matching
30
+ - **UI Interaction**: Preview and visual feedback tools
31
+
32
+ #### 2.2 Tool Management
33
+ - Tool registration and discovery
34
+ - Parameter validation
35
+ - Execution safety measures
36
+ - Result processing and error handling
37
+
38
+ ### 3. Execution System
39
+
40
+ #### 3.1 Execution Environments
41
+ - Local Python executor
42
+ - Remote sandboxed environments
43
+ - Containerized execution
44
+
45
+ #### 3.2 Safety Mechanisms
46
+ - Resource limitations
47
+ - Permission management
48
+ - Input validation
49
+ - Output sanitization
50
+
51
+ ## System Architecture
52
+
53
+ ### 1. High-Level Architecture
54
+
55
+ ```
56
+ [User Input] → [Agent System] → [Tool System] → [Execution System]
57
+ ↑ ↑ ↑
58
+ └──── Context Management ──────┘
59
+ ```
60
+
61
+ ### 2. Data Flow
62
+
63
+ 1. User input processing
64
+ 2. Context analysis and task planning
65
+ 3. Tool selection and parameter preparation
66
+ 4. Execution and result handling
67
+ 5. Response generation and delivery
68
+
69
+ ## Interaction Patterns
70
+
71
+ ### 1. Command Processing Flow
72
+
73
+ 1. **Input Analysis**
74
+ - Natural language understanding
75
+ - Intent classification
76
+ - Parameter extraction
77
+
78
+ 2. **Context Management**
79
+ - Session state tracking
80
+ - Memory management
81
+ - History retention
82
+
83
+ 3. **Tool Selection**
84
+ - Capability matching
85
+ - Parameter validation
86
+ - Safety checks
87
+
88
+ 4. **Execution**
89
+ - Environment preparation
90
+ - Command running
91
+ - Result capture
92
+
93
+ 5. **Response Generation**
94
+ - Result processing
95
+ - Natural language generation
96
+ - User feedback
97
+
98
+ ## Extension Mechanisms
99
+
100
+ ### 1. Adding New Tools
101
+
102
+ ```python
103
+ from typing import Dict, Any
104
+
105
+ def new_tool(params: Dict[str, Any]) -> Dict[str, Any]:
106
+ """Template for creating new tools
107
+
108
+ Args:
109
+ params: Tool parameters
110
+
111
+ Returns:
112
+ Tool execution results
113
+ """
114
+ # Implementation
115
+ pass
116
+ ```
117
+
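+ As a concrete, purely illustrative instance of this template — the tool name and parameter keys below are hypothetical, not part of any fixed API — a word-count tool might look like this:
+
+ ```python
+ from typing import Any, Dict
+
+ def word_count_tool(params: Dict[str, Any]) -> Dict[str, Any]:
+     """Example tool: counts the words in the 'text' parameter."""
+     # Validate parameters before doing any work (see Tool Management above).
+     text = params.get("text")
+     if not isinstance(text, str):
+         return {"status": "error", "message": "'text' must be a string"}
+     # Return a structured result so callers can process outputs uniformly.
+     return {"status": "ok", "word_count": len(text.split())}
+ ```
+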
118
+ ### 2. Custom Agent Creation
119
+
120
+ ```python
121
+ from typing import Any, Dict
+
+ class CustomAgent:
122
+ def __init__(self, config: Dict[str, Any]):
123
+ self.config = config
124
+
125
+ def process(self, input: str) -> str:
126
+ """Process user input and generate response"""
127
+ # Implementation
128
+ pass
129
+ ```
130
+
131
+ ## Best Practices
132
+
133
+ ### 1. Tool Development
134
+ - Implement comprehensive parameter validation
135
+ - Provide clear documentation and examples
136
+ - Include error handling and recovery mechanisms
137
+ - Ensure idempotency where applicable
138
+
139
+ ### 2. Agent Implementation
140
+ - Maintain consistent context management
141
+ - Implement graceful fallback mechanisms
142
+ - Support progressive enhancement
143
+ - Monitor and log important events
144
+
145
+ ### 3. Security Considerations
146
+ - Input sanitization
147
+ - Resource usage limits
148
+ - Permission management
149
+ - Secure data handling
150
+
151
+ ## Performance Optimization
152
+
153
+ ### 1. Response Time
154
+ - Implement caching mechanisms
155
+ - Optimize tool selection
156
+ - Parallelize operations where possible
157
+
158
+ ### 2. Resource Usage
159
+ - Memory management
160
+ - CPU utilization
161
+ - Network efficiency
162
+
163
+ ## Error Handling
164
+
165
+ ### 1. Error Categories
166
+ - User input errors
167
+ - Tool execution errors
168
+ - System errors
169
+ - Network errors
170
+
171
+ ### 2. Recovery Strategies
172
+ - Graceful degradation
173
+ - Automatic retry mechanisms
174
+ - User feedback
175
+ - System state recovery
176
+
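+ As a minimal, illustrative sketch of the "automatic retry mechanisms" listed above (the helper name and defaults are hypothetical):
+
+ ```python
+ import time
+
+ def with_retries(operation, max_attempts: int = 3, backoff_seconds: float = 1.0):
+     """Run an operation, retrying on failure with simple exponential backoff."""
+     for attempt in range(1, max_attempts + 1):
+         try:
+             return operation()
+         except Exception:
+             if attempt == max_attempts:
+                 raise  # give up and let the caller handle state recovery
+             time.sleep(backoff_seconds * 2 ** (attempt - 1))
+ ```
+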
177
+ ## Monitoring and Logging
178
+
179
+ ### 1. Metrics
180
+ - Response times
181
+ - Success rates
182
+ - Resource usage
183
+ - Error frequencies
184
+
185
+ ### 2. Logging
186
+ - Operation logs
187
+ - Error logs
188
+ - Performance metrics
189
+ - User interactions
190
+
191
+ ## Future Enhancements
192
+
193
+ ### 1. Planned Improvements
194
+ - Enhanced natural language understanding
195
+ - Advanced context management
196
+ - Improved tool discovery
197
+ - Better error recovery
198
+
199
+ ### 2. Research Areas
200
+ - Self-learning capabilities
201
+ - Dynamic tool creation
202
+ - Advanced multi-agent coordination
203
+ - Improved security measures
204
+
205
+ ## Conclusion
206
+
207
+ This architecture documentation provides a comprehensive overview of the AI Assistant system. Future AI systems can use this as a reference for understanding the system's components, interactions, and extension mechanisms. The modular design allows for continuous improvement and adaptation to new requirements while maintaining security and performance standards.
docs/source/en/conceptual_guides/intro_agents.mdx ADDED
@@ -0,0 +1,118 @@
1
+ <!--Copyright 2024 The HuggingFace Team. All rights reserved.
2
+
3
+ Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
+ the License. You may obtain a copy of the License at
5
+
6
+ http://www.apache.org/licenses/LICENSE-2.0
7
+
8
+ Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
+ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
+ specific language governing permissions and limitations under the License.
11
+
12
+ ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
13
+ rendered properly in your Markdown viewer.
14
+
15
+ -->
16
+ # Introduction to Agents
17
+
18
+ ## 🤔 What are agents?
19
+
20
+ Any efficient system using AI will need to provide LLMs some kind of access to the real world: for instance the possibility to call a search tool to get external information, or to act on certain programs in order to solve a task. In other words, LLMs should have ***agency***. Agentic programs are the gateway to the outside world for LLMs.
21
+
22
+ > [!TIP]
23
+ > AI Agents are **programs where LLM outputs control the workflow**.
24
+
25
+ Any system leveraging LLMs will integrate the LLM outputs into code. The influence of the LLM's output on the code workflow is the level of agency of LLMs in the system.
26
+
27
+ Note that with this definition, "agent" is not a discrete, 0 or 1 definition: instead, "agency" evolves on a continuous spectrum, as you give more or less power to the LLM on your workflow.
28
+
29
+ See in the table below how agency can vary across systems:
30
+
31
+ | Agency Level | Description | How that's called | Example Pattern |
32
+ | ------------ | ------------------------------------------------------- | ----------------- | -------------------------------------------------- |
33
+ | ☆☆☆ | LLM output has no impact on program flow | Simple Processor | `process_llm_output(llm_response)` |
34
+ | ★☆☆ | LLM output determines an if/else switch | Router | `if llm_decision(): path_a() else: path_b()` |
35
+ | ★★☆ | LLM output determines function execution | Tool Caller | `run_function(llm_chosen_tool, llm_chosen_args)` |
36
+ | ★★★ | LLM output controls iteration and program continuation | Multi-step Agent | `while llm_should_continue(): execute_next_step()` |
37
+ | ★★★ | One agentic workflow can start another agentic workflow | Multi-Agent | `if llm_trigger(): execute_agent()` |
38
+
39
+ The multi-step agent has this code structure:
40
+
41
+ ```python
42
+ memory = [user_defined_task]
43
+ while llm_should_continue(memory): # this loop is the multi-step part
44
+ action = llm_get_next_action(memory) # this is the tool-calling part
45
+ observations = execute_action(action)
46
+ memory += [action, observations]
47
+ ```
48
+
49
+ This agentic system runs in a loop, executing a new action at each step (the action can involve calling some pre-determined *tools* that are just functions), until its observations make it apparent that a satisfactory state has been reached to solve the given task. Here’s an example of how a multi-step agent can solve a simple math question:
50
+
51
+ <div class="flex justify-center">
52
+ <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/Agent_ManimCE.gif"/>
53
+ </div>
54
+
55
+
56
+ ## ✅ When to use agents / ⛔ when to avoid them
57
+
58
+ Agents are useful when you need an LLM to determine the workflow of an app. But they’re often overkill. The question is: do I really need flexibility in the workflow to efficiently solve the task at hand?
60
+ Let's take an example: say you're making an app that handles customer requests on a surfing trip website.
61
+
62
+ You could know in advance that the requests will belong to either of 2 buckets (based on user choice), and you have a predefined workflow for each of these 2 cases.
63
+
64
+ 1. Want some knowledge on the trips? ⇒ give them access to a search bar to search your knowledge base
65
+ 2. Wants to talk to sales? ⇒ let them type in a contact form.
66
+
67
+ If that deterministic workflow fits all queries, by all means just code everything! This will give you a 100% reliable system with no risk of error introduced by letting unpredictable LLMs meddle in your workflow. For the sake of simplicity and robustness, it's advised to regularize towards not using any agentic behaviour.
68
+
69
+ But what if the workflow can't be determined that well in advance?
70
+
71
+ For instance, a user wants to ask: `"I can come on Monday, but I forgot my passport so risk being delayed to Wednesday, is it possible to take me and my stuff to surf on Tuesday morning, with a cancellation insurance?"` This question hinges on many factors, and probably none of the predetermined criteria above will suffice for this request.
72
+
73
+ If the pre-determined workflow falls short too often, that means you need more flexibility.
74
+
75
+ That is where an agentic setup helps.
76
+
77
+ In the above example, you could just make a multi-step agent that has access to a weather API for weather forecasts, Google Maps API to compute travel distance, an employee availability dashboard and a RAG system on your knowledge base.
78
+
79
+ Until recently, computer programs were restricted to pre-determined workflows, trying to handle complexity by piling up if/else switches. They focused on extremely narrow tasks, like "compute the sum of these numbers" or "find the shortest path in this graph". But actually, most real-life tasks, like our trip example above, do not fit in pre-determined workflows. Agentic systems open up the vast world of real-world tasks to programs!
80
+
81
+ ## Why `smolagents`?
82
+
83
+ For some low-level agentic use cases, like chains or routers, you can write all the code yourself. You'll be much better off that way, since it will let you control and understand your system better.
84
+
85
+ But once you start going for more complicated behaviours like letting an LLM call a function (that's "tool calling") or letting an LLM run a while loop ("multi-step agent"), some abstractions become necessary:
86
+ - For tool calling, you need to parse the agent's output, so this output needs a predefined format like "Thought: I should call tool 'get_weather'. Action: get_weather(Paris).", which you parse with a predefined function, and the system prompt given to the LLM should notify it about this format.
87
+ - For a multi-step agent where the LLM output determines the loop, you need to give a different prompt to the LLM based on what happened in the last loop iteration: so you need some kind of memory.
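+
+ To make the first point concrete, here is a minimal sketch of that parsing step, assuming the "Action: tool(argument)" format from the example above:
+
+ ```python
+ import re
+
+ llm_output = "Thought: I should call tool 'get_weather'. Action: get_weather(Paris)"
+
+ # Extract the tool name and its argument from the agreed-upon format.
+ match = re.search(r"Action:\s*(\w+)\((.*)\)", llm_output)
+ tool_name, tool_arg = match.group(1), match.group(2)
+ # tool_name == "get_weather", tool_arg == "Paris"
+ ```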
88
+
89
+ See? With these two examples, we already found the need for a few items to help us:
90
+
91
+ - Of course, an LLM that acts as the engine powering the system
92
+ - A list of tools that the agent can access
93
+ - A parser that extracts tool calls from the LLM output
94
+ - A system prompt synced with the parser
95
+ - A memory
96
+
97
+ But wait, since we give room to LLMs in decisions, surely they will make mistakes: so we need error logging and retry mechanisms.
98
+
99
+ All these elements need tight coupling to make a well-functioning system. That's why we decided we needed to make basic building blocks to make all this stuff work together.
100
+
101
+ ## Code agents
102
+
103
+ In a multi-step agent, at each step, the LLM can write an action, in the form of some calls to external tools. A common format (used by Anthropic, OpenAI, and many others) for writing these actions is generally different shades of "writing actions as a JSON of tool names and arguments to use, which you then parse to know which tool to execute and with which arguments".
104
+
105
+ [Multiple](https://huggingface.co/papers/2402.01030) [research](https://huggingface.co/papers/2411.01747) [papers](https://huggingface.co/papers/2401.00812) have shown that having LLMs write their tool calls in code works much better.
106
+
107
+ The reason for this is simply that *we crafted our code languages specifically to be the best possible way to express actions performed by a computer*. If JSON snippets were a better expression, JSON would be the top programming language and programming would be hell on earth.
108
+
109
+ The figure below, taken from [Executable Code Actions Elicit Better LLM Agents](https://huggingface.co/papers/2402.01030), illustrates some advantages of writing actions in code:
110
+
111
+ <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/code_vs_json_actions.png">
112
+
113
+ Writing actions in code rather than JSON-like snippets provides better:
114
+
115
+ - **Composability:** could you nest JSON actions within each other, or define a set of JSON actions to re-use later, the same way you could just define a python function?
116
+ - **Object management:** how do you store the output of an action like `generate_image` in JSON?
117
+ - **Generality:** code is built to express simply anything you can have a computer do.
118
+ - **Representation in LLM training data:** plenty of quality code actions are already included in LLMs’ training data which means they’re already trained for this!
docs/source/en/conceptual_guides/react.mdx ADDED
@@ -0,0 +1,63 @@
1
+ <!--Copyright 2024 The HuggingFace Team. All rights reserved.
2
+
3
+ Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
+ the License. You may obtain a copy of the License at
5
+
6
+ http://www.apache.org/licenses/LICENSE-2.0
7
+
8
+ Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
+ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
+ specific language governing permissions and limitations under the License.
11
+
12
+ ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
13
+ rendered properly in your Markdown viewer.
14
+
15
+ -->
16
+ # How do multi-step agents work?
17
+
18
+ The ReAct framework ([Yao et al., 2022](https://huggingface.co/papers/2210.03629)) is currently the main approach to building agents.
19
+
20
+ The name is based on the concatenation of two words, "Reason" and "Act." Indeed, agents following this architecture will solve their task in as many steps as needed, each step consisting of a Reasoning step, then an Action step where it formulates tool calls that will bring it closer to solving the task at hand.
21
+
22
+ All agents in `smolagents` are based on a single `MultiStepAgent` class, which is an abstraction of the ReAct framework.
23
+
24
+ On a basic level, this class performs actions in a cycle of the following steps, where existing variables and knowledge are incorporated into the agent's logs, as below:
25
+
26
+ Initialization: the system prompt is stored in a `SystemPromptStep`, and the user query is logged into a `TaskStep`.
27
+
28
+ While loop (ReAct loop):
29
+
30
+ - Use `agent.write_memory_to_messages()` to write the agent logs into a list of LLM-readable [chat messages](https://huggingface.co/docs/transformers/en/chat_templating).
31
+ - Send these messages to a `Model` object to get its completion. Parse the completion to get the action (a JSON blob for `ToolCallingAgent`, a code snippet for `CodeAgent`).
32
+ - Execute the action and log the result into memory (an `ActionStep`).
33
+ - At the end of each step, we run all callback functions defined in `agent.step_callbacks`.
34
+
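+ To make this cycle concrete, here is a toy, self-contained sketch — the helper functions are illustrative stand-ins, not smolagents APIs:
+
+ ```python
+ def write_memory_to_messages(memory):
+     # Turn the agent's memory into LLM-readable chat messages.
+     return [{"role": "user", "content": str(step)} for step in memory]
+
+ def toy_model(messages):
+     # A real Model object would call an LLM; this stand-in always finishes.
+     return "final_answer(42)"
+
+ memory = ["Task: compute the answer."]
+ for _ in range(5):  # cap the number of steps
+     completion = toy_model(write_memory_to_messages(memory))
+     memory.append(("action", completion))  # the ActionStep equivalent
+     if completion.startswith("final_answer"):
+         break
+ ```
+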
35
+ Optionally, when planning is activated, a plan can be periodically revised and stored in a `PlanningStep`. This includes feeding facts about the task at hand to the memory.
36
+
37
+ For a `CodeAgent`, it looks like the figure below.
38
+
39
+ <div class="flex justify-center">
40
+ <img
41
+ src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/smolagents/codeagent_docs.png"
42
+ />
43
+ </div>
44
+
45
+ Here is a video overview of how that works:
46
+
47
+ <div class="flex justify-center">
48
+ <img
49
+ class="block dark:hidden"
50
+ src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/Agent_ManimCE.gif"
51
+ />
52
+ <img
53
+ class="hidden dark:block"
54
+ src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/Agent_ManimCE.gif"
55
+ />
56
+ </div>
57
+
58
+ We implement two versions of agents:
59
+ - [`CodeAgent`] is the preferred type of agent: it generates its tool calls as blobs of code.
60
+ - [`ToolCallingAgent`] generates tool calls as a JSON in its output, as is commonly done in agentic frameworks. We incorporate this option because it can be useful in some narrow cases where you can do fine with only one tool call per step: for instance, for web browsing, you need to wait after each action on the page to monitor how the page changes.
61
+
62
+ > [!TIP]
63
+ > Read [Open-source LLMs as LangChain Agents](https://huggingface.co/blog/open-source-llms-as-agents) blog post to learn more about multi-step agents.
docs/source/en/examples/multiagents.mdx ADDED
@@ -0,0 +1,189 @@
1
+ <!--Copyright 2024 The HuggingFace Team. All rights reserved.
2
+
3
+ Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
+ the License. You may obtain a copy of the License at
5
+
6
+ http://www.apache.org/licenses/LICENSE-2.0
7
+
8
+ Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
+ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
+ specific language governing permissions and limitations under the License.
11
+
12
+ ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
13
+ rendered properly in your Markdown viewer.
14
+
15
+ -->
16
+ # Orchestrate a multi-agent system 🤖🤝🤖
17
+
18
+ [[open-in-colab]]
19
+
20
+ In this notebook we will make a **multi-agent web browser: an agentic system with several agents collaborating to solve problems using the web!**
21
+
22
+ It will be a simple hierarchy:
23
+
24
+ ```
25
+ +----------------+
26
+ | Manager agent |
27
+ +----------------+
28
+ |
29
+ _______________|______________
30
+ | |
31
+ Code Interpreter +------------------+
32
+ tool | Web Search agent |
33
+ +------------------+
34
+ | |
35
+ Web Search tool |
36
+ Visit webpage tool
37
+ ```
38
+ Let's set up this system.
39
+
40
+ Run the line below to install the required dependencies:
41
+
42
+ ```py
43
+ ! pip install markdownify duckduckgo-search smolagents --upgrade -q
44
+ ```
45
+
46
+ Let's log in in order to call the HF Inference API:
47
+
48
+ ```py
49
+ from huggingface_hub import login
50
+
51
+ login()
52
+ ```
53
+
54
+ ⚡️ Our agent will be powered by [Qwen/Qwen2.5-Coder-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-32B-Instruct) using the `HfApiModel` class, which relies on HF's Inference API: the Inference API allows you to quickly and easily run any open-source model.
55
+
56
+ _Note:_ The Inference API hosts models based on various criteria, and deployed models may be updated or replaced without prior notice. Learn more about it [here](https://huggingface.co/docs/api-inference/supported-models).
57
+
58
+ ```py
59
+ model_id = "Qwen/Qwen2.5-Coder-32B-Instruct"
60
+ ```
61
+
62
+ ## 🔍 Create a web search tool
63
+
64
+ For web browsing, we can already use our pre-existing [`DuckDuckGoSearchTool`](https://github.com/huggingface/smolagents/blob/main/src/smolagents/default_tools.py#L151-L176) tool to provide a Google search equivalent.
65
+
66
+ But then we will also need to be able to peek into the page found by the `DuckDuckGoSearchTool`.
67
+ To do so, we could import the library's built-in `VisitWebpageTool`, but we will build it again to see how it's done.
68
+
69
+ So let's create our `VisitWebpageTool` tool from scratch using `markdownify`.
70
+
71
+ ```py
72
+ import re
73
+ import requests
74
+ from markdownify import markdownify
75
+ from requests.exceptions import RequestException
76
+ from smolagents import tool
77
+
78
+
79
+ @tool
80
+ def visit_webpage(url: str) -> str:
81
+ """Visits a webpage at the given URL and returns its content as a markdown string.
82
+
83
+ Args:
84
+ url: The URL of the webpage to visit.
85
+
86
+ Returns:
87
+ The content of the webpage converted to Markdown, or an error message if the request fails.
88
+ """
89
+ try:
90
+ # Send a GET request to the URL
91
+ response = requests.get(url)
92
+ response.raise_for_status() # Raise an exception for bad status codes
93
+
94
+ # Convert the HTML content to Markdown
95
+ markdown_content = markdownify(response.text).strip()
96
+
97
+ # Remove multiple line breaks
98
+ markdown_content = re.sub(r"\n{3,}", "\n\n", markdown_content)
99
+
100
+ return markdown_content
101
+
102
+ except RequestException as e:
103
+ return f"Error fetching the webpage: {str(e)}"
104
+ except Exception as e:
105
+ return f"An unexpected error occurred: {str(e)}"
106
+ ```
107
+
108
+ Ok, now let's initialize and test our tool!
109
+
110
+ ```py
111
+ print(visit_webpage("https://en.wikipedia.org/wiki/Hugging_Face")[:500])
112
+ ```
113
+
114
+ ## Build our multi-agent system 🤖🤝🤖
115
+
116
+ Now that we have both tools, `search` and `visit_webpage`, we can use them to create the web agent.
117
+
118
+ Which configuration to choose for this agent?
119
+ - Web browsing is a single-timeline task that does not require parallel tool calls, so JSON tool calling works well for that. We thus choose a `ToolCallingAgent`.
120
+ - Also, since sometimes web search requires exploring many pages before finding the correct answer, we prefer to increase the number of `max_steps` to 10.
121
+
122
+ ```py
123
+ from smolagents import (
124
+ CodeAgent,
125
+ ToolCallingAgent,
126
+ HfApiModel,
127
+ DuckDuckGoSearchTool,
129
+ )
130
+
131
+ model = HfApiModel(model_id)
132
+
133
+ web_agent = ToolCallingAgent(
134
+ tools=[DuckDuckGoSearchTool(), visit_webpage],
135
+ model=model,
136
+ max_steps=10,
137
+ name="web_search_agent",
138
+ description="Runs web searches for you.",
139
+ )
140
+ ```
141
+
142
+ Note that we gave this agent `name` and `description` attributes, which are mandatory to make this agent callable by its manager agent.
143
+
144
+ Then we create a manager agent, and upon initialization we pass our managed agent to it in its `managed_agents` argument.
145
+
146
+ Since this agent is the one tasked with the planning and thinking, advanced reasoning will be beneficial, so a `CodeAgent` will be the best choice.
147
+
148
+ Also, we want to ask a question that involves the current year and does additional data calculations: so let us add `additional_authorized_imports=["time", "numpy", "pandas"]`, just in case the agent needs these packages.
149
+
150
+ ```py
151
+ manager_agent = CodeAgent(
152
+ tools=[],
153
+ model=model,
154
+ managed_agents=[web_agent],
155
+ additional_authorized_imports=["time", "numpy", "pandas"],
156
+ )
157
+ ```
158
+
159
+ That's all! Now let's run our system! We select a question that requires both some calculation and research:
160
+
161
+ ```py
162
+ answer = manager_agent.run("If LLM training continues to scale up at the current rhythm until 2030, what would be the electric power in GW required to power the biggest training runs by 2030? What would that correspond to, compared to some countries? Please provide a source for any numbers used.")
163
+ ```
164
+
165
+ We get this report as the answer:
166
+ ```
167
+ Based on current growth projections and energy consumption estimates, if LLM trainings continue to scale up at the
168
+ current rhythm until 2030:
169
+
170
+ 1. The electric power required to power the biggest training runs by 2030 would be approximately 303.74 GW, which
171
+ translates to about 2,660,762 GWh/year.
172
+
173
+ 2. Comparing this to countries' electricity consumption:
174
+ - It would be equivalent to about 34% of China's total electricity consumption.
175
+ - It would exceed the total electricity consumption of India (184%), Russia (267%), and Japan (291%).
176
+ - It would be nearly 9 times the electricity consumption of countries like Italy or Mexico.
177
+
178
+ 3. Source of numbers:
179
+ - The initial estimate of 5 GW for future LLM training comes from AWS CEO Matt Garman.
180
+ - The growth projection used a CAGR of 79.80% from market research by Springs.
181
+ - Country electricity consumption data is from the U.S. Energy Information Administration, primarily for the year
182
+ 2021.
183
+ ```
184
+
185
+ Seems like we'll need some sizeable powerplants if the [scaling hypothesis](https://gwern.net/scaling-hypothesis) continues to hold true.
186
+
187
+ Our agents managed to efficiently collaborate towards solving the task! ✅
188
+
189
+ 💡 You can easily extend this orchestration to more agents: one does the code execution, one the web search, one handles file loadings...
docs/source/en/examples/rag.mdx ADDED
@@ -0,0 +1,151 @@
1
+ <!--Copyright 2024 The HuggingFace Team. All rights reserved.
2
+
3
+ Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
+ the License. You may obtain a copy of the License at
5
+
6
+ http://www.apache.org/licenses/LICENSE-2.0
7
+
8
+ Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
+ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
+ specific language governing permissions and limitations under the License.
11
+
12
+ ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
13
+ rendered properly in your Markdown viewer.
14
+
15
+ -->
16
+ # Agentic RAG
17
+
18
+ [[open-in-colab]]
19
+
20
+ Retrieval-Augmented Generation (RAG) is “using an LLM to answer a user query, but basing the answer on information retrieved from a knowledge base”. It has many advantages over using a vanilla or fine-tuned LLM: to name a few, it grounds the answer on true facts and reduces confabulations, it provides the LLM with domain-specific knowledge, and it allows fine-grained control over access to information from the knowledge base.
21
+
22
+ But vanilla RAG has limitations, most importantly these two:
23
+ - It performs only one retrieval step: if the results are bad, the generation in turn will be bad.
24
+ - Semantic similarity is computed with the user query as a reference, which might be suboptimal: for instance, the user query will often be a question and the document containing the true answer will be in affirmative voice, so its similarity score will be downgraded compared to other source documents in the interrogative form, leading to a risk of missing the relevant information.
25
+
26
+ We can alleviate these problems by making a RAG agent: very simply, an agent armed with a retriever tool!
27
+
28
+ This agent will: ✅ Formulate the query itself and ✅ Critique to re-retrieve if needed.
29
+
30
+ So it should natively recover some advanced RAG techniques!
31
+ - Instead of directly using the user query as the reference in semantic search, the agent formulates itself a reference sentence that can be closer to the targeted documents, as in [HyDE](https://huggingface.co/papers/2212.10496).
32
+ - The agent can use the generated snippets and re-retrieve if needed, as in [Self-Query](https://docs.llamaindex.ai/en/stable/examples/evaluation/RetryQuery/).
33
+
34
+ Let's build this system. 🛠️
35
+
36
+ Run the line below to install required dependencies:
37
+ ```bash
38
+ !pip install smolagents pandas langchain langchain-community sentence-transformers datasets python-dotenv rank_bm25 --upgrade -q
39
+ ```
40
+ To call the HF Inference API, you will need a valid token as your environment variable `HF_TOKEN`.
41
+ We use python-dotenv to load it.
42
+ ```py
43
+ from dotenv import load_dotenv
44
+ load_dotenv()
45
+ ```
46
+
47
+ We first load a knowledge base on which we want to perform RAG: this dataset is a compilation of the documentation pages for many Hugging Face libraries, stored as markdown. We will keep only the documentation for the `transformers` library.
48
+
49
+ Then prepare the knowledge base by processing the dataset and storing it into a vector database to be used by the retriever.
50
+
51
+ We use [LangChain](https://python.langchain.com/docs/introduction/) for its excellent vector database utilities.
52
+
53
+ ```py
54
+ import datasets
55
+ from langchain.docstore.document import Document
56
+ from langchain.text_splitter import RecursiveCharacterTextSplitter
57
+ from langchain_community.retrievers import BM25Retriever
58
+
59
+ knowledge_base = datasets.load_dataset("m-ric/huggingface_doc", split="train")
60
+ knowledge_base = knowledge_base.filter(lambda row: row["source"].startswith("huggingface/transformers"))
61
+
62
+ source_docs = [
63
+ Document(page_content=doc["text"], metadata={"source": doc["source"].split("/")[1]})
64
+ for doc in knowledge_base
65
+ ]
66
+
67
+ text_splitter = RecursiveCharacterTextSplitter(
68
+ chunk_size=500,
69
+ chunk_overlap=50,
70
+ add_start_index=True,
71
+ strip_whitespace=True,
72
+ separators=["\n\n", "\n", ".", " ", ""],
73
+ )
74
+ docs_processed = text_splitter.split_documents(source_docs)
75
+ ```
76
+
77
+ Now the documents are ready.
78
+
79
+ So let’s build our agentic RAG system!
80
+
81
+ 👉 We only need a RetrieverTool that our agent can leverage to retrieve information from the knowledge base.
82
+
83
+ Since we need to add a vectordb as an attribute of the tool, we cannot simply use the simple tool constructor with a `@tool` decorator: so we will follow the advanced setup highlighted in the [tools tutorial](../tutorials/tools).
84
+
85
+ ```py
86
+ from smolagents import Tool
87
+
88
+ class RetrieverTool(Tool):
89
+ name = "retriever"
90
+ description = "Uses semantic search to retrieve the parts of transformers documentation that could be most relevant to answer your query."
91
+ inputs = {
92
+ "query": {
93
+ "type": "string",
94
+ "description": "The query to perform. This should be semantically close to your target documents. Use the affirmative form rather than a question.",
95
+ }
96
+ }
97
+ output_type = "string"
98
+
99
+ def __init__(self, docs, **kwargs):
100
+ super().__init__(**kwargs)
101
+ self.retriever = BM25Retriever.from_documents(
102
+ docs, k=10
103
+ )
104
+
105
+ def forward(self, query: str) -> str:
106
+ assert isinstance(query, str), "Your search query must be a string"
107
+
108
+ docs = self.retriever.invoke(
109
+ query,
110
+ )
111
+ return "\nRetrieved documents:\n" + "".join(
112
+ [
113
+ f"\n\n===== Document {str(i)} =====\n" + doc.page_content
114
+ for i, doc in enumerate(docs)
115
+ ]
116
+ )
117
+
118
+ retriever_tool = RetrieverTool(docs_processed)
119
+ ```
120
+ We have used BM25, a classic retrieval method, because it's lightning fast to set up.
121
+ To improve retrieval accuracy, you could replace BM25 with semantic search using vector representations of the documents: head to the [MTEB Leaderboard](https://huggingface.co/spaces/mteb/leaderboard) to select a good embedding model.
122
+
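+ For instance, a semantic retriever could look roughly like the sketch below — this assumes `faiss-cpu` is installed, and note that LangChain import paths for embeddings and vector stores vary across versions:
+
+ ```py
+ from langchain_community.embeddings import HuggingFaceEmbeddings
+ from langchain_community.vectorstores import FAISS
+
+ # Embed the same processed chunks and index them in a FAISS vector store.
+ embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")
+ vector_db = FAISS.from_documents(docs_processed, embeddings)
+ semantic_retriever = vector_db.as_retriever(search_kwargs={"k": 10})
+ ```
+
+ The resulting retriever exposes the same `invoke` interface as the BM25 one, so it can be dropped into the `RetrieverTool` above.
+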
123
+ Now it’s straightforward to create an agent that leverages this `retriever_tool`!
124
+
125
+ The agent will need these arguments upon initialization:
126
+ - `tools`: a list of tools that the agent will be able to call.
127
+ - `model`: the LLM that powers the agent.
128
+ Our `model` must be a callable that takes as input a list of messages and returns text. It also needs to accept a `stop_sequences` argument that indicates when to stop its generation. For convenience, we directly use the `HfApiModel` class provided in the package to get an LLM engine that calls Hugging Face's Inference API.
129
+
130
+ >[!NOTE] To use a specific model, pass it like this: `HfApiModel("meta-llama/Llama-3.3-70B-Instruct")`. The Inference API hosts models based on various criteria, and deployed models may be updated or replaced without prior notice. Learn more about it [here](https://huggingface.co/docs/api-inference/supported-models).
131
+
132
+ ```py
133
+ from smolagents import HfApiModel, CodeAgent
134
+
135
+ agent = CodeAgent(
136
+ tools=[retriever_tool], model=HfApiModel(), max_steps=4, verbosity_level=2
137
+ )
138
+ ```
139
+ Upon initialization, the CodeAgent is automatically given a default system prompt that tells the LLM engine to work step by step and generate tool calls as code snippets, but you can replace this prompt template with your own as needed.
140
+
141
+ Then when its `.run()` method is launched, the agent takes care of calling the LLM engine, and executing the tool calls, all in a loop that ends only when tool `final_answer` is called with the final answer as its argument.
142
+
143
+ ```py
144
+ agent_output = agent.run("For a transformers model training, which is slower, the forward or the backward pass?")
145
+
146
+ print("Final output:")
147
+ print(agent_output)
148
+ ```
149
+
150
+
151
+
docs/source/en/examples/text_to_sql.mdx ADDED
@@ -0,0 +1,212 @@
1
+ <!--Copyright 2024 The HuggingFace Team. All rights reserved.
2
+
3
+ Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
+ the License. You may obtain a copy of the License at
5
+
6
+ http://www.apache.org/licenses/LICENSE-2.0
7
+
8
+ Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
+ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
+ specific language governing permissions and limitations under the License.
11
+
12
+ ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
13
+ rendered properly in your Markdown viewer.
14
+
15
+ -->
16
+ # Text-to-SQL
17
+
18
+ [[open-in-colab]]
19
+
20
+ In this tutorial, we’ll see how to implement an agent that leverages SQL using `smolagents`.
21
+
22
+ > Let's start with the golden question: why not keep it simple and use a standard text-to-SQL pipeline?
23
+
24
+ A standard text-to-SQL pipeline is brittle, since the generated SQL query can be incorrect. Even worse, the query could be wrong yet raise no error, instead silently returning incorrect or useless outputs.
25
+
26
+ 👉 Instead, an agent system is able to critically inspect outputs and decide if the query needs to be changed or not, thus giving it a huge performance boost.
27
+
28
+ Let’s build this agent! 💪
29
+
30
+ Run the line below to install required dependencies:
31
+ ```bash
32
+ !pip install smolagents python-dotenv sqlalchemy --upgrade -q
33
+ ```
34
+ To call the HF Inference API, you will need a valid token as your environment variable `HF_TOKEN`.
35
+ We use python-dotenv to load it.
36
+ ```py
37
+ from dotenv import load_dotenv
38
+ load_dotenv()
39
+ ```
40
+
41
+ Then, we setup the SQL environment:
42
+ ```py
43
+ from sqlalchemy import (
44
+ create_engine,
45
+ MetaData,
46
+ Table,
47
+ Column,
48
+ String,
49
+ Integer,
50
+ Float,
51
+ insert,
52
+ inspect,
53
+ text,
54
+ )
55
+
56
+ engine = create_engine("sqlite:///:memory:")
57
+ metadata_obj = MetaData()
58
+
59
+ def insert_rows_into_table(rows, table, engine=engine):
60
+ for row in rows:
61
+ stmt = insert(table).values(**row)
62
+ with engine.begin() as connection:
63
+ connection.execute(stmt)
64
+
65
+ table_name = "receipts"
66
+ receipts = Table(
67
+ table_name,
68
+ metadata_obj,
69
+ Column("receipt_id", Integer, primary_key=True),
70
+ Column("customer_name", String(16), primary_key=True),
71
+ Column("price", Float),
72
+ Column("tip", Float),
73
+ )
74
+ metadata_obj.create_all(engine)
75
+
76
+ rows = [
77
+ {"receipt_id": 1, "customer_name": "Alan Payne", "price": 12.06, "tip": 1.20},
78
+ {"receipt_id": 2, "customer_name": "Alex Mason", "price": 23.86, "tip": 0.24},
79
+ {"receipt_id": 3, "customer_name": "Woodrow Wilson", "price": 53.43, "tip": 5.43},
80
+ {"receipt_id": 4, "customer_name": "Margaret James", "price": 21.11, "tip": 1.00},
81
+ ]
82
+ insert_rows_into_table(rows, receipts)
83
+ ```
84
+
85
+ ### Build our agent
86
+
87
+ Now let’s make our SQL table retrievable by a tool.
88
+
89
+ The tool’s description attribute will be embedded in the LLM’s prompt by the agent system: it gives the LLM information about how to use the tool. This is where we want to describe the SQL table.
90
+
91
+ ```py
92
+ inspector = inspect(engine)
93
+ columns_info = [(col["name"], col["type"]) for col in inspector.get_columns("receipts")]
94
+
95
+ table_description = "Columns:\n" + "\n".join([f" - {name}: {col_type}" for name, col_type in columns_info])
96
+ print(table_description)
97
+ ```
98
+
99
+ ```text
100
+ Columns:
101
+ - receipt_id: INTEGER
102
+ - customer_name: VARCHAR(16)
103
+ - price: FLOAT
104
+ - tip: FLOAT
105
+ ```
106
+
107
+ Now let’s build our tool. It needs the following: (read [the tool doc](../tutorials/tools) for more detail)
108
+ - A docstring with an `Args:` part listing arguments.
109
+ - Type hints on both inputs and output.
110
+
111
+ ```py
112
+ from smolagents import tool
113
+
114
+ @tool
115
+ def sql_engine(query: str) -> str:
116
+ """
117
+ Allows you to perform SQL queries on the table. Returns a string representation of the result.
118
+ The table is named 'receipts'. Its description is as follows:
119
+ Columns:
120
+ - receipt_id: INTEGER
121
+ - customer_name: VARCHAR(16)
122
+ - price: FLOAT
123
+ - tip: FLOAT
124
+
125
+ Args:
126
+ query: The query to perform. This should be correct SQL.
127
+ """
128
+ output = ""
129
+ with engine.connect() as con:
130
+ rows = con.execute(text(query))
131
+ for row in rows:
132
+ output += "\n" + str(row)
133
+ return output
134
+ ```
135
+
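+ Before wiring the tool into an agent, you can sanity-check it with a direct call (tools created with `@tool` should be directly callable); the query below is just an example:
+
+ ```py
+ print(sql_engine("SELECT customer_name, price FROM receipts ORDER BY price DESC"))
+ ```
+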
136
+ Now let us create an agent that leverages this tool.
137
+
138
+ We use the `CodeAgent`, which is smolagents’ main agent class: an agent that writes actions in code and can iterate on previous output according to the ReAct framework.
139
+
140
+ The model is the LLM that powers the agent system. `HfApiModel` allows you to call LLMs using HF's Inference API, either via a serverless or a dedicated endpoint, but you could also use any proprietary API.
141
+
142
+ ```py
143
+ from smolagents import CodeAgent, HfApiModel
144
+
145
+ agent = CodeAgent(
146
+ tools=[sql_engine],
147
+ model=HfApiModel("meta-llama/Meta-Llama-3.1-8B-Instruct"),
148
+ )
149
+ agent.run("Can you give me the name of the client who got the most expensive receipt?")
150
+ ```
151
+
152
+ ### Level 2: Table joins
153
+
154
+ Now let’s make it more challenging! We want our agent to handle joins across multiple tables.
155
+
156
+ So let’s make a second table recording the names of waiters for each receipt_id!
157
+
158
+ ```py
159
+ table_name = "waiters"
160
+ waiters = Table(
161
+ table_name,
162
+ metadata_obj,
163
+ Column("receipt_id", Integer, primary_key=True),
164
+ Column("waiter_name", String(16), primary_key=True),
165
+ )
166
+ metadata_obj.create_all(engine)
167
+
168
+ rows = [
169
+ {"receipt_id": 1, "waiter_name": "Corey Johnson"},
170
+ {"receipt_id": 2, "waiter_name": "Michael Watts"},
171
+ {"receipt_id": 3, "waiter_name": "Michael Watts"},
172
+ {"receipt_id": 4, "waiter_name": "Margaret James"},
173
+ ]
174
+ insert_rows_into_table(rows, waiters)
175
+ ```
176
+ Since we added a table, we update our `sql_engine` tool’s description with both tables’ schemas to let the LLM properly leverage information from them.
177
+
178
+ ```py
179
+ updated_description = """Allows you to perform SQL queries on the table. Beware that this tool's output is a string representation of the execution output.
180
+ It can use the following tables:"""
181
+
182
+ inspector = inspect(engine)
183
+ for table in ["receipts", "waiters"]:
184
+ columns_info = [(col["name"], col["type"]) for col in inspector.get_columns(table)]
185
+
186
+ table_description = f"Table '{table}':\n"
187
+
188
+ table_description += "Columns:\n" + "\n".join([f" - {name}: {col_type}" for name, col_type in columns_info])
189
+ updated_description += "\n\n" + table_description
190
+
191
+ print(updated_description)
192
+ ```
193
+ Since this request is a bit harder than the previous one, we’ll switch the LLM engine to use the more powerful [Qwen/Qwen2.5-Coder-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-32B-Instruct)!
194
+
195
+ ```py
196
+ sql_engine.description = updated_description
197
+
198
+ agent = CodeAgent(
199
+ tools=[sql_engine],
200
+ model=HfApiModel("Qwen/Qwen2.5-Coder-32B-Instruct"),
201
+ )
202
+
203
+ agent.run("Which waiter got more total money from tips?")
204
+ ```
205
+ It works right away! The setup was surprisingly simple, wasn’t it?
206
+
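+ For reference, one valid join query for this question looks like the following (the agent's generated code will differ between runs):
+
+ ```py
+ # Illustrative query: total tips per waiter, highest first
+ print(sql_engine("""
+ SELECT waiters.waiter_name, SUM(receipts.tip) AS total_tips
+ FROM receipts
+ JOIN waiters ON receipts.receipt_id = waiters.receipt_id
+ GROUP BY waiters.waiter_name
+ ORDER BY total_tips DESC
+ """))
+ ```
+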
207
+ This example is done! We've touched upon these concepts:
208
+ - Building new tools.
209
+ - Updating a tool's description.
210
+ - Switching to a stronger LLM to improve agent reasoning.
211
+
212
+ ✅ Now you can go build this text-to-SQL system you’ve always dreamt of! ✨
docs/source/en/examples/web_browser.mdx ADDED
@@ -0,0 +1,213 @@
1
+ # Web Browser Automation with Agents 🤖🌐
2
+
3
+ [[open-in-colab]]
4
+
5
+ In this notebook, we'll create an **agent-powered web browser automation system**! This system can navigate websites, interact with elements, and extract information automatically.
6
+
7
+ The agent will be able to:
8
+
9
+ - [x] Navigate to web pages
10
+ - [x] Click on elements
11
+ - [x] Search within pages
12
+ - [x] Handle popups and modals
13
+ - [x] Extract information
14
+
15
+ Let's set up this system step by step!
16
+
17
+ First, run these lines to install the required dependencies:
18
+
19
+ ```bash
20
+ pip install smolagents selenium helium pillow -q
21
+ ```
22
+
23
+ Let's import our required libraries and set up environment variables:
24
+
25
+ ```python
26
+ from io import BytesIO
27
+ from time import sleep
28
+
29
+ import helium
30
+ from dotenv import load_dotenv
31
+ from PIL import Image
32
+ from selenium import webdriver
33
+ from selenium.webdriver.common.by import By
34
+ from selenium.webdriver.common.keys import Keys
35
+
36
+ from smolagents import CodeAgent, tool
37
+ from smolagents.agents import ActionStep
38
+
39
+ # Load environment variables
40
+ load_dotenv()
41
+ ```
42
+
43
+ Now let's create our core browser interaction tools that will allow our agent to navigate and interact with web pages:
44
+
45
+ ```python
46
+ @tool
47
+ def search_item_ctrl_f(text: str, nth_result: int = 1) -> str:
48
+ """
49
+ Searches for text on the current page via Ctrl + F and jumps to the nth occurrence.
50
+ Args:
51
+ text: The text to search for
52
+ nth_result: Which occurrence to jump to (default: 1)
53
+ """
54
+ elements = driver.find_elements(By.XPATH, f"//*[contains(text(), '{text}')]")
55
+ if nth_result > len(elements):
56
+ raise Exception(f"Match n°{nth_result} not found (only {len(elements)} matches found)")
57
+ result = f"Found {len(elements)} matches for '{text}'."
58
+ elem = elements[nth_result - 1]
59
+ driver.execute_script("arguments[0].scrollIntoView(true);", elem)
60
+ result += f"Focused on element {nth_result} of {len(elements)}"
61
+ return result
62
+
63
+ @tool
64
+ def go_back() -> None:
65
+ """Goes back to previous page."""
66
+ driver.back()
67
+
68
+ @tool
69
+ def close_popups() -> str:
70
+ """
71
+ Closes any visible modal or pop-up on the page. Use this to dismiss pop-up windows!
72
+ This does not work on cookie consent banners.
73
+ """
74
+ webdriver.ActionChains(driver).send_keys(Keys.ESCAPE).perform()
+ return "Popups closed."  # return a string to match the declared output type
75
+ ```
76
+
77
+ Let's set up our browser with Chrome and configure screenshot capabilities:
78
+
79
+ ```python
80
+ # Configure Chrome options
81
+ chrome_options = webdriver.ChromeOptions()
82
+ chrome_options.add_argument("--force-device-scale-factor=1")
83
+ chrome_options.add_argument("--window-size=1000,1350")
84
+ chrome_options.add_argument("--disable-pdf-viewer")
85
+ chrome_options.add_argument("--window-position=0,0")
86
+
87
+ # Initialize the browser
88
+ driver = helium.start_chrome(headless=False, options=chrome_options)
89
+
90
+ # Set up screenshot callback
91
+ def save_screenshot(memory_step: ActionStep, agent: CodeAgent) -> None:
92
+ sleep(1.0) # Let JavaScript animations happen before taking the screenshot
93
+ driver = helium.get_driver()
94
+ current_step = memory_step.step_number
95
+ if driver is not None:
96
+ for previous_memory_step in agent.memory.steps: # Remove previous screenshots for lean processing
97
+ if isinstance(previous_memory_step, ActionStep) and previous_memory_step.step_number <= current_step - 2:
98
+ previous_memory_step.observations_images = None
99
+ png_bytes = driver.get_screenshot_as_png()
100
+ image = Image.open(BytesIO(png_bytes))
101
+ print(f"Captured a browser screenshot: {image.size} pixels")
102
+ memory_step.observations_images = [image.copy()] # Create a copy to ensure it persists
103
+
104
+ # Update observations with current URL
105
+ url_info = f"Current url: {driver.current_url}"
106
+ memory_step.observations = (
107
+ url_info if memory_step.observations is None else memory_step.observations + "\n" + url_info
108
+ )
109
+ ```
110
+
111
+ Now let's create our web automation agent:
112
+
113
+ ```python
114
+ from smolagents import HfApiModel
115
+
116
+ # Initialize the model
117
+ model_id = "meta-llama/Llama-3.3-70B-Instruct" # You can change this to your preferred model
118
+ model = HfApiModel(model_id)
119
+
120
+ # Create the agent
121
+ agent = CodeAgent(
122
+ tools=[go_back, close_popups, search_item_ctrl_f],
123
+ model=model,
124
+ additional_authorized_imports=["helium"],
125
+ step_callbacks=[save_screenshot],
126
+ max_steps=20,
127
+ verbosity_level=2,
128
+ )
129
+
130
+ # Import helium for the agent
131
+ agent.python_executor("from helium import *", agent.state)
132
+ ```
133
+
134
+ The agent needs instructions on how to use Helium for web automation. Here are the instructions we'll provide:
135
+
136
+ ```python
137
+ helium_instructions = """
138
+ You can use helium to access websites. Don't worry about the helium driver; it's already managed.
139
+ We've already run "from helium import *".
140
+ Then you can go to pages!
141
+ Code:
142
+ ```py
143
+ go_to('github.com/trending')
144
+ ```<end_code>
145
+
146
+ You can directly click clickable elements by inputting the text that appears on them.
147
+ Code:
148
+ ```py
149
+ click("Top products")
150
+ ```<end_code>
151
+
152
+ If it's a link:
153
+ Code:
154
+ ```py
155
+ click(Link("Top products"))
156
+ ```<end_code>
157
+
158
+ If you try to interact with an element and it's not found, you'll get a LookupError.
159
+ In general stop your action after each button click to see what happens on your screenshot.
160
+ Never try to login in a page.
161
+
162
+ To scroll up or down, use scroll_down or scroll_up, passing as an argument the number of pixels to scroll.
163
+ Code:
164
+ ```py
165
+ scroll_down(num_pixels=1200) # This will scroll one viewport down
166
+ ```<end_code>
167
+
168
+ When you have pop-ups with a cross icon to close, don't try to click the close icon by finding its element or targeting an 'X' element (this most often fails).
169
+ Just use your built-in tool `close_popups` to close them:
170
+ Code:
171
+ ```py
172
+ close_popups()
173
+ ```<end_code>
174
+
175
+ You can use .exists() to check for the existence of an element. For example:
176
+ Code:
177
+ ```py
178
+ if Text('Accept cookies?').exists():
179
+ click('I accept')
180
+ ```<end_code>
181
+ """
182
+ ```
183
+
184
+ Now we can run our agent with a task! Let's try finding information on Wikipedia:
185
+
186
+ ```python
187
+ search_request = """
188
+ Please navigate to https://en.wikipedia.org/wiki/Chicago and give me a sentence containing the word "1992" that mentions a construction accident.
189
+ """
190
+
191
+ agent_output = agent.run(search_request + helium_instructions)
192
+ print("Final output:")
193
+ print(agent_output)
194
+ ```
195
+
196
+ You can run different tasks by modifying the request. For example, here's one that tells me whether I should work harder:
197
+
198
+ ```python
199
+ github_request = """
200
+ I'm trying to find how hard I have to work to get a repo in github.com/trending.
201
+ Can you navigate to the profile for the top author of the top trending repo, and give me their total number of commits over the last year?
202
+ """
203
+
204
+ agent_output = agent.run(github_request + helium_instructions)
205
+ print("Final output:")
206
+ print(agent_output)
207
+ ```
208
+
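+ When you're done experimenting, you can shut down the Chrome instance using helium's own API:
+
+ ```python
+ # Close the browser started by helium.start_chrome
+ helium.kill_browser()
+ ```
+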
209
+ The system is particularly effective for tasks like:
210
+ - Data extraction from websites
211
+ - Web research automation
212
+ - UI testing and verification
213
+ - Content monitoring
docs/source/en/guided_tour.mdx ADDED
@@ -0,0 +1,434 @@
1
+ <!--Copyright 2024 The HuggingFace Team. All rights reserved.
2
+
3
+ Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
+ the License. You may obtain a copy of the License at
5
+
6
+ http://www.apache.org/licenses/LICENSE-2.0
7
+
8
+ Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
+ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
+ specific language governing permissions and limitations under the License.
11
+
12
+ ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
13
+ rendered properly in your Markdown viewer.
14
+
15
+ -->
16
+ # Agents - Guided tour
17
+
18
+ [[open-in-colab]]
19
+
20
+ In this guided tour, you will learn how to build an agent, how to run it, and how to customize it to make it work better for your use case.
21
+
22
+ ### Building your agent
23
+
24
+ To initialize a minimal agent, you need at least these two arguments:
25
+
26
+ - `model`, a text-generation model to power your agent - because the agent is different from a simple LLM, it is a system that uses an LLM as its engine. You can use any of these options:
27
+ - [`TransformersModel`] takes a pre-initialized `transformers` pipeline to run inference on your local machine.
28
+ - [`HfApiModel`] leverages a `huggingface_hub.InferenceClient` under the hood and supports all Inference Providers on the Hub.
29
+ - [`LiteLLMModel`] similarly lets you call 100+ different models and providers through [LiteLLM](https://docs.litellm.ai/)!
30
+ - [`AzureOpenAIServerModel`] allows you to use OpenAI models deployed in [Azure](https://azure.microsoft.com/en-us/products/ai-services/openai-service).
31
+ - [`MLXModel`] creates a [mlx-lm](https://pypi.org/project/mlx-lm/) pipeline to run inference on your local machine.
32
+
33
+ - `tools`, a list of `Tools` that the agent can use to solve the task. It can be an empty list. You can also add the default toolbox on top of your `tools` list by defining the optional argument `add_base_tools=True`.
34
+
35
+ Once you have these two arguments, `tools` and `model`, you can create an agent and run it. You can use any LLM you'd like, either through [Inference Providers](https://huggingface.co/blog/inference-providers), [transformers](https://github.com/huggingface/transformers/), [ollama](https://ollama.com/), [LiteLLM](https://www.litellm.ai/), [Azure OpenAI](https://azure.microsoft.com/en-us/products/ai-services/openai-service), or [mlx-lm](https://pypi.org/project/mlx-lm/).
36
+
37
+ <hfoptions id="Pick a LLM">
38
+ <hfoption id="HF Inference API">
39
+
40
+ HF Inference API is free to use without a token, but then it will have a rate limit.
41
+
42
+ To access gated models or raise your rate limits with a PRO account, you need to set the environment variable `HF_TOKEN` or pass the `token` variable upon initialization of `HfApiModel`. You can get your token from your [settings page](https://huggingface.co/settings/tokens).
43
+
44
+ ```python
45
+ from smolagents import CodeAgent, HfApiModel
46
+
47
+ model_id = "meta-llama/Llama-3.3-70B-Instruct"
48
+
49
+ model = HfApiModel(model_id=model_id, token="<YOUR_HUGGINGFACEHUB_API_TOKEN>") # You can choose to not pass any model_id to HfApiModel to use a default free model
50
+ # you can also specify a particular provider e.g. provider="together" or provider="sambanova"
51
+ agent = CodeAgent(tools=[], model=model, add_base_tools=True)
52
+
53
+ agent.run(
54
+ "Could you give me the 118th number in the Fibonacci sequence?",
55
+ )
56
+ ```
57
+ </hfoption>
58
+ <hfoption id="Local Transformers Model">
59
+
60
+ ```python
61
+ # !pip install smolagents[transformers]
62
+ from smolagents import CodeAgent, TransformersModel
63
+
64
+ model_id = "meta-llama/Llama-3.2-3B-Instruct"
65
+
66
+ model = TransformersModel(model_id=model_id)
67
+ agent = CodeAgent(tools=[], model=model, add_base_tools=True)
68
+
69
+ agent.run(
70
+ "Could you give me the 118th number in the Fibonacci sequence?",
71
+ )
72
+ ```
73
+ </hfoption>
74
+ <hfoption id="OpenAI or Anthropic API">
75
+
76
+ To use `LiteLLMModel`, you need to set the environment variable `ANTHROPIC_API_KEY` or `OPENAI_API_KEY`, or pass `api_key` variable upon initialization.
77
+
78
+ ```python
79
+ # !pip install smolagents[litellm]
80
+ from smolagents import CodeAgent, LiteLLMModel
81
+
82
+ model = LiteLLMModel(model_id="anthropic/claude-3-5-sonnet-latest", api_key="YOUR_ANTHROPIC_API_KEY") # Could use 'gpt-4o'
83
+ agent = CodeAgent(tools=[], model=model, add_base_tools=True)
84
+
85
+ agent.run(
86
+ "Could you give me the 118th number in the Fibonacci sequence?",
87
+ )
88
+ ```
89
+ </hfoption>
90
+ <hfoption id="Ollama">
91
+
92
+ ```python
93
+ # !pip install smolagents[litellm]
94
+ from smolagents import CodeAgent, LiteLLMModel
95
+
96
+ model = LiteLLMModel(
97
+ model_id="ollama_chat/llama3.2", # This model is a bit weak for agentic behaviours though
98
+ api_base="http://localhost:11434", # replace with 127.0.0.1:11434 or remote open-ai compatible server if necessary
99
+ api_key="YOUR_API_KEY", # replace with API key if necessary
100
+ num_ctx=8192, # ollama default is 2048 which will fail horribly. 8192 works for easy tasks, more is better. Check https://huggingface.co/spaces/NyxKrage/LLM-Model-VRAM-Calculator to calculate how much VRAM this will need for the selected model.
101
+ )
102
+
103
+ agent = CodeAgent(tools=[], model=model, add_base_tools=True)
104
+
105
+ agent.run(
106
+ "Could you give me the 118th number in the Fibonacci sequence?",
107
+ )
108
+ ```
109
+ </hfoption>
110
+ <hfoption id="Azure OpenAI">
111
+
112
+ To connect to Azure OpenAI, you can either use `AzureOpenAIServerModel` directly, or use `LiteLLMModel` and configure it accordingly.
113
+
114
+ To initialize an instance of `AzureOpenAIServerModel`, you need to pass your model deployment name and then either pass the `azure_endpoint`, `api_key`, and `api_version` arguments, or set the environment variables `AZURE_OPENAI_ENDPOINT`, `AZURE_OPENAI_API_KEY`, and `OPENAI_API_VERSION`.
115
+
116
+ ```python
117
+ # !pip install smolagents[openai]
118
+ from smolagents import CodeAgent, AzureOpenAIServerModel
119
+
120
+ model = AzureOpenAIServerModel(model_id="gpt-4o-mini")
121
+ agent = CodeAgent(tools=[], model=model, add_base_tools=True)
122
+
123
+ agent.run(
124
+ "Could you give me the 118th number in the Fibonacci sequence?",
125
+ )
126
+ ```
127
+
128
+ Similarly, you can configure `LiteLLMModel` to connect to Azure OpenAI as follows:
129
+
130
+ - pass your model deployment name as `model_id`, and make sure to prefix it with `azure/`
131
+ - make sure to set the environment variable `AZURE_API_VERSION`
132
+ - either pass the `api_base` and `api_key` arguments, or set the environment variables `AZURE_API_KEY`, and `AZURE_API_BASE`
133
+
134
+ ```python
135
+ import os
136
+ from smolagents import CodeAgent, LiteLLMModel
137
+
138
+ AZURE_OPENAI_CHAT_DEPLOYMENT_NAME="gpt-35-turbo-16k-deployment" # example of deployment name
139
+
140
+ os.environ["AZURE_API_KEY"] = "" # api_key
141
+ os.environ["AZURE_API_BASE"] = "" # "https://example-endpoint.openai.azure.com"
142
+ os.environ["AZURE_API_VERSION"] = "" # "2024-10-01-preview"
143
+
144
+ model = LiteLLMModel(model_id="azure/" + AZURE_OPENAI_CHAT_DEPLOYMENT_NAME)
145
+ agent = CodeAgent(tools=[], model=model, add_base_tools=True)
146
+
147
+ agent.run(
148
+ "Could you give me the 118th number in the Fibonacci sequence?",
149
+ )
150
+ ```
151
+
152
+ </hfoption>
153
+ <hfoption id="mlx-lm">
154
+
155
+ ```python
156
+ # !pip install smolagents[mlx-lm]
157
+ from smolagents import CodeAgent, MLXModel
158
+
159
+ mlx_model = MLXModel("mlx-community/Qwen2.5-Coder-32B-Instruct-4bit")
160
+ agent = CodeAgent(model=mlx_model, tools=[], add_base_tools=True)
161
+
162
+ agent.run("Could you give me the 118th number in the Fibonacci sequence?")
163
+ ```
164
+
165
+ </hfoption>
166
+ </hfoptions>
167
+
168
+ #### CodeAgent and ToolCallingAgent
169
+
170
+ The [`CodeAgent`] is our default agent. It will write and execute Python code snippets at each step.
171
+
172
+ By default, the execution is done in your local environment.
173
+ This should be safe because the only functions that can be called are the tools you provided (especially if they are only tools from Hugging Face) and a set of predefined safe functions like `print` or functions from the `math` module, so you're already limited in what can be executed.
174
+
175
+ The Python interpreter also doesn't allow imports by default outside of a safe list, so all the most obvious attacks shouldn't be an issue.
176
+ You can authorize additional imports by passing the authorized modules as a list of strings in argument `additional_authorized_imports` upon initialization of your [`CodeAgent`]:
177
+
178
+ ```py
179
+ model = HfApiModel()
180
+ agent = CodeAgent(tools=[], model=model, additional_authorized_imports=['requests', 'bs4'])
181
+ agent.run("Could you get me the title of the page at url 'https://huggingface.co/blog'?")
182
+ ```
183
+
184
+ > [!WARNING]
185
+ > The LLM can generate arbitrary code that will then be executed: do not add any unsafe imports!
186
+
187
+ The execution will stop at any code trying to perform an illegal operation or if there is a regular Python error with the code generated by the agent.
188
+
189
+ You can also use [E2B code executor](https://e2b.dev/docs#what-is-e2-b) or Docker instead of a local Python interpreter. For E2B, first [set the `E2B_API_KEY` environment variable](https://e2b.dev/dashboard?tab=keys) and then pass `executor_type="e2b"` upon agent initialization. For Docker, pass `executor_type="docker"` during initialization.
190
+
191
+
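+ For instance, here is a minimal sketch of switching to the E2B executor (assuming your `E2B_API_KEY` is set as described above):
+
+ ```py
+ agent = CodeAgent(tools=[], model=model, executor_type="e2b")
+ agent.run("Could you give me the 118th number in the Fibonacci sequence?")
+ ```
+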
192
+ > [!TIP]
193
+ > Learn more about code execution [in this tutorial](tutorials/secure_code_execution).
194
+
195
+ We also support the widely-used way of writing actions as JSON-like blobs: this is [`ToolCallingAgent`]. It works in much the same way as [`CodeAgent`], of course without `additional_authorized_imports` since it doesn't execute code:
196
+
197
+ ```py
198
+ from smolagents import ToolCallingAgent
199
+
200
+ agent = ToolCallingAgent(tools=[], model=model)
201
+ agent.run("Could you get me the title of the page at url 'https://huggingface.co/blog'?")
202
+ ```
203
+
204
+ ### Inspecting an agent run
205
+
206
+ Here are a few useful attributes to inspect what happened after a run:
207
+ - `agent.logs` stores the fine-grained logs of the agent. At every step of the agent's run, everything gets stored in a dictionary that then is appended to `agent.logs`.
208
+ - Running `agent.write_memory_to_messages()` writes the agent's memory as a list of chat messages for the Model to view. This method goes over each step of the log and only stores what it's interested in as a message: for instance, it will save the system prompt and task in separate messages, then for each step it will store the LLM output as a message, and the tool call output as another message. Use this if you want a higher-level view of what has happened - but not every log will be transcribed by this method.
209
+
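+ For instance, here is a quick sketch of inspecting a finished run (assuming messages follow the usual `{"role": ..., "content": ...}` chat format):
+
+ ```py
+ # Replay the agent's memory as the chat messages the model would see
+ for message in agent.write_memory_to_messages():
+     print(message["role"])
+ ```
+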
210
+ ## Tools
211
+
212
+ A tool is an atomic function to be used by an agent. To be used by an LLM, it also needs a few attributes that constitute its API and will be used to describe to the LLM how to call this tool:
213
+ - A name
214
+ - A description
215
+ - Input types and descriptions
216
+ - An output type
217
+
218
+ You can for instance check the [`PythonInterpreterTool`]: it has a name, a description, input descriptions, an output type, and a `forward` method to perform the action.
219
+
220
+ When the agent is initialized, the tool attributes are used to generate a tool description which is baked into the agent's system prompt. This lets the agent know which tools it can use and why.
221
+
222
+ ### Default toolbox
223
+
224
+ `smolagents` comes with a default toolbox for empowering agents, that you can add to your agent upon initialization with argument `add_base_tools = True`:
225
+
226
+ - **DuckDuckGo web search**: performs a web search using DuckDuckGo.
227
+ - **Python code interpreter**: runs your LLM-generated Python code in a secure environment. This tool will only be added to [`ToolCallingAgent`] if you initialize it with `add_base_tools=True`, since a code-based agent can already natively execute Python code.
228
+ - **Transcriber**: a speech-to-text pipeline built on Whisper-Turbo that transcribes an audio to text.
229
+
230
+ You can manually use a tool by calling it with its arguments.
231
+
232
+ ```python
233
+ from smolagents import DuckDuckGoSearchTool
234
+
235
+ search_tool = DuckDuckGoSearchTool()
236
+ print(search_tool("Who's the current president of Russia?"))
237
+ ```
238
+
239
+ ### Create a new tool
240
+
241
+ You can create your own tool for use cases not covered by the default tools from Hugging Face.
242
+ For example, let's create a tool that returns the most downloaded model for a given task from the Hub.
243
+
244
+ You'll start with the code below.
245
+
246
+ ```python
247
+ from huggingface_hub import list_models
248
+
249
+ task = "text-classification"
250
+
251
+ most_downloaded_model = next(iter(list_models(filter=task, sort="downloads", direction=-1)))
252
+ print(most_downloaded_model.id)
253
+ ```
254
+
255
+ This code can quickly be converted into a tool, just by wrapping it in a function and adding the `tool` decorator.
256
+ This is not the only way to build the tool: you can directly define it as a subclass of [`Tool`], which gives you more flexibility, for instance the possibility to initialize heavy class attributes.
257
+
258
+ Let's see how it works for both options:
259
+
260
+ <hfoptions id="build-a-tool">
261
+ <hfoption id="Decorate a function with @tool">
262
+
263
+ ```py
264
+ from smolagents import tool
265
+
266
+ @tool
267
+ def model_download_tool(task: str) -> str:
268
+ """
269
+ This is a tool that returns the most downloaded model of a given task on the Hugging Face Hub.
270
+ It returns the name of the checkpoint.
271
+
272
+ Args:
273
+ task: The task for which to get the download count.
274
+ """
275
+ most_downloaded_model = next(iter(list_models(filter=task, sort="downloads", direction=-1)))
276
+ return most_downloaded_model.id
277
+ ```
278
+
279
+ The function needs:
280
+ - A clear name. The name should be descriptive enough of what this tool does to help the LLM brain powering the agent. Since this tool returns the model with the most downloads for a task, let's name it `model_download_tool`.
281
+ - Type hints on both inputs and output
282
+ - A description, that includes an 'Args:' part where each argument is described (without a type indication this time, it will be pulled from the type hint). Same as for the tool name, this description is an instruction manual for the LLM powering your agent, so do not neglect it.
283
+ All these elements will be automatically baked into the agent's system prompt upon initialization: so strive to make them as clear as possible!
284
+
285
+ > [!TIP]
286
+ > This definition format is the same as tool schemas used in `apply_chat_template`, the only difference is the added `tool` decorator: read more on our tool use API [here](https://huggingface.co/blog/unified-tool-use#passing-tools-to-a-chat-template).
287
+ </hfoption>
288
+ <hfoption id="Subclass Tool">
289
+
290
+ ```py
291
+ from smolagents import Tool
292
+
293
+ class ModelDownloadTool(Tool):
294
+ name = "model_download_tool"
295
+ description = "This is a tool that returns the most downloaded model of a given task on the Hugging Face Hub. It returns the name of the checkpoint."
296
+ inputs = {"task": {"type": "string", "description": "The task for which to get the download count."}}
297
+ output_type = "string"
298
+
299
+ def forward(self, task: str) -> str:
300
+ most_downloaded_model = next(iter(list_models(filter=task, sort="downloads", direction=-1)))
301
+ return most_downloaded_model.id
302
+ ```
303
+
304
+ The subclass needs the following attributes:
305
+ - A clear `name`. The name should be descriptive enough of what this tool does to help the LLM brain powering the agent. Since this tool returns the model with the most downloads for a task, let's name it `model_download_tool`.
306
+ - A `description`. Same as for the `name`, this description is an instruction manual for the LLM powering your agent, so do not neglect it.
307
+ - Input types and descriptions
308
+ - Output type
309
+ All these attributes will be automatically baked into the agent's system prompt upon initialization: so strive to make them as clear as possible!
310
+ </hfoption>
311
+ </hfoptions>
312
+
313
+
314
+ Then you can directly initialize your agent:
315
+ ```py
316
+ from smolagents import CodeAgent, HfApiModel
317
+ agent = CodeAgent(tools=[model_download_tool], model=HfApiModel())
318
+ agent.run(
319
+ "Can you give me the name of the model that has the most downloads in the 'text-to-video' task on the Hugging Face Hub?"
320
+ )
321
+ ```
322
+
323
+ You get the following logs:
324
+ ```text
325
+ ╭──────────────────────────────────────── New run ─────────────────────────────────────────╮
326
+ │ │
327
+ │ Can you give me the name of the model that has the most downloads in the 'text-to-video' │
328
+ │ task on the Hugging Face Hub? │
329
+ │ │
330
+ ╰─ HfApiModel - Qwen/Qwen2.5-Coder-32B-Instruct ───────────────────────────────────────────╯
331
+ ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ Step 0 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
332
+ ╭─ Executing this code: ───────────────────────────────────────────────────────────────────╮
333
+ │ 1 model_name = model_download_tool(task="text-to-video") │
334
+ │ 2 print(model_name) │
335
+ ╰──────────────────────────────────────────────────────────────────────────────────────────╯
336
+ Execution logs:
337
+ ByteDance/AnimateDiff-Lightning
338
+
339
+ Out: None
340
+ [Step 0: Duration 0.27 seconds| Input tokens: 2,069 | Output tokens: 60]
341
+ ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ Step 1 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
342
+ ╭─ Executing this code: ───────────────────────────────────────────────────────────────────╮
343
+ │ 1 final_answer("ByteDance/AnimateDiff-Lightning") │
344
+ ╰──────────────────────────────────────────────────────────────────────────────────────────╯
345
+ Out - Final answer: ByteDance/AnimateDiff-Lightning
346
+ [Step 1: Duration 0.10 seconds| Input tokens: 4,288 | Output tokens: 148]
347
+ Out[20]: 'ByteDance/AnimateDiff-Lightning'
348
+ ```
349
+
350
+ > [!TIP]
351
+ > Read more on tools in the [dedicated tutorial](./tutorials/tools#what-is-a-tool-and-how-to-build-one).
352
+
353
+ ## Multi-agents
354
+
355
+ Multi-agent systems were introduced with Microsoft's framework [Autogen](https://huggingface.co/papers/2308.08155).
356
+
357
+ In this type of framework, you have several agents working together to solve your task instead of only one.
358
+ It empirically yields better performance on most benchmarks. The reason for this better performance is conceptually simple: for many tasks, rather than using a do-it-all system, you would prefer to specialize units on sub-tasks. Here, having agents with separate tool sets and memories allows for efficient specialization. For instance, why fill the memory of the code-generating agent with all the content of webpages visited by the web search agent? It's better to keep them separate.
359
+
360
+ You can easily build hierarchical multi-agent systems with `smolagents`.
361
+
362
+ To do so, just ensure your agent has `name` and `description` attributes, which will then be embedded in the manager agent's system prompt to let it know how to call this managed agent, as we also do for tools.
363
+ Then you can pass this managed agent in the `managed_agents` parameter upon initialization of the manager agent.
364
+
365
+ Here's an example of making an agent that manages a specific web search agent using our [`DuckDuckGoSearchTool`]:
366
+
367
+ ```py
368
+ from smolagents import CodeAgent, HfApiModel, DuckDuckGoSearchTool
369
+
370
+ model = HfApiModel()
371
+
372
+ web_agent = CodeAgent(
373
+ tools=[DuckDuckGoSearchTool()],
374
+ model=model,
375
+ name="web_search",
376
+ description="Runs web searches for you. Give it your query as an argument."
377
+ )
378
+
379
+ manager_agent = CodeAgent(
380
+ tools=[], model=model, managed_agents=[web_agent]
381
+ )
382
+
383
+ manager_agent.run("Who is the CEO of Hugging Face?")
384
+ ```
385
+
386
+ > [!TIP]
387
+ > For an in-depth example of an efficient multi-agent implementation, see [how we pushed our multi-agent system to the top of the GAIA leaderboard](https://huggingface.co/blog/beating-gaia).
388
+
389
+
390
+ ## Talk with your agent and visualize its thoughts in a cool Gradio interface
391
+
392
+ You can use `GradioUI` to interactively submit tasks to your agent and observe its thought and execution process, here is an example:
393
+
394
+ ```py
395
+ from smolagents import (
396
+ load_tool,
397
+ CodeAgent,
398
+ HfApiModel,
399
+ GradioUI
400
+ )
401
+
402
+ # Import tool from Hub
403
+ image_generation_tool = load_tool("m-ric/text-to-image", trust_remote_code=True)
404
+
405
+ model = HfApiModel()
406
+
407
+ # Initialize the agent with the image generation tool
408
+ agent = CodeAgent(tools=[image_generation_tool], model=model)
409
+
410
+ GradioUI(agent).launch()
411
+ ```
412
+
413
+ Under the hood, when the user submits a new message, the agent is launched with `agent.run(user_request, reset=False)`.
414
+ The `reset=False` flag means the agent's memory is not flushed before launching this new task, which lets the conversation go on.
415
+
416
+ You can also use this `reset=False` argument to keep the conversation going in any other agentic application.
417
+
418
+ ## Next steps
419
+
420
+ Finally, when you've configured your agent to your needs, you can share it to the Hub!
421
+
422
+ ```py
423
+ agent.push_to_hub("m-ric/my_agent")
424
+ ```
425
+
426
+ Similarly, to load an agent that has been pushed to the Hub, if you trust the code from its tools, use:
427
+ ```py
428
+ agent.from_hub("m-ric/my_agent", trust_remote_code=True)
429
+ ```
430
+
431
+ For more in-depth usage, you will then want to check out our tutorials:
432
+ - [the explanation of how our code agents work](./tutorials/secure_code_execution)
433
+ - [this guide on how to build good agents](./tutorials/building_good_agents).
434
+ - [the in-depth guide for tool usage](./tutorials/tools).
docs/source/en/index.mdx ADDED
@@ -0,0 +1,53 @@
1
+ <!--Copyright 2024 The HuggingFace Team. All rights reserved.
2
+
3
+ Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
+ the License. You may obtain a copy of the License at
5
+
6
+ http://www.apache.org/licenses/LICENSE-2.0
7
+
8
+ Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
+ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
+ specific language governing permissions and limitations under the License.
11
+
12
+ ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
13
+ rendered properly in your Markdown viewer.
14
+ -->
15
+
16
+ # `smolagents`
17
+
18
+ <div class="flex justify-center">
19
+ <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/smolagents/license_to_call.png" width=100%/>
20
+ </div>
21
+
22
+ This library is the simplest framework out there to build powerful agents! By the way, wtf are "agents"? We provide our definition [in this page](conceptual_guides/intro_agents), where you'll also find tips for when to use them or not (spoilers: you'll often be better off without agents).
23
+
24
+ This library offers:
25
+
26
+ ✨ **Simplicity**: the logic for agents fits in ~1,000 lines of code. We kept abstractions to their minimal shape above raw code!
27
+
28
+ 🌐 **Support for any LLM**: it supports models hosted on the Hub loaded in their `transformers` version or through our inference API and Inference providers, but also models from OpenAI, Anthropic... it's really easy to power an agent with any LLM.
29
+
30
+ 🧑‍💻 **First-class support for Code Agents**, i.e. agents that write their actions in code (as opposed to "agents being used to write code"), [read more here](tutorials/secure_code_execution).
31
+
32
+ 🤗 **Hub integrations**: you can share and load Gradio Spaces as tools to/from the Hub, and more is to come!
33
+
34
+ <div class="mt-10">
35
+ <div class="w-full flex flex-col space-y-4 md:space-y-0 md:grid md:grid-cols-2 md:gap-y-4 md:gap-x-5">
36
+ <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./guided_tour"
37
+ ><div class="w-full text-center bg-gradient-to-br from-blue-400 to-blue-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">Guided tour</div>
38
+ <p class="text-gray-700">Learn the basics and become familiar with using Agents. Start here if you are using Agents for the first time!</p>
39
+ </a>
40
+ <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./examples/text_to_sql"
41
+ ><div class="w-full text-center bg-gradient-to-br from-indigo-400 to-indigo-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">How-to guides</div>
42
+ <p class="text-gray-700">Practical guides to help you achieve a specific goal: create an agent to generate and test SQL queries!</p>
43
+ </a>
44
+ <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./conceptual_guides/intro_agents"
45
+ ><div class="w-full text-center bg-gradient-to-br from-pink-400 to-pink-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">Conceptual guides</div>
46
+ <p class="text-gray-700">High-level explanations for building a better understanding of important topics.</p>
47
+ </a>
48
+ <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./tutorials/building_good_agents"
49
+ ><div class="w-full text-center bg-gradient-to-br from-purple-400 to-purple-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">Tutorials</div>
50
+ <p class="text-gray-700">Horizontal tutorials that cover important aspects of building agents.</p>
51
+ </a>
52
+ </div>
53
+ </div>
docs/source/en/reference/agents.mdx ADDED
@@ -0,0 +1,69 @@
1
+ <!--Copyright 2024 The HuggingFace Team. All rights reserved.
2
+
3
+ Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
+ the License. You may obtain a copy of the License at
5
+
6
+ http://www.apache.org/licenses/LICENSE-2.0
7
+
8
+ Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
+ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
+ specific language governing permissions and limitations under the License.
11
+
12
+ ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
13
+ rendered properly in your Markdown viewer.
14
+
15
+ -->
16
+ # Agents
17
+
18
+ <Tip warning={true}>
19
+
20
+ Smolagents is an experimental API which is subject to change at any time. Results returned by the agents
21
+ can vary as the APIs or underlying models are prone to change.
22
+
23
+ </Tip>
24
+
25
+ To learn more about agents and tools make sure to read the [introductory guide](../index). This page
26
+ contains the API docs for the underlying classes.
27
+
28
+ ## Agents
29
+
30
+ Our agents inherit from [`MultiStepAgent`], which means they can act in multiple steps, each step consisting of one thought, then one tool call and execution. Read more in [this conceptual guide](../conceptual_guides/react).
31
+
32
+ We provide two types of agents, based on the main [`MultiStepAgent`] class.
33
+ - [`CodeAgent`] is the default agent; it writes its tool calls in Python code.
34
+ - [`ToolCallingAgent`] writes its tool calls in JSON.
35
+
36
+ Both require a `model` and a list of tools (`tools`) at initialization.
37
+
38
+ ### Classes of agents
39
+
40
+ [[autodoc]] MultiStepAgent
41
+
42
+ [[autodoc]] CodeAgent
43
+
44
+ [[autodoc]] ToolCallingAgent
45
+
46
+ ### ManagedAgent
47
+
48
+ _This class is deprecated since 1.8.0: now you simply need to pass attributes `name` and `description` to a normal agent to make it callable by a manager agent._
49
+
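+ For example, a minimal sketch mirroring the guided tour:
+
+ ```py
+ from smolagents import CodeAgent, HfApiModel
+
+ web_agent = CodeAgent(
+     tools=[],
+     model=HfApiModel(),
+     name="web_search",
+     description="Runs web searches for you. Give it your query as an argument.",
+ )
+ manager_agent = CodeAgent(tools=[], model=HfApiModel(), managed_agents=[web_agent])
+ ```
+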
50
+ ### stream_to_gradio
51
+
52
+ [[autodoc]] stream_to_gradio
53
+
54
+ ### GradioUI
55
+
56
+ > [!TIP]
57
+ > You must have `gradio` installed to use the UI. Please run `pip install smolagents[gradio]` if it's not the case.
58
+
59
+ [[autodoc]] GradioUI
60
+
61
+ ## Prompts
62
+
63
+ [[autodoc]] smolagents.agents.PromptTemplates
64
+
65
+ [[autodoc]] smolagents.agents.PlanningPromptTemplate
66
+
67
+ [[autodoc]] smolagents.agents.ManagedAgentPromptTemplate
68
+
69
+ [[autodoc]] smolagents.agents.FinalAnswerPromptTemplate
docs/source/en/reference/models.mdx ADDED
@@ -0,0 +1,169 @@
1
+ <!--Copyright 2024 The HuggingFace Team. All rights reserved.
2
+
3
+ Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
+ the License. You may obtain a copy of the License at
5
+
6
+ http://www.apache.org/licenses/LICENSE-2.0
7
+
8
+ Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
+ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
+ specific language governing permissions and limitations under the License.
11
+
12
+ ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
13
+ rendered properly in your Markdown viewer.
14
+
15
+ -->
16
+ # Models
17
+
18
+ <Tip warning={true}>
19
+
20
+ Smolagents is an experimental API which is subject to change at any time. Results returned by the agents
21
+ can vary as the APIs or underlying models are prone to change.
22
+
23
+ </Tip>
24
+
25
+ To learn more about agents and tools make sure to read the [introductory guide](../index). This page
26
+ contains the API docs for the underlying classes.
27
+
28
+ ## Models
29
+
30
+ You're free to create and use your own models to power your agent.
31
+
32
+ You could use any `model` callable for your agent, as long as:
33
+ 1. It follows the [messages format](./chat_templating) (`List[Dict[str, str]]`) for its input `messages`, and it returns a `str`.
34
+ 2. It stops generating outputs *before* the sequences passed in the argument `stop_sequences`
35
+
36
+ For defining your LLM, you can write a `custom_model` function that accepts a list of [messages](./chat_templating) and returns an object with a `.content` attribute containing the text. This callable also needs to accept a `stop_sequences` argument that indicates when to stop generating.
37
+
38
+ ```python
39
+ from huggingface_hub import login, InferenceClient
40
+
41
+ login("<YOUR_HUGGINGFACEHUB_API_TOKEN>")
42
+
43
+ model_id = "meta-llama/Llama-3.3-70B-Instruct"
44
+
45
+ client = InferenceClient(model=model_id)
46
+
47
+ def custom_model(messages, stop_sequences=["Task"]):
48
+ response = client.chat_completion(messages, stop=stop_sequences, max_tokens=1000)
49
+ answer = response.choices[0].message
50
+ return answer
51
+ ```
52
+
53
+ Additionally, `custom_model` can also take a `grammar` argument. In the case where you specify a `grammar` upon agent initialization, this argument will be passed to the calls to model, with the `grammar` that you defined upon initialization, to allow [constrained generation](https://huggingface.co/docs/text-generation-inference/conceptual/guidance) in order to force properly-formatted agent outputs.
54
+
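+ A minimal sketch of accepting that extra argument, reusing the `client` from the snippet above (how the grammar is forwarded depends on your backend; the `response_format` forwarding below is an assumption about `InferenceClient`-style backends, not a fixed API):
+
+ ```python
+ def custom_model_with_grammar(messages, stop_sequences=["Task"], grammar=None):
+     # Hypothetical: forward the grammar to the backend only if one was set
+     kwargs = {"response_format": grammar} if grammar is not None else {}
+     response = client.chat_completion(messages, stop=stop_sequences, max_tokens=1000, **kwargs)
+     return response.choices[0].message
+ ```
+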
55
+ ### TransformersModel
56
+
57
+ For convenience, we have added a `TransformersModel` that implements the points above by building a local `transformers` pipeline for the `model_id` given at initialization.
58
+
59
+ ```python
60
+ from smolagents import TransformersModel
61
+
62
+ model = TransformersModel(model_id="HuggingFaceTB/SmolLM-135M-Instruct")
63
+
64
+ print(model([{"role": "user", "content": [{"type": "text", "text": "Ok!"}]}], stop_sequences=["great"]))
65
+ ```
66
+ ```text
67
+ >>> What a
68
+ ```
69
+
70
+ > [!TIP]
71
+ > You must have `transformers` and `torch` installed on your machine. Please run `pip install smolagents[transformers]` if it's not the case.
72
+
73
+ [[autodoc]] TransformersModel
74
+
75
+ ### HfApiModel
76
+
77
+ The `HfApiModel` wraps huggingface_hub's [InferenceClient](https://huggingface.co/docs/huggingface_hub/main/en/guides/inference) for the execution of the LLM. It supports both HF's own [Inference API](https://huggingface.co/docs/api-inference/index) as well as all [Inference Providers](https://huggingface.co/blog/inference-providers) available on the Hub.
78
+
79
+ ```python
80
+ from smolagents import HfApiModel
81
+
82
+ messages = [
83
+ {"role": "user", "content": [{"type": "text", "text": "Hello, how are you?"}]}
84
+ ]
85
+
86
+ model = HfApiModel()
87
+ print(model(messages))
88
+ ```
89
+ ```text
90
+ >>> Of course! If you change your mind, feel free to reach out. Take care!
91
+ ```
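+
+ You can also specify a particular inference provider, e.g.:
+
+ ```python
+ model = HfApiModel(provider="together")
+ ```
+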
92
+ [[autodoc]] HfApiModel
93
+
94
+ ### LiteLLMModel
95
+
96
+ The `LiteLLMModel` leverages [LiteLLM](https://www.litellm.ai/) to support 100+ LLMs from various providers.
97
+ You can pass kwargs upon model initialization that will then be used whenever using the model, for instance below we pass `temperature`.
98
+
99
+ ```python
100
+ from smolagents import LiteLLMModel
101
+
102
+ messages = [
103
+ {"role": "user", "content": [{"type": "text", "text": "Hello, how are you?"}]}
104
+ ]
105
+
106
+ model = LiteLLMModel("anthropic/claude-3-5-sonnet-latest", temperature=0.2, max_tokens=10)
107
+ print(model(messages))
108
+ ```
109
+
110
+ [[autodoc]] LiteLLMModel
111
+
112
+ ### OpenAIServerModel
113
+
114
+ This class lets you call any model served through an OpenAI-compatible server.
115
+ Here's how you can set it up (you can customise the `api_base` URL to point to another server):
116
+ ```py
117
+ import os
118
+ from smolagents import OpenAIServerModel
119
+
120
+ model = OpenAIServerModel(
121
+ model_id="gpt-4o",
122
+ api_base="https://api.openai.com/v1",
123
+ api_key=os.environ["OPENAI_API_KEY"],
124
+ )
125
+ ```
126
+
127
+ [[autodoc]] OpenAIServerModel
128
+
129
+ ### AzureOpenAIServerModel
130
+
131
+ `AzureOpenAIServerModel` allows you to connect to any Azure OpenAI deployment.
132
+
133
+ Below you can find an example of how to set it up, note that you can omit the `azure_endpoint`, `api_key`, and `api_version` arguments, provided you've set the corresponding environment variables -- `AZURE_OPENAI_ENDPOINT`, `AZURE_OPENAI_API_KEY`, and `OPENAI_API_VERSION`.
134
+
135
+ Pay attention to the lack of an `AZURE_` prefix for `OPENAI_API_VERSION`; this is due to the way the underlying [openai](https://github.com/openai/openai-python) package is designed.
136
+
137
+ ```py
138
+ import os
139
+
140
+ from smolagents import AzureOpenAIServerModel
141
+
142
+ model = AzureOpenAIServerModel(
143
+ model_id = os.environ.get("AZURE_OPENAI_MODEL"),
144
+ azure_endpoint=os.environ.get("AZURE_OPENAI_ENDPOINT"),
145
+ api_key=os.environ.get("AZURE_OPENAI_API_KEY"),
146
+ api_version=os.environ.get("OPENAI_API_VERSION")
147
+ )
148
+ ```
149
+
150
+ [[autodoc]] AzureOpenAIServerModel
151
+
152
+ ### MLXModel
153
+ `MLXModel` creates a [mlx-lm](https://pypi.org/project/mlx-lm/) pipeline to run inference on your local machine.
154
+
155
+ ```python
156
+ from smolagents import MLXModel
157
+
158
+ model = MLXModel(model_id="HuggingFaceTB/SmolLM-135M-Instruct")
159
+
160
+ print(model([{"role": "user", "content": "Ok!"}], stop_sequences=["great"]))
161
+ ```
162
+ ```text
163
+ >>> What a
164
+ ```
165
+
166
+ > [!TIP]
167
+ > You must have `mlx-lm` installed on your machine. Please run `pip install smolagents[mlx-lm]` if it's not the case.
168
+
169
+ [[autodoc]] MLXModel
docs/source/en/reference/tools.mdx ADDED
@@ -0,0 +1,107 @@
1
+ <!--Copyright 2024 The HuggingFace Team. All rights reserved.
2
+
3
+ Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
+ the License. You may obtain a copy of the License at
5
+
6
+ http://www.apache.org/licenses/LICENSE-2.0
7
+
8
+ Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
+ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
+ specific language governing permissions and limitations under the License.
11
+
12
+ ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
13
+ rendered properly in your Markdown viewer.
14
+
15
+ -->
16
+ # Tools
17
+
18
+ <Tip warning={true}>
19
+
20
+ Smolagents is an experimental API which is subject to change at any time. Results returned by the agents
21
+ can vary as the APIs or underlying models are prone to change.
22
+
23
+ </Tip>
24
+
25
+ To learn more about agents and tools make sure to read the [introductory guide](../index). This page
26
+ contains the API docs for the underlying classes.
27
+
28
+ ## Tools
29
+
30
+ ### load_tool
31
+
32
+ [[autodoc]] load_tool
33
+
34
+ ### tool
35
+
36
+ [[autodoc]] tool
37
+
38
+ ### Tool
39
+
40
+ [[autodoc]] Tool
41
+
42
+ ### launch_gradio_demo
43
+
44
+ [[autodoc]] launch_gradio_demo
45
+
46
+ ## Default tools
47
+
48
+ ### PythonInterpreterTool
49
+
50
+ [[autodoc]] PythonInterpreterTool
51
+
52
+ ### FinalAnswerTool
53
+
54
+ [[autodoc]] FinalAnswerTool
55
+
56
+ ### UserInputTool
57
+
58
+ [[autodoc]] UserInputTool
59
+
60
+ ### DuckDuckGoSearchTool
61
+
62
+ [[autodoc]] DuckDuckGoSearchTool
63
+
64
+ ### GoogleSearchTool
65
+
66
+ [[autodoc]] GoogleSearchTool
67
+
68
+ ### VisitWebpageTool
69
+
70
+ [[autodoc]] VisitWebpageTool
71
+
72
+ ### SpeechToTextTool
73
+
74
+ [[autodoc]] SpeechToTextTool
75
+
76
+ ## ToolCollection
77
+
78
+ [[autodoc]] ToolCollection
79
+
80
+ ## Agent Types
81
+
82
+ Agents can handle any type of object in between tools; tools, being completely multimodal, can accept and return
83
+ text, image, audio, video, among other types. In order to increase compatibility between tools, as well as to
84
+ correctly render these returns in ipython (jupyter, colab, ipython notebooks, ...), we implement wrapper classes
85
+ around these types.
86
+
87
+ The wrapped objects should continue behaving as initially; a text object should still behave as a string, an image
88
+ object should still behave as a `PIL.Image`.
89
+
90
+ These types have three specific purposes:
91
+
92
+ - Calling `to_raw` on the type should return the underlying object
93
+ - Calling `to_string` on the type should return the object as a string: that can be the string in case of an `AgentText`
94
+ but will be the path of the serialized version of the object in other instances
95
+ - Displaying it in an ipython kernel should display the object correctly
96
+
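+ For instance, a quick sketch of what this means for `AgentText` (assuming the usual constructor taking the wrapped value):
+
+ ```python
+ from smolagents.agent_types import AgentText
+
+ text = AgentText("hello")
+ assert text.to_raw() == "hello"     # the underlying object
+ assert text.to_string() == "hello"  # for AgentText, simply the string itself
+ ```
+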
97
+ ### AgentText
98
+
99
+ [[autodoc]] smolagents.agent_types.AgentText
100
+
101
+ ### AgentImage
102
+
103
+ [[autodoc]] smolagents.agent_types.AgentImage
104
+
105
+ ### AgentAudio
106
+
107
+ [[autodoc]] smolagents.agent_types.AgentAudio
docs/source/en/tutorials/building_good_agents.mdx ADDED
@@ -0,0 +1,277 @@
1
+ <!--Copyright 2024 The HuggingFace Team. All rights reserved.
2
+
3
+ Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
+ the License. You may obtain a copy of the License at
5
+
6
+ http://www.apache.org/licenses/LICENSE-2.0
7
+
8
+ Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
+ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
+ specific language governing permissions and limitations under the License.
11
+
12
+ ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
13
+ rendered properly in your Markdown viewer.
14
+
15
+ -->
16
+ # Building good agents
17
+
18
+ [[open-in-colab]]
19
+
20
+ There's a world of difference between building an agent that works and one that doesn't.
21
+ How can we build agents that fall into the former category?
22
+ In this guide, we're going to talk about best practices for building agents.
23
+
24
+ > [!TIP]
25
+ > If you're new to building agents, make sure to first read the [intro to agents](../conceptual_guides/intro_agents) and the [guided tour of smolagents](../guided_tour).
26
+
27
+ ### The best agentic systems are the simplest: simplify the workflow as much as you can
28
+
29
+ Giving an LLM some agency in your workflow introduces some risk of errors.
30
+
31
+ Well-programmed agentic systems have good error logging and retry mechanisms anyway, so the LLM engine has a chance to self-correct its mistakes. But to minimize the risk of LLM error, you should simplify your workflow!
32
+
33
+ Let's revisit the example from the [intro to agents](../conceptual_guides/intro_agents): a bot that answers user queries for a surf trip company.
34
+ Instead of letting the agent make 2 different calls for "travel distance API" and "weather API" each time it is asked about a new surf spot, you could just make one unified tool, "return_spot_information", a function that calls both APIs at once and returns their concatenated outputs to the user.
35
+
36
+ This will reduce costs, latency, and error risk!
37
+
38
+ The main guideline is: Reduce the number of LLM calls as much as you can.
39
+
40
+ This leads to a few takeaways:
41
+ - Whenever possible, group two tools into one, like in our example of the two APIs (see the sketch below).
42
+ - Whenever possible, logic should be based on deterministic functions rather than agentic decisions.
43
+
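+ Here is a minimal sketch of such a unified tool (both API helpers are hypothetical placeholders standing in for your real API calls):
+
+ ```python
+ from smolagents import tool
+
+ @tool
+ def return_spot_information(spot_name: str) -> str:
+     """
+     Returns the travel distance and weather report for a surf spot, combined in one call.
+
+     Args:
+         spot_name: the name of the surf spot, like "Anchor Point, Taghazout, Morocco".
+     """
+     distance = get_travel_distance(spot_name)  # hypothetical travel distance API wrapper
+     weather = get_weather_report(spot_name)  # hypothetical weather API wrapper
+     return f"Travel distance: {distance}. Weather: {weather}."
+ ```
+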
44
+ ### Improve the information flow to the LLM engine
45
+
46
+ Remember that your LLM engine is like an *intelligent* robot, trapped in a room, with its only communication with the outside world being notes passed under the door.
47
+
48
+ It won't know of anything that happened if you don't explicitly put that into its prompt.
49
+
50
+ So first start with making your task very clear!
51
+ Since an agent is powered by an LLM, minor variations in your task formulation might yield completely different results.
52
+
53
+ Then, improve the information flow towards your agent in tool use.
54
+
55
+ Particular guidelines to follow:
56
+ - Each tool should log (by simply using `print` statements inside the tool's `forward` method) everything that could be useful for the LLM engine.
57
+ - In particular, logging details on tool execution errors would help a lot!
58
+
59
+ For instance, here's a tool that retrieves weather data based on location and date-time:
60
+
61
+ First, here's a poor version:
62
+ ```python
63
+ import datetime
64
+ from smolagents import tool
65
+
66
+ def get_weather_report_at_coordinates(coordinates, date_time):
67
+ # Dummy function, returns a list of [temperature in °C, risk of rain on a scale 0-1, wave height in m]
68
+ return [28.0, 0.35, 0.85]
69
+
70
+ def convert_location_to_coordinates(location):
71
+ # Returns dummy coordinates
72
+ return [3.3, -42.0]
73
+
74
+ @tool
75
+ def get_weather_api(location: str, date_time: str) -> str:
76
+ """
77
+ Returns the weather report.
78
+
79
+ Args:
80
+ location: the name of the place that you want the weather for.
81
+ date_time: the date and time for which you want the report.
82
+ """
83
+ lon, lat = convert_location_to_coordinates(location)
84
+ date_time = datetime.strptime(date_time)
85
+ return str(get_weather_report_at_coordinates((lon, lat), date_time))
86
+ ```
87
+
88
+ Why is it bad?
89
+ - the format that should be used for `date_time` is not specified
90
+ - there's no detail on how `location` should be specified
91
+ - there's no logging mechanism to surface explicit failure cases, like `location` not being in a proper format or `date_time` not being properly formatted
92
+ - the output format is hard to understand
93
+
94
+ If the tool call fails, the error trace logged in memory can help the LLM reverse engineer the tool to fix the errors. But why leave it with so much heavy lifting to do?
95
+
96
+ A better way to build this tool would have been the following:
97
+ ```python
98
+ @tool
99
+ def get_weather_api(location: str, date_time: str) -> str:
100
+ """
101
+ Returns the weather report.
102
+
103
+ Args:
104
+ location: the name of the place that you want the weather for. Should be a place name, followed by possibly a city name, then a country, like "Anchor Point, Taghazout, Morocco".
105
+ date_time: the date and time for which you want the report, formatted as '%m/%d/%y %H:%M:%S'.
106
+ """
107
+ lon, lat = convert_location_to_coordinates(location)
108
+ try:
109
+ date_time = datetime.strptime(date_time, '%m/%d/%y %H:%M:%S')
110
+ except Exception as e:
111
+ raise ValueError("Conversion of `date_time` to datetime format failed, make sure to provide a string in format '%m/%d/%y %H:%M:%S'. Full trace:" + str(e))
112
+ temperature_celsius, risk_of_rain, wave_height = get_weather_report_at_coordinates((lon, lat), date_time)
113
+ return f"Weather report for {location}, {date_time}: Temperature will be {temperature_celsius}°C, risk of rain is {risk_of_rain*100:.0f}%, wave height is {wave_height}m."
114
+ ```
115
+
116
+ In general, to ease the load on your LLM, the good question to ask yourself is: "How easy would it be for me, if I were dumb and using this tool for the first time ever, to program with this tool and correct my own errors?"
117
+
118
+ ### Give more arguments to the agent
119
+
120
+ To pass some additional objects to your agent beyond the simple string describing the task, you can use the `additional_args` argument to pass any type of object:
121
+
122
+ ```py
123
+ from smolagents import CodeAgent, HfApiModel
124
+
125
+ model_id = "meta-llama/Llama-3.3-70B-Instruct"
126
+
127
+ agent = CodeAgent(tools=[], model=HfApiModel(model_id=model_id), add_base_tools=True)
128
+
129
+ agent.run(
130
+ "Why does Mike not know many people in New York?",
131
+ additional_args={"mp3_sound_file_url":'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/recording.mp3'}
132
+ )
133
+ ```
134
+ For instance, you can use this `additional_args` argument to pass images or strings that you want your agent to leverage.
135
+
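+ As a quick sketch (the file path and the "image" key here are arbitrary examples, assuming Pillow is installed), you could pass a PIL image like this:
+
+ ```py
+ from PIL import Image
+
+ image = Image.open("surf_spot.png")  # hypothetical local file
+ agent.run(
+     "Describe the surfing conditions visible in this image.",
+     additional_args={"image": image},
+ )
+ ```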
136
+
137
+
138
+ ## How to debug your agent
139
+
140
+ ### 1. Use a stronger LLM
141
+
142
+ In agentic workflows, some errors are actual errors, while others are the fault of your LLM engine not reasoning properly.
143
+ For instance, consider this trace for a `CodeAgent` that I asked to create a car picture:
144
+ ```
145
+ ==================================================================================================== New task ====================================================================================================
146
+ Make me a cool car picture
147
+ ──────────────────────────────────────────────────────────────────────────────────────────────────── New step ────────────────────────────────────────────────────────────────────────────────────────────────────
148
+ Agent is executing the code below: ───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
149
+ image_generator(prompt="A cool, futuristic sports car with LED headlights, aerodynamic design, and vibrant color, high-res, photorealistic")
150
+ ──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
151
+
152
+ Last output from code snippet: ───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
153
+ /var/folders/6m/9b1tts6d5w960j80wbw9tx3m0000gn/T/tmpx09qfsdd/652f0007-3ee9-44e2-94ac-90dae6bb89a4.png
154
+ Step 1:
155
+
156
+ - Time taken: 16.35 seconds
157
+ - Input tokens: 1,383
158
+ - Output tokens: 77
159
+ ──────────────────────────────────────────────────────────────────────────────────────────────────── New step ────────────────────────────────────────────────────────────────────────────────────────────────────
160
+ Agent is executing the code below: ───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
161
+ final_answer("/var/folders/6m/9b1tts6d5w960j80wbw9tx3m0000gn/T/tmpx09qfsdd/652f0007-3ee9-44e2-94ac-90dae6bb89a4.png")
162
+ ──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
163
+ Print outputs:
164
+
165
+ Last output from code snippet: ───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
166
+ /var/folders/6m/9b1tts6d5w960j80wbw9tx3m0000gn/T/tmpx09qfsdd/652f0007-3ee9-44e2-94ac-90dae6bb89a4.png
167
+ Final answer:
168
+ /var/folders/6m/9b1tts6d5w960j80wbw9tx3m0000gn/T/tmpx09qfsdd/652f0007-3ee9-44e2-94ac-90dae6bb89a4.png
169
+ ```
170
+ Instead of an image, the user sees a path being returned to them.
171
+ It could look like a bug from the system, but actually the agentic system didn't cause the error: the LLM brain simply made the mistake of not saving the image output into a variable.
172
+ Thus it cannot access the image again except through the path that was logged when the image was saved, so it returns the path instead of an image.
173
+
174
+ The first step to debugging your agent is thus "Use a more powerful LLM". Alternatives like `Qwen2.5-72B-Instruct` wouldn't have made that mistake.
175
+
176
+ ### 2. Provide more guidance / more information
177
+
178
+ You can also use less powerful models, provided you guide them more effectively.
179
+
180
+ Put yourself in the shoes of your model: if you were the model solving the task, would you struggle with the information available to you (from the system prompt + task formulation + tool descriptions)?
181
+
182
+ Would you need some added clarifications?
183
+
184
+ To provide extra information, we do not recommend changing the system prompt right away: the default system prompt has many adjustments that you do not want to break unless you understand the prompt very well.
185
+ Better ways to guide your LLM engine are:
186
+ - If it's about the task to solve: add all these details to the task. The task can be hundreds of pages long.
187
+ - If it's about how to use tools: update the `description` attribute of your tools, as sketched below.
188
+
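+ For instance, here is a minimal sketch of the second option (the guidance text is an arbitrary example): enrich a tool's `description` before building the agent, so that the richer description ends up in the system prompt:
+
+ ```py
+ from smolagents import CodeAgent, DuckDuckGoSearchTool, HfApiModel
+
+ search_tool = DuckDuckGoSearchTool()
+ # Append usage guidance to the tool's description before the agent reads it
+ search_tool.description += (
+     "\nPrefer precise queries: include entity names and dates, "
+     "and keep queries under ten words."
+ )
+
+ agent = CodeAgent(tools=[search_tool], model=HfApiModel())
+ ```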
189
+
190
+ ### 3. Change the system prompt (generally not advised)
191
+
192
+ If above clarifications are not sufficient, you can change the system prompt.
193
+
194
+ Let's see how it works. For example, let us check the default system prompt for the [`CodeAgent`] (the version below is shortened by skipping the few-shot examples).
195
+
196
+ ```python
197
+ print(agent.prompt_templates["system_prompt"])
198
+ ```
199
+ Here is what you get:
200
+ ```text
201
+ You are an expert assistant who can solve any task using code blobs. You will be given a task to solve as best you can.
202
+ To do so, you have been given access to a list of tools: these tools are basically Python functions which you can call with code.
203
+ To solve the task, you must plan forward to proceed in a series of steps, in a cycle of 'Thought:', 'Code:', and 'Observation:' sequences.
204
+
205
+ At each step, in the 'Thought:' sequence, you should first explain your reasoning towards solving the task and the tools that you want to use.
206
+ Then in the 'Code:' sequence, you should write the code in simple Python. The code sequence must end with '<end_code>' sequence.
207
+ During each intermediate step, you can use 'print()' to save whatever important information you will then need.
208
+ These print outputs will then appear in the 'Observation:' field, which will be available as input for the next step.
209
+ In the end you have to return a final answer using the `final_answer` tool.
210
+
211
+ Here are a few examples using notional tools:
212
+ ---
213
+ {examples}
214
+
215
+ Above example were using notional tools that might not exist for you. On top of performing computations in the Python code snippets that you create, you only have access to these tools:
216
+
217
+ {{tool_descriptions}}
218
+
219
+ {{managed_agents_descriptions}}
220
+
221
+ Here are the rules you should always follow to solve your task:
222
+ 1. Always provide a 'Thought:' sequence, and a 'Code:\n```py' sequence ending with '```<end_code>' sequence, else you will fail.
223
+ 2. Use only variables that you have defined!
224
+ 3. Always use the right arguments for the tools. DO NOT pass the arguments as a dict as in 'answer = wiki({'query': "What is the place where James Bond lives?"})', but use the arguments directly as in 'answer = wiki(query="What is the place where James Bond lives?")'.
225
+ 4. Take care to not chain too many sequential tool calls in the same code block, especially when the output format is unpredictable. For instance, a call to search has an unpredictable return format, so do not have another tool call that depends on its output in the same block: rather output results with print() to use them in the next block.
226
+ 5. Call a tool only when needed, and never re-do a tool call that you previously did with the exact same parameters.
227
+ 6. Don't name any new variable with the same name as a tool: for instance don't name a variable 'final_answer'.
228
+ 7. Never create any notional variables in our code, as having these in your logs might derail you from the true variables.
229
+ 8. You can use imports in your code, but only from the following list of modules: {{authorized_imports}}
230
+ 9. The state persists between code executions: so if in one step you've created variables or imported modules, these will all persist.
231
+ 10. Don't give up! You're in charge of solving the task, not providing directions to solve it.
232
+
233
+ Now Begin! If you solve the task correctly, you will receive a reward of $1,000,000.
234
+ ```
235
+
236
+ As you can see, there are placeholders like `"{{tool_descriptions}}"`: these will be used upon agent initialization to insert certain automatically generated descriptions of tools or managed agents.
237
+
238
+ So while you can overwrite this system prompt template by passing your custom prompt as an argument to the `system_prompt` parameter, your new system prompt must contain the following placeholders:
239
+ - `"{{tool_descriptions}}"` to insert tool descriptions.
240
+ - `"{{managed_agents_descriptions}}"` to insert the descriptions for managed agents if there are any.
241
+ - For `CodeAgent` only: `"{{authorized_imports}}"` to insert the list of authorized imports.
242
+
243
+ Then you can change the system prompt as follows:
244
+
245
+ ```py
246
+ agent.prompt_templates["system_prompt"] = agent.prompt_templates["system_prompt"] + "\nHere you go!"
247
+ ```
248
+
249
+ This also works with the [`ToolCallingAgent`].
250
+
251
+
252
+ ### 4. Extra planning
253
+
254
+ We provide a model for a supplementary planning step, which an agent can run regularly in between normal action steps. In this step, there is no tool call: the LLM is simply asked to update a list of facts it knows and to reflect on what steps it should take next based on those facts.
255
+
256
+ ```py
257
+ from smolagents import load_tool, CodeAgent, HfApiModel, DuckDuckGoSearchTool
258
+ from dotenv import load_dotenv
259
+
260
+ load_dotenv()
261
+
262
+ # Import tool from Hub
263
+ image_generation_tool = load_tool("m-ric/text-to-image", trust_remote_code=True)
264
+
265
+ search_tool = DuckDuckGoSearchTool()
266
+
267
+ agent = CodeAgent(
268
+ tools=[search_tool, image_generation_tool],
269
+ model=HfApiModel("Qwen/Qwen2.5-72B-Instruct"),
270
+ planning_interval=3 # This is where you activate planning!
271
+ )
272
+
273
+ # Run it!
274
+ result = agent.run(
275
+ "How long would a cheetah at full speed take to run the length of Pont Alexandre III?",
276
+ )
277
+ ```
docs/source/en/tutorials/inspect_runs.mdx ADDED
@@ -0,0 +1,193 @@
1
+ <!--Copyright 2024 The HuggingFace Team. All rights reserved.
2
+
3
+ Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
+ the License. You may obtain a copy of the License at
5
+
6
+ http://www.apache.org/licenses/LICENSE-2.0
7
+
8
+ Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
+ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
+ specific language governing permissions and limitations under the License.
11
+
12
+ ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
13
+ rendered properly in your Markdown viewer.
14
+
15
+ -->
16
+ # Inspecting runs with OpenTelemetry
17
+
18
+ [[open-in-colab]]
19
+
20
+ > [!TIP]
21
+ > If you're new to building agents, make sure to first read the [intro to agents](../conceptual_guides/intro_agents) and the [guided tour of smolagents](../guided_tour).
22
+
23
+ ## Why log your agent runs?
24
+
25
+ Agent runs are complicated to debug.
26
+
27
+ Validating that a run went properly is hard, since agent workflows are [unpredictable by design](../conceptual_guides/intro_agents) (if they were predictable, you'd just be using good old code).
28
+
29
+ And inspecting a run is hard as well: multi-step agents tend to quickly fill a console with logs, and most of the errors are just "LLM dumb" kind of errors, from which the LLM auto-corrects in the next step by writing better code or tool calls.
30
+
31
+ So using instrumentation to record agent runs is necessary in production for later inspection and monitoring!
32
+
33
+ We've adopted the [OpenTelemetry](https://opentelemetry.io/) standard for instrumenting agent runs.
34
+
35
+ This means that you can just run some instrumentation code, then run your agents normally, and everything gets logged into your platform. Below are some examples of how to do this with different OpenTelemetry backends.
36
+
37
+ Here's how it then looks on the platform:
38
+
39
+ <div class="flex justify-center">
40
+ <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/smolagents/inspect_run_phoenix.gif"/>
41
+ </div>
42
+
43
+
44
+ ## Setting up telemetry with Arize AI Phoenix
45
+ First install the required packages. Here we install [Phoenix by Arize AI](https://github.com/Arize-ai/phoenix) because it's a good solution for collecting and inspecting logs, but there are other OpenTelemetry-compatible platforms that you could use for this collection & inspection part.
46
+
47
+ ```shell
48
+ pip install 'smolagents[telemetry]'
49
+ ```
50
+
51
+ Then run the collector in the background.
52
+
53
+ ```shell
54
+ python -m phoenix.server.main serve
55
+ ```
56
+
57
+ Finally, set up `SmolagentsInstrumentor` to trace your agents and send the traces to Phoenix's default endpoint.
58
+
59
+ ```python
60
+ from phoenix.otel import register
61
+ from openinference.instrumentation.smolagents import SmolagentsInstrumentor
62
+
63
+ register()
64
+ SmolagentsInstrumentor().instrument()
65
+ ```
66
+ Then you can run your agents!
67
+
68
+ ```py
69
+ from smolagents import (
70
+ CodeAgent,
71
+ ToolCallingAgent,
72
+ DuckDuckGoSearchTool,
73
+ VisitWebpageTool,
74
+ HfApiModel,
75
+ )
76
+
77
+ model = HfApiModel()
78
+
79
+ search_agent = ToolCallingAgent(
80
+ tools=[DuckDuckGoSearchTool(), VisitWebpageTool()],
81
+ model=model,
82
+ name="search_agent",
83
+ description="This is an agent that can do web search.",
84
+ )
85
+
86
+ manager_agent = CodeAgent(
87
+ tools=[],
88
+ model=model,
89
+ managed_agents=[search_agent],
90
+ )
91
+ manager_agent.run(
92
+ "If the US keeps its 2024 growth rate, how many years will it take for the GDP to double?"
93
+ )
94
+ ```
95
+ Voilà!
96
+ You can then navigate to `http://0.0.0.0:6006/projects/` to inspect your run!
97
+
98
+ <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/smolagents/inspect_run_phoenix.png">
99
+
100
+ You can see that the CodeAgent called its managed ToolCallingAgent (by the way, the managed agent could have been a CodeAgent as well) to ask it to run the web search for the U.S. 2024 growth rate. Then the managed agent returned its report and the manager agent acted upon it to calculate the economy doubling time! Sweet, isn't it?
101
+
102
+ ## Setting up telemetry with Langfuse
103
+
104
+ This part shows how to monitor and debug your Hugging Face **smolagents** with **Langfuse** using the `SmolagentsInstrumentor`.
105
+
106
+ > **What is Langfuse?** [Langfuse](https://langfuse.com) is an open-source platform for LLM engineering. It provides tracing and monitoring capabilities for AI agents, helping developers debug, analyze, and optimize their products. Langfuse integrates with various tools and frameworks via native integrations, OpenTelemetry, and SDKs.
107
+
108
+ ### Step 1: Install Dependencies
109
+
110
+ ```python
111
+ %pip install smolagents
112
+ %pip install opentelemetry-sdk opentelemetry-exporter-otlp openinference-instrumentation-smolagents
113
+ ```
114
+
115
+ ### Step 2: Set Up Environment Variables
116
+
117
+ Set your Langfuse API keys and configure the OpenTelemetry endpoint to send traces to Langfuse. Get your Langfuse API keys by signing up for [Langfuse Cloud](https://cloud.langfuse.com) or [self-hosting Langfuse](https://langfuse.com/self-hosting).
118
+
119
+ Also, add your [Hugging Face token](https://huggingface.co/settings/tokens) (`HF_TOKEN`) as an environment variable.
120
+
121
+ ```python
122
+ import os
123
+ import base64
124
+
125
+ LANGFUSE_PUBLIC_KEY="pk-lf-..."
126
+ LANGFUSE_SECRET_KEY="sk-lf-..."
127
+ LANGFUSE_AUTH=base64.b64encode(f"{LANGFUSE_PUBLIC_KEY}:{LANGFUSE_SECRET_KEY}".encode()).decode()
128
+
129
+ os.environ["OTEL_EXPORTER_OTLP_ENDPOINT"] = "https://cloud.langfuse.com/api/public/otel" # EU data region
130
+ # os.environ["OTEL_EXPORTER_OTLP_ENDPOINT"] = "https://us.cloud.langfuse.com/api/public/otel" # US data region
131
+ os.environ["OTEL_EXPORTER_OTLP_HEADERS"] = f"Authorization=Basic {LANGFUSE_AUTH}"
132
+
133
+ # your Hugging Face token
134
+ os.environ["HF_TOKEN"] = "hf_..."
135
+ ```
136
+
137
+ ### Step 3: Initialize the `SmolagentsInstrumentor`
138
+
139
+ Initialize the `SmolagentsInstrumentor` before your application code. Configure `tracer_provider` and add a span processor to export traces to Langfuse. `OTLPSpanExporter()` uses the endpoint and headers from the environment variables.
140
+
141
+
142
+ ```python
143
+ from opentelemetry.sdk.trace import TracerProvider
144
+
145
+ from openinference.instrumentation.smolagents import SmolagentsInstrumentor
146
+ from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
147
+ from opentelemetry.sdk.trace.export import SimpleSpanProcessor
148
+
149
+ trace_provider = TracerProvider()
150
+ trace_provider.add_span_processor(SimpleSpanProcessor(OTLPSpanExporter()))
151
+
152
+ SmolagentsInstrumentor().instrument(tracer_provider=trace_provider)
153
+ ```
154
+
155
+ ### Step 4: Run your smolagent
156
+
157
+ ```python
158
+ from smolagents import (
159
+ CodeAgent,
160
+ ToolCallingAgent,
161
+ DuckDuckGoSearchTool,
162
+ VisitWebpageTool,
163
+ HfApiModel,
164
+ )
165
+
166
+ model = HfApiModel(
167
+ model_id="deepseek-ai/DeepSeek-R1-Distill-Qwen-32B"
168
+ )
169
+
170
+ search_agent = ToolCallingAgent(
171
+ tools=[DuckDuckGoSearchTool(), VisitWebpageTool()],
172
+ model=model,
173
+ name="search_agent",
174
+ description="This is an agent that can do web search.",
175
+ )
176
+
177
+ manager_agent = CodeAgent(
178
+ tools=[],
179
+ model=model,
180
+ managed_agents=[search_agent],
181
+ )
182
+ manager_agent.run(
183
+ "How can Langfuse be used to monitor and improve the reasoning and decision-making of smolagents when they execute multi-step tasks, like dynamically adjusting a recipe based on user feedback or available ingredients?"
184
+ )
185
+ ```
186
+
187
+ ### Step 5: View Traces in Langfuse
188
+
189
+ After running the agent, you can view the traces generated by your smolagents application in [Langfuse](https://cloud.langfuse.com). You should see detailed steps of the LLM interactions, which can help you debug and optimize your AI agent.
190
+
191
+ ![smolagents example trace](https://langfuse.com/images/cookbook/integration-smolagents/smolagent_example_trace.png)
192
+
193
+ _[Public example trace in Langfuse](https://cloud.langfuse.com/project/cloramnkj0002jz088vzn1ja4/traces/ce5160f9bfd5a6cd63b07d2bfcec6f54?timestamp=2025-02-11T09%3A25%3A45.163Z&display=details)_
docs/source/en/tutorials/memory.mdx ADDED
@@ -0,0 +1,148 @@
1
+ <!--Copyright 2024 The HuggingFace Team. All rights reserved.
2
+
3
+ Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
+ the License. You may obtain a copy of the License at
5
+
6
+ http://www.apache.org/licenses/LICENSE-2.0
7
+
8
+ Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
+ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
+ specific language governing permissions and limitations under the License.
11
+
12
+ ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
13
+ rendered properly in your Markdown viewer.
14
+
15
+ -->
16
+ # 📚 Manage your agent's memory
17
+
18
+ [[open-in-colab]]
19
+
20
+ In the end, an agent can be defined by simple components: it has tools and prompts.
21
+ Most importantly, it has a memory of past steps, recording a history of planning, execution, and errors.
22
+
23
+ ### Replay your agent's memory
24
+
25
+ We provide several features to inspect a past agent run.
26
+
27
+ You can instrument the agent's run to display it in a great UI that lets you zoom in/out on specific steps, as highlighted in the [instrumentation guide](./inspect_runs).
28
+
29
+ You can also use `agent.replay()`, as follows:
30
+
31
+ After the agent has run:
32
+ ```py
33
+ from smolagents import HfApiModel, CodeAgent
34
+
35
+ agent = CodeAgent(tools=[], model=HfApiModel(), verbosity_level=0)
36
+
37
+ result = agent.run("What's the 20th Fibonacci number?")
38
+ ```
39
+
40
+ If you want to replay this last run, just use:
41
+ ```py
42
+ agent.replay()
43
+ ```
44
+
45
+ ### Dynamically change the agent's memory
46
+
47
+ Many advanced use cases require dynamic modification of the agent's memory.
48
+
49
+ You can access the agent's memory using:
50
+
51
+ ```py
52
+ from smolagents import ActionStep
53
+
54
+ system_prompt_step = agent.memory.system_prompt
55
+ print("The system prompt given to the agent was:")
56
+ print(system_prompt_step.system_prompt)
57
+
58
+ task_step = agent.memory.steps[0]
59
+ print("\n\nThe first task step was:")
60
+ print(task_step.task)
61
+
62
+ for step in agent.memory.steps:
63
+ if isinstance(step, ActionStep):
64
+ if step.error is not None:
65
+ print(f"\nStep {step.step_number} got this error:\n{step.error}\n")
66
+ else:
67
+ print(f"\nStep {step.step_number} got these observations:\n{step.observations}\n")
68
+ ```
69
+
70
+ Use `agent.memory.get_full_steps()` to get full steps as dictionaries.
71
+
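+ For instance, a quick sketch of dumping these step dictionaries for offline analysis (the `default=str` fallback is just a convenience for values that are not JSON-serializable):
+
+ ```py
+ import json
+
+ full_steps = agent.memory.get_full_steps()  # a list of dicts, one per step
+ print(json.dumps(full_steps[0], indent=2, default=str))
+ ```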
72
+ You can also use step callbacks to dynamically change the agent's memory.
73
+
74
+ Step callbacks can access the `agent` itself in their arguments, so they can access any memory step as highlighted above, and change it if needed. For instance, let's say you are observing screenshots of each step performed by a web browser agent. You want to log the newest screenshot, and remove the images from older steps to save on token costs.
75
+
76
+ You could run something like the following.
77
+ _Note: this code is incomplete, some imports and object definitions have been removed for the sake of concision, visit [the original script](https://github.com/huggingface/smolagents/blob/main/src/smolagents/vision_web_browser.py) to get the full working code._
78
+
79
+ ```py
80
+ import helium
81
+ from PIL import Image
82
+ from io import BytesIO
83
+ from time import sleep
84
+
85
+ def update_screenshot(memory_step: ActionStep, agent: CodeAgent) -> None:
86
+ sleep(1.0) # Let JavaScript animations happen before taking the screenshot
87
+ driver = helium.get_driver()
88
+ latest_step = memory_step.step_number
89
+ for previous_memory_step in agent.memory.steps: # Remove previous screenshots from logs for lean processing
90
+ if isinstance(previous_memory_step, ActionStep) and previous_memory_step.step_number <= latest_step - 2:
91
+ previous_memory_step.observations_images = None
92
+ png_bytes = driver.get_screenshot_as_png()
93
+ image = Image.open(BytesIO(png_bytes))
94
+ memory_step.observations_images = [image.copy()]
95
+ ```
96
+
97
+ Then you should pass this function in the `step_callbacks` argument upon initialization of your agent:
98
+
99
+ ```py
100
+ CodeAgent(
101
+ tools=[DuckDuckGoSearchTool(), go_back, close_popups, search_item_ctrl_f],
102
+ model=model,
103
+ additional_authorized_imports=["helium"],
104
+ step_callbacks=[update_screenshot],
105
+ max_steps=20,
106
+ verbosity_level=2,
107
+ )
108
+ ```
109
+
110
+ Head to our [vision web browser code](https://github.com/huggingface/smolagents/blob/main/src/smolagents/vision_web_browser.py) to see the full working example.
111
+
112
+ ### Run agents one step at a time
113
+
114
+ This can be useful in case you have tool calls that take days: you can just run your agents step by step.
115
+ This will also let you update the memory on each step.
116
+
117
+ ```py
118
+ from smolagents import HfApiModel, CodeAgent, ActionStep, TaskStep
119
+
120
+ agent = CodeAgent(tools=[], model=HfApiModel(), verbosity_level=1)
121
+ print(agent.memory.system_prompt)
122
+
123
+ task = "What is the 20th Fibonacci number?"
124
+
125
+ # You could modify the memory as needed here by inputting the memory of another agent.
126
+ # agent.memory.steps = previous_agent.memory.steps
127
+
128
+ # Let's start a new task!
129
+ agent.memory.steps.append(TaskStep(task=task, task_images=[]))
130
+
131
+ final_answer = None
132
+ step_number = 1
133
+ while final_answer is None and step_number <= 10:
134
+ memory_step = ActionStep(
135
+ step_number=step_number,
136
+ observations_images=[],
137
+ )
138
+ # Run one step.
139
+ final_answer = agent.step(memory_step)
140
+ agent.memory.steps.append(memory_step)
141
+ step_number += 1
142
+
143
+ # Change the memory as you please!
144
+ # For instance to update the latest step:
145
+ # agent.memory.steps[-1] = ...
146
+
147
+ print("The final answer is:", final_answer)
148
+ ```
docs/source/en/tutorials/secure_code_execution.mdx ADDED
@@ -0,0 +1,317 @@
1
+ <!--Copyright 2024 The HuggingFace Team. All rights reserved.
2
+
3
+ Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
+ the License. You may obtain a copy of the License at
5
+
6
+ http://www.apache.org/licenses/LICENSE-2.0
7
+
8
+ Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
+ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
+ specific language governing permissions and limitations under the License.
11
+
12
+ ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
13
+ rendered properly in your Markdown viewer.
14
+
15
+ -->
16
+ # Secure code execution
17
+
18
+ [[open-in-colab]]
19
+
20
+ > [!TIP]
21
+ > If you're new to building agents, make sure to first read the [intro to agents](../conceptual_guides/intro_agents) and the [guided tour of smolagents](../guided_tour).
22
+
23
+ ### Code agents
24
+
25
+ [Multiple](https://huggingface.co/papers/2402.01030) [research](https://huggingface.co/papers/2411.01747) [papers](https://huggingface.co/papers/2401.00812) have shown that having the LLM write its actions (the tool calls) in code is much better than the current standard format for tool calling, which across the industry amounts to different shades of "writing actions as a JSON of tool names and arguments to use".
26
+
27
+ Why is code better? Well, because we crafted our programming languages specifically to be great at expressing actions performed by a computer. If JSON snippets were a better way, this package would have been written in JSON snippets and the devil would be laughing at us.
28
+
29
+ Code is just a better way to express actions on a computer. It has better:
30
+ - **Composability:** could you nest JSON actions within each other, or define a set of JSON actions to re-use later, the same way you can just define a Python function?
31
+ - **Object management:** how do you store the output of an action like `generate_image` in JSON?
32
+ - **Generality:** code is built to express simply anything you can have a computer do.
33
+ - **Representation in LLM training corpus:** why not leverage this benediction of the sky that plenty of quality actions have already been included in LLM training corpus?
34
+
35
+ This is illustrated on the figure below, taken from [Executable Code Actions Elicit Better LLM Agents](https://huggingface.co/papers/2402.01030).
36
+
37
+ <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/code_vs_json_actions.png">
38
+
39
+ This is why we put emphasis on code agents, in this case Python agents, which meant putting more effort into building secure Python interpreters.
40
+
41
+ ### Local code execution??
42
+
43
+ By default, the `CodeAgent` runs LLM-generated code in your environment.
44
+
45
+ This is inherently risky: LLM-generated code could be harmful to your environment.
46
+ One could argue that on the [spectrum of agency](../conceptual_guides/intro_agents), code agents give much higher agency to the LLM on your system than other less agentic setups: this goes hand-in-hand with higher risk.
47
+
48
+ So you need to be mindful of security.
49
+
50
+ To add a first layer of security, code execution in `smolagents` is not performed by the vanilla Python interpreter.
51
+ We have re-built a more secure `LocalPythonExecutor` from the ground up.
52
+
53
+ To be precise, this interpreter works by parsing the Abstract Syntax Tree (AST) of your code and executing it operation by operation, always following certain rules:
54
+ - By default, imports are disallowed unless they have been explicitly added to an authorization list by the user (see the sketch below for how to supply this list).
55
+ - Even so, because some innocuous packages like `re` can give access to potentially harmful packages as in `re.subprocess`, subpackages that match a list of dangerous patterns are not imported.
56
+ - The total count of elementary operations processed is capped to prevent infinite loops and resource bloating.
57
+ - Any operation that has not been explicitly defined in our custom interpreter will raise an error.
58
+
59
+ As a result, this interpreter is safer. We have used it on a diversity of use cases, without ever observing any damage to the environment.
60
+
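+ For instance, here is a minimal sketch of supplying the import authorization list mentioned above, using the `additional_authorized_imports` parameter of `CodeAgent`:
+
+ ```py
+ from smolagents import CodeAgent, HfApiModel
+
+ # Whitelist extra modules on top of the default safe list
+ agent = CodeAgent(
+     tools=[],
+     model=HfApiModel(),
+     additional_authorized_imports=["numpy", "pandas"],
+ )
+ agent.run("Compute the mean of the first 100 square numbers.")
+ ```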
61
+ However, this solution is certainly not watertight, as no local Python sandbox can really be: one could imagine occasions where LLMs fine-tuned for malicious actions could still hurt your environment.
62
+
63
+ For instance, if you have allowed an innocuous package like `Pillow` to process images, the LLM could generate thousands of image saves to bloat your hard drive.
64
+ Other examples of attacks can be found [here](https://gynvael.coldwind.pl/n/python_sandbox_escape).
65
+
66
+ Running such targeted malicious code snippets requires a supply chain attack, meaning the LLM you use has been poisoned.
67
+
68
+ The likelihood of this happening is low when using well-known LLMs from trusted inference providers, but it is still non-zero.
69
+
70
+ > [!WARNING]
71
+ > The only way to run LLM-generated code securely is to isolate the execution from your local environment.
72
+
73
+ So if you want to exercise caution, you should use a remote execution sandbox.
74
+
75
+ Here are examples of how to do it.
76
+
77
+ ## Sandbox setup for secure code execution
78
+
79
+ When working with AI agents that execute code, security is paramount. This guide describes how to set up and use secure sandboxes for your agent applications using either E2B cloud sandboxes or local Docker containers.
80
+
81
+ ### E2B setup
82
+
83
+ #### Installation
84
+
85
+ 1. Create an E2B account at [e2b.dev](https://e2b.dev)
86
+ 2. Install the required packages:
87
+ ```bash
88
+ pip install 'smolagents[e2b]'
89
+ ```
90
+
91
+ #### Running your agent in E2B: mono agents
92
+
93
+ We provide a simple way to use an E2B Sandbox: simply add `executor_type="e2b"` to the agent initialization, like:
94
+ ```py
95
+ from smolagents import HfApiModel, CodeAgent
96
+
97
+ agent = CodeAgent(model=HfApiModel(), tools=[], executor_type="e2b")
98
+
99
+ agent.run("Can you give me the 100th Fibonacci number?")
100
+ ```
101
+
102
+ However, this does not work (yet) with more complicated multi-agent setups.
103
+
104
+ #### Running your agent in E2B: multi-agents
105
+
106
+ To use multi-agents in an E2B sandbox, you need to run your agents completely from within E2B.
107
+
108
+ Here is how to do it:
109
+
110
+ ```python
111
+ from e2b_code_interpreter import Sandbox
112
+ import os
113
+
114
+ # Create the sandbox
115
+ sandbox = Sandbox()
116
+
117
+ # Install required packages
118
+ sandbox.commands.run("pip install smolagents")
119
+
120
+ def run_code_raise_errors(sandbox, code: str, verbose: bool = False) -> str:
121
+ execution = sandbox.run_code(
122
+ code,
123
+ envs={'HF_TOKEN': os.getenv('HF_TOKEN')}
124
+ )
125
+ if execution.error:
126
+ execution_logs = "\n".join([str(log) for log in execution.logs.stdout])
127
+ logs = execution_logs
128
+ logs += execution.error.traceback
129
+ raise ValueError(logs)
130
+ return "\n".join([str(log) for log in execution.logs.stdout])
131
+
132
+ # Define your agent application
133
+ agent_code = """
134
+ import os
135
+ from smolagents import CodeAgent, HfApiModel
136
+
137
+ # Initialize the agents
138
+ agent = CodeAgent(
139
+ model=HfApiModel(token=os.getenv("HF_TOKEN"), provider="together"),
140
+ tools=[],
141
+ name="coder_agent",
142
+ description="This agent takes care of your difficult algorithmic problems using code."
143
+ )
144
+
145
+ manager_agent = CodeAgent(
146
+ model=HfApiModel(token=os.getenv("HF_TOKEN"), provider="together"),
147
+ tools=[],
148
+ managed_agents=[agent],
149
+ )
150
+
151
+ # Run the agent
152
+ response = manager_agent.run("What's the 20th Fibonacci number?")
153
+ print(response)
154
+ """
155
+
156
+ # Run the agent code in the sandbox
157
+ execution_logs = run_code_raise_errors(sandbox, agent_code)
158
+ print(execution_logs)
159
+ ```
160
+
161
+ ### Docker setup
162
+
163
+ #### Installation
164
+
165
+ 1. [Install Docker on your system](https://docs.docker.com/get-started/get-docker/)
166
+ 2. Install the required packages:
167
+ ```bash
168
+ pip install 'smolagents[docker]'
169
+ ```
170
+
171
+ #### Setting up the docker sandbox
172
+
173
+ Create a Dockerfile for your agent environment:
174
+
175
+ ```dockerfile
176
+ FROM python:3.10-bullseye
177
+
178
+ # Install build dependencies
179
+ RUN apt-get update && \
180
+ apt-get install -y --no-install-recommends \
181
+ build-essential \
182
+ python3-dev && \
183
+ pip install --no-cache-dir --upgrade pip && \
184
+ pip install --no-cache-dir smolagents && \
185
+ apt-get clean && \
186
+ rm -rf /var/lib/apt/lists/*
187
+
188
+ # Set working directory
189
+ WORKDIR /app
190
+
191
+ # Run with limited privileges
192
+ USER nobody
193
+
194
+ # Default command
195
+ CMD ["python", "-c", "print('Container ready')"]
196
+ ```
197
+
198
+ Create a sandbox manager to run code:
199
+
200
+ ```python
201
+ import docker
202
+ import os
203
+ from typing import Optional
204
+
205
+ class DockerSandbox:
206
+ def __init__(self):
207
+ self.client = docker.from_env()
208
+ self.container = None
209
+
210
+ def create_container(self):
211
+ try:
212
+ image, build_logs = self.client.images.build(
213
+ path=".",
214
+ tag="agent-sandbox",
215
+ rm=True,
216
+ forcerm=True,
217
+ buildargs={},
218
+ # decode=True
219
+ )
220
+ except docker.errors.BuildError as e:
221
+ print("Build error logs:")
222
+ for log in e.build_log:
223
+ if 'stream' in log:
224
+ print(log['stream'].strip())
225
+ raise
226
+
227
+ # Create container with security constraints and proper logging
228
+ self.container = self.client.containers.run(
229
+ "agent-sandbox",
230
+ command="tail -f /dev/null", # Keep container running
231
+ detach=True,
232
+ tty=True,
233
+ mem_limit="512m",
234
+ cpu_quota=50000,
235
+ pids_limit=100,
236
+ security_opt=["no-new-privileges"],
237
+ cap_drop=["ALL"],
238
+ environment={
239
+ "HF_TOKEN": os.getenv("HF_TOKEN")
240
+ },
241
+ )
242
+
243
+ def run_code(self, code: str) -> Optional[str]:
244
+ if not self.container:
245
+ self.create_container()
246
+
247
+ # Execute code in container
248
+ exec_result = self.container.exec_run(
249
+ cmd=["python", "-c", code],
250
+ user="nobody"
251
+ )
252
+
253
+ # Collect all output
254
+ return exec_result.output.decode() if exec_result.output else None
255
+
256
+
257
+ def cleanup(self):
258
+ if self.container:
259
+ try:
260
+ self.container.stop()
261
+ except docker.errors.NotFound:
262
+ # Container already removed, this is expected
263
+ pass
264
+ except Exception as e:
265
+ print(f"Error during cleanup: {e}")
266
+ finally:
267
+ self.container = None # Clear the reference
268
+
269
+ # Example usage:
270
+ sandbox = DockerSandbox()
271
+
272
+ try:
273
+ # Define your agent code
274
+ agent_code = """
275
+ import os
276
+ from smolagents import CodeAgent, HfApiModel
277
+
278
+ # Initialize the agent
279
+ agent = CodeAgent(
280
+ model=HfApiModel(token=os.getenv("HF_TOKEN"), provider="together"),
281
+ tools=[]
282
+ )
283
+
284
+ # Run the agent
285
+ response = agent.run("What's the 20th Fibonacci number?")
286
+ print(response)
287
+ """
288
+
289
+ # Run the code in the sandbox
290
+ output = sandbox.run_code(agent_code)
291
+ print(output)
292
+
293
+ finally:
294
+ sandbox.cleanup()
295
+ ```
296
+
297
+ ### Best practices for sandboxes
298
+
299
+ These key practices apply to both E2B and Docker sandboxes:
300
+
301
+ - Resource management
302
+ - Set memory and CPU limits
303
+ - Implement execution timeouts
304
+ - Monitor resource usage
305
+ - Security
306
+ - Run with minimal privileges
307
+ - Disable unnecessary network access
308
+ - Use environment variables for secrets
309
+ - Environment
310
+ - Keep dependencies minimal
311
+ - Use fixed package versions
312
+ - If you use base images, update them regularly
313
+
314
+ - Cleanup
315
+ - Always ensure proper cleanup of resources, especially for Docker containers, to avoid having dangling containers eating up resources.
316
+
317
+ ✨ By following these practices and implementing proper cleanup procedures, you can ensure your agent runs safely and efficiently in a sandboxed environment.
docs/source/en/tutorials/tools.mdx ADDED
@@ -0,0 +1,247 @@
1
+ <!--Copyright 2024 The HuggingFace Team. All rights reserved.
2
+
3
+ Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
+ the License. You may obtain a copy of the License at
5
+
6
+ http://www.apache.org/licenses/LICENSE-2.0
7
+
8
+ Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
+ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
+ specific language governing permissions and limitations under the License.
11
+
12
+ ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
13
+ rendered properly in your Markdown viewer.
14
+
15
+ -->
16
+ # Tools
17
+
18
+ [[open-in-colab]]
19
+
20
+ Here, we're going to see advanced tool usage.
21
+
22
+ > [!TIP]
23
+ > If you're new to building agents, make sure to first read the [intro to agents](../conceptual_guides/intro_agents) and the [guided tour of smolagents](../guided_tour).
24
+
25
+ - [Tools](#tools)
26
+ - [What is a tool, and how to build one?](#what-is-a-tool-and-how-to-build-one)
27
+ - [Share your tool to the Hub](#share-your-tool-to-the-hub)
28
+ - [Import a Space as a tool](#import-a-space-as-a-tool)
29
+ - [Use LangChain tools](#use-langchain-tools)
30
+ - [Manage your agent's toolbox](#manage-your-agents-toolbox)
31
+ - [Use a collection of tools](#use-a-collection-of-tools)
32
+
33
+ ### What is a tool, and how to build one?
34
+
35
+ A tool is mostly a function that an LLM can use in an agentic system.
36
+
37
+ But to use it, the LLM will need to be given an API: name, tool description, input types and descriptions, output type.
38
+
39
+ So it cannot be only a function. It should be a class.
40
+
41
+ So at its core, a tool is a class that wraps a function with metadata that helps the LLM understand how to use it.
42
+
43
+ Here's how it looks:
44
+
45
+ ```python
46
+ from smolagents import Tool
47
+
48
+ class HFModelDownloadsTool(Tool):
49
+ name = "model_download_counter"
50
+ description = """
51
+ This is a tool that returns the most downloaded model of a given task on the Hugging Face Hub.
52
+ It returns the name of the checkpoint."""
53
+ inputs = {
54
+ "task": {
55
+ "type": "string",
56
+ "description": "the task category (such as text-classification, depth-estimation, etc)",
57
+ }
58
+ }
59
+ output_type = "string"
60
+
61
+ def forward(self, task: str):
62
+ from huggingface_hub import list_models
63
+
64
+ model = next(iter(list_models(filter=task, sort="downloads", direction=-1)))
65
+ return model.id
66
+
67
+ model_downloads_tool = HFModelDownloadsTool()
68
+ ```
69
+
70
+ The custom tool subclasses [`Tool`] to inherit useful methods. The child class also defines:
71
+ - An attribute `name`, which corresponds to the name of the tool itself. The name usually describes what the tool does. Since the code returns the model with the most downloads for a task, let's name it `model_download_counter`.
72
+ - An attribute `description`, which is used to populate the agent's system prompt.
73
+ - An `inputs` attribute, which is a dictionary with keys `"type"` and `"description"`. It contains information that helps the Python interpreter make educated choices about the input.
74
+ - An `output_type` attribute, which specifies the output type. The types for both `inputs` and `output_type` should be [Pydantic formats](https://docs.pydantic.dev/latest/concepts/json_schema/#generating-json-schema); they can be any of these: [`~AUTHORIZED_TYPES`].
75
+ - A `forward` method which contains the inference code to be executed.
76
+
77
+ And that's all it needs to be used in an agent!
78
+
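+ Before handing the tool to an agent, you can sanity-check it by calling it directly, since tool instances are callable:
+
+ ```python
+ print(model_downloads_tool("text-classification"))
+ # Prints the id of the most downloaded text-classification model on the Hub
+ ```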
79
+ There's another way to build a tool. In the [guided_tour](../guided_tour), we implemented a tool using the `@tool` decorator. The [`tool`] decorator is the recommended way to define simple tools, but sometimes you need more than this: using several methods in a class for more clarity, or using additional class attributes.
80
+
81
+ In this case, you can build your tool by subclassing [`Tool`] as described above.
82
+
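+ For comparison, here is a sketch of the same tool written with the `@tool` decorator: the tool name is derived from the function name, and the docstring provides the description and argument docs:
+
+ ```python
+ from smolagents import tool
+
+ @tool
+ def model_download_counter(task: str) -> str:
+     """
+     Returns the most downloaded model of a given task on the Hugging Face Hub.
+     It returns the name of the checkpoint.
+
+     Args:
+         task: the task category (such as text-classification, depth-estimation, etc).
+     """
+     from huggingface_hub import list_models
+
+     model = next(iter(list_models(filter=task, sort="downloads", direction=-1)))
+     return model.id
+ ```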
83
+ ### Share your tool to the Hub
84
+
85
+ You can share your custom tool to the Hub by calling [`~Tool.push_to_hub`] on the tool. Make sure you've created a repository for it on the Hub and are using a token with write access.
86
+
87
+ ```python
88
+ model_downloads_tool.push_to_hub("{your_username}/hf-model-downloads", token="<YOUR_HUGGINGFACEHUB_API_TOKEN>")
89
+ ```
90
+
91
+ For the push to Hub to work, your tool will need to respect some rules:
92
+ - All methods are self-contained, i.e. they only use variables that come from their args.
93
+ - As per the above point, **all imports should be defined directly within the tool's functions**, else you will get an error when trying to call [`~Tool.save`] or [`~Tool.push_to_hub`] with your custom tool.
94
+ - If you subclass the `__init__` method, give it no arguments other than `self`. This is because arguments set during a specific tool instance's initialization are hard to track, which prevents sharing them properly to the Hub. And anyway, the idea of making a specific class is that you can already set class attributes for anything you need to hard-code (just set `your_variable=(...)` directly under the `class YourTool(Tool):` line). And of course you can still create an instance attribute anywhere in your code by assigning to `self.your_variable`.
95
+
96
+
97
+ Once your tool is pushed to Hub, you can visualize it. [Here](https://huggingface.co/spaces/m-ric/hf-model-downloads) is the `model_downloads_tool` that I've pushed. It has a nice gradio interface.
98
+
99
+ When diving into the tool files, you can find that all the tool's logic is under [tool.py](https://huggingface.co/spaces/m-ric/hf-model-downloads/blob/main/tool.py). That is where you can inspect a tool shared by someone else.
100
+
101
+ Then you can load the tool with [`load_tool`] or create it with [`~Tool.from_hub`] and pass it to the `tools` parameter in your agent.
102
+ Since running tools means running custom code, you need to make sure you trust the repository; thus we require passing `trust_remote_code=True` to load a tool from the Hub.
103
+
104
+ ```python
105
+ from smolagents import load_tool, CodeAgent
106
+
107
+ model_download_tool = load_tool(
108
+ "{your_username}/hf-model-downloads",
109
+ trust_remote_code=True
110
+ )
111
+ ```
112
+
113
+ ### Import a Space as a tool
114
+
115
+ You can directly import a Space from the Hub as a tool using the [`Tool.from_space`] method!
116
+
117
+ You only need to provide the id of the Space on the Hub, its name, and a description that will help your agent understand what the tool does. Under the hood, this will use the [`gradio-client`](https://pypi.org/project/gradio-client/) library to call the Space.
118
+
119
+ For instance, let's import the [FLUX.1-dev](https://huggingface.co/black-forest-labs/FLUX.1-dev) Space from the Hub and use it to generate an image.
120
+
121
+ ```python
122
+ from smolagents import Tool
+
+ image_generation_tool = Tool.from_space(
123
+ "black-forest-labs/FLUX.1-schnell",
124
+ name="image_generator",
125
+ description="Generate an image from a prompt"
126
+ )
127
+
128
+ image_generation_tool("A sunny beach")
129
+ ```
130
+ And voilà, here's your image! 🏖️
131
+
132
+ <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/sunny_beach.webp">
133
+
134
+ Then you can use this tool just like any other tool. For example, let's improve the prompt `a rabbit wearing a space suit` and generate an image of it. This example also shows how you can pass additional arguments to the agent.
135
+
136
+ ```python
137
+ from smolagents import CodeAgent, HfApiModel
138
+
139
+ model = HfApiModel("Qwen/Qwen2.5-Coder-32B-Instruct")
140
+ agent = CodeAgent(tools=[image_generation_tool], model=model)
141
+
142
+ agent.run(
143
+ "Improve this prompt, then generate an image of it.", additional_args={'user_prompt': 'A rabbit wearing a space suit'}
144
+ )
145
+ ```
146
+
147
+ ```text
148
+ === Agent thoughts:
149
+ improved_prompt could be "A bright blue space suit wearing rabbit, on the surface of the moon, under a bright orange sunset, with the Earth visible in the background"
150
+
151
+ Now that I have improved the prompt, I can use the image generator tool to generate an image based on this prompt.
152
+ >>> Agent is executing the code below:
153
+ image = image_generator(prompt="A bright blue space suit wearing rabbit, on the surface of the moon, under a bright orange sunset, with the Earth visible in the background")
154
+ final_answer(image)
155
+ ```
156
+
157
+ <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/rabbit_spacesuit_flux.webp">
158
+
159
+ How cool is this? 🤩
160
+
161
+ ### Use LangChain tools
162
+
163
+ We love LangChain and think it has a very compelling suite of tools.
164
+ To import a tool from LangChain, use the `from_langchain()` method.
165
+
166
+ Here is how you can use it to recreate the intro's search result using a LangChain web search tool.
167
+ This tool will need `pip install langchain google-search-results -q` to work properly.
168
+ ```python
169
+ from smolagents import CodeAgent, HfApiModel, Tool
+ from langchain.agents import load_tools
+
+ model = HfApiModel("Qwen/Qwen2.5-Coder-32B-Instruct")
170
+
171
+ search_tool = Tool.from_langchain(load_tools(["serpapi"])[0])
172
+
173
+ agent = CodeAgent(tools=[search_tool], model=model)
174
+
175
+ agent.run("How many more blocks (also denoted as layers) are in BERT base encoder compared to the encoder from the architecture proposed in Attention is All You Need?")
176
+ ```
177
+
178
+ ### Manage your agent's toolbox
179
+
180
+ You can manage an agent's toolbox by adding or replacing a tool in the `agent.tools` attribute, since it is a standard dictionary.
181
+
182
+ Let's add the `model_download_tool` to an existing agent initialized with only the default toolbox.
183
+
184
+ ```python
185
+ from smolagents import HfApiModel
186
+
187
+ model = HfApiModel("Qwen/Qwen2.5-Coder-32B-Instruct")
188
+
189
+ agent = CodeAgent(tools=[], model=model, add_base_tools=True)
190
+ agent.tools[model_download_tool.name] = model_download_tool
191
+ ```
192
+ Now we can leverage the new tool:
193
+
194
+ ```python
195
+ agent.run(
196
+ "Can you give me the name of the model that has the most downloads in the 'text-to-video' task on the Hugging Face Hub but reverse the letters?"
197
+ )
198
+ ```
199
+
200
+
201
+ > [!TIP]
202
+ > Beware of adding too many tools to an agent: this can overwhelm weaker LLM engines.
203
+
204
+
205
+ ### Use a collection of tools
206
+
207
+ You can leverage tool collections by using the `ToolCollection` object. It supports loading either a collection from the Hub or the tools of an MCP server.
208
+
209
+ #### Tool Collection from a collection in the Hub
210
+
211
+ You can leverage it with the slug of the collection you want to use.
212
+ Then pass them as a list to initialize your agent, and start using them!
213
+
214
+ ```py
215
+ from smolagents import ToolCollection, CodeAgent
216
+
217
+ image_tool_collection = ToolCollection.from_hub(
218
+ collection_slug="huggingface-tools/diffusion-tools-6630bb19a942c2306a2cdb6f",
219
+ token="<YOUR_HUGGINGFACEHUB_API_TOKEN>"
220
+ )
221
+ agent = CodeAgent(tools=[*image_tool_collection.tools], model=model, add_base_tools=True)
222
+
223
+ agent.run("Please draw me a picture of rivers and lakes.")
224
+ ```
225
+
226
+ To speed up startup, tools are loaded only when called by the agent.
227
+
228
+ #### Tool Collection from any MCP server
229
+
230
+ Leverage tools from the hundreds of MCP servers available on [glama.ai](https://glama.ai/mcp/servers) or [smithery.ai](https://smithery.ai/).
231
+
232
+ The tools of MCP servers can be loaded in a `ToolCollection` object as follows:
233
+
234
+ ```py
235
+ import os
+
+ from smolagents import ToolCollection, CodeAgent, HfApiModel
236
+ from mcp import StdioServerParameters
237
+
238
+ server_parameters = StdioServerParameters(
239
+ command="uv",
240
+ args=["--quiet", "pubmedmcp@0.1.3"],
241
+ env={"UV_PYTHON": "3.12", **os.environ},
242
+ )
243
+
244
+ with ToolCollection.from_mcp(server_parameters) as tool_collection:
245
+ agent = CodeAgent(tools=[*tool_collection.tools], model=HfApiModel(), add_base_tools=True)
246
+ agent.run("Please find a remedy for hangover.")
247
+ ```
docs/source/hi/_config.py ADDED
@@ -0,0 +1,14 @@
1
+ # docstyle-ignore
2
+ INSTALL_CONTENT = """
3
+ # Installation
4
+ ! pip install smolagents
5
+ # To install from source instead of the last release, comment the command above and uncomment the following one.
6
+ # ! pip install git+https://github.com/huggingface/smolagents.git
7
+ """
8
+
9
+ notebook_first_cells = [{"type": "code", "content": INSTALL_CONTENT}]
10
+ black_avoid_patterns = {
11
+ "{processor_class}": "FakeProcessorClass",
12
+ "{model_class}": "FakeModelClass",
13
+ "{object_class}": "FakeObjectClass",
14
+ }
docs/source/hi/_toctree.yml ADDED
@@ -0,0 +1,36 @@
1
+ - title: Get started
2
+ sections:
3
+ - local: index
4
+ title: 🤗 Agents
5
+ - local: guided_tour
6
+ title: गाइडेड टूर
7
+ - title: Tutorials
8
+ sections:
9
+ - local: tutorials/building_good_agents
10
+ title: ✨ अच्छे Agents का निर्माण
11
+ - local: tutorials/inspect_runs
12
+ title: 📊 OpenTelemetry के साथ runs का निरीक्षण
13
+ - local: tutorials/tools
14
+ title: 🛠️ Tools - in-depth guide
15
+ - local: tutorials/secure_code_execution
16
+ title: 🛡️ E2B के साथ अपने कोड एक्जीक्यूशन को सुरक्षित करें
17
+ - title: Conceptual guides
18
+ sections:
19
+ - local: conceptual_guides/intro_agents
20
+ title: 🤖 Agentic सिस्टम का परिचय
21
+ - local: conceptual_guides/react
22
+ title: 🤔 मल्टी-स्टेप एजेंट कैसे काम करते हैं?
23
+ - title: Examples
24
+ sections:
25
+ - local: examples/text_to_sql
26
+ title: सेल्फ करेक्टिंग Text-to-SQL
27
+ - local: examples/rag
28
+ title: एजेंटिक RAG के साथ अपनी ज्ञान आधारित को मास्टर करें
29
+ - local: examples/multiagents
30
+ title: एक बहु-एजेंट प्रणाली का आयोजन करें
31
+ - title: Reference
32
+ sections:
33
+ - local: reference/agents
34
+ title: एजेंट से संबंधित ऑब्जेक्ट्स
35
+ - local: reference/tools
36
+ title: टूल्स से संबंधित ऑब्जेक्ट्स
docs/source/hi/conceptual_guides/intro_agents.mdx ADDED
@@ -0,0 +1,115 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ <!--Copyright 2024 The HuggingFace Team. All rights reserved.
2
+
3
+ Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
+ the License. You may obtain a copy of the License at
5
+
6
+ http://www.apache.org/licenses/LICENSE-2.0
7
+
8
+ Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
+ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
+ specific language governing permissions and limitations under the License.
11
+
12
+ ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
13
+ rendered properly in your Markdown viewer.
14
+
15
+ -->
16
+ # Agents का परिचय
17
+
18
+ ## 🤔 Agents क्या हैं?
19
+
20
+ AI का उपयोग करने वाली किसी भी कुशल प्रणाली को LLM को वास्तविक दुनिया तक किसी प्रकार की पहुंच प्रदान करने की आवश्यकता होगी: उदाहरण के लिए बाहरी जानकारी प्राप्त करने के लिए एक खोज टूल को कॉल करने की संभावना, या किसी कार्य को हल करने के लिए कुछ प्रोग्राम पर कार्य करने की। दूसरे शब्दों में, LLM में ***agency*** होनी चाहिए। एजेंटिक प्रोग्राम LLM के लिए बाहरी दुनिया का प्रवेश द्वार हैं।
21
+
22
+ > [!TIP]
23
+ > AI Agents वे **प्रोग्राम हैं जहां LLM आउटपुट वर्कफ़्लो को नियंत्रित करते हैं**।
24
+
25
+ LLM का उपयोग करने वाली कोई भी प्रणाली LLM आउटपुट को कोड में एकीकृत करेगी। कोड वर्कफ़्लो पर LLM के इनपुट का प्रभाव सिस्टम में LLM की एजेंसी का स्तर है।
26
+
27
+ ध्यान दें कि इस परिभाषा के साथ, "agent" एक अलग, 0 या 1 परिभाषा नहीं है: इसके बजाय, "agency" एक निरंतर स्पेक्ट्रम पर विकसित होती है, जैसे-जैसे आप अपने वर्कफ़्लो पर LLM को अधिक या कम शक्ति देते हैं।
28
+
29
+ नीचे दी गई तालिका में देखें कि कैसे एजेंसी विभिन्न प्रणालियों में भिन्न हो सकती है:
30
+
31
+ | एजेंसी स्तर | विवरण | इसे क्या कहा जाता है | उदाहरण पैटर्न |
32
+ |------------|---------|-------------------|----------------|
33
+ | ☆☆☆ | LLM आउटपुट का प्रोग्राम प्रवाह पर कोई प्रभाव नहीं | सरल प्रोसेसर | `process_llm_output(llm_response)` |
34
+ | ★☆☆ | LLM आउटपुट if/else स्विच निर्धारित करता है | राउटर | `if llm_decision(): path_a() else: path_b()` |
35
+ | ★★☆ | LLM आउटपुट फंक्शन एक्जीक्यूशन निर्धारित करता है | टूल कॉलर | `run_function(llm_chosen_tool, llm_chosen_args)` |
36
+ | ★★★ | LLM आउटपुट पुनरावृत्ति और प्रोग्राम की निरंतरता को नियंत्रित करता है | मल्टी-स्टेप एजेंट | `while llm_should_continue(): execute_next_step()` |
37
+ | ★★★ | एक एजेंटिक वर्कफ़्लो दूसरे एजेंटिक वर्कफ़्लो को शुरू कर सकता है | मल्टी-एजेंट | `if llm_trigger(): execute_agent()` |
38
+
39
+ मल्टी-स्टेप agent की यह कोड संरचना है:
40
+
41
+ ```python
42
+ memory = [user_defined_task]
43
+ while llm_should_continue(memory): # यह लूप मल्टी-स्टेप भाग है
44
+ action = llm_get_next_action(memory) # यह टूल-कॉलिंग भाग है
45
+ observations = execute_action(action)
46
+ memory += [action, observations]
47
+ ```
48
+
49
+ यह एजेंटिक सिस्टम एक लूप में चलता है, प्रत्येक चरण में एक नई क्रिया को शुरू करता है (क्रिया में कुछ पूर्व-निर्धारित *tools* को कॉल करना शामिल हो सकता है जो केवल फंक्शंस हैं), जब तक कि उसके अवलोकन से यह स्पष्ट न हो जाए कि दिए गए कार्य को हल करने के लिए एक संतोषजनक स्थिति प्राप्त कर ली गई है।
50
+
51
+ ## ✅ Agents का उपयोग कब करें / ⛔ कब उनसे बचें
52
+
53
+ Agents तब उपयोगी होते हैं जब आपको किसी ऐप के वर्कफ़्लो को निर्धारित करने के लिए LLM की आवश्यकता होती है। लेकिन वे अक्सर जरूरत से ज्यादा होते हैं। सवाल यह है कि, क्या मुझे वास्तव में दिए गए कार्य को कुशलतापूर्वक हल करने के लिए वर्कफ़्लो में लचीलेपन की आवश्यकता है?
54
+ यदि पूर्व-निर्धारित वर्कफ़्लो बहुत बार विफल होता है, तो इसका मतलब है कि आपको अधिक लचीलेपन की आवश्यकता है।
55
+
56
+ आइए एक उदाहरण लेते हैं: मान लीजिए आप एक ऐप बना रहे हैं जो एक सर्फिंग ट्रिप वेबसाइट पर ग्राहक अनुरोधों को संभालता है।
57
+
58
+ आप पहले से जान सकते हैं कि अनुरोध 2 में से किसी एक श्रेणी में आएंगे (उपयोगकर्ता की पसंद के आधार पर), और आपके पास इन 2 मामलों में से प्रत्येक के लिए एक पूर्व-निर्धारित वर्कफ़्लो है।
59
+
60
+ 1. ट्रिप के बारे में कुछ जानकारी चाहिए? ⇒ उन्हें अपने नॉलेज बेस में खोज करने के लिए एक सर्च बार तक पहुंच दें
61
+ 2. सेल्स टीम से बात करना चाहते हैं? ⇒ उन्हें एक संपर्क फॉर्म में टाइप करने दें।
62
+
63
+ यदि वह निर्धारणात्मक वर्कफ़्लो सभी प्रश्नों के लिए फिट बैठता है, तो बेशक बस सब कुछ कोड करें! यह आपको एक 100% विश्वसनीय सिस्टम देगा और एलएलएम द्वारा अनपेक्षित कार्यप्रवाह में हस्तक्षेप करने से त्रुटियों का कोई जोखिम नहीं होगा। साधारणता और मजबूती के लिए, सलाह दी जाती है कि एजेंटिक व्यवहार का उपयोग न किया जाए।
64
+
65
+ लेकिन क्या होगा अगर वर्कफ़्लो को पहले से इतनी अच्छी तरह से निर्धारित नहीं किया जा सकता?
66
+
67
+ उदाहरण के लिए, एक उपयोगकर्ता पूछना चाहता है: `"मैं सोमवार को आ सकता हूं, लेकिन मैं अपना पासपोर्ट भूल गया जिससे मुझे बुधवार तक देर हो सकती है, क्या आप मुझे और मेरी चीजों को मंगलवार सुबह सर्फ करने ले जा सकते हैं, क्या मुझे कैंसलेशन इंश्योरेंस मिल सकता है?"` यह प्रश्न कई कारकों पर निर्भर करता है, और शायद ऊपर दिए गए पूर्व-निर्धारित मानदंडों में से कोई भी इस अनुरोध के लिए पर्याप्त नहीं होगा।
68
+
69
+ यदि पूर्व-निर्धारित वर्कफ़्लो बहुत बार विफल होता है, तो इसका मतलब है कि आपको अधिक लचीलेपन की आवश्यकता है।
70
+
71
+ यहीं पर एक एजेंटिक सेटअप मदद करता है।
72
+
73
+ ऊपर दिए गए उदाहरण में, आप बस एक मल्टी-स्टेप agent बना सकते हैं जिसके पास मौसम पूर्वानुमान के लिए एक मौसम API, यात्रा की दूरी जानने के लिए Google Maps API, एक कर्मचारी उपलब्धता डैशबोर्ड और आपके नॉलेज बेस पर एक RAG सिस्टम तक पहुंच है।
74
+
75
+ हाल ही तक, कंप्यूटर प्रोग्राम पूर्व-निर्धारित वर्कफ़्लो तक सीमित थे, if/else स्विच का
76
+ ढेर लगाकर जटिलता को संभालने का प्रयास कर रहे थे। वे बेहद संकीर्ण कार्यों पर केंद्रित थे, जैसे "इन संख्याओं का योग निकालें" या "इस ग्राफ़ में सबसे छोटा रास्ता खोजें"। लेकिन वास्तव में, अधिकांश वास्तविक जीवन के कार्य, जैसे ऊपर दिया गया हमारा यात्रा उदाहरण, पूर्व-निर्धारित वर्कफ़्लो में फिट नहीं होते हैं। एजेंटिक सिस्टम प्रोग्राम के लिए वास्तविक दुनिया के कार्यों की विशाल दुनिया खोलते हैं!
77
+
78
+ ## क्यों `smolagents`?
79
+
80
+ कुछ लो-लेवल एजेंटिक उपयोग के मामलों के लिए, जैसे चेन या राउटर, आप सभी कोड खुद लिख सकते हैं। आप इस तरह से बहुत बेहतर होंगे, क्योंकि यह आपको अपने सिस्टम को बेहतर ढंग से नियंत्रित और समझने की अनुमति देगा।
81
+
82
+ लेकिन जैसे ही आप अधिक जटिल व्यवहारों की ओर बढ़ते हैं जैसे कि LLM को एक फ़ंक्शन कॉल करने देना (यह "tool calling" है) या LLM को एक while लूप चलाने देना ("multi-step agent"), कुछ एब्सट्रैक्शन्स की आवश्यकता होती है:
83
+ - टूल कॉलिंग के लिए, आपको एजेंट के आउटपुट को पार्स करने की आवश्यकता होती है, इसलिए इस आउटपुट को एक पूर्व-निर्धारित प्रारूप की आवश्यकता होती है जैसे "विचार: मुझे 'get_weather' टूल कॉल करना चाहिए। क्रिया: get_weather(Paris)।", जिसे आप एक पूर्व-निर्धारित फ़ंक्शन के साथ पार्स करते हैं, और LLM को दिए गए सिस्टम प्रॉम्प्ट को इस प्रारूप के बारे में सूचित करना चाहिए।
84
+ - एक मल्टी-स्टेप एजेंट के लिए जहां LLM आउटपुट लूप को निर्धारित करता है, आपको पिछले लूप इटरेशन में क्या हुआ इसके आधार पर LLM को एक अलग प्रॉम्प्ट देने की आवश्यकता होती है: इसलिए आपको किसी प्रकार की मेमोरी की आवश्यकता होती है।
85
+
86
+ इन दो उदाहरणों के साथ, हमने पहले ही कुछ चीजों की आवश्यकता का पता लगा लिया:
87
+
88
+ - बेशक, एक LLM जो सिस्टम को पावर देने वाले इंजन के रूप में कार्य करता है
89
+ - एजेंट द्वारा एक्सेस किए जा सकने वाले टूल्स की एक सूची
90
+ - एक पार्सर जो LLM आउटपुट से टूल कॉल को निकालता है
91
+ - एक सिस्टम प्रोम्प्ट जो पा���्सर के साथ सिंक्रनाइज़ होता है
92
+ - एक मेमोरी
93
+
94
+ लेकिन रुकिए, चूंकि हम निर्णयों में LLM को जगह देते हैं, निश्चित रूप से वे गलतियां करेंगे: इसलिए हमें एरर लॉगिंग और पुनः प्रयास तंत्र की आवश्यकता है।
95
+
96
+ ये सभी तत्व एक अच्छे कामकाजी सिस्टम बनाने के लिए एक-दूसरे से घनिष्ठ रूप से जुड़े हुए हैं। यही कारण है कि हमने तय किया कि इन सभी चीजों को एक साथ काम करने के लिए बुनियादी निर्माण ब्लॉक्स की आवश्यकता है।
97
+
98
+ ## कोड Agents
99
+
100
+ एक मल्टी-स्टेप एजेंट में, प्रत्येक चरण पर, LLM बाहरी टूल्स को कुछ कॉल के रूप में एक क्रिया लिख सकता है। इन क्रियाओं को लिखने के लिए एक सामान्य स्वरूप (Anthropic, OpenAI और कई अन्य द्वारा उपयोग किया जाता है) आमतौर पर "टूल्स के नाम और उपयोग करने के लिए तर्कों के JSON के रूप में क्रियाएं लिखने" के विभिन्न रूप होते हैं, जिन्हें आप फिर पार्स करते हैं यह जानने के लिए कि कौन सा टूल किन तर्कों के साथ निष्पादित करना है"।
101
+
102
+ [कई](https://huggingface.co/papers/2402.01030) [शोध](https://huggingface.co/papers/2411.01747) [पत्रों](https://huggingface.co/papers/2401.00812) ने दिखाया है कि कोड में टूल कॉलिंग LLM का होना बहुत बेहतर है।
103
+
104
+ इसका कारण बस यह है कि *हमने अपनी कोड भाषाओं को विशेष रूप से कंप्यूटर द्वारा किए गए कार्यों को व्यक्त करने का सर्वोत्तम संभव तरीका बनाने के लिए तैयार किया*। यदि JSON स्निपेट्स बेहतर अभिव्यक्ति होते, तो JSON शीर्ष प्रोग्रामिंग भाषा होती और प्रोग्रामिंग नरक में होती।
105
+
106
+ नीचे दी गई छवि, [Executable Code Actions Elicit Better LLM Agents](https://huggingface.co/papers/2402.01030) से ली गई है, जो कोड में क्रियाएं लिखने के कुछ फायदे दर्शाती है:
107
+
108
+ <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/code_vs_json_actions.png">
109
+
110
+ JSON जैसे स्निपेट्स की बजाय कोड में क्रियाएं लिखने से बेहतर प्राप्त होता है:
111
+
112
+ - **कम्पोजेबिलिटी:** क्या आप JSON क्रियाओं को एक-दूसरे के भीतर नेस्ट कर सकते हैं, या बाद में पुन: उपयोग करने के लिए JSON क्रियाओं का एक सेट परिभाषित कर सकते हैं, उसी तरह जैसे आप बस एक पायथन फंक्शन परिभाषित कर सकते हैं?
113
+ - **ऑब्जेक्ट प्रबंधन:** आप `generate_image` जैसी क्रिया के आउटपुट को JSON में कैसे स्टोर करते हैं?
114
+ - **सामान्यता:** कोड को सरल रूप से कुछ भी व्यक्त करने के लिए बनाया गया है जो आप कंप्यूटर से करवा सकते हैं।
115
+ - **LLM प्रशिक्षण डेटा में प्रतिनिधित्व:** बहुत सारी गुणवत्तापूर्ण कोड क्रियाएं पहले से ही LLM के ट्रेनिंग डेटा में शामिल हैं जिसका मतलब है कि वे इसके लिए पहले स�� ही प्रशिक्षित हैं!
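+
+ इस अंतर को स्पष्ट करने के लिए, नीचे एक न्यूनतम स्केच है (यहाँ `search` और `summarize` केवल उदाहरण के लिए बनाए गए काल्पनिक स्टब टूल्स हैं) जो दिखाता है कि एक कोड क्रिया दो टूल कॉल्स को कैसे कंपोज़ करती है, जबकि JSON क्रियाओं को इसे अलग-अलग कठोर चरणों में व्यक्त करना पड़ता:
+
+ ```python
+ # Two hypothetical tools, stubbed out for illustration
+ def search(query: str) -> str:
+     return f"results for {query}"
+
+ def summarize(text: str) -> str:
+     return text[:100]
+
+ # A JSON action can only express one rigid call at a time, e.g.:
+ # {"tool": "search", "arguments": {"query": "agent frameworks"}}
+
+ # A code action composes calls naturally: outputs are plain variables
+ results = search(query="agent frameworks")
+ print(summarize(text=results))
+ ```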
docs/source/hi/conceptual_guides/react.mdx ADDED
@@ -0,0 +1,44 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ <!--Copyright 2024 The HuggingFace Team. All rights reserved.
2
+
3
+ Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
+ the License. You may obtain a copy of the License at
5
+
6
+ http://www.apache.org/licenses/LICENSE-2.0
7
+
8
+ Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
+ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
+ specific language governing permissions and limitations under the License.
11
+
12
+ ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
13
+ rendered properly in your Markdown viewer.
14
+
15
+ -->
16
+ # मल्टी-स्टेप एजेंट्स कैसे काम करते हैं?
17
+
18
+ ReAct फ्रेमवर्क ([Yao et al., 2022](https://huggingface.co/papers/2210.03629)) वर्तमान में एजेंट्स बनाने का मुख्य दृष्टिकोण है।
19
+
20
+ नाम दो शब्दों, "Reason" (तर्क) और "Act" (क्रिया) के संयोजन पर आधारित है। वास्तव में, इस आर्किटेक्चर का पालन करने वाले एजेंट अपने कार्य को उतने चरणों में हल करेंगे जितने आवश्यक हों, प्रत्येक चरण में एक Reasoning कदम होगा, फिर एक Action कदम होगा, जहाँ यह टूल कॉल्स तैयार करेगा जो उसे कार्य को हल करने के करीब ले जाएंगे।
21
+
22
+ ReAct प्रक्रिया में पिछले चरणों की मेमोरी रखना शामिल है।
23
+
24
+ > [!TIP]
25
+ > मल्टी-स्टेप एजेंट्स के बारे में अधिक जानने के लिए [Open-source LLMs as LangChain Agents](https://huggingface.co/blog/open-source-llms-as-agents) ब्लॉग पोस्ट पढ़ें।
26
+
27
+ यहाँ एक वीडियो ओवरव्यू है कि यह कैसे काम करता है:
28
+
29
+ <div class="flex justify-center">
30
+ <img
31
+ class="block dark:hidden"
32
+ src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/Agent_ManimCE.gif"
33
+ />
34
+ <img
35
+ class="hidden dark:block"
36
+ src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/Agent_ManimCE.gif"
37
+ />
38
+ </div>
39
+
40
+ ![ReAct एजेंट का फ्रेमवर्क](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/open-source-llms-as-agents/ReAct.png)
41
+
42
+ हम दो प्रकार के एजेंट्स को लागू करते हैं:
43
+ - [`ToolCallingAgent`] अपने आउटपुट में टूल कॉल को JSON के रूप में जनरेट करता है।
44
+ - [`CodeAgent`] एक अन्य प्रकार का एजेंट है जो अपने टूल कॉल को कोड के ब्लॉब्स के रूप में जनरेट करता है, जो उन LLM के लिए वास्तव में अच्छी तरह काम करता है जिनका कोडिंग प्रदर्शन मजबूत है।
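+
+ एक न्यूनतम स्केच के रूप में, दोनों प्रकार एक ही तरीके से इनिशियलाइज़ होते हैं (यह मानते हुए कि डिफ़ॉल्ट `HfApiModel` एंडपॉइंट उपलब्ध है):
+
+ ```py
+ from smolagents import CodeAgent, HfApiModel, ToolCallingAgent
+
+ model = HfApiModel()
+
+ json_agent = ToolCallingAgent(tools=[], model=model)  # actions emitted as JSON tool calls
+ code_agent = CodeAgent(tools=[], model=model)         # actions emitted as Python snippets
+ ```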
docs/source/hi/examples/multiagents.mdx ADDED
@@ -0,0 +1,199 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ <!--Copyright 2024 The HuggingFace Team. All rights reserved.
2
+
3
+ Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
+ the License. You may obtain a copy of the License at
5
+
6
+ http://www.apache.org/licenses/LICENSE-2.0
7
+
8
+ Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
+ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
+ specific language governing permissions and limitations under the License.
11
+
12
+ ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
13
+ rendered properly in your Markdown viewer.
14
+
15
+ -->
16
+ # मल्टी-एजेंट सिस्टम का आयोजन करें 🤖🤝🤖
17
+
18
+ [[open-in-colab]]
19
+
20
+ इस नोटबुक में हम एक **मल्टी-एजेंट वेब ब्राउज़र बनाएंगे: एक एजेंटिक सिस्टम जिसमें कई एजेंट वेब का उपयोग करके समस्याओं को हल करने के लिए सहयोग करते हैं!**
21
+
22
+ यह एक सरल संरचना होगी, जो प्रबंधित वेब सर्च एजेंट को रैप करने के लिए `ManagedAgent` ऑब्जेक्ट का उपयोग करती है:
23
+
24
+ ```
25
+ +----------------+
26
+ | Manager agent |
27
+ +----------------+
28
+ |
29
+ _______________|______________
30
+ | |
31
+ Code interpreter +--------------------------------+
32
+ tool | Managed agent |
33
+ | +------------------+ |
34
+ | | Web Search agent | |
35
+ | +------------------+ |
36
+ | | | |
37
+ | Web Search tool | |
38
+ | Visit webpage tool |
39
+ +--------------------------------+
40
+ ```
41
+ आइए इस सिस्टम को सेट करें।
42
+
43
+ आवश्यक डिपेंडेंसी इंस्टॉल करने के लिए नीचे दी गई लाइन चलाएं:
44
+
45
+ ```
46
+ !pip install markdownify duckduckgo-search smolagents --upgrade -q
47
+ ```
48
+
49
+ HF Inference API को कॉल करने के लिए लॉगिन करें:
50
+
51
+ ```
52
+ from huggingface_hub import login
53
+
54
+ login()
55
+ ```
56
+
57
+ ⚡️ हमारा एजेंट [Qwen/Qwen2.5-Coder-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-32B-Instruct) द्वारा संचालित होगा, जिसे `HfApiModel` क्लास के माध्यम से HF के Inference API से कॉल किया जाता है: Inference API किसी भी ओपन-सोर्स मॉडल को जल्दी और आसानी से चलाने की अनुमति देता है।
58
+
59
+ _नोट:_ Inference API विभिन्न मानदंडों के आधार पर मॉडल होस्ट करता है, और डिप्लॉय किए गए मॉडल बिना पूर्व सूचना के अपडेट या बदले जा सकते हैं। इसके बारे में अधिक जानें [यहां](https://huggingface.co/docs/api-inference/supported-models)।
60
+
61
+ ```py
62
+ model_id = "Qwen/Qwen2.5-Coder-32B-Instruct"
63
+ ```
64
+
65
+ ## 🔍 एक वेब सर्च टूल बनाएं
66
+
67
+ वेब ब्राउज़िंग के लिए, हम पहले से मौजूद [`DuckDuckGoSearchTool`](https://github.com/huggingface/smolagents/blob/main/src/smolagents/default_tools.py#L151-L176) टूल का उपयोग कर सकते हैं जो Google search के समान सुविधा प्रदान करता है।
68
+
69
+ लेकिन फिर हमें `DuckDuckGoSearchTool` द्वारा खोजे गए पेज को देखने में भी सक्षम होने की आवश्यकता होगी।
70
+ ऐसा करने के लिए, हम लाइब्रेरी के बिल्ट-इन `VisitWebpageTool` को इम्पोर्ट कर सकते हैं, लेकिन हम इसे फिर से बनाएंगे यह देखने के लिए कि यह कैसे किया जाता है।
71
+
72
+ तो आइए `markdownify` का उपयोग करके शुरू से अपना `VisitWebpageTool` टूल बनाएं।
73
+
74
+ ```py
75
+ import re
76
+ import requests
77
+ from markdownify import markdownify
78
+ from requests.exceptions import RequestException
79
+ from smolagents import tool
80
+
81
+
82
+ @tool
83
+ def visit_webpage(url: str) -> str:
84
+ """Visits a webpage at the given URL and returns its content as a markdown string.
85
+
86
+ Args:
87
+ url: The URL of the webpage to visit.
88
+
89
+ Returns:
90
+ The content of the webpage converted to Markdown, or an error message if the request fails.
91
+ """
92
+ try:
93
+ # Send a GET request to the URL
94
+ response = requests.get(url)
95
+ response.raise_for_status() # Raise an exception for bad status codes
96
+
97
+ # Convert the HTML content to Markdown
98
+ markdown_content = markdownify(response.text).strip()
99
+
100
+ # Remove multiple line breaks
101
+ markdown_content = re.sub(r"\n{3,}", "\n\n", markdown_content)
102
+
103
+ return markdown_content
104
+
105
+ except RequestException as e:
106
+ return f"Error fetching the webpage: {str(e)}"
107
+ except Exception as e:
108
+ return f"An unexpected error occurred: {str(e)}"
109
+ ```
110
+
111
+ ठीक है, अब चलिए हमारे टूल को टेस्ट करें!
112
+
113
+ ```py
114
+ print(visit_webpage("https://en.wikipedia.org/wiki/Hugging_Face")[:500])
115
+ ```
116
+
117
+ ## हमारी मल्टी-एजेंट सिस्टम का निर्माण करें 🤖🤝🤖
118
+
119
+ अब जब हमारे पास सभी टूल्स `search` और `visit_webpage` हैं, हम उनका उपयोग वेब एजेंट बनाने के लिए कर सकते हैं।
120
+
121
+ इस एजेंट के लिए कौन सा कॉन्फ़िगरेशन चुनें?
122
+ - वेब ब्राउज़िंग एक सिंगल-टाइमलाइन टास्क है जिसे समानांतर टूल कॉल की आवश्यकता नहीं है, इसलिए JSON टूल कॉलिंग इसके लिए अच्छी तरह काम करती है। इसलिए हम `ToolCallingAgent` चुनते हैं।
123
+ - साथ ही, चूंकि कभी-कभी वेब सर्च में सही उत्तर खोजने से पहले कई पेजों की सर्च करने की आवश्यकता होती है, हम `max_steps` को बढ़ाकर 10 करना पसंद करते हैं।
124
+
125
+ ```py
126
+ from smolagents import (
127
+ CodeAgent,
128
+ ToolCallingAgent,
129
+ HfApiModel,
130
+ ManagedAgent,
131
+ DuckDuckGoSearchTool,
132
+ LiteLLMModel,
133
+ )
134
+
135
+ model = HfApiModel(model_id)
136
+
137
+ web_agent = ToolCallingAgent(
138
+ tools=[DuckDuckGoSearchTool(), visit_webpage],
139
+ model=model,
140
+ max_steps=10,
141
+ )
142
+ ```
143
+
144
+ फिर हम इस एजेंट को एक `ManagedAgent` में रैप करते हैं जो इसे इसके मैनेजर एजेंट द्वारा कॉल करने योग्य बनाएगा।
145
+
146
+ ```py
147
+ managed_web_agent = ManagedAgent(
148
+ agent=web_agent,
149
+ name="search",
150
+ description="Runs web searches for you. Give it your query as an argument.",
151
+ )
152
+ ```
153
+
154
+ अंत में हम एक मैनेजर एजेंट बनाते हैं, और इनिशियलाइजेशन पर हम अपने मैनेज्ड एजेंट को इसके `managed_agents` आर्गुमेंट में पास करते हैं।
155
+
156
+ चूंकि यह एजेंट योजना बनाने और सोचने का काम करता है, उन्नत तर्क लाभदायक होगा, इसलिए `CodeAgent` सबसे अच्छा विकल्प होगा।
157
+
158
+ साथ ही, हम एक ऐसा प्रश्न पूछना चाहते हैं जिसमें वर्तमान वर्ष और अतिरिक्त डेटा गणना शामिल है: इसलिए आइए `additional_authorized_imports=["time", "numpy", "pandas"]` जोड़ें, यदि एजेंट को इन पैकेजों की आवश्यकता हो।
159
+
160
+ ```py
161
+ manager_agent = CodeAgent(
162
+ tools=[],
163
+ model=model,
164
+ managed_agents=[managed_web_agent],
165
+ additional_authorized_imports=["time", "numpy", "pandas"],
166
+ )
167
+ ```
168
+
169
+ बस इतना ही! अब चलिए हमारे सिस्टम को चलाते हैं! हम एक ऐसा प्रश्न चुनते हैं जिसमें गणना और शोध दोनों की आवश्यकता है।
170
+
171
+ ```py
172
+ answer = manager_agent.run("If LLM training continues to scale up at the current rhythm until 2030, what would be the electric power in GW required to power the biggest training runs by 2030? What would that correspond to, compared to some countries? Please provide a source for any numbers used.")
173
+ ```
174
+
175
+ We get this report as the answer:
176
+ ```
177
+ Based on current growth projections and energy consumption estimates, if LLM trainings continue to scale up at the
178
+ current rhythm until 2030:
179
+
180
+ 1. The electric power required to power the biggest training runs by 2030 would be approximately 303.74 GW, which
181
+ translates to about 2,660,762 GWh/year.
182
+
183
+ 2. Comparing this to countries' electricity consumption:
184
+ - It would be equivalent to about 34% of China's total electricity consumption.
185
+ - It would exceed the total electricity consumption of India (184%), Russia (267%), and Japan (291%).
186
+ - It would be nearly 9 times the electricity consumption of countries like Italy or Mexico.
187
+
188
+ 3. Source of numbers:
189
+ - The initial estimate of 5 GW for future LLM training comes from AWS CEO Matt Garman.
190
+ - The growth projection used a CAGR of 79.80% from market research by Springs.
191
+ - Country electricity consumption data is from the U.S. Energy Information Administration, primarily for the year
192
+ 2021.
193
+ ```
194
+
195
+ लगता है कि यदि [स्केलिंग हाइपोथिसिस](https://gwern.net/scaling-hypothesis) सत्य बनी रहती है तो हमें कुछ बड़े पावरप्लांट्स की आवश्यकता होगी।
196
+
197
+ हमारे एजेंट्स ने कार्य को हल करने के लिए कुशलतापूर्वक सहयोग किया! ✅
198
+
199
+ 💡 आप इस ऑर्केस्ट्रेशन को आसानी से अधिक एजेंट्स में विस्तारित कर सकते हैं: एक कोड एक्जीक्यूशन करता है, एक वेब सर्च करता है, एक फाइल लोडिंग को संभालता है।
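+
+ उदाहरण के लिए, `managed_web_agent` के साथ एक और विशेषज्ञ जोड़ने का एक न्यूनतम स्केच (यहाँ `coder` एजेंट और उसका विवरण केवल उदाहरण के लिए हैं, निर्धारित नहीं):
+
+ ```py
+ managed_code_agent = ManagedAgent(
+     agent=CodeAgent(tools=[], model=model, additional_authorized_imports=["numpy", "pandas"]),
+     name="coder",
+     description="Writes and runs Python code for calculations. Give it a precise task.",
+ )
+
+ manager_agent = CodeAgent(
+     tools=[],
+     model=model,
+     # The manager can now delegate to several specialists
+     managed_agents=[managed_web_agent, managed_code_agent],
+ )
+ ```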
docs/source/hi/examples/rag.mdx ADDED
@@ -0,0 +1,156 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ <!--Copyright 2024 The HuggingFace Team. All rights reserved.
2
+
3
+ Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
+ the License. You may obtain a copy of the License at
5
+
6
+ http://www.apache.org/licenses/LICENSE-2.0
7
+
8
+ Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
+ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
+ specific language governing permissions and limitations under the License.
11
+
12
+ ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
13
+ rendered properly in your Markdown viewer.
14
+
15
+ -->
16
+ # एजेंटिक RAG
17
+
18
+ [[open-in-colab]]
19
+
20
+ रिट्रीवल-ऑगमेंटेड-जनरेशन (RAG) का अर्थ है "एक यूजर के प्रश्न का उत्तर देने के लिए LLM का उपयोग करना, लेकिन उत्तर को एक नॉलेज बेस से प्राप्त जानकारी पर आधारित करना"। इसमें वैनिला या फाइन-ट्यून्ड LLM का उपयोग करने की तुलना में कई फायदे हैं: कुछ नाम लेने के लिए, यह उत्तर को सत्य तथ्यों पर आधारित करने और काल्पनिक बातों को कम करने की अनुमति देता है, यह LLM को डोमेन-विशिष्ट ज्ञान प्रदान करने की अनुमति देता है, और यह नॉलेज बेस से जानकारी तक पहुंच का सूक्ष्म नियंत्रण प्रदान करता है।
21
+
22
+ लेकिन वैनिला RAG की सीमाएं हैं, सबसे महत्वपूर्ण ये दो:
23
+ - यह केवल एक रिट्रीवल स्टेप करता है: यदि परिणाम खराब हैं, तो जनरेशन भी बदले में खराब होगा।
24
+ - सिमेंटिक समानता की गणना यूजर के प्रश्न को संदर्भ के रूप में करके की जाती है, जो अनुकूल नहीं हो सकती: उदाहरण के लिए, यूजर का प्रश्न अक्सर एक सवाल होगा, जबकि सही उत्तर देने वाला डॉक्यूमेंट सकारात्मक स्वर में हो सकता है, और इसका समानता स्कोर अन्य स्रोत दस्तावेज़ों की तुलना में कम हो सकता है, जो प्रश्नवाचक स्वर में हो सकते हैं। इससे संबंधित जानकारी को चूकने का जोखिम होता है।
25
+
26
+ हम एक RAG एजेंट बनाकर इन समस्याओं को कम कर सकते हैं: बहुत सरल तरीके से, एक रिट्रीवर टूल से लैस एजेंट!
27
+
28
+ यह एजेंट करेगा: ✅ स्वयं क्वेरी तैयार करेगा और ✅ आवश्यकता पड़ने पर पुनः-प्राप्ति के लिए समीक्षा करेगा।
29
+
30
+ इस प्रकार यह सहज रूप से कुछ उन्नत RAG तकनीकों को अपने आप प्राप्त कर लेता है!
31
+ - सिमेंटिक खोज में सीधे यूजर क्वेरी का संदर्भ के रूप में उपयोग करने के बजाय, एजेंट स्वयं एक संदर्भ वाक्य तैयार करता है जो लक्षित डॉक्यूमेंट्स के करीब हो सकता है, जैसा कि [HyDE](https://huggingface.co/papers/2212.10496) में किया गया है।
32
+ - एजेंट जनरेट किए गए स्निपेट्स का उपयोग कर सकता है और आवश्यकता पड़ने पर पुनः-प्राप्ति कर सकता है, जैसा कि [Self-Query](https://docs.llamaindex.ai/en/stable/examples/evaluation/RetryQuery/) में किया गया है।
33
+
34
+ चलिए इस सिस्टम को बनाते हैं। 🛠️
35
+
36
+ आवश्यक डिपेंडेंसी इंस्टॉल करने के लिए नीचे दी गई लाइन चलाएं।
37
+ ```bash
38
+ !pip install smolagents pandas langchain langchain-community sentence-transformers rank_bm25 --upgrade -q
39
+ ```
40
+ HF Inference API को कॉल करने के लिए, आपको अपने एनवायरनमेंट वेरिएबल `HF_TOKEN` के रूप में एक वैध टोकन की आवश्यकता होगी।
41
+ हम इसे लोड करने के लिए python-dotenv का उपयोग करते हैं।
42
+ ```py
43
+ from dotenv import load_dotenv
44
+ load_dotenv()
45
+ ```
46
+
47
+ हम पहले एक नॉलेज बेस लोड करते हैं जिस पर हम RAG को लागू करना चाहते हैं: यह डेटा सेट Hugging Face के कई लाइब्रेरी के डॉक्यूमेंट पृष्ठों का संकलन है, जिन्हें Markdown में स्टोर किया गया है। हम केवल `transformers` लाइब्रेरी के दस्तावेज़ों को रखेंगे।
48
+
49
+ फिर डेटासेट को प्रोसेस करके और इसे एक वेक्टर डेटाबेस में स्टोर करके नॉलेज बेस तैयार करें जिसे रिट्रीवर द्वारा उपयोग किया जाएगा।
50
+
51
+ हम [LangChain](https://python.langchain.com/docs/introduction/) का उपयोग करते हैं क्योंकि इसमें उत्कृष्ट वेक्टर डेटाबेस उपयोगिताएं हैं।
52
+
53
+ ```py
54
+ import datasets
55
+ from langchain.docstore.document import Document
56
+ from langchain.text_splitter import RecursiveCharacterTextSplitter
57
+ from langchain_community.retrievers import BM25Retriever
58
+
59
+ knowledge_base = datasets.load_dataset("m-ric/huggingface_doc", split="train")
60
+ knowledge_base = knowledge_base.filter(lambda row: row["source"].startswith("huggingface/transformers"))
61
+
62
+ source_docs = [
63
+ Document(page_content=doc["text"], metadata={"source": doc["source"].split("/")[1]})
64
+ for doc in knowledge_base
65
+ ]
66
+
67
+ text_splitter = RecursiveCharacterTextSplitter(
68
+ chunk_size=500,
69
+ chunk_overlap=50,
70
+ add_start_index=True,
71
+ strip_whitespace=True,
72
+ separators=["\n\n", "\n", ".", " ", ""],
73
+ )
74
+ docs_processed = text_splitter.split_documents(source_docs)
75
+ ```
76
+
77
+ अब डॉक्यूमेंट्स तैयार हैं।
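+
+ प्रोसेस किए गए चंक्स पर एक त्वरित जांच (सटीक संख्या ऊपर दी गई स्प्लिटर सेटिंग्स पर निर्भर करती है):
+
+ ```py
+ print(f"{len(docs_processed)} chunks ready")
+ print(docs_processed[0].page_content[:200])  # peek at the first chunk
+ ```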
78
+
79
+ तो चलिए अपना एजेंटिक RAG सिस्टम बनाएं!
80
+
81
+ 👉 हमें केवल एक RetrieverTool की आवश्यकता है जिसका उपयोग हमारा एजेंट नॉलेज बेस से जानकारी प्राप्त करने के लिए कर सकता है।
82
+
83
+ चूंकि हमें टूल के एट्रीब्यूट के रूप में एक vectordb जोड़ने की आवश्यकता है, हम सरल टूल कंस्ट्रक्टर को `@tool` डेकोरेटर के साथ सीधे उपयोग नहीं कर सकते: इसलिए हम [tools tutorial](../tutorials/tools) में हाइलाइट किए गए सेटअप का पालन करेंगे।
84
+
85
+ ```py
86
+ from smolagents import Tool
87
+
88
+ class RetrieverTool(Tool):
89
+ name = "retriever"
90
+ description = "Uses semantic search to retrieve the parts of transformers documentation that could be most relevant to answer your query."
91
+ inputs = {
92
+ "query": {
93
+ "type": "string",
94
+ "description": "The query to perform. This should be semantically close to your target documents. Use the affirmative form rather than a question.",
95
+ }
96
+ }
97
+ output_type = "string"
98
+
99
+ def __init__(self, docs, **kwargs):
100
+ super().__init__(**kwargs)
101
+ self.retriever = BM25Retriever.from_documents(
102
+ docs, k=10
103
+ )
104
+
105
+ def forward(self, query: str) -> str:
106
+ assert isinstance(query, str), "Your search query must be a string"
107
+
108
+ docs = self.retriever.invoke(
109
+ query,
110
+ )
111
+ return "\nRetrieved documents:\n" + "".join(
112
+ [
113
+ f"\n\n===== Document {str(i)} =====\n" + doc.page_content
114
+ for i, doc in enumerate(docs)
115
+ ]
116
+ )
117
+
118
+ retriever_tool = RetrieverTool(docs_processed)
119
+ ```
120
+ हमने BM25 का उपयोग किया है, जो एक क्लासिक रिट्रीवल विधि है, क्योंकि इसे सेटअप करना बहुत आसान है।
121
+ रिट्रीवल सटीकता में सुधार करने के लिए, आप BM25 को डॉक्यूमेंट्स के लिए वेक्टर प्रतिनिधित्व का उपयोग करके सिमेंटिक खोज से बदल सकते हैं: इस प्रकार आप एक अच्छा एम्बेडिंग मॉडल चुनने के लिए [MTEB Leaderboard](https://huggingface.co/spaces/mteb/leaderboard) पर जा सकते हैं।
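+
+ उदाहरण के लिए, LangChain के FAISS इंटीग्रेशन के साथ ऐसे बदलाव का एक न्यूनतम स्केच (यह मानते हुए कि `faiss-cpu` इंस्टॉल है; एम्बेडिंग मॉडल केवल एक उदाहरण है, इस गाइड द्वारा निर्धारित नहीं):
+
+ ```py
+ from langchain_community.embeddings import HuggingFaceEmbeddings
+ from langchain_community.vectorstores import FAISS
+
+ embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")
+ vector_db = FAISS.from_documents(docs_processed, embeddings)
+ semantic_retriever = vector_db.as_retriever(search_kwargs={"k": 10})
+ # In RetrieverTool.__init__, store this retriever instead of BM25Retriever;
+ # forward() can keep calling self.retriever.invoke(query) unchanged.
+ ```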
122
+
123
+ अब यह सीधा है कि एक एजेंट बनाया जाए जो इस `retriever_tool` का उपयोग करेगा!
124
+
125
+
126
+ एजेंट को इनिशियलाइजेशन पर इन आर्गुमेंट्स की आवश्यकता होगी:
127
+ - `tools`: टूल्स की एक सूची जिन्हें एजेंट कॉल कर सकेगा।
128
+ - `model`: LLM जो एजेंट को पावर देता है।
129
+ हमारा `model` एक कॉलेबल होना चाहिए जो इनपुट के रूप में संदेशों की एक सूची लेता है और टेक्स्ट लौटाता है। इसे एक stop_sequences आर्ग्यूमेंट भी स्वीकार करने की आवश्यकता है जो बताता है कि जनरेशन कब रोकनी है। सुविधा के लिए, हम सीधे पैकेज में प्रदान की गई `HfApiModel` क्लास का उपयोग करते हैं ताकि एक LLM इंजन मिल सके जो Hugging Face के Inference API को कॉल करता है।
130
+
131
+ और हम [meta-llama/Llama-3.3-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct) का उपयोग llm इंजन के रूप में करते हैं क्योंकि:
132
+ - इसमें लंबा 128k कॉन्टेक्स्ट है, जो लंबे स्रोत दस्तावेजों को प्रोसेस करने में मददगार है
133
+ - यह हर समय HF के Inference API पर मुफ्त में उपलब्ध है!
134
+
135
+ _नोट:_ Inference API विभिन्न मानदंडों के आधार पर मॉडल होस्ट करता है, और डिप्लॉय किए गए मॉडल बिना पूर्व सूचना के अपडेट या बदले जा सकते हैं। इसके बारे में अधिक [यहां](https://huggingface.co/docs/api-inference/supported-models) पढ़ें।
136
+
137
+ ```py
138
+ from smolagents import HfApiModel, CodeAgent
139
+
140
+ agent = CodeAgent(
141
+ tools=[retriever_tool], model=HfApiModel("meta-llama/Llama-3.3-70B-Instruct"), max_steps=4, verbosity_level=2
142
+ )
143
+ ```
144
+
145
+ CodeAgent को इनिशियलाइज करने पर, इसे स्वचालित रूप से एक डिफ़ॉल्ट सिस्टम प्रॉम्प्ट दिया गया है जो LLM इंजन को चरण-दर-चरण प्रोसेस करने और कोड स्निपेट्स के रूप में टूल कॉल जनरेट करने के लिए कहता है, लेकिन आप आवश्यकतानुसार इस प्रॉम्प्ट टेम्पलेट को अपने से बदल सकते हैं।
146
+
147
+ जब CodeAgent का `.run()` मेथड लॉन्च किया जाता है, तो एजेंट LLM इंजन को कॉल करने का कार्य करता है, और टूल कॉल्स को निष्पादित करता है, यह सब एक लूप में होता है, जो तब तक चलता है जब तक टूल final_answer के साथ अंतिम उत्तर के रूप में नहीं बुलाया जाता।
148
+
149
+ ```py
150
+ agent_output = agent.run("For a transformers model training, which is slower, the forward or the backward pass?")
151
+
152
+ print("Final output:")
153
+ print(agent_output)
154
+ ```
155
+
156
+
docs/source/hi/examples/text_to_sql.mdx ADDED
@@ -0,0 +1,203 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ <!--Copyright 2024 The HuggingFace Team. All rights reserved.
2
+
3
+ Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
+ the License. You may obtain a copy of the License at
5
+
6
+ http://www.apache.org/licenses/LICENSE-2.0
7
+
8
+ Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
+ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
+ specific language governing permissions and limitations under the License.
11
+
12
+ ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
13
+ rendered properly in your Markdown viewer.
14
+
15
+ -->
16
+ # Text-to-SQL
17
+
18
+ [[open-in-colab]]
19
+
20
+ इस ट्यूटोरियल में, हम देखेंगे कि कैसे `smolagents` का उपयोग करके एक एजेंट को SQL का उपयोग करने के लिए लागू किया जा सकता है।
21
+
22
+ > आइए सबसे महत्वपूर्ण प्रश्न से शुरू करें: इसे साधारण क्यों नहीं रखें और एक सामान्य text-to-SQL पाइपलाइन का उपयोग करें?
23
+
24
+ एक सामान्य text-to-SQL पाइपलाइन कमजोर होती है, क्योंकि उत्पन्न SQL क्वेरी गलत हो सकती है। इससे भी बुरी बात यह है कि क्वेरी गलत हो सकती है, लेकिन कोई एरर नहीं दिखाएगी, बल्कि बिना किसी अलार्म के गलत/बेकार आउटपुट दे सकती है।
25
+
26
+
27
+ 👉 इसके बजाय, एक एजेंट सिस्टम आउटपुट का गंभीरता से निरीक्षण कर सकता है और तय कर सकता है कि क्वेरी को बदलने की जरूरत है या नहीं, इस प्रकार इसे बेहतर प्रदर्शन में मदद मिलती है।
28
+
29
+ आइए इस एजेंट को बनाएं! 💪
30
+
31
+ पहले, हम SQL एनवायरनमेंट सेटअप करते हैं:
32
+ ```py
33
+ from sqlalchemy import (
34
+ create_engine,
35
+ MetaData,
36
+ Table,
37
+ Column,
38
+ String,
39
+ Integer,
40
+ Float,
41
+ insert,
42
+ inspect,
43
+ text,
44
+ )
45
+
46
+ engine = create_engine("sqlite:///:memory:")
47
+ metadata_obj = MetaData()
48
+
49
+ # create receipts SQL table
50
+ table_name = "receipts"
51
+ receipts = Table(
52
+ table_name,
53
+ metadata_obj,
54
+ Column("receipt_id", Integer, primary_key=True),
55
+ Column("customer_name", String(16), primary_key=True),
56
+ Column("price", Float),
57
+ Column("tip", Float),
58
+ )
59
+ metadata_obj.create_all(engine)
60
+
61
+ rows = [
62
+ {"receipt_id": 1, "customer_name": "Alan Payne", "price": 12.06, "tip": 1.20},
63
+ {"receipt_id": 2, "customer_name": "Alex Mason", "price": 23.86, "tip": 0.24},
64
+ {"receipt_id": 3, "customer_name": "Woodrow Wilson", "price": 53.43, "tip": 5.43},
65
+ {"receipt_id": 4, "customer_name": "Margaret James", "price": 21.11, "tip": 1.00},
66
+ ]
67
+ for row in rows:
68
+ stmt = insert(receipts).values(**row)
69
+ with engine.begin() as connection:
70
+ cursor = connection.execute(stmt)
71
+ ```
72
+
73
+ ### Agent बनाएं
74
+
75
+ अब आइए हमारी SQL टेबल को एक टूल द्वारा पुनर्प्राप्त करने योग्य बनाएं।
76
+
77
+ टूल का विवरण विशेषता एजेंट सिस्टम द्वारा LLM के prompt में एम्बेड किया जाएगा: यह LLM को टूल का उपयोग करने के बारे में जानकारी देता है। यहीं पर हम SQL टेबल का वर्णन करना चाहते हैं।
78
+
79
+ ```py
80
+ inspector = inspect(engine)
81
+ columns_info = [(col["name"], col["type"]) for col in inspector.get_columns("receipts")]
82
+
83
+ table_description = "Columns:\n" + "\n".join([f" - {name}: {col_type}" for name, col_type in columns_info])
84
+ print(table_description)
85
+ ```
86
+
87
+ ```text
88
+ Columns:
89
+ - receipt_id: INTEGER
90
+ - customer_name: VARCHAR(16)
91
+ - price: FLOAT
92
+ - tip: FLOAT
93
+ ```
94
+
95
+ अब आइए हमारा टूल बनाएं। इसे निम्नलिखित की आवश्यकता है: (अधिक जानकारी के लिए [टूल doc](../tutorials/tools) पढ़ें)
96
+ - एक डॉकस्ट्रिंग जिसमें आर्ग्युमेंट्स की सूची वाला `Args:` भाग हो।
97
+ - इनपुट और आउटपुट दोनों पर टाइप हिंट्स।
98
+
99
+ ```py
100
+ from smolagents import tool
101
+
102
+ @tool
103
+ def sql_engine(query: str) -> str:
104
+ """
105
+ Allows you to perform SQL queries on the table. Returns a string representation of the result.
106
+ The table is named 'receipts'. Its description is as follows:
107
+ Columns:
108
+ - receipt_id: INTEGER
109
+ - customer_name: VARCHAR(16)
110
+ - price: FLOAT
111
+ - tip: FLOAT
112
+
113
+ Args:
114
+ query: The query to perform. This should be correct SQL.
115
+ """
116
+ output = ""
117
+ with engine.connect() as con:
118
+ rows = con.execute(text(query))
119
+ for row in rows:
120
+ output += "\n" + str(row)
121
+ return output
122
+ ```
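+
+ एजेंट को सौंपने से पहले आप टूल को सीधे भी जांच सकते हैं; ऊपर डाली गई rows के अनुसार सबसे महंगी receipt Woodrow Wilson की है:
+
+ ```py
+ print(sql_engine("SELECT customer_name, price FROM receipts ORDER BY price DESC LIMIT 1"))
+ # ('Woodrow Wilson', 53.43)
+ ```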
123
+
124
+ अब आइए एक एजेंट बनाएं जो इस टूल का लाभ उठाता है।
125
+
126
+ हम `CodeAgent` का उपयोग करते हैं, जो smolagents का मुख्य एजेंट क्लास है: एक एजेंट जो कोड में एक्शन लिखता है और ReAct फ्रेमवर्क के अनुसार पिछले आउटपुट पर पुनरावृत्ति कर सकता है।
127
+
128
+ मॉडल वह LLM है जो एजेंट सिस्टम को संचालित करता है। `HfApiModel` आपको HF के Inference API का उपयोग करके LLM को कॉल करने की अनुमति देता है, या तो सर्वरलेस या डेडिकेटेड एंडपॉइंट के माध्यम से, लेकिन आप किसी भी प्रोप्राइटरी API का भी उपयोग कर सकते हैं।
129
+
130
+ ```py
131
+ from smolagents import CodeAgent, HfApiModel
132
+
133
+ agent = CodeAgent(
134
+ tools=[sql_engine],
135
+ model=HfApiModel("meta-llama/Meta-Llama-3.1-8B-Instruct"),
136
+ )
137
+ agent.run("Can you give me the name of the client who got the most expensive receipt?")
138
+ ```
139
+
140
+ ### लेवल 2: टेबल जॉइन्स
141
+
142
+ अब आइए इसे और चुनौतीपूर्ण बनाएं! हम चाहते हैं कि हमारा एजेंट कई टेबल्स के बीच जॉइन को संभाल सके।
143
+
144
+ तो आइए हम प्रत्येक receipt_id के लिए वेटर्स के नाम रिकॉर्ड करने वाली एक दूसरी टेबल बनाते हैं!
145
+
146
+ ```py
147
+ table_name = "waiters"
148
+ receipts = Table(
149
+ table_name,
150
+ metadata_obj,
151
+ Column("receipt_id", Integer, primary_key=True),
152
+ Column("waiter_name", String(16), primary_key=True),
153
+ )
154
+ metadata_obj.create_all(engine)
155
+
156
+ rows = [
157
+ {"receipt_id": 1, "waiter_name": "Corey Johnson"},
158
+ {"receipt_id": 2, "waiter_name": "Michael Watts"},
159
+ {"receipt_id": 3, "waiter_name": "Michael Watts"},
160
+ {"receipt_id": 4, "waiter_name": "Margaret James"},
161
+ ]
162
+ for row in rows:
163
+ stmt = insert(receipts).values(**row)
164
+ with engine.begin() as connection:
165
+ cursor = connection.execute(stmt)
166
+ ```
167
+ चूंकि हमने टेबल्स को बदल दिया है, हम LLM को इन टेबल्स की जानकारी का उचित उपयोग करने देने के लिए `sql_engine` टूल के विवरण को नई टेबल्स के साथ अपडेट करते हैं।
168
+
169
+ ```py
170
+ updated_description = """Allows you to perform SQL queries on the table. Beware that this tool's output is a string representation of the execution output.
171
+ It can use the following tables:"""
172
+
173
+ inspector = inspect(engine)
174
+ for table in ["receipts", "waiters"]:
175
+ columns_info = [(col["name"], col["type"]) for col in inspector.get_columns(table)]
176
+
177
+ table_description = f"Table '{table}':\n"
178
+
179
+ table_description += "Columns:\n" + "\n".join([f" - {name}: {col_type}" for name, col_type in columns_info])
180
+ updated_description += "\n\n" + table_description
181
+
182
+ print(updated_description)
183
+ ```
184
+ चूंकि यह रिक्वेस्ट पिछले वाले से थोड़ी कठिन है, हम LLM इंजन को अधिक शक्तिशाली [Qwen/Qwen2.5-Coder-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-32B-Instruct) का उपयोग करने के लिए स्विच करेंगे!
185
+
186
+ ```py
187
+ sql_engine.description = updated_description
188
+
189
+ agent = CodeAgent(
190
+ tools=[sql_engine],
191
+ model=HfApiModel("Qwen/Qwen2.5-Coder-32B-Instruct"),
192
+ )
193
+
194
+ agent.run("Which waiter got more total money from tips?")
195
+ ```
196
+ यह सीधे काम करता है! सेटअप आश्चर्यजनक रूप से सरल था, है ना?
197
+
198
+ यह उदाहरण पूरा हो गया! हमने इन अवधारणाओं को छुआ है:
199
+ - नए टूल्स का निर्माण।
200
+ - टूल के विवरण को अपडेट करना।
201
+ - एक मजबूत LLM में स्विच करने से एजेंट की तर्कशक्ति में मदद मिलती है।
202
+
203
+ ✅ अब आप वह text-to-SQL सिस्टम बना सकते हैं जिसका आपने हमेशा सपना देखा है! ✨
docs/source/hi/guided_tour.mdx ADDED
@@ -0,0 +1,360 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ <!--Copyright 2024 The HuggingFace Team. All rights reserved.
2
+
3
+ Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
+ the License. You may obtain a copy of the License at
5
+
6
+ http://www.apache.org/licenses/LICENSE-2.0
7
+
8
+ Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
+ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
+ specific language governing permissions and limitations under the License.
11
+
12
+ ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
13
+ rendered properly in your Markdown viewer.
14
+
15
+ -->
16
+ # Agents - गाइडेड टूर
17
+
18
+ [[open-in-colab]]
19
+
20
+ इस गाइडेड विजिट में, आप सीखेंगे कि एक एजेंट कैसे बनाएं, इसे कैसे चलाएं, और अपने यूज-केस के लिए बेहतर काम करने के लिए इसे कैसे कस्टमाइज़ करें।
21
+
22
+ ### अपना Agent बनाना
23
+
24
+ एक मिनिमल एजेंट को इनिशियलाइज़ करने के लिए, आपको कम से कम इन दो आर्ग्यूमेंट्स की आवश्यकता है:
25
+
26
+ - `model`, आपके एजेंट को पावर देने के लिए एक टेक्स्ट-जनरेशन मॉडल - क्योंकि एजेंट एक सिंपल LLM से अलग है, यह एक सिस्टम है जो LLM को अपने इंजन के रूप में उपयोग करता है। आप इनमें से कोई भी विकल्प उपयोग कर सकते हैं:
27
+ - [`TransformersModel`] `transformers` पाइपलाइन को पहले से इनिशियलाइज़ करता है जो `transformers` का उपयोग करके आपकी लोकल मशीन पर इन्फरेंस चलाने के लिए होता है।
28
+ - [`HfApiModel`] अंदर से `huggingface_hub.InferenceClient` का लाभ उठाता है।
29
+ - [`LiteLLMModel`] आपको [LiteLLM](https://docs.litellm.ai/) के माध्यम से 100+ अलग-अलग मॉडल्स को कॉल करने देता है!
30
+
31
+ - `tools`, `Tools` की एक लिस्ट जिसे एजेंट टास्क को हल करने के लिए उपयोग कर सकता है। यह एक खाली लिस्ट हो सकती है। आप ऑप्शनल आर्ग्यूमेंट `add_base_tools=True` को परिभाषित करके अपनी `tools` लिस्ट के ऊपर डिफ़ॉल्ट टूलबॉक्स भी जोड़ सकते हैं।
32
+
33
+ एक बार जब आपके पास ये दो आर्ग्यूमेंट्स, `tools` और `model` हैं, तो आप एक एजेंट बना सकते हैं और इसे चला सकते हैं। आप कोई भी LLM उपयोग कर सकते हैं, या तो [Hugging Face API](https://huggingface.co/docs/api-inference/en/index), [transformers](https://github.com/huggingface/transformers/), [ollama](https://ollama.com/), या [LiteLLM](https://www.litellm.ai/) के माध्यम से।
34
+
35
+ <hfoptions id="एक LLM चुनें">
36
+ <hfoption id="Hugging Face API">
37
+
38
+ Hugging Face API टोकन के बिना उपयोग करने के लिए मुफ्त है, लेकिन फिर इसमें रेट लिमिटेशन होगी।
39
+
40
+ गेटेड मॉडल्स तक पहुंचने या PRO अकाउंट के साथ अपनी रेट लिमिट्स बढ़ाने के लिए, आपको एनवायरनमेंट वेरिएबल `HF_TOKEN` सेट करना होगा या `HfApiModel` के इनिशियलाइजेशन पर `token` वेरिएबल पास करना होगा।
41
+
42
+ ```python
43
+ from smolagents import CodeAgent, HfApiModel
44
+
45
+ model_id = "meta-llama/Llama-3.3-70B-Instruct"
46
+
47
+ model = HfApiModel(model_id=model_id, token="<YOUR_HUGGINGFACEHUB_API_TOKEN>")
48
+ agent = CodeAgent(tools=[], model=model, add_base_tools=True)
49
+
50
+ agent.run(
51
+ "Could you give me the 118th number in the Fibonacci sequence?",
52
+ )
53
+ ```
54
+ </hfoption>
55
+ <hfoption id="Local Transformers Model">
56
+
57
+ ```python
58
+ from smolagents import CodeAgent, TransformersModel
59
+
60
+ model_id = "meta-llama/Llama-3.2-3B-Instruct"
61
+
62
+ model = TransformersModel(model_id=model_id)
63
+ agent = CodeAgent(tools=[], model=model, add_base_tools=True)
64
+
65
+ agent.run(
66
+ "Could you give me the 118th number in the Fibonacci sequence?",
67
+ )
68
+ ```
69
+ </hfoption>
70
+ <hfoption id="OpenAI या Anthropic API">
71
+
72
+ `LiteLLMModel` का उपयोग करने के लिए, आपको एनवायरनमेंट वेरिएबल `ANTHROPIC_API_KEY` या `OPENAI_API_KEY` सेट करना होगा, या इनिशियलाइजेशन पर `api_key` वेरिएबल पास करना होगा।
73
+
74
+ ```python
75
+ from smolagents import CodeAgent, LiteLLMModel
76
+
77
+ model = LiteLLMModel(model_id="anthropic/claude-3-5-sonnet-latest", api_key="YOUR_ANTHROPIC_API_KEY") # Could use 'gpt-4o'
78
+ agent = CodeAgent(tools=[], model=model, add_base_tools=True)
79
+
80
+ agent.run(
81
+ "Could you give me the 118th number in the Fibonacci sequence?",
82
+ )
83
+ ```
84
+ </hfoption>
85
+ <hfoption id="Ollama">
86
+
87
+ ```python
88
+ from smolagents import CodeAgent, LiteLLMModel
89
+
90
+ model = LiteLLMModel(
91
+ model_id="ollama_chat/llama3.2", # This model is a bit weak for agentic behaviours though
92
+ api_base="http://localhost:11434", # replace with 127.0.0.1:11434 or remote open-ai compatible server if necessary
93
+ api_key="YOUR_API_KEY", # replace with API key if necessary
94
+ num_ctx=8192 # ollama default is 2048 which will fail horribly. 8192 works for easy tasks, more is better. Check https://huggingface.co/spaces/NyxKrage/LLM-Model-VRAM-Calculator to calculate how much VRAM this will need for the selected model.
95
+ )
96
+
97
+ agent = CodeAgent(tools=[], model=model, add_base_tools=True)
98
+
99
+ agent.run(
100
+ "Could you give me the 118th number in the Fibonacci sequence?",
101
+ )
102
+ ```
103
+ </hfoption>
104
+ </hfoptions>
105
+
106
+ #### CodeAgent और ToolCallingAgent
107
+
108
+ [`CodeAgent`] हमारा डिफ़ॉल्ट एजेंट है। यह हर स्टेप पर पायथन कोड स्निपेट्स लिखेगा और एक्जीक्यूट करेगा।
109
+
110
+ डिफ़ॉल्ट रूप से, एक्जीक्यूशन आपके लोकल एनवायरनमेंट में किया जाता है।
111
+ यह सुरक्षित होना चाहिए क्योंकि केवल वही फ़ंक्शंस कॉल किए जा सकते हैं जो आपने प्रदान किए हैं (विशेष रूप से यदि यह केवल Hugging Face टूल्स हैं) और पूर्व-परिभाषित सुरक्षित फ़ंक्शंस जैसे `print` या `math` मॉड्यूल से फ़ंक्शंस, इसलिए आप पहले से ही सीमित हैं कि क्या एक्जीक्यूट किया जा सकता है।
112
+
113
+ पायथन इंटरप्रेटर डिफ़ॉल्ट रूप से सेफ लिस्ट के बाहर इम्पोर्ट की अनुमति नहीं देता है, इसलिए सबसे स्पष्ट अटैक समस्या नहीं होनी चाहिए।
114
+ आप अपने [`CodeAgent`] के इनिशियलाइजेशन पर आर्ग्यूमेंट `additional_authorized_imports` में स्ट्रिंग्स की लिस्ट के रूप में अतिरिक्त मॉड्यूल्स को अधिकृत कर सकते हैं।
115
+
116
+ ```py
117
+ model = HfApiModel()
118
+ agent = CodeAgent(tools=[], model=model, additional_authorized_imports=['requests', 'bs4'])
119
+ agent.run("Could you get me the title of the page at url 'https://huggingface.co/blog'?")
120
+ ```
121
+
122
+ > [!WARNING]
123
+ > LLM आर्बिट्ररी कोड जनरेट कर सकता है जो फिर एक्जीक्यूट किया जाएगा: कोई असुरक्षित इम्पोर्ट न जोड़ें!
124
+
125
+ एक्जीक्यूशन किसी भी कोड पर रुक जाएगा जो एक अवैध ऑपरेशन करने का प्रयास करता है या यदि एजेंट द्वारा जनरेट किए गए कोड में एक रेगुलर पायथन एरर है।
126
+
127
+ आप [E2B कोड एक्जीक्यूटर](https://e2b.dev/docs#what-is-e2-b) या Docker का उपयोग लोकल पायथन इंटरप्रेटर के बजाय कर सकते हैं। E2B के लिए, पहले [`E2B_API_KEY` एनवायरनमेंट वेरिएबल सेट करें](https://e2b.dev/dashboard?tab=keys) और फिर एजेंट इनिशियलाइजेशन पर `executor_type="e2b"` पास करें। Docker के लिए, इनिशियलाइजेशन के दौरान `executor_type="docker"` पास करें।
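+
+ उदाहरण के लिए (एक न्यूनतम स्केच; E2B वैरिएंट के लिए आपके एनवायरनमेंट में `E2B_API_KEY` होना चाहिए):
+
+ ```py
+ # Remote sandboxed execution on E2B
+ agent = CodeAgent(tools=[], model=model, executor_type="e2b")
+
+ # Or sandboxed execution in a local Docker container
+ agent = CodeAgent(tools=[], model=model, executor_type="docker")
+ ```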
128
+
129
+ > [!TIP]
130
+ > कोड एक्जीक्यूशन के बारे में और जानें [इस ट्यूटोरियल में](tutorials/secure_code_execution)।
131
+
132
+ हम JSON-जैसे ब्लॉब्स के रूप में एक्शन लिखने के व्यापक रूप से उपयोग किए जाने वाले तरीके का भी समर्थन करते हैं: यह [`ToolCallingAgent`] है, यह बहुत कुछ [`CodeAgent`] की तरह ही काम करता है, बेशक `additional_authorized_imports` के बिना क्योंकि यह कोड एक्जीक्यूट नहीं करता।
133
+
134
+ ```py
135
+ from smolagents import ToolCallingAgent
136
+
137
+ agent = ToolCallingAgent(tools=[], model=model)
138
+ agent.run("Could you get me the title of the page at url 'https://huggingface.co/blog'?")
139
+ ```
140
+
141
+ ### एजेंट रन का निरीक्षण
142
+
143
+ रन के बाद क्या हुआ यह जांचने के लिए यहाँ कुछ उपयोगी एट्रिब्यूट्स हैं:
144
+ - `agent.logs` एजेंट के फाइन-ग्रेन्ड लॉग्स को स्टोर करता है। एजेंट के रन के हर स्टेप पर, सब कुछ एक डिक्शनरी में स्टोर किया जाता है जो फिर `agent.logs` में जोड़ा जाता है।
145
+ - `agent.write_memory_to_messages()` चलाने से LLM के लिए एजेंट के लॉग्स की एक इनर मेमोरी बनती है, चैट मैसेज की लिस्ट के रूप में। यह मेथड लॉग के प्रत्येक स्टेप पर जाता है और केवल वही स्टोर करता है जिसमें यह एक मैसेज के रूप में रुचि रखता है: उदाहरण के लिए, यह सिस्टम प्रॉम्प्ट और टास्क को अलग-अलग मैसेज के रूप में सेव करेगा, फिर प्रत्येक स्टेप के लिए यह LLM आउटपुट को एक मैसेज के रूप में और टूल कॉल आउटपुट को दूसरे मैसेज के रूप में स्टोर करेगा।
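+
+ उदाहरण के लिए, रन के बाद आप मेमोरी को मैसेज के रूप में फिर से देख सकते हैं (एक अनुमानित स्केच; मैसेज की सटीक संरचना वर्शन के अनुसार बदल सकती है):
+
+ ```py
+ agent.run("What's the 20th Fibonacci number?")
+
+ # Replay the run as chat messages
+ for message in agent.write_memory_to_messages():
+     print(message["role"], "->", str(message["content"])[:150])
+ ```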
146
+
147
+ ## टूल्स
148
+
149
+ टूल एक एटॉमिक फ़ंक्शन है जिसे एजेंट द्वारा उपयोग किया जाता है। LLM द्वारा उपयोग किए जाने के लिए, इसे कुछ एट्रिब्यूट्स की भी आवश्यकता होती है जो इसकी API बनाते हैं और LLM को यह बताने के लिए उपयोग किए जाएंगे कि इस टूल को कैसे कॉल करें:
150
+ - एक नाम
151
+ - एक विवरण
152
+ - इनपुट प्रकार और विवरण
153
+ - एक आउटपुट प्रकार
154
+
155
+ आप उदाहरण के लिए [`PythonInterpreterTool`] को चेक कर सकते हैं: इसमें एक नाम, विवरण, इनपुट विवरण, एक आउटपुट प्रकार, और एक्शन करने के लिए एक `forward` मेथड है।
156
+
157
+ जब एजेंट इनिशियलाइज़ किया जाता है, टूल एट्रिब्यूट्स का उपयोग एक टूल विवरण जनरेट करने के लिए किया जाता है जो एजेंट के सिस्टम प्रॉम्प्ट में बेक किया जाता है। यह एजेंट को बताता है कि वह कौन से टूल्स उपयोग कर सकता है और क्यों।
158
+
159
+ ### डिफ़ॉल्ट टूलबॉक्स
160
+
161
+ `smolagents` एजेंट्स को सशक्त बनाने के लिए एक डिफ़ॉल्ट टूलबॉक्स के साथ आता है, जिसे आप आर्ग्यूमेंट `add_base_tools = True` के साथ अपने एजेंट में इनिशियलाइजेशन पर जोड़ सकते हैं:
162
+
163
+ - **DuckDuckGo वेब सर्च**: DuckDuckGo ब्राउज़र का उपयोग करके वेब सर्च करता है।
164
+ - **पायथन कोड इंटरप्रेटर**: आपका LLM जनरेटेड पायथन कोड एक सुरक्षित एनवायरनमेंट में चलाता है। यह टूल [`ToolCallingAgent`] में केवल तभी जोड़ा जाएगा जब आप इसे `add_base_tools=True` के साथ इनिशियलाइज़ करते हैं, क्योंकि कोड-बेस्ड एजेंट पहले से ही नेटिव रूप से पायथन कोड एक्जीक्यूट कर सकता है
165
+ - **ट्रांसक्राइबर**: Whisper-Turbo पर बनाया गया एक स्पीच-टू-टेक्स्ट पाइपलाइन जो ऑडियो को टेक्स्ट में ट्रांसक्राइब करता है।
166
+
167
+ आप मैन्युअल रूप से एक टूल का उपयोग उसके आर्ग्यूमेंट्स के साथ कॉल करके कर सकते हैं।
168
+
169
+ ```python
170
+ from smolagents import DuckDuckGoSearchTool
171
+
172
+ search_tool = DuckDuckGoSearchTool()
173
+ print(search_tool("Who's the current president of Russia?"))
174
+ ```
175
+
176
+ ### अपने कस्टम टूल बनाएं
177
+
178
+ आप ऐसे उपयोग के मामलों के लिए अपने खुद के टूल बना सकते हैं जो Hugging Face के डिफ़ॉल्ट टूल्स द्वारा कवर नहीं किए गए हैं।
179
+ उदाहरण के लिए, चलिए एक टूल बनाते हैं जो दिए गए कार्य (task) के लिए हब से सबसे अधिक डाउनलोड किए गए मॉडल को रिटर्न करता है।
180
+
181
+ आप नीचे दिए गए कोड से शुरुआत करेंगे।
182
+
183
+ ```python
184
+ from huggingface_hub import list_models
185
+
186
+ task = "text-classification"
187
+
188
+ most_downloaded_model = next(iter(list_models(filter=task, sort="downloads", direction=-1)))
189
+ print(most_downloaded_model.id)
190
+ ```
191
+
192
+ यह कोड आसानी से टूल में बदला जा सकता है, बस इसे एक फ़ंक्शन में रैप करें और `tool` डेकोरेटर जोड़ें:
193
+ यह टूल बनाने का एकमात्र तरीका नहीं है: आप इसे सीधे [`Tool`] का सबक्लास बनाकर भी परिभाषित कर सकते हैं, जो आपको अधिक लचीलापन प्रदान करता है, जैसे भारी क्लास एट्रिब्यूट्स को इनिशियलाइज़ करने की संभावना।
194
+
195
+ चलो देखते हैं कि यह दोनों विकल्पों के लिए कैसे काम करता है:
196
+
197
+ <hfoptions id="build-a-tool">
198
+ <hfoption id="@tool के साथ एक फ़ंक्शन को डेकोरेट करें">
199
+
200
+ ```py
201
+ from huggingface_hub import list_models
+
+ from smolagents import tool
202
+
203
+ @tool
204
+ def model_download_tool(task: str) -> str:
205
+ """
206
+ This is a tool that returns the most downloaded model of a given task on the Hugging Face Hub.
207
+ It returns the name of the checkpoint.
208
+
209
+ Args:
210
+ task: The task for which to get the download count.
211
+ """
212
+ most_downloaded_model = next(iter(list_models(filter=task, sort="downloads", direction=-1)))
213
+ return most_downloaded_model.id
214
+ ```
215
+
216
+ फ़ंक्शन को चाहिए:
217
+ - एक स्पष्ट नाम: नाम टूल के कार्य को स्पष्ट रूप से बताने वाला होना चाहिए ताकि इसे चलाने वाले LLM को आसानी हो। चूंकि यह टूल कार्य के लिए सबसे अधिक डाउनलोड किए गए मॉडल को लौटाता है, इसका नाम `model_download_tool` रखा गया है।
218
+ - इनपुट और आउटपुट पर टाइप हिंट्स।
219
+ - एक विवरण: इसमें 'Args:' भाग शामिल होना चाहिए, जिसमें प्रत्येक आर्ग्युमेंट का वर्णन (बिना टाइप संकेत के) किया गया हो। यह विवरण एक निर्देश मैनुअल की तरह होता है जो LLM को टूल चलाने में मदद करता है। इसे अन���ेखा न करें।
220
+ इन सभी तत्वों को एजेंट की सिस्टम प्रॉम्प्ट में स्वचालित रूप से शामिल किया जाएगा: इसलिए इन्हें यथासंभव स्पष्ट बनाने का प्रयास करें!
221
+
222
+ > [!TIP]
223
+ > यह परिभाषा प्रारूप `apply_chat_template` में उपयोग की गई टूल स्कीमा जैसा ही है, केवल अतिरिक्त `tool` डेकोरेटर जोड़ा गया है: हमारे टूल उपयोग API के बारे में अधिक पढ़ें [यहाँ](https://huggingface.co/blog/unified-tool-use#passing-tools-to-a-chat-template)।
224
+ </hfoption>
225
+ <hfoption id="सबक्लास टूल">
226
+
227
+ ```py
228
+ from huggingface_hub import list_models
+
+ from smolagents import Tool
229
+
230
+ class ModelDownloadTool(Tool):
231
+ name = "model_download_tool"
232
+ description = "This is a tool that returns the most downloaded model of a given task on the Hugging Face Hub. It returns the name of the checkpoint."
233
+ inputs = {"task": {"type": "string", "description": "The task for which to get the download count."}}
234
+ output_type = "string"
235
+
236
+ def forward(self, task: str) -> str:
237
+ most_downloaded_model = next(iter(list_models(filter=task, sort="downloads", direction=-1)))
238
+ return most_downloaded_model.id
239
+ ```
240
+
241
+ सबक्लास को निम्नलिखित एट्रिब्यूट्स की आवश्यकता होती है:
242
+ - एक स्पष्ट `name`: नाम टूल के कार्य को स्पष्ट रूप से बताने वाला होना चाहिए।
243
+ - एक `description`: यह भी LLM के लिए निर्देश मैनुअल की तरह काम करता है।
244
+ - इनपुट प्रकार और उनके विवरण।
245
+ - आउटपुट प्रकार।
246
+ इन सभी एट्रिब्यूट्स को एजेंट की सिस्टम प्रॉम्प्ट में स्वचालित रूप से शामिल किया जाएगा, इन्हें स्पष्ट और विस्तृत बनाएं।
247
+ </hfoption>
248
+ </hfoptions>
249
+
250
+
251
+ आप सीधे अपने एजेंट को इनिशियलाइज़ कर सकते हैं:
252
+ ```py
253
+ from smolagents import CodeAgent, HfApiModel
254
+ agent = CodeAgent(tools=[model_download_tool], model=HfApiModel())
255
+ agent.run(
256
+ "Can you give me the name of the model that has the most downloads in the 'text-to-video' task on the Hugging Face Hub?"
257
+ )
258
+ ```
259
+
260
+ लॉग्स इस प्रकार होंगे:
261
+ ```text
262
+ ╭──────────────────────────────────────── New run ─────────────────────────────────────────╮
263
+ │ │
264
+ │ Can you give me the name of the model that has the most downloads in the 'text-to-video' │
265
+ │ task on the Hugging Face Hub? │
266
+ │ │
267
+ ╰─ HfApiModel - Qwen/Qwen2.5-Coder-32B-Instruct ───────────────────────────────────────────╯
268
+ ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ Step 0 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
269
+ ╭─ Executing this code: ───────────────────────────────────────────────────────────────────╮
270
+ │ 1 model_name = model_download_tool(task="text-to-video") │
271
+ │ 2 print(model_name) │
272
+ ╰──────────────────────────────────────────────────────────────────────────────────────────╯
273
+ Execution logs:
274
+ ByteDance/AnimateDiff-Lightning
275
+
276
+ Out: None
277
+ [Step 0: Duration 0.27 seconds| Input tokens: 2,069 | Output tokens: 60]
278
+ ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ Step 1 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
279
+ ╭─ Executing this code: ───────────────────────────────────────────────────────────────────╮
280
+ │ 1 final_answer("ByteDance/AnimateDiff-Lightning") │
281
+ ╰──────────────────────────────────────────────────────────────────────────────────────────╯
282
+ Out - Final answer: ByteDance/AnimateDiff-Lightning
283
+ [Step 1: Duration 0.10 seconds| Input tokens: 4,288 | Output tokens: 148]
284
+ Out[20]: 'ByteDance/AnimateDiff-Lightning'
285
+ ```
286
+
287
+ > [!TIP]
288
+ > टूल्स के बारे में अधिक पढ़ें [dedicated tutorial](./tutorials/tools#टूल-क्या-है-और-इसे-कैसे-बनाएं) में।
289
+
290
+ ## मल्टी-एजेंट्स
291
+
292
+ Microsoft के फ्रेमवर्क [Autogen](https://huggingface.co/papers/2308.08155) के साथ मल्टी-एजेंट सिस्टम्स की शुरुआत हुई।
293
+
294
+ इस प्रकार के फ्रेमवर्क में, आपके कार्य को हल करने के लिए कई एजेंट्स एक साथ काम करते हैं, न कि केवल एक।
295
+ यह अधिकांश बेंचमार्क्स पर बेहतर प्रदर्शन देता है। इसका कारण यह है कि कई कार्यों के लिए, एक सर्व-समावेशी प्रणाली के बजाय, आप उप-कार्यों पर विशेषज्ञता रखने वाली इकाइयों को पसंद करेंगे। इस तरह, अलग-अलग टूल सेट्स और मेमोरी वाले एजेंट्स के पास विशेषकरण की अधिक कुशलता होती है। उदाहरण के लिए, कोड उत्पन्न करने वाले एजेंट की मेमोरी को वेब सर्च एजेंट द्वारा देखे गए वेबपेजों की सभी सामग्री से क्यों भरें? इन्हें अलग रखना बेहतर है।
296
+
297
+ आप `smolagents` का उपयोग करके आसानी से श्रेणीबद्ध मल्टी-एजेंट सिस्टम्स बना सकते हैं।
298
+
299
+ ऐसा करने के लिए, एजेंट को [`ManagedAgent`] ऑब्जेक्ट में समाहित करें। इस ऑब्जेक्ट को `agent`, `name`, और `description` जैसे आर्ग्युमेंट्स की आवश्यकता होती है; `description` फिर मैनेजर एजेंट के सिस्टम प्रॉम्प्ट में एम्बेड किया जाता है।
300
+
301
+ यहां एक ऐसे मैनेजर एजेंट का उदाहरण है जो हमारे [`DuckDuckGoSearchTool`] का उपयोग करने वाले एक विशिष्ट वेब सर्च एजेंट को प्रबंधित करता है।
302
+
303
+ ```py
304
+ from smolagents import CodeAgent, HfApiModel, DuckDuckGoSearchTool, ManagedAgent
305
+
306
+ model = HfApiModel()
307
+
308
+ web_agent = CodeAgent(tools=[DuckDuckGoSearchTool()], model=model)
309
+
310
+ managed_web_agent = ManagedAgent(
311
+     agent=web_agent,
312
+     name="web_search",
313
+     description="Runs web searches for you. Give it your query as an argument."
314
+ )
315
+
316
+ manager_agent = CodeAgent(
317
+     tools=[], model=model, managed_agents=[managed_web_agent]
318
+ )
319
+
320
+ manager_agent.run("Who is the CEO of Hugging Face?")
321
+ ```
322
+
323
+ > [!TIP]
324
+ > For an in-depth example of an efficient multi-agent implementation, head to [how we pushed our multi-agent system to the top of the GAIA leaderboard](https://huggingface.co/blog/beating-gaia).
325
+
326
+
327
+ ## Talk with your agent and visualize its thoughts in a cool Gradio interface
328
+
329
+ You can use `GradioUI` to interactively submit tasks to your agent and observe its thought and execution process. Here is an example:
330
+
331
+ ```py
332
+ from smolagents import (
333
+     load_tool,
334
+     CodeAgent,
335
+     HfApiModel,
336
+     GradioUI
337
+ )
338
+
339
+ # Import tool from Hub
340
+ image_generation_tool = load_tool("m-ric/text-to-image", trust_remote_code=True)
341
+
342
+ model = HfApiModel()  # uses the default Inference API model
343
+
344
+ # Initialize the agent with the image generation tool
345
+ agent = CodeAgent(tools=[image_generation_tool], model=model)
346
+
347
+ GradioUI(agent).launch()
348
+ ```
349
+
350
+ Under the hood, when the user types a new answer, the agent is relaunched with `agent.run(user_request, reset=False)`.
351
+ The `reset=False` flag means the agent's memory is not cleared before launching this new task, which lets the conversation go on.
352
+
353
+ You can also use this `reset=False` argument to keep the conversation going in any other agentic application.
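+ 
+ For instance, here is a minimal sketch of a REPL-style chat loop built on this flag (the loop itself is illustrative, not a library feature):
+ 
+ ```py
+ from smolagents import CodeAgent, HfApiModel
+ 
+ agent = CodeAgent(tools=[], model=HfApiModel())
+ 
+ while True:
+     user_request = input("You: ")
+     if user_request.lower() in ("quit", "exit"):
+         break
+     # reset=False keeps prior steps in memory, so follow-up
+     # requests can refer to earlier answers
+     print("Agent:", agent.run(user_request, reset=False))
+ ```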
354
+
355
+ ## Next steps
356
+
357
+ For more in-depth usage, you will then want to check out our tutorials:
358
+ - [the explanation of how our code agents work](./tutorials/secure_code_execution)
359
+ - [this guide on how to build good agents](./tutorials/building_good_agents)
360
+ - [the in-depth guide for tool usage](./tutorials/tools).
docs/source/hi/index.mdx ADDED
@@ -0,0 +1,54 @@
1
+ <!--Copyright 2024 The HuggingFace Team. All rights reserved.
2
+
3
+ Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
+ the License. You may obtain a copy of the License at
5
+
6
+ http://www.apache.org/licenses/LICENSE-2.0
7
+
8
+ Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
+ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
+ specific language governing permissions and limitations under the License.
11
+
12
+ ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
13
+ rendered properly in your Markdown viewer.
14
+ -->
15
+
16
+ # `smolagents`
17
+
18
+ <div class="flex justify-center">
19
+ <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/smolagents/license_to_call.png" width=100%/>
20
+ </div>
21
+
22
+ This library is the simplest framework out there to build powerful agents! By the way, what are "agents"? We provide our definition [on this page](conceptual_guides/intro_agents), where you'll also find tips for when to use them or not (spoiler: you'll often be better off without agents).
23
+
24
+ This library offers:
25
+
26
+ ✨ **Simplicity**: the logic for agents fits in roughly a thousand lines of code. We kept abstractions to their minimal shape above raw code!
27
+
28
+ 🌐 **Support for any LLM**: it supports models hosted on the Hub loaded in their `transformers` version or through our inference API, but also models from OpenAI, Anthropic... it's really easy to power an agent with any LLM.
29
+
30
+ 🧑‍💻 **First-class support for Code Agents**, i.e. agents that write their actions in code (as opposed to agents being used to write code), [read more here](tutorials/secure_code_execution).
31
+
32
+ 🤗 **Hub integrations**: you can share and load tools to/from the Hub, and more is to come!
34
+
35
+ <div class="mt-10">
36
+ <div class="w-full flex flex-col space-y-4 md:space-y-0 md:grid md:grid-cols-2 md:gap-y-4 md:gap-x-5">
37
+ <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./guided_tour"
38
+ ><div class="w-full text-center bg-gradient-to-br from-blue-400 to-blue-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">Guided tour</div>
39
+ <p class="text-gray-700">Learn the basics and become familiar with using agents. Start here if you're using agents for the first time!</p>
40
+ </a>
41
+ <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./examples/text_to_sql"
42
+ ><div class="w-full text-center bg-gradient-to-br from-indigo-400 to-indigo-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">How-to guides</div>
43
+ <p class="text-gray-700">Practical guides to help you achieve a specific goal: create an agent to generate and test SQL queries!</p>
44
+ </a>
45
+ <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./conceptual_guides/intro_agents"
46
+ ><div class="w-full text-center bg-gradient-to-br from-pink-400 to-pink-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">Conceptual guides</div>
47
+ <p class="text-gray-700">High-level explanations for building a better understanding of important topics.</p>
48
+ </a>
49
+ <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./tutorials/building_good_agents"
50
+ ><div class="w-full text-center bg-gradient-to-br from-purple-400 to-purple-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">Tutorials</div>
51
+ <p class="text-gray-700">Horizontal tutorials that cover important aspects of building agents.</p>
52
+ </a>
53
+ </div>
54
+ </div>
docs/source/hi/reference/agents.mdx ADDED
@@ -0,0 +1,166 @@
1
+ <!--Copyright 2024 The HuggingFace Team. All rights reserved.
2
+
3
+ Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
+ the License. You may obtain a copy of the License at
5
+
6
+ http://www.apache.org/licenses/LICENSE-2.0
7
+
8
+ Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
+ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
+ specific language governing permissions and limitations under the License.
11
+
12
+ ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
13
+ rendered properly in your Markdown viewer.
14
+
15
+ -->
16
+ # Agents
17
+
18
+ <Tip warning={true}>
19
+
20
+ Smolagents is an experimental API which is subject to change at any time. Results returned by the agents can vary as the APIs or underlying models are prone to change.
21
+
22
+ </Tip>
23
+
24
+ To learn more about agents and tools, make sure to read the [introductory guide](../index).
25
+ This page contains the API docs for the underlying classes.
26
+
27
+ ## Agents
28
+
29
+ Our agents inherit from [`MultiStepAgent`], which means they can act in multiple steps, each step consisting of one thought, then one tool call and execution. Read more in [this conceptual guide](../conceptual_guides/react).
30
+
31
+ We provide two types of agents, based on the main [`Agent`] class.
32
+ - [`CodeAgent`] is the default agent; it writes its tool calls in Python code.
33
+ - [`ToolCallingAgent`] writes its tool calls in JSON.
34
+
35
+ Both require a `model` and a list of tools `tools` as arguments at initialization.
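+ 
+ As a minimal sketch (the search tool is just an example choice), both classes are constructed the same way; only the action format differs:
+ 
+ ```py
+ from smolagents import CodeAgent, ToolCallingAgent, HfApiModel, DuckDuckGoSearchTool
+ 
+ model = HfApiModel()
+ tools = [DuckDuckGoSearchTool()]
+ 
+ code_agent = CodeAgent(tools=tools, model=model)        # actions written as Python code
+ json_agent = ToolCallingAgent(tools=tools, model=model) # actions written as JSON tool calls
+ ```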
36
+
37
+ ### Classes of agents
38
+
39
+ [[autodoc]] MultiStepAgent
40
+
41
+ [[autodoc]] CodeAgent
42
+
43
+ [[autodoc]] ToolCallingAgent
44
+
45
+ ### ManagedAgent
46
+
47
+ _This class is deprecated since 1.8.0: now you just need to pass `name` and `description` attributes to an agent to use it directly, as was previously done with a `ManagedAgent`._
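+ 
+ A minimal sketch of the post-1.8.0 style (tool choice and names here are illustrative): pass `name` and `description` directly to the managed agent:
+ 
+ ```py
+ from smolagents import CodeAgent, HfApiModel, DuckDuckGoSearchTool
+ 
+ model = HfApiModel()
+ 
+ # name/description replace the old ManagedAgent wrapper
+ web_agent = CodeAgent(
+     tools=[DuckDuckGoSearchTool()],
+     model=model,
+     name="web_search",
+     description="Runs web searches for you. Give it your query as an argument.",
+ )
+ 
+ manager_agent = CodeAgent(tools=[], model=model, managed_agents=[web_agent])
+ ```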
48
+
49
+ ### stream_to_gradio
50
+
51
+ [[autodoc]] stream_to_gradio
52
+
53
+ ### GradioUI
54
+
55
+ [[autodoc]] GradioUI
56
+
57
+ ## Models
58
+
59
+ You're free to create and use your own models to power your agent.
60
+
61
+ You could use any `model` callable for your agent, as long as:
62
+ 1. It follows the [messages format](./chat_templating) (`List[Dict[str, str]]`) for its input `messages`, and it returns a `str`.
63
+ 2. It stops generating output *before* the sequences passed in the argument `stop_sequences`.
64
+
65
+ For defining your LLM, you can make a `custom_model` method which accepts a list of [messages](./chat_templating) and returns an object with a `.content` attribute containing the text. This callable also needs to accept a `stop_sequences` argument that indicates when to stop generating.
66
+
67
+ ```python
68
+ from huggingface_hub import login, InferenceClient
69
+
70
+ login("<YOUR_HUGGINGFACEHUB_API_TOKEN>")
71
+
72
+ model_id = "meta-llama/Llama-3.3-70B-Instruct"
73
+
74
+ client = InferenceClient(model=model_id)
75
+
76
+ def custom_model(messages, stop_sequences=["Task"]):
77
+     response = client.chat_completion(messages, stop=stop_sequences, max_tokens=1000)
78
+     # The returned message object carries the generated text in its `.content` attribute
+     answer = response.choices[0].message
79
+     return answer
80
+ ```
81
+
82
+ Additionally, `custom_model` can take a `grammar` argument. If you specify a `grammar` upon agent initialization, this argument will be passed to the calls to the model with the `grammar` you defined, to allow [constrained generation](https://huggingface.co/docs/text-generation-inference/conceptual/guidance) in order to force properly-formatted agent outputs.
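+ 
+ As a hedged sketch, reusing the `client` from the snippet above (whether the server accepts a grammar, and in which field, depends on your inference backend; `response_format` below is the `chat_completion` parameter from `huggingface_hub`, used here as an assumption):
+ 
+ ```py
+ def custom_model_with_grammar(messages, stop_sequences=["Task"], grammar=None):
+     # Forward the grammar (if any) so the server can constrain generation
+     extra = {"response_format": grammar} if grammar is not None else {}
+     response = client.chat_completion(
+         messages, stop=stop_sequences, max_tokens=1000, **extra
+     )
+     return response.choices[0].message
+ ```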
83
+
84
+ ### TransformersModel
85
+
86
+ For convenience, we have added a `TransformersModel` that implements the points above by building a local `transformers` pipeline for the `model_id` given at initialization.
87
+
88
+ ```python
89
+ from smolagents import TransformersModel
90
+
91
+ model = TransformersModel(model_id="HuggingFaceTB/SmolLM-135M-Instruct")
92
+
93
+ print(model([{"role": "user", "content": "Ok!"}], stop_sequences=["great"]))
94
+ ```
95
+ ```text
96
+ >>> What a
97
+ ```
98
+
99
+ [[autodoc]] TransformersModel
100
+
101
+ ### HfApiModel
102
+
103
+ The `HfApiModel` wraps an [HF Inference API](https://huggingface.co/docs/api-inference/index) client for the execution of the LLM.
104
+
105
+ ```python
106
+ from smolagents import HfApiModel
107
+
108
+ messages = [
109
+     {"role": "user", "content": "Hello, how are you?"},
110
+     {"role": "assistant", "content": "I'm doing great. How can I help you today?"},
111
+     {"role": "user", "content": "No need to help, take it easy."},
112
+ ]
113
+
114
+ model = HfApiModel()
115
+ print(model(messages))
116
+ ```
117
+ ```text
118
+ >>> Of course! If you change your mind, feel free to reach out. Take care!
119
+ ```
120
+ [[autodoc]] HfApiModel
121
+
122
+ ### LiteLLMModel
123
+
124
+ The `LiteLLMModel` leverages [LiteLLM](https://www.litellm.ai/) to support 100+ LLMs from various providers.
125
+ You can pass kwargs upon model initialization that will then be used whenever using the model; for instance, below we pass `temperature`.
126
+
127
+ ```python
128
+ from smolagents import LiteLLMModel
129
+
130
+ messages = [
131
+     {"role": "user", "content": "Hello, how are you?"},
132
+     {"role": "assistant", "content": "I'm doing great. How can I help you today?"},
133
+     {"role": "user", "content": "No need to help, take it easy."},
134
+ ]
135
+
136
+ model = LiteLLMModel("anthropic/claude-3-5-sonnet-latest", temperature=0.2, max_tokens=10)
137
+ print(model(messages))
138
+ ```
139
+
140
+ [[autodoc]] LiteLLMModel
141
+
142
+ ### OpenAIServerModel
143
+
144
+
145
+ This class lets you call any OpenAI-compatible model.
146
+ Here's how you can set it up (you can customise the `api_base` url to point to another server):
147
+ ```py
148
+ import os
149
+ from smolagents import OpenAIServerModel
150
+
151
+ model = OpenAIServerModel(
152
+     model_id="gpt-4o",
153
+     api_base="https://api.openai.com/v1",
154
+     api_key=os.environ["OPENAI_API_KEY"],
155
+ )
156
+ ```
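+ 
+ Once constructed, it can be called like the other model classes above; a small usage sketch:
+ 
+ ```py
+ messages = [{"role": "user", "content": "Hello!"}]
+ print(model(messages))
+ ```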
157
+
158
+ ## Prompts
159
+
160
+ [[autodoc]] smolagents.agents.PromptTemplates
161
+
162
+ [[autodoc]] smolagents.agents.PlanningPromptTemplate
163
+
164
+ [[autodoc]] smolagents.agents.ManagedAgentPromptTemplate
165
+
166
+ [[autodoc]] smolagents.agents.FinalAnswerPromptTemplate