source (string, 40 distinct values) | url (string, 53-184 chars) | file_type (string, 1 distinct value) | chunk (string, 3-512 chars) | chunk_id (string, 5-8 chars)
---|---|---|---|---|
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/webhooks_server.md
|
https://huggingface.co/docs/huggingface_hub/en/package_reference/webhooks_server/#huggingfacehubwebhookendpoint
|
.md
|
```python
Decorator to start a [`WebhooksServer`] and register the decorated function as a webhook endpoint.
This is a helper to get started quickly. If you need more flexibility (custom landing page or webhook secret),
you can use [`WebhooksServer`] directly. You can register multiple webhook endpoints (to the same server) by using
this decorator multiple times.
Check out the [webhooks guide](../guides/webhooks_server) for a step-by-step tutorial on how to set up your
server and deploy it on a Space.
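# For illustration, a minimal sketch of using `WebhooksServer` directly when you need a
# custom landing page or webhook secret. It assumes `gradio` is installed; the secret
# value and endpoint path below are illustrative.
import gradio as gr
from huggingface_hub import WebhooksServer, WebhookPayload

with gr.Blocks() as ui:
    gr.Markdown("# Custom landing page")

app = WebhooksServer(ui=ui, webhook_secret="my-secret-key")

@app.add_webhook("/dataset_updated")
async def dataset_updated(payload: WebhookPayload):
    # The payload is validated and parsed automatically.
    print("Dataset update received!")

app.launch()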
|
17_4_0
|
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/webhooks_server.md
|
https://huggingface.co/docs/huggingface_hub/en/package_reference/webhooks_server/#huggingfacehubwebhookendpoint
|
.md
|
<Tip warning={true}>
`webhook_endpoint` is experimental. Its API is subject to change in the future.
</Tip>
<Tip warning={true}>
You must have `gradio` installed to use `webhook_endpoint` (`pip install --upgrade gradio`).
</Tip>
Args:
path (`str`, optional):
The URL path to register the webhook function. If not provided, the function name will be used as the path.
In any case, all webhooks are registered under `/webhooks`.
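# For illustration, a minimal sketch of the `path` argument (the path and handler name
# are illustrative): the endpoint below is served under `/webhooks/my-custom-path`.
from huggingface_hub import webhook_endpoint, WebhookPayload

@webhook_endpoint(path="my-custom-path")
async def handle_update(payload: WebhookPayload):
    print(f"Received {payload.event.scope}/{payload.event.action}")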
|
17_4_1
|
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/webhooks_server.md
|
https://huggingface.co/docs/huggingface_hub/en/package_reference/webhooks_server/#huggingfacehubwebhookendpoint
|
.md
|
Examples:
The default usage is to register a function as a webhook endpoint. The function name will be used as the path.
The server will be started automatically at exit (i.e. at the end of the script).
```python
from huggingface_hub import webhook_endpoint, WebhookPayload
@webhook_endpoint
async def trigger_training(payload: WebhookPayload):
if payload.repo.type == "dataset" and payload.event.action == "update":
|
17_4_2
|
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/webhooks_server.md
|
https://huggingface.co/docs/huggingface_hub/en/package_reference/webhooks_server/#trigger-a-training-job-if-a-dataset-is-updated
|
.md
|
...
|
17_5_0
|
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/webhooks_server.md
|
https://huggingface.co/docs/huggingface_hub/en/package_reference/webhooks_server/#server-is-automatically-started-at-the-end-of-the-script
|
.md
|
```
Advanced usage: register a function as a webhook endpoint and start the server manually. This is useful if you
are running it in a notebook.
```python
from huggingface_hub import webhook_endpoint, WebhookPayload
@webhook_endpoint
async def trigger_training(payload: WebhookPayload):
if payload.repo.type == "dataset" and payload.event.action == "update":
|
17_6_0
|
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/webhooks_server.md
|
https://huggingface.co/docs/huggingface_hub/en/package_reference/webhooks_server/#trigger-a-training-job-if-a-dataset-is-updated
|
.md
|
...
|
17_7_0
|
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/webhooks_server.md
|
https://huggingface.co/docs/huggingface_hub/en/package_reference/webhooks_server/#start-the-server-manually
|
.md
|
trigger_training.launch()
```
```
|
17_8_0
|
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/webhooks_server.md
|
https://huggingface.co/docs/huggingface_hub/en/package_reference/webhooks_server/#payload
|
.md
|
[`WebhookPayload`] is the main data structure that contains the payload from Webhooks. This is
a `pydantic` class which makes it very easy to use with FastAPI. If you pass it as a parameter to a webhook endpoint, it
will be automatically validated and parsed as a Python object.
For more information about webhook payloads, refer to the Webhooks Payload [guide](https://huggingface.co/docs/hub/webhooks#webhook-payloads).
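For illustration, a minimal sketch of using [`WebhookPayload`] with a plain FastAPI app (assuming `fastapi` is installed; the route and handler name are illustrative):
```python
from fastapi import FastAPI
from huggingface_hub import WebhookPayload

app = FastAPI()

@app.post("/webhook")
async def handle(payload: WebhookPayload):
    # FastAPI validates and parses the JSON body into a WebhookPayload instance.
    return {"scope": payload.event.scope, "action": payload.event.action}
```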
|
17_9_0
|
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/webhooks_server.md
|
https://huggingface.co/docs/huggingface_hub/en/package_reference/webhooks_server/#huggingfacehubwebhookpayload
|
.md
|
No docstring found for huggingface_hub.WebhookPayload
|
17_10_0
|
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/webhooks_server.md
|
https://huggingface.co/docs/huggingface_hub/en/package_reference/webhooks_server/#huggingfacehubwebhookpayloadcomment
|
.md
|
No docstring found for huggingface_hub.WebhookPayloadComment
|
17_11_0
|
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/webhooks_server.md
|
https://huggingface.co/docs/huggingface_hub/en/package_reference/webhooks_server/#huggingfacehubwebhookpayloaddiscussion
|
.md
|
No docstring found for huggingface_hub.WebhookPayloadDiscussion
|
17_12_0
|
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/webhooks_server.md
|
https://huggingface.co/docs/huggingface_hub/en/package_reference/webhooks_server/#huggingfacehubwebhookpayloaddiscussionchanges
|
.md
|
No docstring found for huggingface_hub.WebhookPayloadDiscussionChanges
|
17_13_0
|
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/webhooks_server.md
|
https://huggingface.co/docs/huggingface_hub/en/package_reference/webhooks_server/#huggingfacehubwebhookpayloadevent
|
.md
|
No docstring found for huggingface_hub.WebhookPayloadEvent
|
17_14_0
|
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/webhooks_server.md
|
https://huggingface.co/docs/huggingface_hub/en/package_reference/webhooks_server/#huggingfacehubwebhookpayloadmovedto
|
.md
|
No docstring found for huggingface_hub.WebhookPayloadMovedTo
|
17_15_0
|
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/webhooks_server.md
|
https://huggingface.co/docs/huggingface_hub/en/package_reference/webhooks_server/#huggingfacehubwebhookpayloadrepo
|
.md
|
No docstring found for huggingface_hub.WebhookPayloadRepo
|
17_16_0
|
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/webhooks_server.md
|
https://huggingface.co/docs/huggingface_hub/en/package_reference/webhooks_server/#huggingfacehubwebhookpayloadurl
|
.md
|
No docstring found for huggingface_hub.WebhookPayloadUrl
|
17_17_0
|
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/webhooks_server.md
|
https://huggingface.co/docs/huggingface_hub/en/package_reference/webhooks_server/#huggingfacehubwebhookpayloadwebhook
|
.md
|
No docstring found for huggingface_hub.WebhookPayloadWebhook
|
17_18_0
|
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/mixins.md
|
https://huggingface.co/docs/huggingface_hub/en/package_reference/mixins/
|
.md
|
<!--⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
|
18_0_0
|
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/mixins.md
|
https://huggingface.co/docs/huggingface_hub/en/package_reference/mixins/#mixins
|
.md
|
The `huggingface_hub` library offers a range of mixins that can be used as a parent class for your objects, in order to
provide simple uploading and downloading functions. Check out our [integration guide](../guides/integrations) to learn
how to integrate any ML framework with the Hub.
|
18_1_0
|
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/mixins.md
|
https://huggingface.co/docs/huggingface_hub/en/package_reference/mixins/#modelhubmixin
|
.md
|
```python
A generic mixin to integrate ANY machine learning framework with the Hub.
To integrate your framework, your model class must inherit from this class. Custom logic for saving/loading models
has to be overridden in [`_from_pretrained`] and [`_save_pretrained`]. [`PyTorchModelHubMixin`] is a good example
of mixin integration with the Hub. Check out our [integration guide](../guides/integrations) for more instructions.
|
18_2_0
|
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/mixins.md
|
https://huggingface.co/docs/huggingface_hub/en/package_reference/mixins/#modelhubmixin
|
.md
|
When inheriting from [`ModelHubMixin`], you can define class-level attributes. These attributes are not passed to
`__init__` but to the class definition itself. This is useful to define metadata about the library integrating
[`ModelHubMixin`].
For more details on how to integrate the mixin with your library, check out the [integration guide](../guides/integrations).
|
18_2_1
|
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/mixins.md
|
https://huggingface.co/docs/huggingface_hub/en/package_reference/mixins/#modelhubmixin
|
.md
|
Args:
repo_url (`str`, *optional*):
URL of the library repository. Used to generate model card.
docs_url (`str`, *optional*):
URL of the library documentation. Used to generate model card.
model_card_template (`str`, *optional*):
Template of the model card. Used to generate model card. Defaults to a generic template.
language (`str` or `List[str]`, *optional*):
Language supported by the library. Used to generate model card.
library_name (`str`, *optional*):
|
18_2_2
|
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/mixins.md
|
https://huggingface.co/docs/huggingface_hub/en/package_reference/mixins/#modelhubmixin
|
.md
|
Language supported by the library. Used to generate model card.
library_name (`str`, *optional*):
Name of the library integrating ModelHubMixin. Used to generate model card.
license (`str`, *optional*):
License of the library integrating ModelHubMixin. Used to generate model card.
E.g: "apache-2.0"
license_name (`str`, *optional*):
Name of the license. Used to generate model card.
Only used if `license` is set to `other`.
E.g: "coqui-public-model-license".
|
18_2_3
|
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/mixins.md
|
https://huggingface.co/docs/huggingface_hub/en/package_reference/mixins/#modelhubmixin
|
.md
|
Only used if `license` is set to `other`.
E.g: "coqui-public-model-license".
license_link (`str`, *optional*):
URL to the license of the library integrating ModelHubMixin. Used to generate model card.
Only used if `license` is set to `other` and `license_name` is set.
E.g: "https://coqui.ai/cpml".
pipeline_tag (`str`, *optional*):
Tag of the pipeline. Used to generate model card. E.g. "text-classification".
tags (`List[str]`, *optional*):
|
18_2_4
|
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/mixins.md
|
https://huggingface.co/docs/huggingface_hub/en/package_reference/mixins/#modelhubmixin
|
.md
|
Tag of the pipeline. Used to generate model card. E.g. "text-classification".
tags (`List[str]`, *optional*):
Tags to be added to the model card. Used to generate model card. E.g. ["x-custom-tag", "arxiv:2304.12244"]
coders (`Dict[Type, Tuple[Callable, Callable]]`, *optional*):
Dictionary of custom types and their encoders/decoders. Used to encode/decode arguments that are not
jsonable by default. E.g. dataclasses, argparse.Namespace, OmegaConf, etc.
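# For illustration, a minimal sketch of the `coders` attribute with a hypothetical
# `TrainConfig` dataclass: register an encoder/decoder pair so a non-jsonable init
# argument can be stored in the config and restored on load. `_save_pretrained` and
# `_from_pretrained` still need to be implemented as usual.
from dataclasses import asdict, dataclass
from huggingface_hub import ModelHubMixin

@dataclass
class TrainConfig:
    lr: float = 1e-3
    epochs: int = 10

class MyModel(
    ModelHubMixin,
    coders={
        TrainConfig: (
            lambda cfg: asdict(cfg),           # encode: TrainConfig -> jsonable dict
            lambda data: TrainConfig(**data),  # decode: dict -> TrainConfig
        )
    },
):
    def __init__(self, config: TrainConfig = None):
        self.config = config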
|
18_2_5
|
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/mixins.md
|
https://huggingface.co/docs/huggingface_hub/en/package_reference/mixins/#modelhubmixin
|
.md
|
Example:
```python
>>> from huggingface_hub import ModelHubMixin
|
18_2_6
|
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/mixins.md
|
https://huggingface.co/docs/huggingface_hub/en/package_reference/mixins/#inherit-from-modelhubmixin
|
.md
|
>>> class MyCustomModel(
... ModelHubMixin,
... library_name="my-library",
... tags=["x-custom-tag", "arxiv:2304.12244"],
... repo_url="https://github.com/huggingface/my-cool-library",
... docs_url="https://huggingface.co/docs/my-cool-library",
... # ^ optional metadata to generate model card
... ):
... def __init__(self, size: int = 512, device: str = "cpu"):
... # define how to initialize your model
... super().__init__()
|
18_3_0
|
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/mixins.md
|
https://huggingface.co/docs/huggingface_hub/en/package_reference/mixins/#inherit-from-modelhubmixin
|
.md
|
... # define how to initialize your model
... super().__init__()
... ...
...
... def _save_pretrained(self, save_directory: Path) -> None:
... # define how to serialize your model
... ...
...
... @classmethod
... def from_pretrained(
... cls: Type[T],
... pretrained_model_name_or_path: Union[str, Path],
... *,
... force_download: bool = False,
... resume_download: Optional[bool] = None,
|
18_3_1
|
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/mixins.md
|
https://huggingface.co/docs/huggingface_hub/en/package_reference/mixins/#inherit-from-modelhubmixin
|
.md
|
... *,
... force_download: bool = False,
... resume_download: Optional[bool] = None,
... proxies: Optional[Dict] = None,
... token: Optional[Union[str, bool]] = None,
... cache_dir: Optional[Union[str, Path]] = None,
... local_files_only: bool = False,
... revision: Optional[str] = None,
... **model_kwargs,
... ) -> T:
... # define how to deserialize your model
... ...
|
18_3_2
|
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/mixins.md
|
https://huggingface.co/docs/huggingface_hub/en/package_reference/mixins/#inherit-from-modelhubmixin
|
.md
|
... **model_kwargs,
... ) -> T:
... # define how to deserialize your model
... ...
>>> model = MyCustomModel(size=256, device="gpu")
|
18_3_3
|
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/mixins.md
|
https://huggingface.co/docs/huggingface_hub/en/package_reference/mixins/#save-model-weights-to-local-directory
|
.md
|
>>> model.save_pretrained("my-awesome-model")
|
18_4_0
|
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/mixins.md
|
https://huggingface.co/docs/huggingface_hub/en/package_reference/mixins/#push-model-weights-to-the-hub
|
.md
|
>>> model.push_to_hub("my-awesome-model")
|
18_5_0
|
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/mixins.md
|
https://huggingface.co/docs/huggingface_hub/en/package_reference/mixins/#download-and-initialize-weights-from-the-hub
|
.md
|
>>> reloaded_model = MyCustomModel.from_pretrained("username/my-awesome-model")
>>> reloaded_model.size
256
|
18_6_0
|
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/mixins.md
|
https://huggingface.co/docs/huggingface_hub/en/package_reference/mixins/#model-card-has-been-correctly-populated
|
.md
|
>>> from huggingface_hub import ModelCard
>>> card = ModelCard.load("username/my-awesome-model")
>>> card.data.tags
["x-custom-tag", "pytorch_model_hub_mixin", "model_hub_mixin"]
>>> card.data.library_name
"my-library"
```
```
- all
- _save_pretrained
- _from_pretrained
|
18_7_0
|
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/mixins.md
|
https://huggingface.co/docs/huggingface_hub/en/package_reference/mixins/#pytorchmodelhubmixin
|
.md
|
```python
Implementation of [`ModelHubMixin`] to provide model Hub upload/download capabilities to PyTorch models. The model
is set in evaluation mode by default using `model.eval()` (dropout modules are deactivated). To train the model,
you should first set it back in training mode with `model.train()`.
See [`ModelHubMixin`] for more details on how to use the mixin.
Example:
|
18_8_0
|
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/mixins.md
|
https://huggingface.co/docs/huggingface_hub/en/package_reference/mixins/#pytorchmodelhubmixin
|
.md
|
```python
>>> import torch
>>> import torch.nn as nn
>>> from huggingface_hub import PyTorchModelHubMixin
>>> class MyModel(
... nn.Module,
... PyTorchModelHubMixin,
... library_name="keras-nlp",
... repo_url="https://github.com/keras-team/keras-nlp",
... docs_url="https://keras.io/keras_nlp/",
... # ^ optional metadata to generate model card
... ):
... def __init__(self, hidden_size: int = 512, vocab_size: int = 30000, output_size: int = 4):
|
18_8_1
|
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/mixins.md
|
https://huggingface.co/docs/huggingface_hub/en/package_reference/mixins/#pytorchmodelhubmixin
|
.md
|
... ):
... def __init__(self, hidden_size: int = 512, vocab_size: int = 30000, output_size: int = 4):
... super().__init__()
... self.param = nn.Parameter(torch.rand(hidden_size, vocab_size))
...         self.linear = nn.Linear(vocab_size, output_size)
... def forward(self, x):
... return self.linear(x + self.param)
>>> model = MyModel(hidden_size=256)
|
18_8_2
|
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/mixins.md
|
https://huggingface.co/docs/huggingface_hub/en/package_reference/mixins/#save-model-weights-to-local-directory
|
.md
|
>>> model.save_pretrained("my-awesome-model")
|
18_9_0
|
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/mixins.md
|
https://huggingface.co/docs/huggingface_hub/en/package_reference/mixins/#push-model-weights-to-the-hub
|
.md
|
>>> model.push_to_hub("my-awesome-model")
|
18_10_0
|
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/mixins.md
|
https://huggingface.co/docs/huggingface_hub/en/package_reference/mixins/#download-and-initialize-weights-from-the-hub
|
.md
|
>>> model = MyModel.from_pretrained("username/my-awesome-model")
>>> model.hidden_size
256
```
```
|
18_11_0
|
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/mixins.md
|
https://huggingface.co/docs/huggingface_hub/en/package_reference/mixins/#kerasmodelhubmixin
|
.md
|
```python
Implementation of [`ModelHubMixin`] to provide model Hub upload/download
capabilities to Keras models.
|
18_12_0
|
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/mixins.md
|
https://huggingface.co/docs/huggingface_hub/en/package_reference/mixins/#kerasmodelhubmixin
|
.md
|
```python
>>> import tensorflow as tf
>>> from huggingface_hub import KerasModelHubMixin
>>> class MyModel(tf.keras.Model, KerasModelHubMixin):
... def __init__(self, **kwargs):
... super().__init__()
... self.config = kwargs.pop("config", None)
... self.dummy_inputs = ...
... self.layer = ...
... def call(self, *args):
... return ...
>>> # Initialize and compile the model as you normally would
>>> model = MyModel()
>>> model.compile(...)
|
18_12_1
|
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/mixins.md
|
https://huggingface.co/docs/huggingface_hub/en/package_reference/mixins/#kerasmodelhubmixin
|
.md
|
>>> # Initialize and compile the model as you normally would
>>> model = MyModel()
>>> model.compile(...)
>>> # Build the graph by training it or passing dummy inputs
>>> _ = model(model.dummy_inputs)
>>> # Save model weights to local directory
>>> model.save_pretrained("my-awesome-model")
>>> # Push model weights to the Hub
>>> model.push_to_hub("my-awesome-model")
>>> # Download and initialize weights from the Hub
>>> model = MyModel.from_pretrained("username/super-cool-model")
```
```
|
18_12_2
|
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/mixins.md
|
https://huggingface.co/docs/huggingface_hub/en/package_reference/mixins/#frompretrainedkeras
|
.md
|
```python
Instantiate a pretrained Keras model from the Hub.
The model is expected to be in `SavedModel` format.
|
18_13_0
|
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/mixins.md
|
https://huggingface.co/docs/huggingface_hub/en/package_reference/mixins/#frompretrainedkeras
|
.md
|
Args:
pretrained_model_name_or_path (`str` or `os.PathLike`):
Can be either:
- A string, the `model id` of a pretrained model hosted inside a
model repo on huggingface.co. Valid model ids can be located
at the root-level, like `bert-base-uncased`, or namespaced
under a user or organization name, like
`dbmdz/bert-base-german-cased`.
- You can add a `revision` by appending `@` to the end of the model id,
  e.g. `dbmdz/bert-base-german-cased@main`. The revision
|
18_13_1
|
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/mixins.md
|
https://huggingface.co/docs/huggingface_hub/en/package_reference/mixins/#frompretrainedkeras
|
.md
|
- You can add a `revision` by appending `@` to the end of the model id,
  e.g. `dbmdz/bert-base-german-cased@main`. The revision
is the specific model version to use. It can be a branch name,
a tag name, or a commit id, since we use a git-based system
for storing models and other artifacts on huggingface.co, so
`revision` can be any identifier allowed by git.
- A path to a `directory` containing model weights saved using
[`~transformers.PreTrainedModel.save_pretrained`], e.g.,
`./my_model_directory/`.
|
18_13_2
|
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/mixins.md
|
https://huggingface.co/docs/huggingface_hub/en/package_reference/mixins/#frompretrainedkeras
|
.md
|
[`~transformers.PreTrainedModel.save_pretrained`], e.g.,
`./my_model_directory/`.
- `None` if you are both providing the configuration and state
dictionary (resp. with keyword arguments `config` and
`state_dict`).
force_download (`bool`, *optional*, defaults to `False`):
Whether to force the (re-)download of the model weights and
configuration files, overriding the cached versions if they exist.
proxies (`Dict[str, str]`, *optional*):
A dictionary of proxy servers to use by protocol or endpoint, e.g.,
|
18_13_3
|
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/mixins.md
|
https://huggingface.co/docs/huggingface_hub/en/package_reference/mixins/#frompretrainedkeras
|
.md
|
proxies (`Dict[str, str]`, *optional*):
A dictionary of proxy servers to use by protocol or endpoint, e.g.,
`{'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}`. The
proxies are used on each request.
token (`str` or `bool`, *optional*):
The token to use as HTTP bearer authorization for remote files. If
`True`, will use the token generated when running `huggingface-cli
login` (stored in `~/.huggingface`).
cache_dir (`Union[str, os.PathLike]`, *optional*):
|
18_13_4
|
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/mixins.md
|
https://huggingface.co/docs/huggingface_hub/en/package_reference/mixins/#frompretrainedkeras
|
.md
|
login` (stored in `~/.huggingface`).
cache_dir (`Union[str, os.PathLike]`, *optional*):
Path to a directory in which a downloaded pretrained model
configuration should be cached if the standard cache should not be
used.
local_files_only(`bool`, *optional*, defaults to `False`):
Whether to only look at local files (i.e., do not try to download
the model).
model_kwargs (`Dict`, *optional*):
model_kwargs will be passed to the model during initialization
|
18_13_5
|
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/mixins.md
|
https://huggingface.co/docs/huggingface_hub/en/package_reference/mixins/#frompretrainedkeras
|
.md
|
<Tip>
Passing `token=True` is required when you want to use a private
model.
</Tip>
```
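For illustration, a minimal usage sketch (the repo id is illustrative):
```python
from huggingface_hub import from_pretrained_keras

# Reload a Keras model stored in SavedModel format on the Hub.
model = from_pretrained_keras("username/my-keras-model")
```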
|
18_13_6
|
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/mixins.md
|
https://huggingface.co/docs/huggingface_hub/en/package_reference/mixins/#pushtohubkeras
|
.md
|
```python
Upload model checkpoint to the Hub.
Use `allow_patterns` and `ignore_patterns` to precisely filter which files should be pushed to the hub. Use
`delete_patterns` to delete existing remote files in the same commit. See [`upload_folder`] reference for more
details.
|
18_14_0
|
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/mixins.md
|
https://huggingface.co/docs/huggingface_hub/en/package_reference/mixins/#pushtohubkeras
|
.md
|
Args:
model (`Keras.Model`):
The [Keras model](https://www.tensorflow.org/api_docs/python/tf/keras/Model) you'd like to push to the
Hub. The model must be compiled and built.
repo_id (`str`):
ID of the repository to push to (example: `"username/my-model"`).
commit_message (`str`, *optional*, defaults to "Add Keras model"):
Message to commit while pushing.
private (`bool`, *optional*):
Whether the repository created should be private.
|
18_14_1
|
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/mixins.md
|
https://huggingface.co/docs/huggingface_hub/en/package_reference/mixins/#pushtohubkeras
|
.md
|
Message to commit while pushing.
private (`bool`, *optional*):
Whether the repository created should be private.
If `None` (default), the repo will be public unless the organization's default is private.
api_endpoint (`str`, *optional*):
The API endpoint to use when pushing the model to the hub.
token (`str`, *optional*):
The token to use as HTTP bearer authorization for remote files. If
not set, will use the token set when logging in with
`huggingface-cli login` (stored in `~/.huggingface`).
|
18_14_2
|
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/mixins.md
|
https://huggingface.co/docs/huggingface_hub/en/package_reference/mixins/#pushtohubkeras
|
.md
|
not set, will use the token set when logging in with
`huggingface-cli login` (stored in `~/.huggingface`).
branch (`str`, *optional*):
The git branch on which to push the model. This defaults to
the default branch as specified in your repository, which
defaults to `"main"`.
create_pr (`boolean`, *optional*):
Whether or not to create a Pull Request from `branch` with that commit.
Defaults to `False`.
config (`dict`, *optional*):
Configuration object to be saved alongside the model weights.
|
18_14_3
|
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/mixins.md
|
https://huggingface.co/docs/huggingface_hub/en/package_reference/mixins/#pushtohubkeras
|
.md
|
Defaults to `False`.
config (`dict`, *optional*):
Configuration object to be saved alongside the model weights.
allow_patterns (`List[str]` or `str`, *optional*):
If provided, only files matching at least one pattern are pushed.
ignore_patterns (`List[str]` or `str`, *optional*):
If provided, files matching any of the patterns are not pushed.
delete_patterns (`List[str]` or `str`, *optional*):
If provided, remote files matching any of the patterns will be deleted from the repo.
log_dir (`str`, *optional*):
|
18_14_4
|
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/mixins.md
|
https://huggingface.co/docs/huggingface_hub/en/package_reference/mixins/#pushtohubkeras
|
.md
|
If provided, remote files matching any of the patterns will be deleted from the repo.
log_dir (`str`, *optional*):
TensorBoard logging directory to be pushed. The Hub automatically
hosts and displays a TensorBoard instance if log files are included
in the repository.
include_optimizer (`bool`, *optional*, defaults to `False`):
Whether or not to include optimizer during serialization.
tags (Union[`list`, `str`], *optional*):
List of tags related to the model, or a single tag as a string. See example tags
|
18_14_5
|
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/mixins.md
|
https://huggingface.co/docs/huggingface_hub/en/package_reference/mixins/#pushtohubkeras
|
.md
|
tags (Union[`list`, `str`], *optional*):
List of tags related to the model, or a single tag as a string. See example tags
[here](https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1).
plot_model (`bool`, *optional*, defaults to `True`):
Setting this to `True` will plot the model and put it in the model
card. Requires graphviz and pydot to be installed.
model_save_kwargs(`dict`, *optional*):
model_save_kwargs will be passed to
|
18_14_6
|
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/mixins.md
|
https://huggingface.co/docs/huggingface_hub/en/package_reference/mixins/#pushtohubkeras
|
.md
|
card. Requires graphviz and pydot to be installed.
model_save_kwargs(`dict`, *optional*):
model_save_kwargs will be passed to
[`tf.keras.models.save_model()`](https://www.tensorflow.org/api_docs/python/tf/keras/models/save_model).
|
18_14_7
|
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/mixins.md
|
https://huggingface.co/docs/huggingface_hub/en/package_reference/mixins/#pushtohubkeras
|
.md
|
Returns:
The url of the commit of your model in the given repository.
```
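For illustration, a minimal usage sketch (the repo id, log directory and patterns are illustrative):
```python
from huggingface_hub import push_to_hub_keras

push_to_hub_keras(
    model,                              # a compiled and built tf.keras.Model
    repo_id="username/my-keras-model",
    log_dir="./logs",                   # optional: TensorBoard logs hosted by the Hub
    ignore_patterns=["*.tmp"],
)
```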
|
18_14_8
|
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/mixins.md
|
https://huggingface.co/docs/huggingface_hub/en/package_reference/mixins/#savepretrainedkeras
|
.md
|
```python
Saves a Keras model to save_directory in SavedModel format. Use this if
you're using the Functional or Sequential APIs.
|
18_15_0
|
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/mixins.md
|
https://huggingface.co/docs/huggingface_hub/en/package_reference/mixins/#savepretrainedkeras
|
.md
|
Args:
model (`Keras.Model`):
The [Keras
model](https://www.tensorflow.org/api_docs/python/tf/keras/Model)
you'd like to save. The model must be compiled and built.
save_directory (`str` or `Path`):
Specify directory in which you want to save the Keras model.
config (`dict`, *optional*):
Configuration object to be saved alongside the model weights.
include_optimizer(`bool`, *optional*, defaults to `False`):
Whether or not to include optimizer in serialization.
|
18_15_1
|
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/mixins.md
|
https://huggingface.co/docs/huggingface_hub/en/package_reference/mixins/#savepretrainedkeras
|
.md
|
include_optimizer(`bool`, *optional*, defaults to `False`):
Whether or not to include optimizer in serialization.
plot_model (`bool`, *optional*, defaults to `True`):
Setting this to `True` will plot the model and put it in the model
card. Requires graphviz and pydot to be installed.
tags (Union[`str`,`list`], *optional*):
List of tags related to the model, or a single tag as a string. See example tags
[here](https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1).
|
18_15_2
|
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/mixins.md
|
https://huggingface.co/docs/huggingface_hub/en/package_reference/mixins/#savepretrainedkeras
|
.md
|
[here](https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1).
model_save_kwargs(`dict`, *optional*):
model_save_kwargs will be passed to
[`tf.keras.models.save_model()`](https://www.tensorflow.org/api_docs/python/tf/keras/models/save_model).
```
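For illustration, a minimal usage sketch (the directory and config are illustrative):
```python
from huggingface_hub import save_pretrained_keras

save_pretrained_keras(
    model,                      # a compiled and built tf.keras.Model
    "./my-awesome-model",       # local directory, saved in SavedModel format
    config={"hidden_size": 256},
)
```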
|
18_15_3
|
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/mixins.md
|
https://huggingface.co/docs/huggingface_hub/en/package_reference/mixins/#frompretrainedfastai
|
.md
|
```python
Load pretrained fastai model from the Hub or from a local directory.
|
18_16_0
|
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/mixins.md
|
https://huggingface.co/docs/huggingface_hub/en/package_reference/mixins/#frompretrainedfastai
|
.md
|
Args:
repo_id (`str`):
The location where the pickled fastai.Learner is. It can be either of the two:
- Hosted on the Hugging Face Hub. E.g.: 'espejelomar/fatai-pet-breeds-classification' or 'distilgpt2'.
You can add a `revision` by appending `@` at the end of `repo_id`. E.g.: `dbmdz/bert-base-german-cased@main`.
Revision is the specific model version to use. Since we use a git-based system for storing models and other
artifacts on the Hugging Face Hub, it can be a branch name, a tag name, or a commit id.
|
18_16_1
|
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/mixins.md
|
https://huggingface.co/docs/huggingface_hub/en/package_reference/mixins/#frompretrainedfastai
|
.md
|
artifacts on the Hugging Face Hub, it can be a branch name, a tag name, or a commit id.
- Hosted locally. `repo_id` would be a directory containing the pickle and a pyproject.toml
indicating the fastai and fastcore versions used to build the `fastai.Learner`. E.g.: `./my_model_directory/`.
revision (`str`, *optional*):
Revision at which the repo's files are downloaded. See documentation of `snapshot_download`.
|
18_16_2
|
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/mixins.md
|
https://huggingface.co/docs/huggingface_hub/en/package_reference/mixins/#frompretrainedfastai
|
.md
|
Returns:
The `fastai.Learner` model in the `repo_id` repo.
```
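For illustration, a minimal usage sketch (the repo id is illustrative):
```python
from huggingface_hub import from_pretrained_fastai

learner = from_pretrained_fastai("username/my-fastai-model")
# `learner` is a regular fastai.Learner, e.g. learner.predict(...)
```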
|
18_16_3
|
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/mixins.md
|
https://huggingface.co/docs/huggingface_hub/en/package_reference/mixins/#pushtohubfastai
|
.md
|
```python
Upload learner checkpoint files to the Hub.
Use `allow_patterns` and `ignore_patterns` to precisely filter which files should be pushed to the hub. Use
`delete_patterns` to delete existing remote files in the same commit. See [`upload_folder`] reference for more
details.
|
18_17_0
|
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/mixins.md
|
https://huggingface.co/docs/huggingface_hub/en/package_reference/mixins/#pushtohubfastai
|
.md
|
Args:
learner (`Learner`):
The `fastai.Learner` you'd like to push to the Hub.
repo_id (`str`):
The repository id for your model on the Hub in the format "namespace/repo_name". The namespace can be your individual account or an organization to which you have write access (for example, 'stanfordnlp/stanza-de').
commit_message (`str`, *optional*):
Message to commit while pushing. Will default to `"add model"`.
private (`bool`, *optional*):
Whether or not the repository created should be private.
|
18_17_1
|
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/mixins.md
|
https://huggingface.co/docs/huggingface_hub/en/package_reference/mixins/#pushtohubfastai
|
.md
|
private (`bool`, *optional*):
Whether or not the repository created should be private.
If `None` (default), the repo will be public unless the organization's default is private.
token (`str`, *optional*):
The Hugging Face account token to use as HTTP bearer authorization for remote files. If `None`, you will be prompted for the token.
config (`dict`, *optional*):
Configuration object to be saved alongside the model weights.
branch (`str`, *optional*):
|
18_17_2
|
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/mixins.md
|
https://huggingface.co/docs/huggingface_hub/en/package_reference/mixins/#pushtohubfastai
|
.md
|
config (`dict`, *optional*):
Configuration object to be saved alongside the model weights.
branch (`str`, *optional*):
The git branch on which to push the model. This defaults to
the default branch as specified in your repository, which
defaults to `"main"`.
create_pr (`boolean`, *optional*):
Whether or not to create a Pull Request from `branch` with that commit.
Defaults to `False`.
api_endpoint (`str`, *optional*):
The API endpoint to use when pushing the model to the hub.
|
18_17_3
|
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/mixins.md
|
https://huggingface.co/docs/huggingface_hub/en/package_reference/mixins/#pushtohubfastai
|
.md
|
Defaults to `False`.
api_endpoint (`str`, *optional*):
The API endpoint to use when pushing the model to the hub.
allow_patterns (`List[str]` or `str`, *optional*):
If provided, only files matching at least one pattern are pushed.
ignore_patterns (`List[str]` or `str`, *optional*):
If provided, files matching any of the patterns are not pushed.
delete_patterns (`List[str]` or `str`, *optional*):
If provided, remote files matching any of the patterns will be deleted from the repo.
|
18_17_4
|
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/mixins.md
|
https://huggingface.co/docs/huggingface_hub/en/package_reference/mixins/#pushtohubfastai
|
.md
|
Returns:
The url of the commit of your model in the given repository.
<Tip>
Raises the following error:
- [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError)
if the user is not logged in to the Hugging Face Hub.
</Tip>
```
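For illustration, a minimal usage sketch (the repo id is illustrative; `learner` is an existing `fastai.Learner`):
```python
from huggingface_hub import push_to_hub_fastai

push_to_hub_fastai(
    learner=learner,
    repo_id="username/my-fastai-model",
    commit_message="Add fastai model",
)
```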
|
18_17_5
|
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/inference_types.md
|
https://huggingface.co/docs/huggingface_hub/en/package_reference/inference_types/
|
.md
|
<!--⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
<!--⚠️ Note that this file is auto-generated by `utils/generate_inference_types.py`. Do not modify it manually.-->
|
19_0_0
|
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/inference_types.md
|
https://huggingface.co/docs/huggingface_hub/en/package_reference/inference_types/#inference-types
|
.md
|
This page lists the types (e.g. dataclasses) available for each task supported on the Hugging Face Hub.
Each task is specified using a JSON schema, and the types are generated from these schemas - with some customization
due to Python requirements.
Visit [@huggingface.js/tasks](https://github.com/huggingface/huggingface.js/tree/main/packages/tasks/src/tasks)
to find the JSON schemas for each task.
This part of the library is still under development and will be improved in future releases.
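For illustration, a minimal sketch showing that the generated types are plain dataclasses exposed at the top level of `huggingface_hub` (the message content is illustrative):
```python
from huggingface_hub import ChatCompletionInputMessage

message = ChatCompletionInputMessage(role="user", content="Hello!")
print(message.role, message.content)
```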
|
19_1_0
|
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/inference_types.md
|
https://huggingface.co/docs/huggingface_hub/en/package_reference/inference_types/#huggingfacehubaudioclassificationinput
|
.md
|
```python
Inputs for Audio Classification inference
```
|
19_2_0
|
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/inference_types.md
|
https://huggingface.co/docs/huggingface_hub/en/package_reference/inference_types/#huggingfacehubaudioclassificationoutputelement
|
.md
|
```python
Outputs for Audio Classification inference
```
|
19_3_0
|
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/inference_types.md
|
https://huggingface.co/docs/huggingface_hub/en/package_reference/inference_types/#huggingfacehubaudioclassificationparameters
|
.md
|
```python
Additional inference parameters for Audio Classification
```
|
19_4_0
|
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/inference_types.md
|
https://huggingface.co/docs/huggingface_hub/en/package_reference/inference_types/#huggingfacehubaudiotoaudioinput
|
.md
|
```python
Inputs for Audio to Audio inference
```
|
19_5_0
|
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/inference_types.md
|
https://huggingface.co/docs/huggingface_hub/en/package_reference/inference_types/#huggingfacehubaudiotoaudiooutputelement
|
.md
|
```python
Outputs of inference for the Audio To Audio task
A generated audio file with its label.
```
|
19_6_0
|
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/inference_types.md
|
https://huggingface.co/docs/huggingface_hub/en/package_reference/inference_types/#huggingfacehubautomaticspeechrecognitiongenerationparameters
|
.md
|
```python
Parametrization of the text generation process
```
|
19_7_0
|
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/inference_types.md
|
https://huggingface.co/docs/huggingface_hub/en/package_reference/inference_types/#huggingfacehubautomaticspeechrecognitioninput
|
.md
|
```python
Inputs for Automatic Speech Recognition inference
```
|
19_8_0
|
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/inference_types.md
|
https://huggingface.co/docs/huggingface_hub/en/package_reference/inference_types/#huggingfacehubautomaticspeechrecognitionoutput
|
.md
|
```python
Outputs of inference for the Automatic Speech Recognition task
```
|
19_9_0
|
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/inference_types.md
|
https://huggingface.co/docs/huggingface_hub/en/package_reference/inference_types/#huggingfacehubautomaticspeechrecognitionoutputchunk
|
.md
|
```python
AutomaticSpeechRecognitionOutputChunk(text: str, timestamps: List[float])
```
|
19_10_0
|
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/inference_types.md
|
https://huggingface.co/docs/huggingface_hub/en/package_reference/inference_types/#huggingfacehubautomaticspeechrecognitionparameters
|
.md
|
```python
Additional inference parameters for Automatic Speech Recognition
```
|
19_11_0
|
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/inference_types.md
|
https://huggingface.co/docs/huggingface_hub/en/package_reference/inference_types/#huggingfacehubchatcompletioninput
|
.md
|
```python
Chat Completion Input.
Auto-generated from TGI specs.
For more details, check out
https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/scripts/inference-tgi-import.ts.
```
|
19_12_0
|
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/inference_types.md
|
https://huggingface.co/docs/huggingface_hub/en/package_reference/inference_types/#huggingfacehubchatcompletioninputfunctiondefinition
|
.md
|
```python
ChatCompletionInputFunctionDefinition(arguments: Any, name: str, description: Optional[str] = None)
```
|
19_13_0
|
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/inference_types.md
|
https://huggingface.co/docs/huggingface_hub/en/package_reference/inference_types/#huggingfacehubchatcompletioninputfunctionname
|
.md
|
```python
ChatCompletionInputFunctionName(name: str)
```
|
19_14_0
|
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/inference_types.md
|
https://huggingface.co/docs/huggingface_hub/en/package_reference/inference_types/#huggingfacehubchatcompletioninputgrammartype
|
.md
|
```python
ChatCompletionInputGrammarType(type: 'ChatCompletionInputGrammarTypeType', value: Any)
```
|
19_15_0
|
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/inference_types.md
|
https://huggingface.co/docs/huggingface_hub/en/package_reference/inference_types/#huggingfacehubchatcompletioninputmessage
|
.md
|
```python
ChatCompletionInputMessage(content: Union[List[huggingface_hub.inference._generated.types.chat_completion.ChatCompletionInputMessageChunk], str], role: str, name: Optional[str] = None)
```
|
19_16_0
|
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/inference_types.md
|
https://huggingface.co/docs/huggingface_hub/en/package_reference/inference_types/#huggingfacehubchatcompletioninputmessagechunk
|
.md
|
```python
ChatCompletionInputMessageChunk(type: 'ChatCompletionInputMessageChunkType', image_url: Optional[huggingface_hub.inference._generated.types.chat_completion.ChatCompletionInputURL] = None, text: Optional[str] = None)
```
|
19_17_0
|
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/inference_types.md
|
https://huggingface.co/docs/huggingface_hub/en/package_reference/inference_types/#huggingfacehubchatcompletioninputstreamoptions
|
.md
|
```python
ChatCompletionInputStreamOptions(include_usage: bool)
```
|
19_18_0
|
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/inference_types.md
|
https://huggingface.co/docs/huggingface_hub/en/package_reference/inference_types/#huggingfacehubchatcompletioninputtool
|
.md
|
```python
ChatCompletionInputTool(function: huggingface_hub.inference._generated.types.chat_completion.ChatCompletionInputFunctionDefinition, type: str)
```
|
19_19_0
|
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/inference_types.md
|
https://huggingface.co/docs/huggingface_hub/en/package_reference/inference_types/#huggingfacehubchatcompletioninputtoolchoiceclass
|
.md
|
```python
ChatCompletionInputToolChoiceClass(function: huggingface_hub.inference._generated.types.chat_completion.ChatCompletionInputFunctionName)
```
|
19_20_0
|
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/inference_types.md
|
https://huggingface.co/docs/huggingface_hub/en/package_reference/inference_types/#huggingfacehubchatcompletioninputurl
|
.md
|
```python
ChatCompletionInputURL(url: str)
```
|
19_21_0
|
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/inference_types.md
|
https://huggingface.co/docs/huggingface_hub/en/package_reference/inference_types/#huggingfacehubchatcompletionoutput
|
.md
|
```python
Chat Completion Output.
Auto-generated from TGI specs.
For more details, check out
https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/scripts/inference-tgi-import.ts.
```
|
19_22_0
|
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/inference_types.md
|
https://huggingface.co/docs/huggingface_hub/en/package_reference/inference_types/#huggingfacehubchatcompletionoutputcomplete
|
.md
|
```python
ChatCompletionOutputComplete(finish_reason: str, index: int, message: huggingface_hub.inference._generated.types.chat_completion.ChatCompletionOutputMessage, logprobs: Optional[huggingface_hub.inference._generated.types.chat_completion.ChatCompletionOutputLogprobs] = None)
```
|
19_23_0
|
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/inference_types.md
|
https://huggingface.co/docs/huggingface_hub/en/package_reference/inference_types/#huggingfacehubchatcompletionoutputfunctiondefinition
|
.md
|
```python
ChatCompletionOutputFunctionDefinition(arguments: Any, name: str, description: Optional[str] = None)
```
|
19_24_0
|
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/inference_types.md
|
https://huggingface.co/docs/huggingface_hub/en/package_reference/inference_types/#huggingfacehubchatcompletionoutputlogprob
|
.md
|
```python
ChatCompletionOutputLogprob(logprob: float, token: str, top_logprobs: List[huggingface_hub.inference._generated.types.chat_completion.ChatCompletionOutputTopLogprob])
```
|
19_25_0
|
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/inference_types.md
|
https://huggingface.co/docs/huggingface_hub/en/package_reference/inference_types/#huggingfacehubchatcompletionoutputlogprobs
|
.md
|
```python
ChatCompletionOutputLogprobs(content: List[huggingface_hub.inference._generated.types.chat_completion.ChatCompletionOutputLogprob])
```
|
19_26_0
|