Iterate over inference API naming (#501)
Co-authored-by: Julien Chaumond <[email protected]>
SBrandeis and julien-c authored Feb 26, 2024
1 parent f5404ba commit e1153b5
Showing 9 changed files with 21 additions and 21 deletions.
2 changes: 1 addition & 1 deletion README.md
@@ -49,7 +49,7 @@ await inference.textToImage({

This is a collection of JS libraries to interact with the Hugging Face API, with TS types included.

- - [@huggingface/inference](packages/inference/README.md): Use Inference Endpoints (serverless or dedicated) to make calls to 100,000+ Machine Learning models
+ - [@huggingface/inference](packages/inference/README.md): Use Inference Endpoints (dedicated) and Inference API (serverless) to make calls to 100,000+ Machine Learning models
- [@huggingface/hub](packages/hub/README.md): Interact with huggingface.co to create or delete repos and commit / download files
- [@huggingface/agents](packages/agents/README.md): Interact with HF models through a natural language interface

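As a quick illustration of the renamed offering above, here is a minimal sketch of calling the Inference API (serverless) through `@huggingface/inference`; the token and model id are placeholders and not part of this commit:

```ts
import { HfInference } from "@huggingface/inference";

// Placeholder token; serverless calls also work anonymously, but are rate-limited.
const inference = new HfInference("hf_xxx");

// Reaches one of the 100,000+ hosted models via the Inference API (serverless).
const image = await inference.textToImage({
  model: "stabilityai/stable-diffusion-2",
  inputs: "an astronaut riding a horse on the moon",
});
```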
2 changes: 1 addition & 1 deletion packages/inference/README.md
@@ -1,7 +1,7 @@
# 🤗 Hugging Face Inference Endpoints

A Typescript powered wrapper for the Hugging Face Inference Endpoints API. Learn more about Inference Endpoints at [Hugging Face](https://huggingface.co/inference-endpoints).
- It works with both [serverless](https://huggingface.co/docs/api-inference/index) and [dedicated](https://huggingface.co/docs/inference-endpoints/index) Endpoints.
+ It works with both [Inference API (serverless)](https://huggingface.co/docs/api-inference/index) and [Inference Endpoints (dedicated)](https://huggingface.co/docs/inference-endpoints/index).

Check out the [full documentation](https://huggingface.co/docs/huggingface.js/inference/README).

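To make the serverless/dedicated distinction above concrete, a sketch based on the package's documented usage (the endpoint URL is a placeholder):

```ts
import { HfInference } from "@huggingface/inference";

const hf = new HfInference("hf_xxx");

// Inference API (serverless): address any hosted model by its id.
const serverless = await hf.textGeneration({
  model: "gpt2",
  inputs: "The capital of France is",
});

// Inference Endpoints (dedicated): point the same client at your own endpoint URL.
const endpoint = hf.endpoint("https://my-endpoint.endpoints.huggingface.cloud");
const dedicated = await endpoint.textGeneration({
  inputs: "The capital of France is",
});
```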
2 changes: 1 addition & 1 deletion packages/inference/package.json
@@ -4,7 +4,7 @@
"packageManager": "[email protected]",
"license": "MIT",
"author": "Tim Mikeladze <[email protected]>",
"description": "Typescript wrapper for the Hugging Face Inference Endpoints API",
"description": "Typescript wrapper for the Hugging Face Inference Endpoints & Inference API",
"repository": {
"type": "git",
"url": "https://github.com/huggingface/huggingface.js.git"
2 changes: 1 addition & 1 deletion packages/inference/src/types.ts
@@ -6,7 +6,7 @@ export interface Options {
*/
retry_on_error?: boolean;
/**
- * (Default: true). Boolean. There is a cache layer on Inference Endpoints (serverless) to speedup requests we have already seen. Most models can use those results as is as models are deterministic (meaning the results will be the same anyway). However if you use a non deterministic model, you can set this parameter to prevent the caching mechanism from being used resulting in a real new query.
+ * (Default: true). Boolean. There is a cache layer on Inference API (serverless) to speedup requests we have already seen. Most models can use those results as is as models are deterministic (meaning the results will be the same anyway). However if you use a non deterministic model, you can set this parameter to prevent the caching mechanism from being used resulting in a real new query.
*/
use_cache?: boolean;
/**
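As a usage note for the option documented above, a hedged example of disabling the serverless cache for a sampling-based model, assuming the usual pattern of passing `Options` as the second argument to a task method:

```ts
import { HfInference } from "@huggingface/inference";

const hf = new HfInference("hf_xxx");

// use_cache: false forces a fresh query instead of a cached result, which matters
// when sampling makes the output non-deterministic.
const out = await hf.textGeneration(
  { model: "gpt2", inputs: "Once upon a time", parameters: { do_sample: true } },
  { use_cache: false, retry_on_error: true }
);
```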
2 changes: 1 addition & 1 deletion packages/tasks/src/model-data.ts
@@ -80,7 +80,7 @@ export interface ModelData {
*/
widgetData?: WidgetExample[] | undefined;
/**
- * Parameters that will be used by the widget when calling Inference Endpoints (serverless)
+ * Parameters that will be used by the widget when calling Inference API (serverless)
* https://huggingface.co/docs/api-inference/detailed_parameters
*
* can be set in the model card metadata (under `inference/parameters`)
@@ -53,7 +53,7 @@
<div class="flex items-center text-lg">
{#if !isDisabled}
<IconLightning classNames="-ml-1 mr-1 text-yellow-500" />
- Inference Endpoints (serverless)
+ Inference API
{:else}
Inference Examples
{/if}
@@ -17,18 +17,18 @@
$: modelTooBig = $modelLoadStates[model.id]?.state === "TooBig";
const state = {
[LoadState.Loadable]: "This model can be loaded on Inference Endpoints (serverless).",
[LoadState.Loaded]: "This model is currently loaded and running on Inference Endpoints (serverless).",
[LoadState.Loadable]: "This model can be loaded on Inference API (serverless).",
[LoadState.Loaded]: "This model is currently loaded and running on Inference API (serverless).",
[LoadState.TooBig]:
"Model is too large to load onto on Inference Endpoints (serverless). To try the model, launch it on Inference Endpoints (dedicated) instead.",
[LoadState.Error]: "⚠️ This model could not be loaded on Inference Endpoints (serverless). ⚠️",
"Model is too large to load onto on Inference API (serverless). To try the model, launch it on Inference Endpoints (dedicated) instead.",
[LoadState.Error]: "⚠️ This model could not be loaded on Inference API (serverless). ⚠️",
} as const;
const azureState = {
[LoadState.Loadable]: "This model can be loaded loaded on AzureML Managed Endpoint",
[LoadState.Loaded]: "This model is loaded and running on AzureML Managed Endpoint",
[LoadState.TooBig]:
"Model is too large to load onto on Inference Endpoints (serverless). To try the model, launch it on Inference Endpoints (dedicated) instead.",
"Model is too large to load onto on Inference API (serverless). To try the model, launch it on Inference Endpoints (dedicated) instead.",
[LoadState.Error]: "⚠️ This model could not be loaded.",
} as const;
@@ -62,10 +62,10 @@
{:else if (model.inference === InferenceDisplayability.Yes || model.pipeline_tag === "reinforcement-learning") && !modelTooBig}
{@html getStatusReport($modelLoadStates[model.id], state)}
{:else if model.inference === InferenceDisplayability.ExplicitOptOut}
<span class="text-sm text-gray-500">Inference Endpoints (serverless) has been turned off for this model.</span>
<span class="text-sm text-gray-500">Inference API (serverless) has been turned off for this model.</span>
{:else if model.inference === InferenceDisplayability.CustomCode}
<span class="text-sm text-gray-500"
>Inference Endpoints (serverless) does not yet support model repos that contain custom code.</span
>Inference API (serverless) does not yet support model repos that contain custom code.</span
>
{:else if model.inference === InferenceDisplayability.LibraryNotDetected}
<span class="text-sm text-gray-500">
@@ -83,11 +83,11 @@
</span>
{:else if model.inference === InferenceDisplayability.PipelineLibraryPairNotSupported}
<span class="text-sm text-gray-500">
- Inference Endpoints (serverless) does not yet support {model.library_name} models for this pipeline type.
+ Inference API (serverless) does not yet support {model.library_name} models for this pipeline type.
</span>
{:else if modelTooBig}
<span class="text-sm text-gray-500">
- Model is too large to load in Inference Endpoints (serverless). To try the model, launch it on <a
+ Model is too large to load in Inference API (serverless). To try the model, launch it on <a
class="underline"
href="https://ui.endpoints.huggingface.co/new?repository={encodeURIComponent(model.id)}"
>Inference Endpoints (dedicated)</a
@@ -97,7 +97,7 @@
{:else}
<!-- added as a failsafe but this case cannot currently happen -->
<span class="text-sm text-gray-500">
- Inference Endpoints (serverless) is disabled for an unknown reason. Please open a
+ Inference API (serverless) is disabled for an unknown reason. Please open a
<a class="color-inherit underline" href="/{model.id}/discussions/new">Discussion in the Community tab</a>.
</span>
{/if}
@@ -5,13 +5,13 @@
<div class="blankslate">
<div class="subtitle text-xs text-gray-500">
<div class="loaded mt-2 {currentState !== 'loaded' ? 'hidden' : ''}">
- This model is currently loaded and running on Inference Endpoints (serverless).
+ This model is currently loaded and running on Inference API (serverless).
</div>
<div class="error mt-2 {currentState !== 'error' ? 'hidden' : ''}">
- ⚠️ This model could not be loaded in Inference Endpoints (serverless). ⚠️
+ ⚠️ This model could not be loaded in Inference API (serverless). ⚠️
</div>
<div class="unknown mt-2 {currentState !== 'unknown' ? 'hidden' : ''}">
- This model can be loaded in Inference Endpoints (serverless).
+ This model can be loaded in Inference API (serverless).
</div>
</div>
</div>
@@ -84,7 +84,7 @@ export async function callInferenceApi<T>(
requestBody: Record<string, unknown>,
apiToken = "",
outputParsingFn: (x: unknown) => T,
- waitForModel = false, // If true, the server will only respond once the model has been loaded on Inference Endpoints (serverless)
+ waitForModel = false, // If true, the server will only respond once the model has been loaded on Inference API (serverless)
includeCredentials = false,
isOnLoadCall = false, // If true, the server will try to answer from cache and not do anything if not
useCache = true
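The `waitForModel`, `isOnLoadCall`, and `useCache` flags above map onto options of the Inference API (serverless). The sketch below is illustrative only and is not the widget's actual implementation; it shows the common pattern of forwarding such flags in the request body's `options` field:

```ts
// Illustrative sketch (not the widget code): forward wait_for_model / use_cache
// to the Inference API (serverless) via the request body's `options` field.
async function queryServerless(
  url: string,
  requestBody: Record<string, unknown>,
  apiToken = "",
  waitForModel = false,
  useCache = true
): Promise<unknown> {
  const res = await fetch(url, {
    method: "POST",
    headers: {
      ...(apiToken ? { Authorization: `Bearer ${apiToken}` } : {}),
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      ...requestBody,
      options: { wait_for_model: waitForModel, use_cache: useCache },
    }),
  });
  return res.json();
}
```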
@@ -184,7 +184,7 @@ export async function getModelLoadInfo(
}
}

- // Extend requestBody with user supplied parameters for Inference Endpoints (serverless)
+ // Extend requestBody with user supplied parameters for Inference API (serverless)
export function addInferenceParameters(requestBody: Record<string, unknown>, model: ModelData): void {
const inference = model?.cardData?.inference;
if (typeof inference === "object") {
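The `addInferenceParameters` snippet above is truncated by the diff view. A minimal sketch of the idea, assuming the widget parameters live under `inference.parameters` in the model card metadata (as noted in `model-data.ts` earlier in this commit); this is a hypothetical reconstruction, not the actual implementation:

```ts
import type { ModelData } from "@huggingface/tasks"; // assumed export of the ModelData interface

// Hypothetical sketch: merge model-card `inference.parameters` into the request body
// sent to the Inference API (serverless).
function addCardParameters(requestBody: Record<string, unknown>, model: ModelData): void {
  const inference = model?.cardData?.inference;
  if (inference && typeof inference === "object" && "parameters" in inference) {
    requestBody["parameters"] = (inference as { parameters?: unknown }).parameters;
  }
}
```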
