Python
ComputeText(
  prompt="Who is Don Quixote?",
  temperature=0.4,
  max_tokens=800,
)
Output
{
  "text": "Don Quixote is a fictional character in the novel of the same name by Miguel de Cervantes."
}
Compute text using a language model.
prompt (string): Input prompt.
image_uris: Image prompts.
temperature: Sampling temperature to use. Higher values make the output more random, lower values make the output more deterministic. Default: 0.4
max_tokens: Maximum number of tokens to generate.
model: Selected model. Firellava13B is automatically selected when image_uris is provided. Options: Mixtral8x7BInstruct, Llama3Instruct8B, Llama3Instruct70B, Llama3Instruct405B, Firellava13B, gpt-4o, gpt-4o-mini, claude-3-5-sonnet-20240620. Default: Llama3Instruct8B

Generate multiple text choices using a language model.
prompt (string): Input prompt.
num_choices (integer, 1..8): Number of choices to generate. Default: 1
temperature: Sampling temperature to use. Higher values make the output more random, lower values make the output more deterministic. Default: 0.4
max_tokens: Maximum number of tokens to generate.
model: Selected model. Options: Mixtral8x7BInstruct, Llama3Instruct8B, Llama3Instruct70B. Default: Llama3Instruct8B
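
For illustration, a call in the same style as the ComputeText example above; the MultiComputeText node name is an assumption that mirrors that naming pattern, and the prompt is a placeholder.
Python
MultiComputeText(
  prompt="Write a tagline for a coffee shop.",  # placeholder prompt
  num_choices=3,
  temperature=0.4,
  max_tokens=100,
)
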
Compute text for multiple prompts in batch using a language model.
prompts (array[string]): Batch input prompts.
temperature: Sampling temperature to use. Higher values make the output more random, lower values make the output more deterministic. Default: 0.4
max_tokens: Maximum number of tokens to generate.
model: Selected model. Default: Llama3Instruct8B

Compute JSON using a language model.
prompt (string): Input prompt.
json_schema (object): JSON schema to guide json_object response.
temperature: Sampling temperature to use. Higher values make the output more random, lower values make the output more deterministic. Default: 0.4
max_tokens: Maximum number of tokens to generate.
model: Selected model. Options: Mixtral8x7BInstruct, Llama3Instruct8B, Llama3Instruct70B, gpt-4o. Default: Llama3Instruct8B
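
A sketch of a call that passes a JSON schema alongside the prompt, in the style of the ComputeText example above; the ComputeJSON node name and this particular schema are assumptions.
Python
ComputeJSON(
  prompt="Extract the name and age mentioned in: 'Alice is 30 years old.'",
  json_schema={
    "type": "object",
    "properties": {
      "name": {"type": "string"},
      "age": {"type": "integer"},
    },
  },
  temperature=0.4,
)
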
Compute multiple JSON choices using a language model.
prompt (string): Input prompt.
json_schema (object): JSON schema to guide json_object response.
num_choices (integer, 1..8): Number of choices to generate. Default: 2
temperature: Sampling temperature to use. Higher values make the output more random, lower values make the output more deterministic. Default: 0.4
max_tokens: Maximum number of tokens to generate.
model: Selected model. Options: Mixtral8x7BInstruct, Llama3Instruct8B. Default: Llama3Instruct8B

Compute JSON for multiple prompts in batch using a language model.
prompts (array[string]): Batch input prompts.
json_schema (object): JSON schema to guide json_object response.
temperature: Sampling temperature to use. Higher values make the output more random, lower values make the output more deterministic. Default: 0.4
max_tokens: Maximum number of tokens to generate.
model: Selected model. Default: Llama3Instruct8B

Compute text using Mistral 7B Instruct.
prompt (string): Input prompt.
System prompt.
Number of choices to generate. Default: 1
JSON schema to guide response.
Higher values make the output more random, lower values make the output more deterministic.
Higher values decrease the likelihood of repeating previous tokens. Default: 0
Higher values decrease the likelihood of repeated sequences. Default: 1
Higher values increase the likelihood of new topics appearing. Default: 1.1
Probability below which less likely tokens are filtered out. Default: 0.95
Maximum number of tokens to generate.
Compute text using instruct-tuned Mixtral 8x7B.
prompt (string): Input prompt.
System prompt.
Number of choices to generate. Default: 1
JSON schema to guide response.
Higher values make the output more random, lower values make the output more deterministic.
Higher values decrease the likelihood of repeating previous tokens. Default: 0
Higher values decrease the likelihood of repeated sequences. Default: 1
Higher values increase the likelihood of new topics appearing. Default: 1.1
Probability below which less likely tokens are filtered out. Default: 0.95
Maximum number of tokens to generate.
Compute text using instruct-tuned Llama 3 8B.
prompt (string): Input prompt.
System prompt.
Number of choices to generate. Default: 1
Higher values make the output more random, lower values make the output more deterministic.
Higher values decrease the likelihood of repeating previous tokens. Default: 0
Higher values decrease the likelihood of repeated sequences. Default: 1
Higher values increase the likelihood of new topics appearing. Default: 1.1
Probability below which less likely tokens are filtered out. Default: 0.95
Maximum number of tokens to generate.
JSON schema to guide response.
Compute text using instruct-tuned Llama 3 70B.
prompt (string): Input prompt.
System prompt.
Number of choices to generate. Default: 1
Higher values make the output more random, lower values make the output more deterministic.
Higher values decrease the likelihood of repeating previous tokens. Default: 0
Higher values decrease the likelihood of repeated sequences. Default: 1
Higher values increase the likelihood of new topics appearing. Default: 1.1
Probability below which less likely tokens are filtered out. Default: 0.95
Maximum number of tokens to generate.
Compute text with image input using FireLLaVA 13B.
Generate an image.
prompt (string): Text prompt.
Use "hosted" to return an image URL hosted on Substrate. You can also provide a URL to a registered file store. If unset, the image data will be returned as a base64-encoded string.
Generate multiple images.
prompt (string): Text prompt.
num_images (integer, 1..8): Number of images to generate. Default: 2
Use "hosted" to return an image URL hosted on Substrate. You can also provide a URL to a registered file store. If unset, the image data will be returned as a base64-encoded string.
Edit an image using image generation inside part of the image or the full image.
image_uri (string): Original image.
prompt (string): Text prompt.
Mask image that controls which pixels are inpainted. If unset, the entire image is edited (image-to-image).
Use "hosted" to return an image URL hosted on Substrate. You can also provide a URL to a registered file store. If unset, the image data will be returned as a base64-encoded string.
Edit multiple images using image generation.
image_uri (string): Original image.
prompt (string): Text prompt.
Mask image that controls which pixels are edited (inpainting). If unset, the entire image is edited (image-to-image).
num_images (integer, 1..8): Number of images to generate. Default: 2
Use "hosted" to return an image URL hosted on Substrate. You can also provide a URL to a registered file store. If unset, the image data will be returned as a base64-encoded string.
Upscale an image using image generation.
Prompt to guide the model on the content of the image to upscale.
image_uri (string): Input image.
Resolution of the output image, in pixels. Default: 1024
Use "hosted" to return an image URL hosted on Substrate. You can also provide a URL to a registered file store. If unset, the image data will be returned as a base64-encoded string.
Erase the masked part of an image, e.g. to remove an object by inpainting.
image_uri (string): Input image.
mask_image_uri (string): Mask image that controls which pixels are inpainted.
Use "hosted" to return an image URL hosted on Substrate. You can also provide a URL to a registered file store. If unset, the image data will be returned as a base64-encoded string.
Generate interpolation frames between each pair of adjacent frames.
frame_uris (array[string]): Frames.
Use "hosted" to return a video URL hosted on Substrate. You can also provide a URL to a registered file store. If unset, the video data will be returned as a base64-encoded string.
Output video format. Options: gif, webp, mp4, frames. Default: gif
Frames per second of the generated video. Ignored if output format is frames. Default: 7
Number of interpolation steps. Each step adds an interpolated frame between adjacent frames. For example, 2 steps over 2 frames produces 5 frames. Default: 2

Generate an image using Stable Diffusion XL Lightning.
prompt (string): Text prompt.
Negative input prompt.
Number of images to generate. Default: 1
Use "hosted" to return an image URL hosted on Substrate. You can also provide a URL to a registered file store. If unset, the image data will be returned as a base64-encoded string.
Height of output image, in pixels. Default: 1024
Width of output image, in pixels. Default: 1024
Seeds for deterministic generation. Default is a random seed.
Edit an image using Stable Diffusion XL. Supports inpainting (edit part of the image with a mask) and image-to-image (edit the full image).
image_uri (string): Original image.
prompt (string): Text prompt.
Mask image that controls which pixels are edited (inpainting). If unset, the entire image is edited (image-to-image).
num_images (integer, 1..8): Number of images to generate. Default: 1
Resolution of the output image, in pixels. Default: 1024
Negative input prompt.
Use "hosted" to return an image URL hosted on Substrate. You can also provide a URL to a registered file store. If unset, the image data will be returned as a base64-encoded string.
Controls the strength of the generation process. Default: 0.8
Random noise seeds. Default is random seeds for each generation.
Generate an image with generation structured by an input image, using Stable Diffusion XL with ControlNet.
image_uri (string): Input image.
control_method (string): Strategy to control generation using the input image. Options: edge, depth, illusion, tile.
prompt (string): Text prompt.
num_images (integer, 1..8): Number of images to generate. Default: 1
Resolution of the output image, in pixels. Default: 1024
Negative input prompt.
Use "hosted" to return an image URL hosted on Substrate. You can also provide a URL to a registered file store. If unset, the image data will be returned as a base64-encoded string.
Controls the influence of the input image on the generated output. Default: 0.5
Controls how much to transform the input image. Default: 0.5
Random noise seeds. Default is random seeds for each generation.
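
A sketch assuming the node is named StableDiffusionXLControlNet; the image URI is a placeholder for the structuring input.
Python
StableDiffusionXLControlNet(
  image_uri="https://example.com/sketch.png",  # placeholder structure image
  control_method="edge",
  prompt="a futuristic city skyline at dusk",
  num_images=1,
)
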
Generate a video using a still image as the conditioning frame.
image_uri (string): Original image.
Use "hosted" to return a video URL hosted on Substrate. You can also provide a URL to a registered file store. If unset, the video data will be returned as a base64-encoded string.
Output video format. Options: gif, webp, mp4, frames. Default: gif
Seed for deterministic generation. Default is a random seed.
Frames per second of the generated video. Ignored if output format is frames. Default: 7
The motion bucket id to use for the generated video. This can be used to control the motion of the video: increasing the motion bucket id increases the motion. Default: 180
The amount of noise added to the conditioning image. The higher the value, the less the video resembles the conditioning image. Increasing this value also increases the motion of the generated video. Default: 0.1

Remove the background from an image and return the foreground segment as a cut-out or a mask.
image_uri (string): Input image.
Return a mask image instead of the original content. Default: false
Invert the mask image. Only takes effect if return_mask is true. Default: false
Hex value background color. Transparent if unset.
Use "hosted" to return an image URL hosted on Substrate. You can also provide a URL to a registered file store. If unset, the image data will be returned as a base64-encoded string.
Segment an image under a point and return the segment.
image_uri (string): Input image.
point (Point): Point prompt.
x (integer): X position.
y (integer): Y position.
Use "hosted" to return an image URL hosted on Substrate. You can also provide a URL to a registered file store. If unset, the image data will be returned as a base64-encoded string.
Segment an image using SegmentAnything.
image_uri (string): Input image.
Point prompts, to detect a segment under the point. One of point_prompts or box_prompts must be set.
x (integer): X position.
y (integer): Y position.
Box prompts, to detect a segment within the bounding box. One of point_prompts or box_prompts must be set.
x1 (float): Top left corner x.
y1 (float): Top left corner y.
x2 (float): Bottom right corner x.
y2 (float): Bottom right corner y.
Use "hosted" to return an image URL hosted on Substrate. You can also provide a URL to a registered file store. If unset, the image data will be returned as a base64-encoded string.
Split document into text segments.
uri (string): URI of the document.
Document ID.
Document metadata.
Maximum number of units per chunk. Defaults to 1024 tokens for text or 40 lines for code.
Number of units to overlap between chunks. Defaults to 200 tokens for text or 15 lines for code.
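
A sketch assuming the node is named SplitDocument; the document URI is a placeholder and the chunking options keep their defaults.
Python
SplitDocument(
  uri="https://example.com/handbook.pdf",  # placeholder document URI
)
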
Generate embedding for a text document.
text (string): Text to embed.
Vector store name.
Metadata that can be used to query the vector store. Ignored if collection_name is unset.
Choose keys from metadata to embed with text.
Vector store document ID. Ignored if store is unset.
Selected embedding model. Options: jina-v2, clip. Default: jina-v2
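
A sketch assuming the node is named EmbedText; collection_name is the vector store name referenced above, and the metadata parameter name and values are placeholders.
Python
EmbedText(
  text="Don Quixote is a novel by Miguel de Cervantes.",
  collection_name="smoke_tests",        # placeholder vector store name
  metadata={"title": "Don Quixote"},    # placeholder queryable metadata; parameter name assumed
)
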
Generate embeddings for multiple text documents.
items (array[EmbedTextItem]): Items to embed.
text (string): Text to embed.
metadata (object): Metadata that can be used to query the vector store. Ignored if collection_name is unset.
doc_id (string): Vector store document ID. Ignored if collection_name is unset.
Vector store name.
Choose keys from metadata to embed with text.
Selected embedding model. Options: jina-v2, clip. Default: jina-v2

Generate embedding for an image.
image_uri (string): Image to embed.
Vector store name.
Vector store document ID. Ignored if collection_name is unset.
Selected embedding model. Default: clip

Generate embeddings for multiple images.
items (array[EmbedImageItem]): Items to embed.
image_uri (string): Image to embed.
doc_id (string): Vector store document ID. Ignored if collection_name is unset.
Vector store name.
Selected embedding model. Default: clip

Generate embeddings for multiple text documents using Jina Embeddings 2.
items (array[EmbedTextItem]): Items to embed.
text (string): Text to embed.
metadata (object): Metadata that can be used to query the vector store. Ignored if collection_name is unset.
doc_id (string): Vector store document ID. Ignored if collection_name is unset.
Vector store name.
Choose keys from metadata to embed with text.
Generate embeddings for text or images using CLIP.
items (array[EmbedTextOrImageItem]): Items to embed.
image_uri (string): Image to embed.
text (string): Text to embed.
metadata (object): Metadata that can be used to query the vector store. Ignored if collection_name is unset.
doc_id (string): Vector store document ID. Ignored if collection_name is unset.
Vector store name.
Choose keys from metadata to embed with text. Only applies to text items.
Find a vector store matching the given collection name, or create a new vector store.
collection_name (string): Vector store name.
model (string): Selected embedding model. Options: jina-v2, clip.

Delete a vector store.
collection_name (string): Vector store name.
model (string): Selected embedding model. Options: jina-v2, clip.

Query a vector store for similar vectors.
collection_name (string): Vector store to query against.
model (string): Selected embedding model. Options: jina-v2, clip.
Texts to embed and use for the query.
Image URIs to embed and use for the query.
Vectors to use for the query.
Document IDs to use for the query.
Number of results to return. Default: 10
The size of the dynamic candidate list for searching the index graph. Default: 40
The number of leaves in the index tree to search. Default: 40
Include the values of the vectors in the response. Default: false
Include the metadata of the vectors in the response. Default: false
Filter metadata by key-value pairs.
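
A sketch assuming the node is named QueryVectorStore; the query_strings and top_k parameter names are assumptions for the query-text and result-count options listed above, and the collection name is a placeholder.
Python
QueryVectorStore(
  collection_name="smoke_tests",           # placeholder vector store name
  model="jina-v2",
  query_strings=["novels by Cervantes"],   # parameter name assumed
  top_k=10,                                # parameter name assumed
)
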
Fetch vectors from a vector store.
collection_name (string): Vector store name.
model (string): Selected embedding model. Options: jina-v2, clip.
ids (array[string]): Document IDs to retrieve.
Update vectors in a vector store.
collection_name (string): Vector store name.
model (string): Selected embedding model. Options: jina-v2, clip.
vectors (array[UpdateVectorParams]): Vectors to upsert.
id (string): Document ID.
vector (array[number]): Embedding vector.
metadata (object): Document metadata.
Delete vectors in a vector store.
collection_name (string): Vector store name.
model (string): Selected embedding model. Options: jina-v2, clip.
ids (array[string]): Document IDs to delete.
Transcribe speech in an audio or video file.
audio_uri (string): Input audio.
Prompt to guide the model on the content and context of the input audio.
(Deprecated) Segment the text into sentences with approximate timestamps. Default: false
Align transcription to produce more accurate sentence-level timestamps and word-level timestamps. An array of word segments will be included in each sentence segment. Default: false
Identify speakers for each segment. Speaker IDs will be included in each segment. Default: false
Suggest automatic chapter markers. Default: false
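
A sketch assuming the node is named TranscribeSpeech; the audio URI is a placeholder, and prompt and align are assumed names for the guidance and alignment options above.
Python
TranscribeSpeech(
  audio_uri="https://example.com/interview.mp3",           # placeholder input audio
  prompt="An interview about the history of the novel.",   # parameter name assumed
  align=True,  # parameter name assumed; adds sentence- and word-level timestamps
)
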
Generate speech from text.
text (string): Input text.
Use "hosted" to return an audio URL hosted on Substrate. You can also provide a URL to a registered file store. If unset, the audio data will be returned as a base64-encoded string.
Return one of two options based on a condition.