Deploy Your Vision Model
Use the guide below to find deployment instructions for the model you want to deploy.
What kind of model do you want to deploy?

- Foundation: large, general models that are ready to use for a variety of tasks. Foundation models are less accurate than fine-tuned models, but they can be used for a wide variety of tasks without training.
- Fine-Tuned: models you have trained with, or uploaded to, Roboflow. Fine-tuned models are more accurate than foundation models, but they are specific to the task they were trained on.

Foundation models covered in this guide:

- CLIP (classification, embeddings)
- Grounding DINO (object detection)
- YOLO-World (object detection)
- DocTR (OCR)
- CogVLM (multimodal)
- Gaze Detection (L2CS-Net)
- Segment Anything (segmentation)

Fine-tuned model types:

- Object Detection: identify objects and their positions with bounding boxes.
- Classification: assign labels to the entire image.
- Image Segmentation: detect multiple objects and their actual shapes.
- Keypoint Detection: identify keypoints ('skeletons') on subjects.
- Semantic Segmentation: assign every pixel to a label. Often inferior to instance segmentation.

Whichever model you choose, the remaining decisions are the same:

- Where do you want to deploy your model? On edge devices, in your own cloud, or in the Roboflow Cloud.
- What device do you want to use? CPU, NVIDIA GPU, NVIDIA Jetson, or Raspberry Pi. Fine-tuned models can also be deployed to Snap Lens. CogVLM requires an NVIDIA GPU.
- What cloud platform do you want to use? Amazon Web Services, Google Cloud Platform, or Microsoft Azure, on a single machine or a Kubernetes cluster, with a CPU or a GPU.
- What is your inference input? An image, recorded video files, or, for fine-tuned models, a live video stream (RTSP, webcam, or UDP).
- For fine-tuned models deployed in the Roboflow Cloud, what programming language do you want to use? Python, cURL, JavaScript, or Swift.

The sections below provide deployment instructions for CLIP, Grounding DINO, YOLO-World, DocTR, CogVLM, and Gaze Detection.

CLIP (Classification, Embeddings)
First, install the Inference CLIP extension:
pip install "inference[clip]" inference-cli inference-sdk
Next, start an Inference server:
inference server start
To calculate image and text embeddings, use the following code:
import os

from inference_sdk import InferenceHTTPClient

CLIENT = InferenceHTTPClient(
    api_url="https://infer.roboflow.com",  # or "http://localhost:9001" to use the local server you started above
    api_key=os.environ["ROBOFLOW_API_KEY"],
)
embeddings_image = CLIENT.get_clip_image_embeddings(inference_input="https://i.imgur.com/Q6lDy8B.jpg")
embeddings_text = CLIENT.get_clip_text_embeddings(text="the quick brown fox jumped over the lazy dog")
You can then compare the embeddings using cosine similarity:
from sklearn.metrics.pairwise import cosine_similarity
cosine_similarity(embeddings_text, embeddings_image)
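You can use these embeddings to find which of several images best matches a text prompt. Below is a minimal sketch, assuming the client configured above and that the returned embeddings can be passed directly to scikit-learn, as in the snippet above; the candidate image URLs are placeholders to replace with your own images:

import os

from inference_sdk import InferenceHTTPClient
from sklearn.metrics.pairwise import cosine_similarity

CLIENT = InferenceHTTPClient(
    api_url="https://infer.roboflow.com",
    api_key=os.environ["ROBOFLOW_API_KEY"],
)

# Placeholder candidate images; replace with your own files or URLs.
image_urls = [
    "https://i.imgur.com/Q6lDy8B.jpg",
    "https://media.roboflow.com/fruit.png",
]

# Embed the text prompt once, then embed and score each candidate image.
text_embedding = CLIENT.get_clip_text_embeddings(text="a photo of a dog")

best_url, best_score = None, -1.0
for url in image_urls:
    image_embedding = CLIENT.get_clip_image_embeddings(inference_input=url)
    score = float(cosine_similarity(text_embedding, image_embedding)[0][0])
    if score > best_score:
        best_url, best_score = url, score

print(best_url, best_score)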
To deploy CLIP on a Kubernetes cluster on AWS or GCP, or to deploy your model to Microsoft Azure, contact the Roboflow sales team.
To calculate CLIP embeddings for recorded video files, you can use the Roboflow hosted video inference API. First, install the Roboflow SDK:
pip install roboflow
To calculate embeddings for all frames in a video, run:
from roboflow import CLIPModel

model = CLIPModel(api_key="ROBOFLOW_API_KEY")
job_id, signed_url, expire_time = model.predict_video(
"YOUR_VIDEO.mp4",
fps=5,
prediction_type="batch-video",
)
results = model.poll_until_video_results(job_id)
print(results)
Above, replace:

- ROBOFLOW_API_KEY with your Roboflow API key. Learn how to retrieve your API key.
- YOUR_VIDEO.mp4 with the path to your video file.
For more information and usage examples, see our hosted video documentation.
Grounding DINO (Object Detection)

First, install the Inference Grounding DINO extension:
pip install "inference[grounding-dino]"
Create a new Python file called app.py and add the following code:
import os

from inference.models.grounding_dino import GroundingDINO

model = GroundingDINO(api_key=os.environ["ROBOFLOW_API_KEY"])
results = model.infer(
{
"image": {
"type": "url",
"value": "https://media.roboflow.com/fruit.png",
},
"text": ["apple"]
}
)
print(results.json())
In this code, we load Grounding DINO, run it on an image, and print the predictions from the model.
Above, replace:

- apple with the object you want to detect.
- The image URL (https://media.roboflow.com/fruit.png) with the URL of the image in which you want to detect objects.
To use Grounding DINO with Inference, you will need a Roboflow API key. If you don't already have a Roboflow account, sign up for a free Roboflow account. Then, retrieve your API key from the Roboflow dashboard. Run the following command to set your API key in your coding environment:
export ROBOFLOW_API_KEY=
Then, run the Python script you have created:
python app.py
When this code first runs, the Grounding DINO model weights will be downloaded. This will take a few minutes.
The predictions from the Grounding DINO model will be printed to the console.
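If you want to visualize the predictions, you can draw them on the image with the supervision package, as the YOLO-World example later in this guide does. The sketch below is a minimal example under two assumptions: that sv.Detections.from_inference accepts the Grounding DINO response returned by model.infer (check the supervision documentation for the exact format your version expects), and that the example image URL from above is used for both inference and annotation:

import os
import urllib.request

import cv2
import numpy as np
import supervision as sv
from inference.models.grounding_dino import GroundingDINO

model = GroundingDINO(api_key=os.environ["ROBOFLOW_API_KEY"])

image_url = "https://media.roboflow.com/fruit.png"

results = model.infer(
    {
        "image": {"type": "url", "value": image_url},
        "text": ["apple"],
    }
)

# Download the same image so the predictions can be drawn on it.
image_bytes = urllib.request.urlopen(image_url).read()
image = cv2.imdecode(np.frombuffer(image_bytes, dtype=np.uint8), cv2.IMREAD_COLOR)

# Assumes supervision can read the Grounding DINO response directly.
detections = sv.Detections.from_inference(results)

annotated_image = sv.BoundingBoxAnnotator().annotate(scene=image, detections=detections)
annotated_image = sv.LabelAnnotator().annotate(scene=annotated_image, detections=detections)
sv.plot_image(annotated_image)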
To deploy Grounding DINO on a Kubernetes cluster on AWS or GCP, or to deploy your model to Microsoft Azure, contact the Roboflow sales team.
YOLO-World (Object Detection)

Run the following command to set your API key in your coding environment:
export ROBOFLOW_API_KEY=
Learn how to retrieve your Roboflow API key
Then, create a new Python file called app.py and add the following code:
import cv2
import supervision as sv
from inference.models.yolo_world.yolo_world import YOLOWorld
image = cv2.imread("image.jpeg")
model = YOLOWorld(model_id="yolo_world/l")
classes = ["person", "backpack", "dog", "eye", "nose", "ear", "tongue"]
results = model.infer("image.jpeg", text=classes, confidence=0.03)
detections = sv.Detections.from_inference(results[0])
bounding_box_annotator = sv.BoundingBoxAnnotator()
label_annotator = sv.LabelAnnotator()
labels = [classes[class_id] for class_id in detections.class_id]
annotated_image = bounding_box_annotator.annotate(
scene=image, detections=detections
)
annotated_image = label_annotator.annotate(
scene=annotated_image, detections=detections, labels=labels
)
sv.plot_image(annotated_image)
Above, replace:

- image.jpeg with the path to the image in which you want to detect objects.
- classes with the objects you want to detect.
Then, run the Python script you have created:
python app.py
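If you are running the script on a headless machine where sv.plot_image cannot open a window, you can save the annotated image to disk and print a text summary instead. This is a small sketch that extends the app.py script above (it reuses the annotated_image, detections, and classes variables defined there):

import cv2

# Save the annotated image next to the original and summarize the detections.
cv2.imwrite("annotated_image.jpeg", annotated_image)

for class_id, confidence in zip(detections.class_id, detections.confidence):
    print(classes[class_id], round(float(confidence), 3))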
To deploy YOLO-World on a Kubernetes cluster on AWS or GCP, or to deploy your model to Microsoft Azure, contact the Roboflow sales team.
DocTR (OCR)

To use DocTR with Inference, you will need a Roboflow API key. If you don't already have a Roboflow account, sign up for a free Roboflow account.
Then, retrieve your API key from the Roboflow dashboard. Learn how to retrieve your API key.
Run the following command to set your API key in your coding environment:
export ROBOFLOW_API_KEY=
Create a new Python file and add the following code (this uses the inference-sdk package, which you can install with pip install inference-sdk):
import os
from inference_sdk import InferenceHTTPClient
CLIENT = InferenceHTTPClient(
api_url="https://infer.roboflow.com",
api_key=os.environ["ROBOFLOW_API_KEY"]
)
result = CLIENT.ocr_image(inference_input="./container.jpg") # single image request
print(result)
Above, replace container.jpg with the path to the image in which you want to read text.
The results of DocTR will appear in your terminal:
{'result': 'MSKU 0439215', 'time': 3.870879542999319}
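If you want to read text from several images, you can reuse the same client in a loop. Below is a minimal sketch; the image paths are placeholders, and it assumes each response contains the result key shown in the sample output above:

import os

from inference_sdk import InferenceHTTPClient

CLIENT = InferenceHTTPClient(
    api_url="https://infer.roboflow.com",
    api_key=os.environ["ROBOFLOW_API_KEY"]
)

# Placeholder image paths; replace with your own files.
image_paths = ["./container.jpg", "./container2.jpg"]

for path in image_paths:
    response = CLIENT.ocr_image(inference_input=path)
    # Each response is expected to hold the recognized text under "result".
    print(path, "->", response.get("result", ""))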
To deploy DocTR on a Kubernetes cluster on AWS or GCP, or to deploy your model to Microsoft Azure, contact the Roboflow sales team.
CogVLM (Multimodal Model)

To use CogVLM with Inference, you will need a Roboflow API key. If you don't already have a Roboflow account, sign up for a free Roboflow account.
Then, retrieve your API key from the Roboflow dashboard. Learn how to retrieve your API key.
Run the following command to set your API key in your coding environment:
export ROBOFLOW_API_KEY=
We recommend running CogVLM through the Inference HTTP API on a machine with a GPU. It is easy to set up with our inference-cli tool. Run the following commands to set up your environment and start the API at http://localhost:9001:
pip install inference inference-cli inference-sdk
inference server start # make sure you run this on a machine with a GPU; otherwise, CogVLM will not be available
Use the inference-sdk to prompt the model:
import os
from inference_sdk import InferenceHTTPClient
CLIENT = InferenceHTTPClient(
api_url="http://localhost:9001", # only local hosting supported
api_key=os.environ["ROBOFLOW_API_KEY"]
)
result = CLIENT.prompt_cogvlm(
visual_prompt="./forklift.jpg",
text_prompt="Is there a forklift close to a conveyor belt?",
)
print(result)
Above, replace forklift.jpg with the path to the image you want to ask questions about.
The results of CogVLM will appear in your terminal:
{
'response': 'yes, there is a forklift close to a conveyor belt, and it appears to be transporting a stack of items onto it.',
'time': 12.89864671198302
}
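You can ask CogVLM several questions about the same image by calling prompt_cogvlm once per question. Below is a minimal sketch, assuming the server started above is running at http://localhost:9001 and that each response contains the response key shown in the sample output above; the prompts are placeholders to replace with your own questions:

import os

from inference_sdk import InferenceHTTPClient

CLIENT = InferenceHTTPClient(
    api_url="http://localhost:9001", # only local hosting supported
    api_key=os.environ["ROBOFLOW_API_KEY"]
)

# Placeholder follow-up questions; replace with your own prompts.
prompts = [
    "Is there a forklift close to a conveyor belt?",
    "Is anyone standing in the path of the forklift?",
]

for prompt in prompts:
    result = CLIENT.prompt_cogvlm(
        visual_prompt="./forklift.jpg",
        text_prompt=prompt,
    )
    print(prompt, "->", result["response"])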
To deploy CogVLM on a Kubernetes cluster on AWS or GCP, or to deploy your model to Microsoft Azure, contact the Roboflow sales team.
Gaze Detection (L2CS-Net)

To use L2CS-Net with Inference, you will need a Roboflow API key. If you don't already have a Roboflow account, sign up for a free Roboflow account. Then, retrieve your API key from the Roboflow dashboard. Run the following command to set your API key in your coding environment:
export ROBOFLOW_API_KEY=
L2CS-Net accepts an image and returns pitch and yaw values that you can use to:
- Figure out the direction in which someone is looking, and;
- Estimate, roughly, where someone is looking.
We recommend running L2CS-Net through the Inference HTTP API. It is easy to set up with our inference-cli tool. Run the following commands to set up your environment and start the API at http://localhost:9001:
pip install inference inference-cli inference-sdk
inference server start # this starts the server at http://localhost:9001
Then, create a new Python file and add the following code:
import os
from inference_sdk import InferenceHTTPClient
CLIENT = InferenceHTTPClient(
api_url="http://localhost:9001", # only local hosting supported
api_key=os.environ["ROBOFLOW_API_KEY"]
)
results = CLIENT.detect_gazes(inference_input="./image.jpg") # single image request
print(results)
Above, replace image.jpg with the path to the image in which you want to detect gazes.
The code above makes two assumptions:
- Faces are roughly one meter away from the camera.
- Faces are roughly 250mm tall.
These assumptions are a good starting point if you are using a computer webcam with L2CS-Net, where people in the frame are likely to be sitting at a desk.
On the first run, the model will be downloaded. On subsequent runs, the model will be cached locally and loaded from the cache. It will take a few moments for the model to download.
The results of L2CS-Net will appear in your terminal:
[{'face': {'x': 1107.0, 'y': 1695.5, 'width': 1056.0, 'height': 1055.0, 'confidence': 0.9355756640434265, 'class': 'face', 'class_confidence': None, 'class_id': 0, 'tracker_id': None, 'landmarks': [{'x': 902.0, 'y': 1441.0}, {'x': 1350.0, 'y': 1449.0}, {'x': 1137.0, 'y': 1692.0}, {'x': 1124.0, 'y': 1915.0}, {'x': 625.0, 'y': 1551.0}, {'x': 1565.0, 'y': 1571.0}]}, 'yaw': -0.04104889929294586, 'pitch': 0.029525401070713997}]
We have created an example project that will let you run L2CS-Net and see the results of the model in real time. Learn how to set up the example.
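The yaw and pitch values in the output are reported in radians (an assumption based on the small values in the sample output above). The sketch below converts them to degrees and prints a rough gaze direction for each detected face; the left/right and up/down sign mapping is also an assumption, so verify it against a few of your own images:

import math
import os

from inference_sdk import InferenceHTTPClient

CLIENT = InferenceHTTPClient(
    api_url="http://localhost:9001", # only local hosting supported
    api_key=os.environ["ROBOFLOW_API_KEY"]
)

gazes = CLIENT.detect_gazes(inference_input="./image.jpg")

for gaze in gazes:
    yaw_degrees = math.degrees(gaze["yaw"])
    pitch_degrees = math.degrees(gaze["pitch"])
    # Assumed sign conventions; check them against your own test images.
    horizontal = "left" if gaze["yaw"] > 0 else "right"
    vertical = "up" if gaze["pitch"] > 0 else "down"
    print(f"yaw {yaw_degrees:.1f} degrees ({horizontal}), pitch {pitch_degrees:.1f} degrees ({vertical})")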
To use L2CS-Net with Inference, you will need a Roboflow API key. If you don't already have a Roboflow account, sign up for a free Roboflow account. Then, retrieve your API key from the Roboflow dashboard. Run the following command to set your API key in your coding environment:
export ROBOFLOW_API_KEY=
L2CS-Net accepts an image and returns pitch and yaw values that you can use to:
- Figure out the direction in which someone is looking, and;
- Estimate, roughly, where someone is looking.
We recommend using L2CS-Net paired with inference HTTP API. It's easy to set up with our inference-cli tool. Run the following command to set up environment and run the API under http://localhost:9001
pip install inference inference-cli inference-sdk
inference server start # this starts server under http://localhost:9001
Then, create a new Python file and add the following code:
import os
from inference_sdk import InferenceHTTPClient
CLIENT = InferenceHTTPClient(
api_url="http://localhost:9001", # only local hosting supported
api_key=os.environ["ROBOFLOW_API_KEY"]
)
CLIENT.detect_gazes(inference_input="./image.jpg") # single image request
Above, replace image.jpg
with the image in which you want to detect gazes.
The code above makes two assumptions:
- Faces are roughly one meter away from the camera.
- Faces are roughly 250mm tall.
These assumptions are a good starting point if you are using a computer webcam with L2CS-Net, where people in the frame are likely to be sitting at a desk.
On the first run, the model will be downloaded. On subsequent runs, the model will be cached locally and loaded from the cache. It will take a few moments for the model to download.
The results of L2CS-Net will appear in your terminal:
[{'face': {'x': 1107.0, 'y': 1695.5, 'width': 1056.0, 'height': 1055.0, 'confidence': 0.9355756640434265, 'class': 'face', 'class_confidence': None, 'class_id': 0, 'tracker_id': None, 'landmarks': [{'x': 902.0, 'y': 1441.0}, {'x': 1350.0, 'y': 1449.0}, {'x': 1137.0, 'y': 1692.0}, {'x': 1124.0, 'y': 1915.0}, {'x': 625.0, 'y': 1551.0}, {'x': 1565.0, 'y': 1571.0}]}, 'yaw': -0.04104889929294586, 'pitch': 0.029525401070713997}]
We have created an example project that will let you run L2CS-Net and see the results of the model in real time. Learn how to set up the example.
Contact the Roboflow sales team to learn more about deploying your models on a Kubernetes cluster on AWS.
Contact the Roboflow sales team to learn more about deploying your models on a Kubernetes cluster on GCP.
Contact the Roboflow sales team to learn more about deploying your model to Azure.
You can use Segment Anything (SAM) to calculate segmentation masks for objects in images.
First, install Inference:
pip install inference inference-sdk
Next, start an Inference server to which you can make requests:
inference server start
Retrieve your Roboflow API key and save it in an environment variable called ROBOFLOW_API_KEY:
export ROBOFLOW_API_KEY="your-api-key"
Create a Segment Anything embedding
An embedding is a numeric representation of an image. SAM uses embeddings as input to calculate the location of objects in an image.
import base64
import os

import requests

with open("image.png", "rb") as image_file:
    encoded_string = base64.b64encode(image_file.read()).decode("utf-8")

infer_payload = {
    "image": {
        "type": "base64",
        "value": "data:image/png;base64," + encoded_string,
    },
    "image_id": "example_image_id",
}

base_url = "http://localhost:9001"
api_key = os.environ["ROBOFLOW_API_KEY"]

res = requests.post(
    f"{base_url}/sam/embed_image?api_key={api_key}",
    json=infer_payload,
)

embeddings = res.json()["embeddings"]
This code makes a request to Inference to embed an image using SAM.
The example_image_id is used to cache the embeddings so that future segmentation requests on the same image can reuse them.
Segment an object
To segment an object, you need to know at least one point in the image that falls on the object you want to segment.
You may also opt to use an object detection model to identify an object, then use the center point of the bounding box as a prompt for segmentation.
Create a new Python file and add the following code:
import base64
import os

import requests

with open("image.png", "rb") as image_file:
    encoded_string = base64.b64encode(image_file.read()).decode("utf-8")

base_url = "http://localhost:9001"
api_key = os.environ["ROBOFLOW_API_KEY"]

infer_payload = {
    "image": {
        "type": "base64",
        "value": "data:image/png;base64," + encoded_string,
    },
    "point_coords": [[380, 350]],
    "point_labels": [1],
    "image_id": "example_image_id",
}

res = requests.post(
    f"{base_url}/sam/segment_image?api_key={api_key}",
    json=infer_payload,
)

masks = res.json()["masks"]
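If you are prompting SAM from an object detection model as described above, the center of a predicted bounding box can serve as the point prompt. A small sketch, assuming a Roboflow-style detection with center-based x and y coordinates (the values below are made up):
# Hypothetical detection in Roboflow's center-based box format.
detection = {"x": 412.0, "y": 310.5, "width": 120.0, "height": 80.0, "class": "bottle"}

# Use the box center as a positive (foreground) point prompt for SAM.
point_coords = [[int(detection["x"]), int(detection["y"])]]
point_labels = [1]  # 1 marks the point as being on the object to segment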
To run your model on images, first install Inference:
pip install inference
Retrieve your Roboflow API key and save it in an environment variable called ROBOFLOW_API_KEY:
export ROBOFLOW_API_KEY="your-api-key"
To use your model, run the following code:
import inference

model = inference.load_roboflow_model("model_id/version")

results = model.infer(image="YOUR_IMAGE.jpg")
Above, replace:
- YOUR_IMAGE.jpg with the path to your image.
- model_id/version with the model ID and version you want to use. Learn how to retrieve your model and version ID.
You can run your object detection model on a video file using Inference.
First, install Inference:
pip install inference
Retrieve your Roboflow API key and save it in an environment variable called ROBOFLOW_API_KEY:
export ROBOFLOW_API_KEY="your-api-key"
To use your model, run the following code:
from inference import InferencePipeline
from inference.core.interfaces.stream.sinks import render_boxes

pipeline = InferencePipeline.init(
    model_id="model_id/version",
    video_reference="video.mp4",  # Replace with the path to your video
    on_prediction=render_boxes,  # Function to run after each prediction
)

pipeline.start()
pipeline.join()
Above, replace:
- model_id/version with the model ID and version you want to use. Learn how to retrieve your model and version ID.
- video.mp4 with the path to your video.
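render_boxes draws predictions on each frame. If you want to handle predictions yourself, you can pass your own callback as on_prediction. A minimal sketch, assuming the callback receives the prediction dictionary and the video frame for each processed frame (the function name is illustrative):
# Illustrative custom sink: count the objects detected in each frame.
def count_objects(predictions, video_frame):
    detections = predictions.get("predictions", [])
    print(f"Frame {video_frame.frame_id}: {len(detections)} objects detected")

# Pass it to the pipeline instead of render_boxes:
# pipeline = InferencePipeline.init(..., on_prediction=count_objects)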
You can run your object detection model on a webcam or RTSP video stream using Inference.
First, install Inference:
pip install inference
Retrieve your Roboflow API key and save it in an environment variable called ROBOFLOW_API_KEY:
export ROBOFLOW_API_KEY="your-api-key"
To use your model, run the following code:
from inference import InferencePipeline
from inference.core.interfaces.stream.sinks import render_boxes

pipeline = InferencePipeline.init(
    model_id="model_id/version",
    video_reference="rtsp://username:password",  # RTSP stream URL or webcam ID
    on_prediction=render_boxes,  # Function to run after each prediction
)

pipeline.start()
pipeline.join()
Above, replace:
- model_id/version with the model ID and version you want to use. Learn how to retrieve your model and version ID.
- rtsp://username:password with your RTSP stream URL (including your username and password), or with a webcam ID (i.e. 0 for the default webcam).
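For example, to run the same pipeline on the default webcam rather than an RTSP stream, pass the integer device ID (a sketch based on the code above):
# Use the default webcam (device 0) as the video source.
pipeline = InferencePipeline.init(
    model_id="model_id/version",
    video_reference=0,  # integer webcam ID instead of an RTSP URL
    on_prediction=render_boxes,
)

pipeline.start()
pipeline.join()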
Read our UDP inference guide to learn how to deploy a model with UDP.
To run your model on an NVIDIA GPU, first install the GPU version of Inference:
pip install inference-gpu
Retrieve your Roboflow API key and save it in an environment variable called ROBOFLOW_API_KEY:
export ROBOFLOW_API_KEY="your-api-key"
To use your model, run the following code:
import inference

model = inference.load_roboflow_model("model_id/version")

results = model.infer(image="YOUR_IMAGE.jpg")
Above, replace:
- YOUR_IMAGE.jpg with the path to your image.
- model_id/version with the model ID and version you want to use. Learn how to retrieve your model and version ID.
Refer to our Lens Studio deployment guide for more information.
You can run your object detection models in the Roboflow cloud.
First, install the Inference SDK:
pip install inference-sdk
Retrieve your Roboflow API key and save it in an environment variable called ROBOFLOW_API_KEY:
export ROBOFLOW_API_KEY="your-api-key"
Then, use the following code:
import os

from inference_sdk import InferenceHTTPClient

CLIENT = InferenceHTTPClient(
    api_url="https://detect.roboflow.com",
    api_key=os.environ["ROBOFLOW_API_KEY"]
)

result = CLIENT.infer("YOUR_IMAGE.jpg", model_id="model_id/version")
Above, replace:
- YOUR_IMAGE.jpg with the path to your image.
- model_id/version with the model ID and version you want to use. Learn how to retrieve your model and version ID.
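For an object detection model, result is the parsed JSON response, which includes a predictions list with center-based box coordinates. A short sketch of reading it:
# Print the class, confidence, and box geometry for each detected object.
for prediction in result["predictions"]:
    print(
        prediction["class"],
        round(prediction["confidence"], 3),
        (prediction["x"], prediction["y"]),
        (prediction["width"], prediction["height"]),
    )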
Linux or macOS
Retrieving JSON predictions for a local file called YOUR_IMAGE.jpg:
base64 YOUR_IMAGE.jpg | curl -d @- \
  "https://detect.roboflow.com/model-id/version?api_key=KEY"
Inferring on an image hosted elsewhere on the web via its URL (don't forget to URL encode it):
curl -X POST \
  "https://detect.roboflow.com/model-id/version?api_key=KEY&image=URL_OF_YOUR_IMAGE"
Above, replace:
- model-id/version with the model ID and version you want to use. Learn how to retrieve your model and version ID.
- KEY with your API key. Learn how to retrieve your Roboflow API key.
Windows
You will need to install curl for Windows and GNU's base64 tool for Windows. The easiest way to do this is to use the git for Windows installer which also includes the curl and base64 command line tools when you select "Use Git and optional Unix tools from the Command Prompt" during installation. Then you can use the same commands as above.
We're using axios to perform the POST request in this example, so first run npm install axios to install the dependency.
Inferring on a local image:
const axios = require("axios");
const fs = require("fs");

const image = fs.readFileSync("YOUR_IMAGE.jpg", { encoding: "base64" });

axios({
    method: "POST",
    url: "https://detect.roboflow.com/model-id/version",
    params: {
        api_key: "KEY"
    },
    data: image,
    headers: {
        "Content-Type": "application/x-www-form-urlencoded"
    }
})
    .then(function (response) {
        console.log(response.data);
    })
    .catch(function (error) {
        console.log(error.message);
    });
Above, replace:
- model-id/version with the model ID and version you want to use. Learn how to retrieve your model and version ID.
- KEY with your API key. Learn how to retrieve your Roboflow API key.
Inferring on an Image Hosted Elsewhere via URL:
const axios = require("axios");

axios({
    method: "POST",
    url: "https://detect.roboflow.com/model-id/version",
    params: {
        api_key: "KEY",
        image: "https://i.imgur.com/PEEvqPN.png"
    }
})
    .then(function (response) {
        console.log(response.data);
    })
    .catch(function (error) {
        console.log(error.message);
    });
Above, replace:
- model-id/version with the model ID and version you want to use. Learn how to retrieve your model and version ID.
- KEY with your API key. Learn how to retrieve your Roboflow API key.
import UIKit

// Load Image and Convert to Base64
let image = UIImage(named: "YOUR_IMAGE.jpg")
let imageData = image?.jpegData(compressionQuality: 1)
let fileContent = imageData?.base64EncodedString()
let postData = fileContent!.data(using: .utf8)

// Initialize Inference Server Request with API_KEY, Model, and Model Version
var request = URLRequest(url: URL(string: "https://detect.roboflow.com/model-id/version?api_key=KEY&name=YOUR_IMAGE.jpg")!, timeoutInterval: Double.infinity)
request.addValue("application/x-www-form-urlencoded", forHTTPHeaderField: "Content-Type")
request.httpMethod = "POST"
request.httpBody = postData

// Execute Post Request
URLSession.shared.dataTask(with: request, completionHandler: { data, response, error in

    // Parse Response to String
    guard let data = data else {
        print(String(describing: error))
        return
    }

    // Convert Response String to Dictionary
    do {
        let dict = try JSONSerialization.jsonObject(with: data, options: []) as? [String: Any]
    } catch {
        print(error.localizedDescription)
    }

    // Print String Response
    print(String(data: data, encoding: .utf8)!)
}).resume()
Above, replace:
- model-id/version with the model ID and version you want to use. Learn how to retrieve your model and version ID.
- KEY with your API key. Learn how to retrieve your Roboflow API key.
You can use models you have trained or uploaded to Roboflow with the video inference API.
Use a Fine-Tuned Model with the Video Inference API
First, install the Roboflow Python package:
pip install roboflow
Next, create a new Python file and add the following code:
from roboflow import Roboflow

rf = Roboflow(api_key="KEY")
project = rf.workspace().project("model-id")
model = project.version("version").model

job_id, signed_url, expire_time = model.predict_video(
    "YOUR_VIDEO.mp4",
    fps=5,
    prediction_type="batch-video",
)

results = model.poll_until_video_results(job_id)

print(results)
Above, replace:
- YOUR_VIDEO.mp4 with the path to your video.
- model-id with the model ID you want to use. Learn how to retrieve your model and version ID.
- version with the version you want to use. Learn how to retrieve your model and version ID.
- KEY with your API key. Learn how to retrieve your Roboflow API key.
First, install Inference:
pip install inference
Retrieve your Roboflow API key and save it in an environment variable called ROBOFLOW_API_KEY
:
export ROBOFLOW_API_KEY="your-api-key"
To use your model, run the following code:
import inference model = inference.load_roboflow_model("model-name/version") results = model.infer(image="YOUR_IMAGE.jpg")
Above, replace:
YOUR_IMAGE.jpg
with the path to your image.model_id/version
with the model ID and version you want to use. Learn how to retrieve your model and version ID.
First, install Inference:
pip install inference-gpu
Retrieve your Roboflow API key and save it in an environment variable called ROBOFLOW_API_KEY
:
export ROBOFLOW_API_KEY="your-api-key"
To use your model, run the following code:
import inference model = inference.load_roboflow_model("model-name/version") results = model.infer(image="YOUR_IMAGE.jpg")
Above, replace:
YOUR_IMAGE.jpg
with the path to your image.model_id/version
with the model ID and version you want to use. Learn how to retrieve your model and version ID.
First, install Inference:
pip install inference
Retrieve your Roboflow API key and save it in an environment variable called ROBOFLOW_API_KEY
:
export ROBOFLOW_API_KEY="your-api-key"
To use your model, run the following code:
import inference model = inference.load_roboflow_model("model-name/version") results = model.infer(image="YOUR_IMAGE.jpg")
Above, replace:
YOUR_IMAGE.jpg
with the path to your image.model_id/version
with the model ID and version you want to use. Learn how to retrieve your model and version ID.
First, install Inference:
pip install inference
Retrieve your Roboflow API key and save it in an environment variable called ROBOFLOW_API_KEY
:
export ROBOFLOW_API_KEY="your-api-key"
To use your model, run the following code:
import inference model = inference.load_roboflow_model("model-name/version") results = model.infer(image="YOUR_IMAGE.jpg")
Above, replace:
YOUR_IMAGE.jpg
with the path to your image.model_id/version
with the model ID and version you want to use. Learn how to retrieve your model and version ID.
First, install Inference:
pip install inference
Retrieve your Roboflow API key and save it in an environment variable called ROBOFLOW_API_KEY:
export ROBOFLOW_API_KEY="your-api-key"
To use your model, run the following code:
import inference

model = inference.load_roboflow_model("model-id/version")
results = model.infer(image="YOUR_IMAGE.jpg")
Above, replace:
YOUR_IMAGE.jpg with the path to your image.
model-id/version with the model ID and version you want to use. Learn how to retrieve your model and version ID.
First, install Inference:
pip install inference-gpu
Retrieve your Roboflow API key and save it in an environment variable called ROBOFLOW_API_KEY:
export ROBOFLOW_API_KEY="your-api-key"
To use your model, run the following code:
import inference

model = inference.load_roboflow_model("model-id/version")
results = model.infer(image="YOUR_IMAGE.jpg")
Above, replace:
YOUR_IMAGE.jpg with the path to your image.
model-id/version with the model ID and version you want to use. Learn how to retrieve your model and version ID.
Contact the Roboflow sales team to learn more about deploying your models on a Kubernetes cluster on AWS.
First, install Inference:
pip install inference-gpu
Retrieve your Roboflow API key and save it in an environment variable called ROBOFLOW_API_KEY:
export ROBOFLOW_API_KEY="your-api-key"
To use your model, run the following code:
import inference

model = inference.load_roboflow_model("model-id/version")
results = model.infer(image="YOUR_IMAGE.jpg")
Above, replace:
YOUR_IMAGE.jpg with the path to your image.
model-id/version with the model ID and version you want to use. Learn how to retrieve your model and version ID.
First, install Inference:
pip install inference-gpu
Retrieve your Roboflow API key and save it in an environment variable called ROBOFLOW_API_KEY:
export ROBOFLOW_API_KEY="your-api-key"
To use your model, run the following code:
import inference

model = inference.load_roboflow_model("model-id/version")
results = model.infer(image="YOUR_IMAGE.jpg")
Above, replace:
YOUR_IMAGE.jpg with the path to your image.
model-id/version with the model ID and version you want to use. Learn how to retrieve your model and version ID.
Contact the Roboflow sales team to learn more about deploying your models on a Kubernetes cluster on AWS.
Contact the Roboflow sales team to learn more about deploying your model to Azure.
First, install Inference:
pip install inference
Retrieve your Roboflow API key and save it in an environment variable called ROBOFLOW_API_KEY:
export ROBOFLOW_API_KEY="your-api-key"
To use your model, run the following code:
import inference

model = inference.load_roboflow_model("model-id/version")
results = model.infer(image="YOUR_IMAGE.jpg")
Above, replace:
YOUR_IMAGE.jpg with the path to your image.
model-id/version with the model ID and version you want to use. Learn how to retrieve your model and version ID.
First, install Inference:
pip install inference-gpu
Retrieve your Roboflow API key and save it in an environment variable called ROBOFLOW_API_KEY:
export ROBOFLOW_API_KEY="your-api-key"
To use your model, run the following code:
import inference

model = inference.load_roboflow_model("model-id/version")
results = model.infer(image="YOUR_IMAGE.jpg")
Above, replace:
YOUR_IMAGE.jpg with the path to your image.
model-id/version with the model ID and version you want to use. Learn how to retrieve your model and version ID.
First, install Inference:
pip install inference
Retrieve your Roboflow API key and save it in an environment variable called ROBOFLOW_API_KEY:
export ROBOFLOW_API_KEY="your-api-key"
To use your model, run the following code:
import inference

model = inference.load_roboflow_model("model-id/version")
results = model.infer(image="YOUR_IMAGE.jpg")
Above, replace:
YOUR_IMAGE.jpg with the path to your image.
model-id/version with the model ID and version you want to use. Learn how to retrieve your model and version ID.
First, install Inference:
pip install inference
Retrieve your Roboflow API key and save it in an environment variable called ROBOFLOW_API_KEY:
export ROBOFLOW_API_KEY="your-api-key"
To use your model, run the following code:
import inference

model = inference.load_roboflow_model("model-id/version")
results = model.infer(image="YOUR_IMAGE.jpg")
Above, replace:
YOUR_IMAGE.jpg with the path to your image.
model-id/version with the model ID and version you want to use. Learn how to retrieve your model and version ID.
First, install Inference:
pip install inference
Retrieve your Roboflow API key and save it in an environment variable called ROBOFLOW_API_KEY:
export ROBOFLOW_API_KEY="your-api-key"
To use your model, run the following code:
import inference

model = inference.load_roboflow_model("model-id/version")
results = model.infer(image="YOUR_IMAGE.jpg")
Above, replace:
YOUR_IMAGE.jpg with the path to your image.
model-id/version with the model ID and version you want to use. Learn how to retrieve your model and version ID.
First, install Inference:
pip install inference-gpu
Retrieve your Roboflow API key and save it in an environment variable called ROBOFLOW_API_KEY:
export ROBOFLOW_API_KEY="your-api-key"
To use your model, run the following code:
import inference

model = inference.load_roboflow_model("model-id/version")
results = model.infer(image="YOUR_IMAGE.jpg")
Above, replace:
YOUR_IMAGE.jpg with the path to your image.
model-id/version with the model ID and version you want to use. Learn how to retrieve your model and version ID.
Contact the Roboflow sales team to learn more about deploying your models on a Kubernetes cluster on AWS.
First, install Inference:
pip install inference
Retrieve your Roboflow API key and save it in an environment variable called ROBOFLOW_API_KEY:
export ROBOFLOW_API_KEY="your-api-key"
To use your model, run the following code:
import inference

model = inference.load_roboflow_model("model-id/version")
results = model.infer(image="YOUR_IMAGE.jpg")
Above, replace:
YOUR_IMAGE.jpg with the path to your image.
model-id/version with the model ID and version you want to use. Learn how to retrieve your model and version ID.
First, install Inference:
pip install inference-gpu
Retrieve your Roboflow API key and save it in an environment variable called ROBOFLOW_API_KEY:
export ROBOFLOW_API_KEY="your-api-key"
To use your model, run the following code:
import inference

model = inference.load_roboflow_model("model-id/version")
results = model.infer(image="YOUR_IMAGE.jpg")
Above, replace:
YOUR_IMAGE.jpg with the path to your image.
model-id/version with the model ID and version you want to use. Learn how to retrieve your model and version ID.
Contact the Roboflow sales team to learn more about deploying your models on a Kubernetes cluster on AWS.
Contact the Roboflow sales team to learn more about deploying your model to Azure.
Linux or macOS
Retrieving JSON predictions for a local file called YOUR_IMAGE.jpg:
base64 YOUR_IMAGE.jpg | curl -d @- \
"https://detect.roboflow.com/model_id/version?api_key=API_KEY"
Above, replace:
YOUR_IMAGE.jpg with the path to the image you want to detect objects in.
model_id with the ID of the model you want to use.
version with the version of the model you want to use.
Inferring on an image hosted elsewhere on the web via its URL (don't forget to URL encode it):
curl -X POST \
"https://detect.roboflow.com/model_id/version?api_key=API_KEY&image=URL_OF_YOUR_IMAGE"
Above, replace:
URL_OF_YOUR_IMAGE with the URL of the image you want to detect objects in.
model_id with the ID of the model you want to use.
version with the version of the model you want to use.
Windows
You will need to install curl for Windows and GNU's base64 tool for Windows. The easiest way to do this is to use the git for Windows installer which also includes the curl and base64 command line tools when you select "Use Git and optional Unix tools from the Command Prompt" during installation. Then you can use the same commands as above.
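If you would rather not shell out to curl, the same hosted endpoint accepts an identical request from any HTTP client. The sketch below (using the third-party requests library, which is an assumption about your environment) mirrors the base64 example above:
import base64
import requests

# Read and base64-encode the image, exactly as the `base64 | curl` pipeline does.
with open("YOUR_IMAGE.jpg", "rb") as f:
    encoded_image = base64.b64encode(f.read()).decode("utf-8")

response = requests.post(
    "https://detect.roboflow.com/model_id/version",
    params={"api_key": "API_KEY"},
    data=encoded_image,
    headers={"Content-Type": "application/x-www-form-urlencoded"},
)

print(response.json())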
First, install Inference:
pip install inference
Retrieve your Roboflow API key and save it in an environment variable called ROBOFLOW_API_KEY:
export ROBOFLOW_API_KEY="your-api-key"
To use your model, run the following code:
import inference

model = inference.load_roboflow_model("model-id/version")
results = model.infer(image="YOUR_IMAGE.jpg")
Above, replace:
YOUR_IMAGE.jpg with the path to your image.
model-id/version with the model ID and version you want to use. Learn how to retrieve your model and version ID.
First, install Inference:
pip install inference
Retrieve your Roboflow API key and save it in an environment variable called ROBOFLOW_API_KEY:
export ROBOFLOW_API_KEY="your-api-key"
To use your model, run the following code:
import inference

model = inference.load_roboflow_model("model-id/version")
results = model.infer(image="YOUR_IMAGE.jpg")
Above, replace:
YOUR_IMAGE.jpg with the path to your image.
model-id/version with the model ID and version you want to use. Learn how to retrieve your model and version ID.
First, install Inference:
pip install inference-gpu
Retrieve your Roboflow API key and save it in an environment variable called ROBOFLOW_API_KEY:
export ROBOFLOW_API_KEY="your-api-key"
To use your model, run the following code:
import inference

model = inference.load_roboflow_model("model-id/version")
results = model.infer(image="YOUR_IMAGE.jpg")
Above, replace:
YOUR_IMAGE.jpg with the path to your image.
model-id/version with the model ID and version you want to use. Learn how to retrieve your model and version ID.
First, install Inference:
pip install inference
Retrieve your Roboflow API key and save it in an environment variable called ROBOFLOW_API_KEY:
export ROBOFLOW_API_KEY="your-api-key"
To use your model, run the following code:
import inference

model = inference.load_roboflow_model("model-id/version")
results = model.infer(image="YOUR_IMAGE.jpg")
Above, replace:
YOUR_IMAGE.jpg with the path to your image.
model-id/version with the model ID and version you want to use. Learn how to retrieve your model and version ID.
First, install Inference:
pip install inference
Retrieve your Roboflow API key and save it in an environment variable called ROBOFLOW_API_KEY:
export ROBOFLOW_API_KEY="your-api-key"
To use your model, run the following code:
import inference

model = inference.load_roboflow_model("model-id/version")
results = model.infer(image="YOUR_IMAGE.jpg")
Above, replace:
YOUR_IMAGE.jpg with the path to your image.
model-id/version with the model ID and version you want to use. Learn how to retrieve your model and version ID.
First, install Inference:
pip install inference
Retrieve your Roboflow API key and save it in an environment variable called ROBOFLOW_API_KEY:
export ROBOFLOW_API_KEY="your-api-key"
To use your model, run the following code:
import inference

model = inference.load_roboflow_model("model-id/version")
results = model.infer(image="YOUR_IMAGE.jpg")
Above, replace:
YOUR_IMAGE.jpg with the path to your image.
model-id/version with the model ID and version you want to use. Learn how to retrieve your model and version ID.
Contact the Roboflow sales team to learn more about deploying your models on a Kubernetes cluster on AWS.
Contact the Roboflow sales team to learn more about deploying your model to Azure.
You can run your object detection models in the Roboflow cloud.
First, install the Inference SDK:
pip install inference-sdk
Retrieve your Roboflow API key and save it in an environment variable called ROBOFLOW_API_KEY:
export ROBOFLOW_API_KEY="your-api-key"
Then, use the following code:
import os
from inference_sdk import InferenceHTTPClient

CLIENT = InferenceHTTPClient(
    api_url="https://detect.roboflow.com",
    api_key=os.environ["ROBOFLOW_API_KEY"]
)

result = CLIENT.infer("YOUR_IMAGE.jpg", model_id="model_id/version")
Above, replace:
YOUR_IMAGE.jpg with the path to your image.
model_id/version with the model ID and version you want to use. Learn how to retrieve your model and version ID.
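The same client is not limited to local files. As a hedged sketch (the inference-sdk documentation describes infer accepting image URLs as well; the URL below is a placeholder, not a real image):
import os
from inference_sdk import InferenceHTTPClient

CLIENT = InferenceHTTPClient(
    api_url="https://detect.roboflow.com",
    api_key=os.environ["ROBOFLOW_API_KEY"]
)

# Pass an image URL instead of a local path; the SDK handles the download
# and encoding (behaviour assumed from the inference-sdk docs).
result = CLIENT.infer(
    "https://example.com/path/to/image.jpg",
    model_id="model_id/version",
)
print(result)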
Linux or macOS
Retrieving JSON predictions for a local file called YOUR_IMAGE.jpg:
base64 YOUR_IMAGE.jpg | curl -d @- \
"https://detect.roboflow.com/model_id/version?api_key=API_KEY"
Above, replace:
YOUR_IMAGE.jpg with the path to the image you want to detect objects in.
model_id with the ID of the model you want to use.
version with the version of the model you want to use.
Inferring on an image hosted elsewhere on the web via its URL (don't forget to URL encode it):
curl -X POST \
"https://detect.roboflow.com/model_id/version?api_key=API_KEY&image=URL_OF_YOUR_IMAGE"
Above, replace:
URL_OF_YOUR_IMAGE with the URL of the image you want to detect objects in.
model_id with the ID of the model you want to use.
version with the version of the model you want to use.
Windows
You will need to install curl for Windows and GNU's base64 tool for Windows. The easiest way to do this is to use the git for Windows installer which also includes the curl and base64 command line tools when you select "Use Git and optional Unix tools from the Command Prompt" during installation. Then you can use the same commands as above.
Linux or macOS
Retrieving JSON predictions for a local file called YOUR_IMAGE.jpg:
base64 YOUR_IMAGE.jpg | curl -d @- \
"https://detect.roboflow.com/model_id/version?api_key=API_KEY"
Above, replace:
YOUR_IMAGE.jpg with the path to the image you want to detect objects in.
model_id with the ID of the model you want to use.
version with the version of the model you want to use.
Inferring on an image hosted elsewhere on the web via its URL (don't forget to URL encode it):
curl -X POST \
"https://detect.roboflow.com/model_id/version?api_key=API_KEY&image=URL_OF_YOUR_IMAGE"
Above, replace:
URL_OF_YOUR_IMAGE with the URL of the image you want to detect objects in.
model_id with the ID of the model you want to use.
version with the version of the model you want to use.
Windows
You will need to install curl for Windows and GNU's base64 tool for Windows. The easiest way to do this is to use the git for Windows installer which also includes the curl and base64 command line tools when you select "Use Git and optional Unix tools from the Command Prompt" during installation. Then you can use the same commands as above.
First, install Inference:
pip install inference
Retrieve your Roboflow API key and save it in an environment variable called ROBOFLOW_API_KEY:
export ROBOFLOW_API_KEY="your-api-key"
To use your model, run the following code:
import inference

model = inference.load_roboflow_model("model-id/version")
results = model.infer(image="YOUR_IMAGE.jpg")
Above, replace:
YOUR_IMAGE.jpg with the path to your image.
model-id/version with the model ID and version you want to use. Learn how to retrieve your model and version ID.
First, install Inference:
pip install inference-gpu
Retrieve your Roboflow API key and save it in an environment variable called ROBOFLOW_API_KEY:
export ROBOFLOW_API_KEY="your-api-key"
To use your model, run the following code:
import inference

model = inference.load_roboflow_model("model-id/version")
results = model.infer(image="YOUR_IMAGE.jpg")
Above, replace:
YOUR_IMAGE.jpg with the path to your image.
model-id/version with the model ID and version you want to use. Learn how to retrieve your model and version ID.
First, install Inference:
pip install inference
Retrieve your Roboflow API key and save it in an environment variable called ROBOFLOW_API_KEY:
export ROBOFLOW_API_KEY="your-api-key"
To use your model, run the following code:
import inference

model = inference.load_roboflow_model("model-id/version")
results = model.infer(image="YOUR_IMAGE.jpg")
Above, replace:
YOUR_IMAGE.jpg with the path to your image.
model-id/version with the model ID and version you want to use. Learn how to retrieve your model and version ID.
First, install Inference:
pip install inference
Retrieve your Roboflow API key and save it in an environment variable called ROBOFLOW_API_KEY:
export ROBOFLOW_API_KEY="your-api-key"
To use your model, run the following code:
import inference

model = inference.load_roboflow_model("model-id/version")
results = model.infer(image="YOUR_IMAGE.jpg")
Above, replace:
YOUR_IMAGE.jpg with the path to your image.
model-id/version with the model ID and version you want to use. Learn how to retrieve your model and version ID.
First, install Inference:
pip install inference
Retrieve your Roboflow API key and save it in an environment variable called ROBOFLOW_API_KEY:
export ROBOFLOW_API_KEY="your-api-key"
To use your model, run the following code:
import inference

model = inference.load_roboflow_model("model-id/version")
results = model.infer(image="YOUR_IMAGE.jpg")
Above, replace:
YOUR_IMAGE.jpg with the path to your image.
model-id/version with the model ID and version you want to use. Learn how to retrieve your model and version ID.
First, install Inference:
pip install inference-gpu
Retrieve your Roboflow API key and save it in an environment variable called ROBOFLOW_API_KEY:
export ROBOFLOW_API_KEY="your-api-key"
To use your model, run the following code:
import inference

model = inference.load_roboflow_model("model-id/version")
results = model.infer(image="YOUR_IMAGE.jpg")
Above, replace:
YOUR_IMAGE.jpg with the path to your image.
model-id/version with the model ID and version you want to use. Learn how to retrieve your model and version ID.
Contact the Roboflow sales team to learn more about deploying your models on a Kubernetes cluster on AWS.
First, install Inference:
pip install inference
Retrieve your Roboflow API key and save it in an environment variable called ROBOFLOW_API_KEY:
export ROBOFLOW_API_KEY="your-api-key"
To use your model, run the following code:
import inference

model = inference.load_roboflow_model("model-id/version")
results = model.infer(image="YOUR_IMAGE.jpg")
Above, replace:
YOUR_IMAGE.jpg with the path to your image.
model-id/version with the model ID and version you want to use. Learn how to retrieve your model and version ID.
First, install Inference:
pip install inference-gpu
Retrieve your Roboflow API key and save it in an environment variable called ROBOFLOW_API_KEY:
export ROBOFLOW_API_KEY="your-api-key"
To use your model, run the following code:
import inference

model = inference.load_roboflow_model("model-id/version")
results = model.infer(image="YOUR_IMAGE.jpg")
Above, replace:
YOUR_IMAGE.jpg with the path to your image.
model-id/version with the model ID and version you want to use. Learn how to retrieve your model and version ID.
Contact the Roboflow sales team to learn more about deploying your models on a Kubernetes cluster on AWS.
Contact the Roboflow sales team to learn more about deploying your model to Azure.
To run inference through our hosted API using Python, use the inference-sdk Python package:
# import the inference-sdk
from inference_sdk import InferenceHTTPClient
# initialize the client
CLIENT = InferenceHTTPClient(
api_url="http://segment.roboflow.com",
api_key="API_KEY"
)
# infer on a local image
result = CLIENT.infer("YOUR_IMAGE.jpg", model_id="model_id/version")
Above, replace:
http://segment.roboflow.com with the URL of your Roboflow deployment.
API_KEY with your Roboflow API key. Learn how to retrieve your Roboflow API key.
YOUR_IMAGE.jpg with the path to your image file.
model_id/version with the model ID of the model you want to use. Learn how to retrieve your model ID.
Then, run the Python script. The result will be a dictionary containing the inference results.
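The returned dictionary mirrors the hosted API's JSON response. As a hedged sketch of reading it (the keys below follow Roboflow's instance segmentation response format, where each prediction carries a class, a confidence, and a list of polygon points; verify them against your own output):
# Hedged sketch: inspect `result` first if your model returns a different shape.
for prediction in result.get("predictions", []):
    polygon = prediction.get("points", [])
    print(
        prediction.get("class"),
        round(prediction.get("confidence", 0.0), 3),
        f"{len(polygon)} polygon point(s)",
    )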