The ML model service API allows you to make inferences based on a provided ML model.
The ML Model service supports the following methods:
| Method Name | Description | 
|---|---|
| Infer | Take an already ordered input tensor as an array, make an inference on the model, and return an output tensor map. | 
| Metadata | Get the metadata: name, data type, expected tensor/array shape, inputs, and outputs associated with the ML model. | 
| Reconfigure | Reconfigure this resource. | 
| DoCommand | Execute model-specific commands that are not otherwise defined by the service API. | 
| GetResourceName | Get the ResourceName for this instance of the ML model service. | 
| Close | Safely shut down the resource and prevent further use. | 
Take an already ordered input tensor as an array, make an inference on the model, and return an output tensor map.
Parameters:
- input_tensors (Dict[str, typing.NDArray]) (required): A dictionary of input flat tensors as specified in the metadata.
- extra (Mapping[str, Any]) (optional): Extra options to pass to the underlying RPC call.
- timeout (float) (optional): An option to set how long to wait (in seconds) before calling a time-out and closing the underlying RPC call.

Returns:
Example:
import numpy as np

from viam.services.mlmodel import MLModelClient

my_mlmodel = MLModelClient.from_robot(robot=machine, name="my_mlmodel_service")
image_data = np.zeros((1, 384, 384, 3), dtype=np.uint8)
# Create the input tensors dictionary
input_tensors = {
    "image": image_data
}
output_tensors = await my_mlmodel.infer(input_tensors)
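The returned tensor map's keys and shapes depend entirely on the model, so consult its metadata before unpacking. A minimal sketch of reading a result, assuming a hypothetical classification model whose output tensor is named "category" (the name, shape, and scores below are illustrative, not part of the API):

```python
import numpy as np

# Hypothetical output tensor map, shaped like what infer() returns:
# a dict mapping tensor names to numpy arrays.
output_tensors = {
    "category": np.array([[0.1, 0.7, 0.2]], dtype=np.float32),
}

scores = output_tensors["category"][0]  # drop the batch dimension
best_index = int(np.argmax(scores))     # index of the highest score
print(best_index)  # -> 1
```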
For more information, see the Python SDK Docs.
Parameters:
- ctx (Context): A Context carries a deadline, a cancellation signal, and other values across API boundaries.
- tensors (ml.Tensors): The input map of tensors, as specified in the metadata.

Returns:
Example:
import (
  "context"

  "go.viam.com/rdk/ml"
  "go.viam.com/rdk/services/mlmodel"
  "gorgonia.org/tensor"
)
myMLModel, err := mlmodel.FromRobot(machine, "my_mlmodel")
input_tensors := ml.Tensors{
  "image": tensor.New(
    tensor.Of(tensor.Uint8),
    tensor.WithShape(1, 384, 384, 3),
    tensor.WithBacking(make([]uint8, 1*384*384*3)),
  ),
}
output_tensors, err := myMLModel.Infer(context.Background(), input_tensors)
For more information, see the Go SDK Docs.
Get the metadata: name, data type, expected tensor/array shape, inputs, and outputs associated with the ML model.
Parameters:
- extra (Mapping[str, Any]) (optional): Extra options to pass to the underlying RPC call.
- timeout (float) (optional): An option to set how long to wait (in seconds) before calling a time-out and closing the underlying RPC call.

Returns:
Example:
my_mlmodel = MLModelClient.from_robot(robot=machine, name="my_mlmodel_service")
metadata = await my_mlmodel.metadata()
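The reported input shapes can be used to allocate correctly sized tensors before calling infer. A sketch of that step, assuming each input's metadata reports a shape as a list of ints where -1 marks a dynamic dimension (the shape values and the -1 convention here are illustrative assumptions, not guaranteed by every model):

```python
import numpy as np

def zeros_for_shape(shape, dynamic_fill=1):
    """Allocate a zero tensor for a reported input shape,
    replacing any dynamic (-1) dimensions with dynamic_fill."""
    concrete = [dynamic_fill if d == -1 else d for d in shape]
    return np.zeros(concrete, dtype=np.uint8)

# Shape as an input's metadata might report it (hypothetical values).
image = zeros_for_shape([-1, 384, 384, 3])
print(image.shape)  # -> (1, 384, 384, 3)
```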
For more information, see the Python SDK Docs.
Parameters:
- ctx (Context): A Context carries a deadline, a cancellation signal, and other values across API boundaries.

Returns:
Example:
myMLModel, err := mlmodel.FromRobot(machine, "my_mlmodel")
metadata, err := myMLModel.Metadata(context.Background())
For more information, see the Go SDK Docs.
Reconfigure this resource. Reconfigure must reconfigure the resource atomically and in place.
Parameters:
- ctx (Context): A Context carries a deadline, a cancellation signal, and other values across API boundaries.
- deps (Dependencies): The resource dependencies.
- conf (Config): The resource configuration.

Returns:
For more information, see the Go SDK Docs.
Execute model-specific commands that are not otherwise defined by the service API.
Most models do not implement DoCommand.
Any available model-specific commands should be covered in the model’s documentation.
If you are implementing your own ML model service and want to add features that have no corresponding built-in API method, you can implement them with DoCommand.
Parameters:
- ctx (Context): A Context carries a deadline, a cancellation signal, and other values across API boundaries.
- cmd (map[string]interface{}): The command to execute.

Returns:
Example:
myMlmodelSvc, err := mlmodel.FromRobot(machine, "my_mlmodel_svc")
command := map[string]interface{}{"cmd": "test", "data1": 500}
result, err := myMlmodelSvc.DoCommand(context.Background(), command)
For more information, see the Go SDK Docs.
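Because a DoCommand payload is a free-form map, a service implementing it typically dispatches on a command key and returns a map in response. A server-side sketch in Python (the "cmd" key and the "test" command are hypothetical conventions for illustration, not part of the API):

```python
def do_command(command):
    """Dispatch a free-form DoCommand payload by its 'cmd' key."""
    cmd = command.get("cmd")
    if cmd == "test":
        # Echo back part of the payload so callers can verify round-tripping.
        return {"ok": True, "echo": command.get("data1")}
    return {"ok": False, "error": f"unknown command: {cmd}"}

result = do_command({"cmd": "test", "data1": 500})
print(result)  # -> {'ok': True, 'echo': 500}
```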
Get the ResourceName for this instance of the ML model service.
Parameters:
- name (str) (required): The name of the Resource.

Returns:
Example:
my_mlmodel_svc_name = MLModelClient.get_resource_name("my_mlmodel_svc")
For more information, see the Python SDK Docs.
Parameters:
Returns:
Example:
my_mlmodel, err := mlmodel.FromRobot(machine, "my_ml_model")
name := my_mlmodel.Name()
For more information, see the Go SDK Docs.
Safely shut down the resource and prevent further use.
Parameters:
Returns:
Example:
my_mlmodel_svc = MLModelClient.from_robot(robot=machine, name="my_mlmodel_svc")
await my_mlmodel_svc.close()
For more information, see the Python SDK Docs.
Parameters:
- ctx (Context): A Context carries a deadline, a cancellation signal, and other values across API boundaries.

Returns:
Example:
my_mlmodel, err := mlmodel.FromRobot(machine, "my_ml_model")
err = my_mlmodel.Close(context.Background())
For more information, see the Go SDK Docs.