[v0.1 API Review] Documentation improvement #204

Closed
wants to merge 7 commits into from
54 changes: 31 additions & 23 deletions api/v1alpha1/inferencemodel_types.go
@@ -21,9 +21,18 @@ import (
)

// InferenceModel is the Schema for the InferenceModels API.
// The InferenceModel is intended to represent a model workload (also referred to as a model use case) within Kubernetes.
// The management of the model server is not done by the InferenceModel. Instead, the
// focus of the InferenceModel is to provide the tools needed to effectively manage multiple models
// that share the same base model (currently the focus is LoRA adapters). Fields such as TargetModel
// are intended to simplify A/B testing and version rollout of adapters, while Criticality assists with
// governance of multiplexing many use cases over shared hardware.
//
// +kubebuilder:object:root=true
// +kubebuilder:subresource:status
// +kubebuilder:printcolumn:name="ModelName",type=string,JSONPath=`.spec.modelName`
// +kubebuilder:printcolumn:name="Accepted",type=string,JSONPath=`.status.conditions[?(@.type=="Accepted")].status`
// +kubebuilder:printcolumn:name="Age",type=date,JSONPath=`.metadata.creationTimestamp`
// +genclient
type InferenceModel struct {
metav1.TypeMeta `json:",inline"`
@@ -42,29 +51,21 @@ type InferenceModelList struct {
Items []InferenceModel `json:"items"`
}

// InferenceModelSpec represents the desired state of a specific model use case. This resource is
// InferenceModelSpec represents the desired state of an InferenceModel. This resource is
// managed by the "Inference Workload Owner" persona.
//
// The Inference Workload Owner persona is someone that trains, verifies, and
// leverages a large language model from a model frontend, drives the lifecycle
// and rollout of new versions of those models, and defines the specific
// leverages a large language model, focusing on model fidelity, and
// less on inference performance (which is managed by the Inference Platform Admin).
// They also drive the lifecycle and rollout of new versions of those models, and define the specific
// performance and latency goals for the model. These workloads are
// expected to operate within an InferencePool sharing compute capacity with other
// InferenceModels, defined by the Inference Platform Admin.
//
// InferenceModel's modelName (not the ObjectMeta name) is unique for a given InferencePool,
// if the name is reused, an error will be shown on the status of a
// InferenceModel that attempted to reuse. The oldest InferenceModel, based on
// creation timestamp, will be selected to remain valid. In the event of a race
// condition, one will be selected at random.
// InferenceModels, with specific governance defined by the Inference Platform Admin.
type InferenceModelSpec struct {
// ModelName is the name of the model as it will be set in the "model" parameter for an incoming request.
// ModelNames must be unique for a referencing InferencePool
// (names can be reused for a different pool in the same cluster).
// The modelName with the oldest creation timestamp is retained, and the incoming
// InferenceModel's Ready status is set to false with a corresponding reason.
// In the rare case of a race condition, one Model will be selected randomly to be considered valid, and the other rejected.
// Names can be reserved without an underlying model configured in the pool.
// ModelName is the name of the model as users set it in the "model" parameter of their requests.
// The name should be unique among the workloads that reference the same backend pool.
// This is the parameter that will be used to match the request.
// Names can be reserved without implementing an actual model in the pool.
// This can be done by specifying a target model and setting the weight to zero;
// an error will be returned specifying that no valid target model is found.
//
@@ -73,20 +74,27 @@ type InferenceModelSpec struct {
ModelName string `json:"modelName"`

// Criticality defines how important it is to serve the model compared to other models referencing the same pool.
// Criticality impacts how traffic is handled in resource-constrained situations. It handles this by
// queuing or rejecting requests of lower criticality. InferenceModels of an equivalent Criticality will
// fairly share resources over throughput of tokens. In the future, the metric used to calculate fairness,
// and the proportionality of fairness will be configurable.
// TODO: Update field upon resolution of: https://github.com/kubernetes-sigs/gateway-api-inference-extension/issues/213
//
// Default values for this field will not be set, to allow for future additions of new fields that may 'one of' with this field.
// Any implementations that may consume this field may treat an unset value as the 'Standard' range.
// +optional
Criticality *Criticality `json:"criticality,omitempty"`

// TargetModels allow multiple versions of a model for traffic splitting.
// If not specified, the target model name is defaulted to the modelName parameter.
// Traffic splitting is handled via weights. The targetModel field is optional; however,
// if not specified, the target model name is defaulted to the modelName parameter.
// modelName is often in reference to a LoRA adapter.
//
// Examples:
// - A model server serving `llama2-7b` may be represented by:
// - setting the modelName to `llama2-7b` and setting no targetModels
// - setting the modelName to `hello-world` and setting a single targetModel to `llama2-7b`, and setting no weights
// - setting modelName to 'my-fine-tune', setting 2 targetModels 'fine-tune-v1' & 'fine-tune-v2', and setting no weights.
// This has the effect of weighting the two models equally.
// - setting modelName to 'my-fine-tune', setting 2 targetModels 'fine-tune-v1' w/weight: 10 & 'fine-tune-v2' w/weight: 1.
// This has the effect of fine-tune-v1 being selected 10x as often as v2.
//
// +optional
// +kubebuilder:validation:MaxItems=10
// +kubebuilder:validation:XValidation:message="Weights should be set for all models, or none of the models.",rule="self.all(model, has(model.weight)) || self.all(model, !has(model.weight))"
@@ -154,7 +162,7 @@ const (
// to exist at request time, the error is processed by the Inference Gateway
// and emitted on the appropriate InferenceModel object.
type TargetModel struct {
// Name is the name of the adapter or base model, as expected by the ModelServer.
// Name is the name of the LoRA adapter or base model, as expected by the ModelServer.
//
// +kubebuilder:validation:MaxLength=253
// +kubebuilder:validation:Required
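For reference, a sketch of an InferenceModel manifest exercising the fields documented in this file, using the weighted A/B example from the TargetModels comment above. The apiVersion group is taken from the CRD file names in this PR; the poolRef field and the Critical criticality value are not shown in this diff excerpt and are assumptions based on the broader v1alpha1 API, so treat the manifest as illustrative only.

apiVersion: inference.networking.x-k8s.io/v1alpha1
kind: InferenceModel
metadata:
  name: my-fine-tune
spec:
  modelName: my-fine-tune
  criticality: Critical      # assumed enum value; an unset criticality is treated as the 'Standard' range
  poolRef:                   # assumed field, not part of this diff excerpt
    name: my-pool
  targetModels:
  - name: fine-tune-v1
    weight: 10               # selected roughly 10x as often as fine-tune-v2
  - name: fine-tune-v2
    weight: 1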
4 changes: 4 additions & 0 deletions api/v1alpha1/inferencepool_types.go
@@ -21,6 +21,10 @@ import (
)

// InferencePool is the Schema for the InferencePools API.
// The InferencePool object is intended to allow for easy maintenance of a set of model servers.
// Best practice is for every model server to share a base model, or for every model server to be able to serve every 'modelName' that will be available.
// The InferencePool was made for the Inference Platform Admin: https://github.com/kubernetes-sigs/gateway-api-inference-extension/blob/main/docs/proposals/002-api-proposal/proposal.md#inference-platform-admin
// The InferencePool depends on the K8s Gateway, and relies on the gateway controller to manage reconciliation.
//
// +kubebuilder:object:root=true
// +kubebuilder:subresource:status
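Similarly, a minimal InferencePool sketch for the Inference Platform Admin persona described above. The selector and targetPortNumber fields do not appear in this diff and are assumed from the rest of the v1alpha1 API; treat the manifest as a sketch rather than a definitive example.

apiVersion: inference.networking.x-k8s.io/v1alpha1
kind: InferencePool
metadata:
  name: my-pool
spec:
  # Assumed fields (not shown in this diff): select the model server Pods that
  # share a base model, and the port those servers listen on.
  selector:
    app: my-model-server
  targetPortNumber: 8000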
65 changes: 40 additions & 25 deletions config/crd/bases/inference.networking.x-k8s.io_inferencemodels.yaml
@@ -14,10 +14,27 @@ spec:
singular: inferencemodel
scope: Namespaced
versions:
- name: v1alpha1
- additionalPrinterColumns:
- jsonPath: .spec.modelName
name: ModelName
type: string
- jsonPath: .status.conditions[?(@.type=="Accepted")].status
name: Accepted
type: string
- jsonPath: .metadata.creationTimestamp
name: Age
type: date
name: v1alpha1
schema:
openAPIV3Schema:
description: InferenceModel is the Schema for the InferenceModels API.
description: |-
InferenceModel is the Schema for the InferenceModels API.
The InferenceModel is intended to represent a model workload (also referred to as a model use case) within Kubernetes.
The management of the model server is not done by the InferenceModel. Instead, the
focus of the InferenceModel is to provide the tools needed to effectively manage multiple models
that share the same base model (currently the focus is LoRA adapters). Fields such as TargetModel
are intended to simplify A/B testing and version rollout of adapters, while Criticality assists with
governance of multiplexing many use cases over shared hardware.
properties:
apiVersion:
description: |-
@@ -38,29 +55,20 @@ spec:
type: object
spec:
description: |-
InferenceModelSpec represents the desired state of a specific model use case. This resource is
InferenceModelSpec represents the desired state of an InferenceModel. This resource is
managed by the "Inference Workload Owner" persona.

The Inference Workload Owner persona is someone that trains, verifies, and
leverages a large language model from a model frontend, drives the lifecycle
and rollout of new versions of those models, and defines the specific
leverages a large language model, focusing on model fidelity, and
less on inference performance (which is managed by the Inference Platform Admin).
They also drive the lifecycle and rollout of new versions of those models, and define the specific
performance and latency goals for the model. These workloads are
expected to operate within an InferencePool sharing compute capacity with other
InferenceModels, defined by the Inference Platform Admin.

InferenceModel's modelName (not the ObjectMeta name) is unique for a given InferencePool,
if the name is reused, an error will be shown on the status of a
InferenceModel that attempted to reuse. The oldest InferenceModel, based on
creation timestamp, will be selected to remain valid. In the event of a race
condition, one will be selected at random.
InferenceModels, with specific governance defined by the Inference Platform Admin.
properties:
criticality:
description: |-
Criticality defines how important it is to serve the model compared to other models referencing the same pool.
Criticality impacts how traffic is handled in resource-constrained situations. It handles this by
queuing or rejecting requests of lower criticality. InferenceModels of an equivalent Criticality will
fairly share resources over throughput of tokens. In the future, the metric used to calculate fairness,
and the proportionality of fairness will be configurable.

Default values for this field will not be set, to allow for future additions of new fields that may 'one of' with this field.
Any implementations that may consume this field may treat an unset value as the 'Standard' range.
@@ -71,13 +79,10 @@
type: string
modelName:
description: |-
ModelName is the name of the model as it will be set in the "model" parameter for an incoming request.
ModelNames must be unique for a referencing InferencePool
(names can be reused for a different pool in the same cluster).
The modelName with the oldest creation timestamp is retained, and the incoming
InferenceModel's Ready status is set to false with a corresponding reason.
In the rare case of a race condition, one Model will be selected randomly to be considered valid, and the other rejected.
Names can be reserved without an underlying model configured in the pool.
ModelName is the name of the model as users set it in the "model" parameter of their requests.
The name should be unique among the workloads that reference the same backend pool.
This is the parameter that will be used to match the request.
Names can be reserved without implementing an actual model in the pool.
This can be done by specifying a target model and setting the weight to zero;
an error will be returned specifying that no valid target model is found.
maxLength: 256
@@ -110,8 +115,18 @@
targetModels:
description: |-
TargetModels allow multiple versions of a model for traffic splitting.
If not specified, the target model name is defaulted to the modelName parameter.
Traffic splitting is handled via weights. The targetModel field is optional; however,
if not specified, the target model name is defaulted to the modelName parameter.
modelName is often in reference to a LoRA adapter.

Examples:
- A model server serving `llama2-7b` may be represented by:
- setting the modelName to `llama2-7b` and setting no targetModels
- setting the modelName to `hello-world` and setting a single targetModel to `llama2-7b`, and setting no weights
- setting modelName to 'my-fine-tune', setting 2 targetModels 'fine-tune-v1' & 'fine-tune-v2', and setting no weights.
This has the effect of weighting the two models equally.
- setting modelName to 'my-fine-tune', setting 2 targetModels 'fine-tune-v1' w/weight: 10 & 'fine-tune-v2' w/weight: 1.
This has the effect of fine-tune-v1 being selected 10x as often as v2.
items:
description: |-
TargetModel represents a deployed model or a LoRA adapter. The
@@ -123,7 +138,7 @@
and emitted on the appropriate InferenceModel object.
properties:
name:
description: Name is the name of the adapter or base model,
description: Name is the name of the LoRA adapter or base model,
as expected by the ModelServer.
maxLength: 253
type: string
@@ -17,7 +17,12 @@ spec:
- name: v1alpha1
schema:
openAPIV3Schema:
description: InferencePool is the Schema for the InferencePools API.
description: |-
InferencePool is the Schema for the InferencePools API.
The InferencePool object is intended to allow for easy maintenance of a set of model servers.
Best practice is for every model server to share a base model, or for every model server to be able to serve every 'modelName' that will be available.
The InferencePool was made for the Inference Platform Admin: https://github.com/kubernetes-sigs/gateway-api-inference-extension/blob/main/docs/proposals/002-api-proposal/proposal.md#inference-platform-admin
The InferencePool depends on the K8s Gateway, and relies on the gateway controller to manage reconciliation.
properties:
apiVersion:
description: |-