CHANGELOG.md (1 addition, 1 deletion)

@@ -47,7 +47,7 @@ But most `FrameSet` operations were migrated to `Frame` class, so you can still
  The section below shows how to fix the backward-incompatible changes when you upgrade to version `0.1.0`.

- 1. Generate your v2 API key from LandingLens. See [here](https://support.landing.ai/docs/api-key) for more information.
+ 1. Generate your v2 API key from LandingLens. See [here](https://landinglens.docs.landing.ai/api-key) for more information.

  2. The `api_secret` parameter is removed from the `Predictor` and `OcrPredictor` classes. `api_key` is now a named parameter, which means you must specify the parameter name, i.e. `api_key`, if you want to pass it to a `Predictor` as an argument.
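To illustrate the new calling convention, here is a minimal sketch (the endpoint ID and API key below are placeholders):

```python
from landingai.predict import Predictor

# Before 0.1.0: Predictor(endpoint_id, api_key=..., api_secret=...)
# From 0.1.0 on: api_secret is gone, and api_key must be passed by name.
predictor = Predictor("your-endpoint-id", api_key="your-v2-api-key")
```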
@@ -69,7 +69,7 @@ For example, let's say we've created and deployed a model in LandingLens that de
  > If you don't have a LandingLens account, create one [here](https://app.landing.ai/). You will need to get an "endpoint ID" and "API key" from LandingLens in order to run inferences. Check our [Running Inferences / Getting Started](https://landing-ai.github.io/landingai-python/inferences/getting-started/).

  > [!NOTE]
- > Learn how to use LandingLens from our [Support Center](https://support.landing.ai/landinglens/en) and [Video Tutorial Library](https://support.landing.ai/docs/landinglens-workflow-2).
+ > Learn how to use LandingLens from our [documentation](https://landinglens.docs.landing.ai/).

  > Need help with specific use cases? Post your questions in our [Community](https://community.landing.ai/home).
- 1. We support several image file types. See the full list [here](https://support.landing.ai/docs/upload-images).
+ 1. We support several image file types. See the full list [here](https://landinglens.docs.landing.ai/upload-images).
  2. Resize the frame to 512x512 pixels.
  3. Save the resized image to `/tmp/resized-image.png`.
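The resize-and-save steps above map onto the library's frame API roughly as follows. This is only a sketch: the constructor and method names are assumptions based on `landingai.pipeline.frameset`, not confirmed by this diff.

```python
from landingai.pipeline.frameset import Frame

# Sketch of the steps above; method names are assumptions.
frame = Frame.from_image("/path/to/input.png")
frame.resize(width=512, height=512)
frame.save_image("/tmp/resized-image.png")
```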
@@ -64,7 +64,7 @@ For example, let's say we've created and deployed a model in LandingLens that de
  ???+ note

-     If you don't have a LandingLens account, create one [here](https://app.landing.ai/). Learn how to use LandingLens from our [Support Center](https://support.landing.ai/landinglens/en) and [Video Tutorial Library](https://support.landing.ai/docs/landinglens-workflow-2). Need help with specific use cases? Post your questions in our [Community](https://community.landing.ai/home).
+     If you don't have a LandingLens account, create one [here](https://app.landing.ai/). Learn how to use LandingLens from our [documentation](https://landinglens.docs.landing.ai/). Need help with specific use cases? Post your questions in our [Community](https://community.landing.ai/home).
docs/inferences/docker-deployment.md (3 additions, 3 deletions)

@@ -1,11 +1,11 @@
  Running inferences with the standard `landingai.predict.Predictor` sends your image to the LandingLens cloud, which is ideal if you don't want to worry about backend scalability, hardware provisioning, availability, and so on. But it also adds networking overhead that might limit how many inferences per second you can run.

- If you need to run several inferences per second, and you have your own cloud service or local machine, you might want to run inference using your own resources. For that, we provide **[Docker deployment](https://support.landing.ai/docs/docker-deploy)**, a Docker image with your trained LandingLens model embedded that you can run anywhere.
+ If you need to run several inferences per second, and you have your own cloud service or local machine, you might want to run inference using your own resources. For that, we provide **[Docker deployment](https://landinglens.docs.landing.ai/landingedge/landingedge-overview)**, a Docker image with your trained LandingLens model embedded that you can run anywhere.

  ???+ note

-     You can get more details on how to set up and run the Docker deployment container locally or in your own cloud service in our [Support Center](https://support.landing.ai/docs/docker-deploy).
+     You can get more details on how to set up and run the Docker deployment container locally or in your own cloud service in our [documentation](https://landinglens.docs.landing.ai/landingedge/docker-deploy).

  Once you go through the guide, you will have the model running in a container, accessible at a specific host and port. The example below refers to these as `localhost` and `8000`, respectively.
@@ -37,4 +37,4 @@ The `EdgePredictor` class is a subclass of `Predictor`, so you can use it in the
  The time it takes to run inference will vary according to the hardware where the Docker container is running (if you set it up to run on a GPU, for example, it will probably yield faster predictions).

- Check out the [Support Center](https://support.landing.ai/docs/docker-deploy) for more information on how to get a deployment license, run the Docker deployment with a GPU, and more.
+ Check out the [documentation](https://landinglens.docs.landing.ai/landingedge/docker-deploy) for more information on how to get a deployment license, run the Docker deployment with a GPU, and more.
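For reference, a minimal sketch of pointing the SDK at a locally running container; the `host` and `port` keyword arguments are assumptions matching the `localhost:8000` example above, and the image path is a placeholder:

```python
from PIL import Image
from landingai.predict import EdgePredictor

# Connect to the local Docker deployment container instead of the cloud
# (host/port are assumptions matching the guide's example).
predictor = EdgePredictor(host="localhost", port=8000)
predictions = predictor.predict(Image.open("input.png"))
```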
docs/inferences/getting-started.md (1 addition, 1 deletion)

@@ -3,7 +3,7 @@ Once you are ready to [acquire images](image-acquisition/image-acquisition.md),
  ## Building your first model

- To run inferences using LandingLens, you must first build a model. If you haven't signed up yet, visit https://app.landing.ai/, sign up for a free account, and create a new project. If you are not familiar with LandingLens, you can find a lot of useful information in the [LandingLens support center](https://support.landing.ai/docs/landinglens-workflow).
+ To run inferences using LandingLens, you must first build a model. If you haven't signed up yet, visit https://app.landing.ai/, sign up for a free account, and create a new project. If you are not familiar with LandingLens, you can find a lot of useful information in the [LandingLens documentation](https://landinglens.docs.landing.ai/).

  Long story short, after creating a project in LandingLens you will need to:
docs/metadata.md (5 additions, 5 deletions)

@@ -12,7 +12,7 @@ This section explains how to use the associated Python APIs to:
  Metadata is additional information you can attach to an image. Every image can be associated with multiple metadata entries. Each entry is a key-value pair, where the key is the metadata name and the value is a string that represents the information. For example, when you upload an image to LandingLens, you can add metadata like the country where the image was created, the timestamp when it was created, etc.

- Metadata is useful when you need to manage hundreds or thousands of images in LandingLens or need to collaborate with other team members to label datasets. For example, you can use metadata to group certain types of images together (e.g., images taken last week), then change their [split key](https://support.landing.ai/docs/datasets-and-splits) or create a [labeling task](https://support.landing.ai/landinglens/docs/agreement-based-labeling#send-labeling-tasks) for those images.
+ Metadata is useful when you need to manage hundreds or thousands of images in LandingLens or need to collaborate with other team members to label datasets. For example, you can use metadata to group certain types of images together (e.g., images taken last week), then change their [split key](https://landinglens.docs.landing.ai/splits) or create a [labeling task](https://landinglens.docs.landing.ai/agreement-based-labeling) for those images.

  Use the `landingai.data_management.metadata.Metadata` API to manage metadata.
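For orientation, a sketch of attaching metadata through that API; the argument order and the `update()` signature are assumptions, and the IDs and key-value pairs are placeholders:

```python
from landingai.data_management.metadata import Metadata

# Attach a key-value pair to a batch of images by media ID
# (argument order and update() signature are assumptions).
client = Metadata(project_id=12345, api_key="your-api-key")
client.update(media_ids=[1001, 1002], country="us")
```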
- When managing hundreds or thousands of images on the platform, it can be more efficient to manage (add/update/remove) the [split key](https://support.landing.ai/docs/datasets-and-splits) programmatically. Use the `update_split_key()` function in `landingai.data_management.media.Media` to manage the split value for images.
+ When managing hundreds or thousands of images on the platform, it can be more efficient to manage (add/update/remove) the [split key](https://landinglens.docs.landing.ai/splits) programmatically. Use the `update_split_key()` function in `landingai.data_management.media.Media` to manage the split value for images.

**Example**
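A sketch of the call, with assumed keyword names and placeholder IDs:

```python
from landingai.data_management.media import Media

# Reassign a batch of images to the 'test' split; an empty string
# would mark them Unassigned (keyword names are assumptions).
client = Media(project_id=12345, api_key="your-api-key")
client.update_split_key(media_ids=[1001, 1002], split_key="test")
```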
@@ -77,12 +77,12 @@ Use the `landingai.data_management.media.Media` API to upload images to a specif
  In addition to uploading images, the upload API supports the following features:

  1. Assign a split ('train'/'dev'/'test') to images. An empty string '' represents Unassigned and is the default.
  2. Upload labels along with images. The supported label files are:
-     * [Pascal VOC XML files](https://support.landing.ai/docs/upload-labeled-images-od) for Object Detection projects.
-     * [Segmentation mask files](https://support.landing.ai/docs/upload-labeled-images-seg) for Segmentation projects.
+     * [Pascal VOC XML files](https://landinglens.docs.landing.ai/upload-labeled-images-od) for Object Detection projects.
+     * [Segmentation mask files](https://landinglens.docs.landing.ai/upload-labeled-images-seg) for Segmentation projects.
      * A classification name (string) for Classification projects.
  3. Attach additional metadata (key-value pairs) to images.

- for more information, go [here](https://support.landing.ai/landinglens/docs/uploading#upload-images-with-split-and-label-information).
+ For more information about uploading images, go [here](https://landinglens.docs.landing.ai/upload-images).
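A sketch of an upload call exercising these features; the keyword names other than the source path are assumptions, and the paths and values are placeholders:

```python
from landingai.data_management.media import Media

# Upload a folder of images, assign them to the 'train' split, and
# attach metadata (keyword names here are assumptions).
client = Media(project_id=12345, api_key="your-api-key")
client.upload("/path/to/images", split="train", metadata_dict={"country": "us"})
```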
examples/capture-service/README.md (1 addition, 1 deletion)

@@ -23,7 +23,7 @@ The program captures frames from the video feed every few seconds, and then runs
  ## Customize the Example

  1. Set up a camera that exposes an RTSP URL to your network (your local intranet). If you're not sure whether the RTSP URL is working, learn how to test it in this [article](https://support.ipconfigure.com/hc/en-us/articles/115005588503-Using-VLC-to-test-camera-stream).
- 2. Train a model in LandingLens, and deploy it to an endpoint via [Cloud Deployment](https://support.landing.ai/landinglens/docs/cloud-deployment).
+ 2. Train a model in LandingLens, and deploy it to an endpoint via [Cloud Deployment](https://landinglens.docs.landing.ai/cloud-deployment).
  3. Get the `endpoint id`, `api key`, and `api secret` from LandingLens.
  4. Open the file `examples/capture-service/run.py`, and update the following with your information: `api_key`, `endpoint_id`, and `stream_url`.
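Step 4 amounts to editing three values in `run.py`; the literals below are placeholders:

```python
# Placeholders to replace in examples/capture-service/run.py.
api_key = "your-api-key"
endpoint_id = "your-endpoint-id"
stream_url = "rtsp://192.168.0.10:554/stream"  # your camera's RTSP URL
```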
examples/capture-service/run.py (1 addition, 1 deletion)

@@ -56,7 +56,7 @@
      cloud_sky_model = EdgePredictor()
  except ConnectionError:
      _LOGGER.error(
-         f"""Failed to connect to the local LandingLens docker inference service. Have you launched the LandingLens container? If not, please read the guide here (https://support.landing.ai/docs/docker-deploy)\nOnce you have installed it and obtained a license, run:
+         f"""Failed to connect to the local LandingLens docker inference service. Have you launched the LandingLens container? If not, please read the guide here (https://landinglens.docs.landing.ai/landingedge/docker-deploy)\nOnce you have installed it and obtained a license, run:
pdocs/user_guide/1_concepts.md (2 additions, 2 deletions)

@@ -17,10 +17,10 @@ This section explains important high-level concepts that will help you better us
  ### Model Deployment Options

- Before using this library for inference, you need to train your model in LandingLens and [deploy it](https://support.landing.ai/docs/deployment-options).
+ Before using this library for inference, you need to train your model in LandingLens and [deploy it](https://landinglens.docs.landing.ai/deployment-options).
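The deployment options map onto two predictor classes; a sketch under the assumption that `Predictor` targets Cloud Deployment endpoints while `EdgePredictor` targets a self-hosted Docker container (the IDs, key, host, and port are placeholders):

```python
from landingai.predict import EdgePredictor, Predictor

# Cloud Deployment: inference runs on a LandingLens cloud endpoint.
cloud_model = Predictor("your-endpoint-id", api_key="your-api-key")

# Docker deployment: inference runs in a container you host yourself.
edge_model = EdgePredictor(host="localhost", port=8000)
```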
0 commit comments