Commit e5f6b94

Updated links to LandingLens documentation (#245)

The links to the LandingLens docs still pointed to support.landing.ai. Updated these to point to https://landinglens.docs.landing.ai/. See https://app.asana.com/1/504311096896991/project/1203963067800274/task/1213411714740378?focus=true

1 parent 140f75a · commit e5f6b94

12 files changed

Lines changed: 26 additions & 26 deletions

CHANGELOG.md

Lines changed: 1 addition & 1 deletion
@@ -47,7 +47,7 @@ But most `FrameSet` operations were migrated to `Frame` class, so you can still

 Below section shows you how to fix the backward incompatible changes when you upgrade the version to `0.1.0`.

-1. Generate your v2 API key from LandingLens. See [here](https://support.landing.ai/docs/api-key) for more information.
+1. Generate your v2 API key from LandingLens. See [here](https://landinglens.docs.landing.ai/api-key) for more information.
 2. The `api_secret` parameter is removed in the `Predictor` and `OcrPredictor` class. `api_key` is a named parameter now, which means you must specify the parameter name, i.e. `api_key`, if you want to pass it to a `Predictor` as an argument.
 See below code as an example:
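The "named parameter" behavior described in item 2 corresponds to a Python keyword-only argument. The stand-in class below is purely illustrative (it is not the library's real `Predictor`, and the endpoint and key values are placeholders); it shows the calling convention the changelog requires:

```python
class PredictorSketch:
    """Illustrative stand-in for the library's Predictor class (hypothetical)."""

    def __init__(self, endpoint_id: str, *, api_key: str) -> None:
        # The bare `*` makes api_key keyword-only: passing it positionally
        # raises a TypeError, matching the 0.1.0 behavior described above.
        self.endpoint_id = endpoint_id
        self.api_key = api_key


# Must be passed by name:
predictor = PredictorSketch("your-endpoint-id", api_key="land_sk_your_key")
print(predictor.api_key)

# Positional api_key is rejected:
try:
    PredictorSketch("your-endpoint-id", "land_sk_your_key")
except TypeError as err:
    print("api_key must be passed by name:", err)
```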

README.md

Lines changed: 2 additions & 2 deletions
@@ -16,7 +16,7 @@ The LandingLens Python library contains the LandingLens development library and

 ## Documentation

 - [LandingAI Python Library Docs](https://landing-ai.github.io/landingai-python/)
-- [LandingAI Support Center](https://support.landing.ai/)
+- [LandingLens Documentation](https://landinglens.docs.landing.ai/)
 - [LandingLens Walk-Through Video](https://www.youtube.com/watch?v=779kvo2dxb4)

@@ -69,7 +69,7 @@ For example, let's say we've created and deployed a model in LandingLens that de

 > If you don't have a LandingLens account, create one [here](https://app.landing.ai/). You will need to get an "endpoint ID" and "API key" from LandingLens in order to run inferences. Check our [Running Inferences / Getting Started](https://landing-ai.github.io/landingai-python/inferences/getting-started/).

 > [!NOTE]
-> Learn how to use LandingLens from our [Support Center]([https://support.landing.ai/docs/landinglens-workflow](https://support.landing.ai/landinglens/en)) and [Video Tutorial Library](https://support.landing.ai/docs/landinglens-workflow-2).
+> Learn how to use LandingLens from our [documentation](https://landinglens.docs.landing.ai/).
 > Need help with specific use cases? Post your questions in our [Community](https://community.landing.ai/home).

docs/index.md

Lines changed: 2 additions & 2 deletions
@@ -32,7 +32,7 @@ frame.resize(width=512, height=512) # (2)!
 frame.save_image("/tmp/resized-image.png") # (3)!
 ```

-1. We support several image file types. See the full list [here](https://support.landing.ai/docs/upload-images).
+1. We support several image file types. See the full list [here](https://landinglens.docs.landing.ai/upload-images).
 2. Resize the frame to 512x512p.
 3. Save the resized image to `/tmp/resized-image.png`.

@@ -64,7 +64,7 @@ For example, let's say we've created and deployed a model in LandingLens that de

 ???+ note

-    If you don't have a LandingLens account, create one [here](https://app.landing.ai/). Learn how to use LandingLens from our [Support Center]([https://support.landing.ai/docs/landinglens-workflow](https://support.landing.ai/landinglens/en)) and [Video Tutorial Library](https://support.landing.ai/docs/landinglens-workflow-2). Need help with specific use cases? Post your questions in our [Community](https://community.landing.ai/home).
+    If you don't have a LandingLens account, create one [here](https://app.landing.ai/). Learn how to use LandingLens from our [documentation](https://landinglens.docs.landing.ai/). Need help with specific use cases? Post your questions in our [Community](https://community.landing.ai/home).

 ???+ note

docs/inferences/docker-deployment.md

Lines changed: 3 additions & 3 deletions
@@ -1,11 +1,11 @@
 Running inferences with the standard `landingai.predict.Predictor` will send your image to LandingLens cloud, which is ideal if you don't want to worry about backend scalability, hardware provisioning, availability, etc. But this also adds some networking overhead that might limit how many inferences per second you can run.

-If you need to run several inferences per second, and you have your own cloud service or local machine, you might want to run inference using your own resources. For that, we provide **[Docker deployment](https://support.landing.ai/docs/docker-deploy)**, a Docker image with your LandingLens trained model embeded that you can run anywhere.
+If you need to run several inferences per second, and you have your own cloud service or local machine, you might want to run inference using your own resources. For that, we provide **[Docker deployment](https://landinglens.docs.landing.ai/landingedge/landingedge-overview)**, a Docker image with your LandingLens-trained model embedded that you can run anywhere.

 ???+ note

-    You can get more details on how to set up and run the Docker deployment container locally or in your own cloud service in our [Support Center](https://support.landing.ai/docs/docker-deploy).
+    You can get more details on how to set up and run the Docker deployment container locally or in your own cloud service in our [documentation](https://landinglens.docs.landing.ai/landingedge/docker-deploy).

 Once you go through the Support Center guide, you will have the model running in a container, accessible in a specific host and port. The example below refers to these as `localhost` and `8000`, respectively.

@@ -37,4 +37,4 @@ The `EdgePredictor` class is a subclass of `Predictor`, so you can use it in the

 The time it takes to run the inference will vary according to the hardware where the Docker container is running (if you set it up to run on a GPU, for example, it will probably yield faster predictions).

-Check out the [Support Center](https://support.landing.ai/docs/docker-deploy) for more information on how to get a deployment license, run the Docker deployment with a a GPU, and more.
+Check out the [documentation](https://landinglens.docs.landing.ai/landingedge/docker-deploy) for more information on how to get a deployment license, run the Docker deployment with a GPU, and more.
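The docs above assume a container "accessible in a specific host and port" (`localhost:8000` in the example). A minimal reachability probe, using only the standard library and assuming those defaults (this is a sketch, not part of the landingai package):

```python
import socket


def edge_server_reachable(host: str = "localhost", port: int = 8000,
                          timeout: float = 0.5) -> bool:
    """Return True if something is accepting TCP connections at host:port."""
    try:
        # create_connection raises OSError (refused/timeout) when nothing listens.
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


if edge_server_reachable():
    print("Inference container is reachable; EdgePredictor should connect.")
else:
    print("Nothing listening on localhost:8000; start the Docker deployment first.")
```

Checking reachability up front gives a clearer error message than letting the predictor fail with a raw `ConnectionError`.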

docs/inferences/getting-started.md

Lines changed: 1 addition & 1 deletion
@@ -3,7 +3,7 @@ Once you are ready to [acquire images](image-acquisition/image-acquisition.md),

 ## Building your first model

-To run inferences using LandingLens, you must first build a model. If you didn't sign up before, visit https://app.landing.ai/, sign up for a free account and create a new project. If you are not familiar with LandingLens, you can find a lot of useful information in the [LandingLens support center](https://support.landing.ai/docs/landinglens-workflow).
+To run inferences using LandingLens, you must first build a model. If you haven't signed up before, visit https://app.landing.ai/, sign up for a free account, and create a new project. If you are not familiar with LandingLens, you can find a lot of useful information in the [LandingLens documentation](https://landinglens.docs.landing.ai/).

 Long story short, after creating a project in LandingLens you will need to:

docs/metadata.md

Lines changed: 5 additions & 5 deletions
@@ -12,7 +12,7 @@ This section explains how to use the associated Python APIs to:

 Metadata is additional information you can attach to an image. Every image can be associated with multiple metadata. Each metadata is a key-value pair associated with an image, where key is the metadata name and value is a string that represents the information. For example, when you upload an image to LandingLens, you can add metadata like the country where the image was created, the timestamp when the image was created, etc.

-Metadata is useful when you need to manage hundreds or thousands of images in LandingLens or you need to collaborate with other team members to label datasets. For example, you can metadata to group certain types of images together (ex: images taken last week), then change their [split key](https://support.landing.ai/docs/datasets-and-splits) or create a [labeling task](https://support.landing.ai/landinglens/docs/agreement-based-labeling#send-labeling-tasks) for those images.
+Metadata is useful when you need to manage hundreds or thousands of images in LandingLens or you need to collaborate with other team members to label datasets. For example, you can use metadata to group certain types of images together (ex: images taken last week), then change their [split key](https://landinglens.docs.landing.ai/splits) or create a [labeling task](https://landinglens.docs.landing.ai/agreement-based-labeling) for those images.

 Use the `landingai.data_management.metadata.Metadata` API to manage metadata.

@@ -44,7 +44,7 @@ metadata_client.update(media_ids=[123, 124], timestamp=12345, country="us", labe

 ### Update Split Key for Images

-When managing hundreds or thousands of images on the platform, it can be more efficient to manage (add/update/remove) the [split key](https://support.landing.ai/docs/datasets-and-splits) programmatically. Use the `update_split_key()` function in `landingai.data_management.media.Media` to manage the the split value for images.
+When managing hundreds or thousands of images on the platform, it can be more efficient to manage (add/update/remove) the [split key](https://landinglens.docs.landing.ai/splits) programmatically. Use the `update_split_key()` function in `landingai.data_management.media.Media` to manage the split value for images.

 **Example**

@@ -77,12 +77,12 @@ Use the `landingai.data_management.media.Media` API to upload images to a specif
 In addition to uploading images, the upload API supports the following features:
 1. Assign a split ('train'/'dev'/'test') to images. An empty string '' represents Unassigned and is the default.
 2. Upload labels along with images. The supported label files are:
-   * [Pascal VOC XML files](https://support.landing.ai/docs/upload-labeled-images-od) for Object Detection projects.
-   * [Segmentation mask files](https://support.landing.ai/docs/upload-labeled-images-seg) for Segmentation projects.
+   * [Pascal VOC XML files](https://landinglens.docs.landing.ai/upload-labeled-images-od) for Object Detection projects.
+   * [Segmentation mask files](https://landinglens.docs.landing.ai/upload-labeled-images-seg) for Segmentation projects.
    * A classification name (string) for Classification projects.
 3. Attach additional metadata (key-value pairs) to images.

-for more information, go [here](https://support.landing.ai/landinglens/docs/uploading#upload-images-with-split-and-label-information).
+For more information about uploading images, go [here](https://landinglens.docs.landing.ai/upload-images).

 ### Upload Segmentation Masks
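The metadata.md context above defines metadata as key-value pairs whose keys name the field and whose values are strings. A hypothetical pre-upload helper (not part of the landingai library) that enforces that shape might look like:

```python
def normalize_metadata(metadata: dict) -> dict:
    """Hypothetical helper: coerce a metadata mapping to string keys/values.

    Keys must be non-empty strings (the metadata name); values are stored
    as strings, so non-string values (e.g. a timestamp) are stringified.
    """
    normalized = {}
    for key, value in metadata.items():
        if not isinstance(key, str) or not key:
            raise ValueError(f"metadata key must be a non-empty string, got {key!r}")
        normalized[key] = str(value)
    return normalized


print(normalize_metadata({"country": "us", "timestamp": 12345}))
# → {'country': 'us', 'timestamp': '12345'}
```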

examples/capture-service/README.md

Lines changed: 1 addition & 1 deletion
@@ -23,7 +23,7 @@ The program captures frames from the video feed every few seconds, and then runs
 ## Customize the Example

 1. Set up a camera that exposes an RTSP URL to your network (your local intranet). If you're not sure if the RTSP URL is working, learn how to test it in this [article](https://support.ipconfigure.com/hc/en-us/articles/115005588503-Using-VLC-to-test-camera-stream).
-2. Train a model in LandingLens, and deploy it to an endpoint via [Cloud Deployment](https://support.landing.ai/landinglens/docs/cloud-deployment).
+2. Train a model in LandingLens, and deploy it to an endpoint via [Cloud Deployment](https://landinglens.docs.landing.ai/cloud-deployment).
 3. Get the `endpoint id`, `api key` and `api secret` from LandingLens.
 4. Open the file `examples/capture-service/run.py`, and update the following with your information: `api_key`, `endpoint_id` and `stream_url`.

examples/capture-service/run.py

Lines changed: 1 addition & 1 deletion
@@ -56,7 +56,7 @@
         cloud_sky_model = EdgePredictor()
     except ConnectionError:
         _LOGGER.error(
-            f"""Failed to connect to the local LandingLens docker inference service. Have you launched the LandingLens container? If not please read the guide here (https://support.landing.ai/docs/docker-deploy)\nOnce you have installed it and obtained a license, run:
+            f"""Failed to connect to the local LandingLens docker inference service. Have you launched the LandingLens container? If not please read the guide here (https://landinglens.docs.landing.ai/landingedge/docker-deploy)\nOnce you have installed it and obtained a license, run:
             docker run -p 8000:8000 --rm --name landingedge\\
             -e LANDING_LICENSE_KEY=YOUR_LICENSE_KEY \\
             public.ecr.aws/landing-ai/deploy:latest \\

landingai/common.py

Lines changed: 2 additions & 2 deletions
Original file line numberDiff line numberDiff line change
@@ -36,7 +36,7 @@ def is_api_key_valid(cls, key: str) -> str:
3636
"""Check if the API key is a v2 key."""
3737
if not key.startswith("land_sk_"):
3838
raise InvalidApiKeyError(
39-
f"API key (v2) must start with 'land_sk_' prefix, but it's {key}. See https://support.landing.ai/docs/api-key for more information."
39+
f"API key (v2) must start with 'land_sk_' prefix, but it's {key}. See https://landinglens.docs.landing.ai/api-key for more information."
4040
)
4141
return key
4242

@@ -69,7 +69,7 @@ class ClassificationPrediction(Prediction):
6969
label_index: int
7070
"""The predicted label index.
7171
A label index is an unique integer that identifies a label in your label book.
72-
For more information, see https://support.landing.ai/docs/manage-label-book.
72+
For more information, see https://landinglens.docs.landing.ai/manage-label-book.
7373
"""
7474

7575
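The validator touched by this hunk reduces to a prefix test on the key string. A standalone mirror of that logic (a sketch that raises the built-in `ValueError` rather than the library's `InvalidApiKeyError`):

```python
def check_v2_api_key(key: str) -> str:
    """Sketch mirroring the prefix check in landingai.common.is_api_key_valid."""
    if not key.startswith("land_sk_"):
        # Same message as the diff above, minus the library-specific exception type.
        raise ValueError(
            f"API key (v2) must start with 'land_sk_' prefix, but it's {key}. "
            "See https://landinglens.docs.landing.ai/api-key for more information."
        )
    return key


print(check_v2_api_key("land_sk_0123abcd"))  # valid keys pass through unchanged
```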

pdocs/user_guide/1_concepts.md

Lines changed: 2 additions & 2 deletions
@@ -17,10 +17,10 @@ This section explains important high-level concepts that will help you better us

 ### Model Deployment Options

-Before using this library for inference, you need train your model in LandingLens and [deploy it](https://support.landing.ai/docs/deployment-options).
+Before using this library for inference, you need to train your model in LandingLens and [deploy it](https://landinglens.docs.landing.ai/deployment-options).

 This library supports two deployment options:
-- [Cloud Deployment](https://support.landing.ai/landinglens/docs/cloud-deployment)
+- [Cloud Deployment](https://landinglens.docs.landing.ai/cloud-deployment)
 - [Edge Deployment] (support coming soon...)

 The easiest way to get started is using a Cloud Deployment.
