This repository was archived by the owner on Jul 16, 2024. It is now read-only.

Commit a5988b4 (1 parent: 5ad5e32)

Fixed a few small spelling + grammar mistakes (#351)

* fix a few spelling + grammar mistakes
* fix grammar mistakes in multitag.rst
* fix grammar issue in object-detection.rst
* Fix spelling issues in 3D-tracking.rst
* fix a ton more spelling issues

File tree: 8 files changed, +23 −23 lines changed

source/docs/apriltag-pipelines/2D-tracking-tuning.rst (+1 −1)

@@ -4,7 +4,7 @@
 Tracking Apriltags
 ------------------

-Before you get started tracking AprilTags, ensure that you have followed the previous sections on installation, wiring and networking. Next, open the Web UI, go to the top right card, and swtich to the "AprilTag" or "Aruco" type. You should see a screen similar to the one below.
+Before you get started tracking AprilTags, ensure that you have followed the previous sections on installation, wiring and networking. Next, open the Web UI, go to the top right card, and switch to the "AprilTag" or "Aruco" type. You should see a screen similar to the one below.

 .. image:: images/apriltag.png
    :align: center

source/docs/apriltag-pipelines/3D-tracking.rst (+3 −3)

@@ -8,8 +8,8 @@ Ambiguity

 Translating from 2D to 3D using data from the calibration and the four tag corners can lead to "pose ambiguity", where it appears that the AprilTag pose is flipping between two different poses. You can read more about this issue `here. <https://docs.wpilib.org/en/stable/docs/software/vision-processing/apriltag/apriltag-intro.html#d-to-3d-ambiguity>` Ambiguity is calculated as the ratio of reprojection errors between two pose solutions (if they exist), where reprojection error is the error corresponding to the image distance between where the apriltag's corners are detected vs where we expect to see them based on the tag's estimated camera relative pose.

-There a few steps you can take to resolve/mitigate this issue:
+There are a few steps you can take to resolve/mitigate this issue:

-1. Mount cameras at oblique angles so it is less likely that the tag will be seen straght on.
+1. Mount cameras at oblique angles so it is less likely that the tag will be seen straight on.
 2. Use the :ref:`MultiTag system <docs/apriltag-pipelines/multitag:MultiTag Localization>` in order to combine the corners from multiple tags to get a more accurate and unambiguous pose.
-3. Reject all tag poses where the ambiguity ratio (availiable via PhotonLib) is greater than 0.2.
+3. Reject all tag poses where the ambiguity ratio (available via PhotonLib) is greater than 0.2.
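The ambiguity-ratio cutoff described in step 3 of the changed list above can be sketched in plain Python. This is not the PhotonLib API; the function names and sample reprojection-error values below are hypothetical, and only the ratio definition and the 0.2 cutoff come from the text:

```python
def pose_ambiguity(best_reproj_err, alt_reproj_err):
    """Ambiguity ratio: reprojection error of the best pose solution
    divided by that of the alternate solution. Values near 1.0 mean
    both solutions fit the detected corners about equally well."""
    if alt_reproj_err == 0:
        return float("inf")
    return best_reproj_err / alt_reproj_err


def keep_tag_pose(ambiguity, cutoff=0.2):
    # Step 3: reject tag poses whose ambiguity ratio exceeds the cutoff.
    return ambiguity <= cutoff


# A clean oblique view: the alternate pose fits much worse, so keep it.
print(keep_tag_pose(pose_ambiguity(0.5, 10.0)))  # True (ratio 0.05)
# A near head-on view: both poses fit similarly, so reject it.
print(keep_tag_pose(pose_ambiguity(4.5, 5.0)))   # False (ratio 0.9)
```

In real robot code the ratio comes pre-computed from PhotonLib per target; the filter itself is the part user code supplies.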

source/docs/apriltag-pipelines/multitag.rst (+4 −4)

@@ -1,14 +1,14 @@
 MultiTag Localization
 =====================

-PhotonVision can combine AprilTag detections from multiple simultaniously observed AprilTags from a particular camera with information about where tags are expected to be located on the field to produce a better estimate of where the camera (and therefore robot) is located on the field. PhotonVision can calculate this multi-target result on your coprocessor, reducing CPU usage on your RoboRio. This result is sent over NetworkTables along with other detected targets as part of the ``PhotonPipelineResult`` provided by PhotonLib.
+PhotonVision can combine AprilTag detections from multiple simultaneously observed AprilTags from a particular camera with information about where tags are expected to be located on the field to produce a better estimate of where the camera (and therefore robot) is located on the field. PhotonVision can calculate this multi-target result on your coprocessor, reducing CPU usage on your RoboRio. This result is sent over NetworkTables along with other detected targets as part of the ``PhotonPipelineResult`` provided by PhotonLib.

-.. warning:: MultiTag requires an accurate field layout JSON be uploaded! Differences between this layout and tag's physical location will drive error in the estimated pose output.
+.. warning:: MultiTag requires an accurate field layout JSON to be uploaded! Differences between this layout and the tags' physical location will drive error in the estimated pose output.

 Enabling MultiTag
 ^^^^^^^^^^^^^^^^^

-Ensure that your camera is calibrated and 3D mode is enabled. Navigate to the Output tab and enable "Do Multi-Target Estimation". This enables MultiTag using the uploaded field layout JSON to calculate your camera's pose in the field. This 3D transform will be shown as an additional table in the "targets" tab, along with the IDs of AprilTags used to compute this transform.
+Ensure that your camera is calibrated and 3D mode is enabled. Navigate to the Output tab and enable "Do Multi-Target Estimation". This enables MultiTag to use the uploaded field layout JSON to calculate your camera's pose in the field. This 3D transform will be shown as an additional table in the "targets" tab, along with the IDs of AprilTags used to compute this transform.

 .. image:: images/multitag-ui.png
    :width: 600
@@ -48,6 +48,6 @@ PhotonVision ships by default with the `2024 field layout JSON <https://github.c
    :width: 600
    :alt: The currently saved field layout in the Photon UI

-An updated field layout can be uploaded by navigating to the "Device Control" card of the Settings tab and clicking "Import Settings". In the pop-up dialog, select the "Apriltag Layout" type and choose a updated layout JSON (in the same format as the WPILib field layout JSON linked above) using the paperclip icon, and select "Import Settings". The AprilTag layout in the "AprilTag Field Layout" card below should update to reflect the new layout.
+An updated field layout can be uploaded by navigating to the "Device Control" card of the Settings tab and clicking "Import Settings". In the pop-up dialog, select the "AprilTag Layout" type and choose an updated layout JSON (in the same format as the WPILib field layout JSON linked above) using the paperclip icon, and select "Import Settings". The AprilTag layout in the "AprilTag Field Layout" card below should be updated to reflect the new layout.

 .. note:: Currently, there is no way to update this layout using PhotonLib, although this feature is under consideration.
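In robot code, the multi-target camera-in-field pose is typically composed with the known camera-to-robot transform to recover the robot's field pose. A minimal 2D sketch of that composition, using plain (x, y, theta) tuples rather than PhotonLib/WPILib geometry classes (the variable names and mounting offset are hypothetical):

```python
import math

def compose(a, b):
    """Compose 2D poses (x, y, theta in radians): apply pose b in a's frame."""
    ax, ay, ath = a
    bx, by, bth = b
    return (ax + bx * math.cos(ath) - by * math.sin(ath),
            ay + bx * math.sin(ath) + by * math.cos(ath),
            ath + bth)

# MultiTag yields the camera's pose in the field frame; composing it with
# the camera-to-robot transform gives the robot's pose on the field.
field_to_camera = (2.0, 1.0, 0.0)   # from the multi-target estimate
camera_to_robot = (-0.3, 0.0, 0.0)  # camera mounted 0.3 m ahead of robot center
field_to_robot = compose(field_to_camera, camera_to_robot)
print(field_to_robot)  # robot sits 0.3 m behind the camera along x
```

WPILib's `Transform3d`/`Pose3d` classes do the 3D version of this for you; the sketch only illustrates why the camera-to-robot transform must be known accurately.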

source/docs/contributing/photonvision/build-instructions.rst (+2 −2)

@@ -23,7 +23,7 @@ Get the source code from git:

    git clone https://github.com/PhotonVision/photonvision

-or alternatively download to source code from github and extract the zip:
+or alternatively download the source code from github and extract the zip:

 .. image:: assets/git-download.png
    :width: 600
@@ -96,7 +96,7 @@ Running the following command under the root directory will build the jar under
 Build and Run PhotonVision on a Raspberry Pi Coprocessor
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

-As a convinenece, the build has built in `deploy` command which builds, deploys, and starts the current source code on a coprocessor.
+As a convenience, the build has a built-in `deploy` command which builds, deploys, and starts the current source code on a coprocessor.

 An architecture override is required to specify the deploy target's architecture.

source/docs/hardware/selecting-hardware.rst (+9 −9)

@@ -10,7 +10,7 @@ Minimum System Requirements
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^

 * Ubuntu 22.04 LTS or Windows 10/11
-  * We don't reccomend using Windows for anything except testing out the system on a local machine.
+  * We don't recommend using Windows for anything except testing out the system on a local machine.
 * CPU: ARM Cortex-A53 (the CPU on Raspberry Pi 3) or better
 * At least 8GB of storage
 * 2GB of RAM
@@ -20,7 +20,7 @@ Minimum System Requirements
   * Note that we only support using the Raspberry Pi's MIPI-CSI port, other MIPI-CSI ports from other coprocessors may not work.
 * Ethernet port for networking

-Coprocessor Reccomendations
+Coprocessor Recommendations
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^

 When selecting a coprocessor, it is important to consider various factors, particularly when it comes to AprilTag detection. Opting for a coprocessor with a more powerful CPU can generally result in higher FPS AprilTag detection, leading to more accurate pose estimation. However, it is important to note that there is a point of diminishing returns, where the benefits of a more powerful CPU may not outweigh the additional cost. Below is a list of supported hardware, along with some notes on each.
@@ -30,7 +30,7 @@ When selecting a coprocessor, it is important to consider various factors, parti
 * Raspberry Pi 4/5 ($55-$80)
   * This is the recommended coprocessor for teams on a budget. It has a less powerful CPU than the Orange Pi 5, but is still capable of running PhotonVision at a reasonable FPS.
 * Mini PCs (such as Beelink N5095)
-  * This coprcoessor will likely have similar performance to the Orange Pi 5 but has a higher performance ceiling (when using more powerful CPUs). Do note that this would require extra effort to wire to the robot / get set up. More information can be found in the set up guide `here. <https://docs.google.com/document/d/1lOSzG8iNE43cK-PgJDDzbwtf6ASyf4vbW8lQuFswxzw/edit?usp=drivesdk>`_
+  * This coprocessor will likely have similar performance to the Orange Pi 5 but has a higher performance ceiling (when using more powerful CPUs). Do note that this would require extra effort to wire to the robot / get set up. More information can be found in the set up guide `here. <https://docs.google.com/document/d/1lOSzG8iNE43cK-PgJDDzbwtf6ASyf4vbW8lQuFswxzw/edit?usp=drivesdk>`_
 * Other coprocessors can be used but may require some extra work / command line usage in order to get it working properly.

 Choosing a Camera
@@ -46,17 +46,17 @@ PhotonVision relies on `CSCore <https://github.com/wpilibsuite/allwpilib/tree/ma
 .. note::
    We do not currently support the usage of two of the same camera on the same coprocessor. You can only use two or more cameras if they are of different models or they are from Arducam, which has a `tool that allows for cameras to be renamed <https://docs.arducam.com/UVC-Camera/Serial-Number-Tool-Guide/>`_.

-Reccomended Cameras
+Recommended Cameras
 ^^^^^^^^^^^^^^^^^^^
-For colored shape detection, any non-fisheye camera supported by PhotonVision will work. We reccomend the Pi Camera V1 or a high fps USB camera.
+For colored shape detection, any non-fisheye camera supported by PhotonVision will work. We recommend the Pi Camera V1 or a high fps USB camera.

-For driver camera, we reccomend a USB camera with a fisheye lens, so your driver can see more of the field.
+For driver camera, we recommend a USB camera with a fisheye lens, so your driver can see more of the field.

-For AprilTag detection, we reccomend you use a global shutter camera that has ~100 degree diagonal FOV. This will allow you to see more AprilTags in frame, and will allow for more accurate pose estimation. You also want a camera that supports high FPS, as this will allow you to update your pose estimator at a higher frequency.
+For AprilTag detection, we recommend you use a global shutter camera that has ~100 degree diagonal FOV. This will allow you to see more AprilTags in frame, and will allow for more accurate pose estimation. You also want a camera that supports high FPS, as this will allow you to update your pose estimator at a higher frequency.

-* Reccomendations For AprilTag Detection
+* Recommendations For AprilTag Detection
   * Arducam USB OV9281
-    * This is the reccomended camera for AprilTag detection as it is a high FPS, global shutter camera USB camera that has a ~70 degree FOV.
+    * This is the recommended camera for AprilTag detection as it is a high FPS, global shutter camera USB camera that has a ~70 degree FOV.
   * Innomaker OV9281
   * Spinel AR0144
   * Pi Camera Module V1

source/docs/installation/index.rst (+1 −1)

@@ -7,7 +7,7 @@ This page will help you install PhotonVision on your coprocessor, wire it, and p
 Step 1: Software Install
 ------------------------

-This section will walk you through how to install PhotonVision on your coprcoessor. Your coprocessor is the device that has the camera and you are using to detect targets (ex. if you are using a Limelight / Raspberry Pi, that is your coprocessor and you should follow those instructions).
+This section will walk you through how to install PhotonVision on your coprocessor. Your coprocessor is the device that has the camera and you are using to detect targets (ex. if you are using a Limelight / Raspberry Pi, that is your coprocessor and you should follow those instructions).

 .. warning:: You only need to install PhotonVision on the coprocessor/device that is being used to detect targets, you do NOT need to install it on the device you use to view the webdashboard. All you need to view the webdashboard is for a device to be on the same network as your vision coprocessor and an internet browser.

source/docs/objectDetection/about-object-detection.rst (+2 −2)

@@ -13,11 +13,11 @@ For the 2024 season, PhotonVision ships with a **pre-trained NOTE detector** (sh
 Tracking Objects
 ^^^^^^^^^^^^^^^^

-Before you get started with object detection, ensure that you have followed the previous sections on installation, wiring and networking. Next, open the Web UI, go to the top right card, and switch to the “Object Detection” type. You should see a screen similar to the image above.
+Before you get started with object detection, ensure that you have followed the previous sections on installation, wiring, and networking. Next, open the Web UI, go to the top right card, and switch to the “Object Detection” type. You should see a screen similar to the image above.

 PhotonVision currently ships with a NOTE detector based on a `YOLOv5 model <https://docs.ultralytics.com/yolov5/>`_. This model is trained to detect one or more object "classes" (such as cars, stoplights, or in our case, NOTES) in an input image. For each detected object, the model outputs a bounding box around where in the image the object is located, what class the object belongs to, and a unitless confidence between 0 and 1.

-.. note:: This model output means that while its fairly easy to say that "this rectangle probably contains a NOTE", we doesn't have any information about the NOTE's orientation or location. Further math in user code would be required to make estimates about where an object is physically located relative to the camera.
+.. note:: This model output means that while its fairly easy to say that "this rectangle probably contains a NOTE", we don't have any information about the NOTE's orientation or location. Further math in user code would be required to make estimates about where an object is physically located relative to the camera.

 Tuning and Filtering
 ^^^^^^^^^^^^^^^^^^^^
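The detector output described in this file (class, confidence, bounding box) lends itself to simple post-processing in user code. A minimal sketch, assuming hypothetical detection tuples rather than the PhotonLib result types:

```python
# Hypothetical detections: (class_name, confidence, (xmin, ymin, xmax, ymax)).
detections = [
    ("note", 0.91, (120, 200, 260, 280)),
    ("note", 0.35, (400, 210, 480, 270)),  # low-confidence false positive
]

def box_center(box):
    """Center of a bounding box, in image (pixel) coordinates."""
    xmin, ymin, xmax, ymax = box
    return ((xmin + xmax) / 2, (ymin + ymax) / 2)

# Keep only confident NOTE detections, then take each bounding-box center
# as a crude aim point; the 0.5 threshold here is an arbitrary example.
confident = [d for d in detections if d[0] == "note" and d[1] >= 0.5]
for name, conf, box in confident:
    print(name, conf, box_center(box))  # note 0.91 (190.0, 240.0)
```

As the note in the diff says, the box center alone gives no physical range or orientation; recovering those would need further math against camera geometry.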

source/docs/troubleshooting/common-errors.rst (+1 −1)

@@ -31,7 +31,7 @@ Camera won't show up
 ^^^^^^^^^^^^^^^^^^^^
 Try these steps to :ref:`troubleshoot your camera connection <docs/troubleshooting/camera-troubleshooting:Camera Troubleshooting>`.

-If you are using a USB camera, it is possible your USB Camera isn't supported by CSCore and therefore won't work with PhotonVision. See :ref:`supported hardware page for more information <docs/hardware/selecting-hardware:Reccomended Cameras>`, or the above Camera Troubleshooting page for more information on determining this locally.
+If you are using a USB camera, it is possible your USB Camera isn't supported by CSCore and therefore won't work with PhotonVision. See :ref:`supported hardware page for more information <docs/hardware/selecting-hardware:Recommended Cameras>`, or the above Camera Troubleshooting page for more information on determining this locally.

 Camera is consistently returning incorrect values when in 3D mode
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
