v0.4.0
What's Changed
- New feature : `create_calibrator`, a function that creates a calibrator from a name and kwargs, has been added to `alonet.torch2trt` to simplify imports in TensorRT scripts.
```python
from alonet.torch2trt import create_calibrator, DataBatchStreamer

cache_file = "calib.bin"
data_streamer = DataBatchStreamer(...)
calibrator = create_calibrator("minmax", data_streamer, cache_file)
```
- Fix bug : `aloscene.read_image` is now supported on the Jetson NX.
- Fix bug : Changed the representation of augmented tensors: properties are now clearly separated with a new separator, and unnecessary ones are no longer displayed (see the sketch below).
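A minimal sketch of inspecting the new representation (the exact output depends on the labels attached to the tensor):

```python
import torch
import aloscene

frame = aloscene.Frame(torch.rand((3, 64, 64)))
print(frame)  # properties are now printed with a clearer separator
```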
- Fix bug : The default value (10) for calibration batches did not allow using the whole calibration dataset. The default has been changed to `None`.
- Fix bug : Updated all documentation links to reflect the repository's new name.
- New feature : Added a `setup.py` to facilitate the installation of aloception.
- New feature : Kitti Dataset (Stereo, Flow, Scene Flow, Depth, Odometry, Object, Tracking, Road, Semantics)
Kitti Depth : How to use Kitti Depth
date = "2011_09_26"
idsOfDrives = [
"0001", # sample from training subset
"0002", # sample from validation subset
]
custom_drives = {date: idsOfDrives}
kitti_ds = KittiDepth(
subset="all",
return_depth=True,
custom_drives=custom_drives,
)
for f, frames in enumerate(kitti_ds.train_loader(batch_size=2)):
frames = Frame.batch_list(frames)
Kitti Semantic : How to use the semantic class
```python
from alodataset import KittiSemanticDataset

dataset = KittiSemanticDataset()
obj = dataset.getitem(0)
obj.get_view().render()
```
How to use the remaining tasks of the dataset. Dataset class list: `KittiStereoFlow2012`, `KittiStereoFlowSFlow2015`, `KittiOdometryDataset`, `KittiObjectDataset`, `KittiTrackingDataset`, `KittiRoadDataset`.
```python
# DATASET_CLASS stands for any of the classes listed above
dataset = DATASET_CLASS(right_frame=False)
obj = dataset.getitem(0)
obj["left"].get_view().render()
```
- Fix bug : Scene Flow dimensions: fixed an error with the shape of the occlusion mask.
- Fix bug : Fixed a calibrator import issue. All TensorRT and prod packages are now optional. by @thibo73800 in #215
- New feature : Add support for the kumler_bauer projection in `coords2rtheta`. Note that kumler_bauer takes two distortion coefficients, passed as a tuple.
```python
coords2rtheta(..., distortion=(0.25, 0.45), projection="kumler_bauer")
coords2rtheta(..., distortion=0.25, projection="equidistant")  # API doesn't change for other projections
```
- New feature : Add the WoodScape dataset: `WooodScapeDataset`.
```python
from alodataset import WooodScapeDataset

woodscape = WooodScapeDataset(
    labels=[],
    cameras=[],
    fragment=1.,
)
frame = woodscape[222]
frame.get_view().render()
```
`WoodScapeSplitDataset` : WoodScape dataset with train and validation fractions.
```python
from alodataset import WoodScapeSplitDataset, Split

woodscapeTrain = WoodScapeSplitDataset(split=Split.TRAIN)
frame = woodscapeTrain[222]
frame.get_view().render()
```
- Fix bug : kumler_bauer projection support for `aloscene.Depth` (see the sketch below):
  - `as_planar`, `as_euclidean` : assertion error because "kumler_bauer" was missing from the verification condition.
  - `as_points3d` : the computation of the distorted focal length for kumler_bauer was missing.
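A minimal sketch of the fixed conversions, assuming the two-coefficient kumler_bauer distortion shown in the `coords2rtheta` example above; `as_points3d` may additionally require camera intrinsics attached to the tensor.

```python
import torch
from aloscene import Depth

# kumler_bauer expects two distortion coefficients, passed as a tuple
depth = Depth(torch.rand(1, 1, 20, 20), projection="kumler_bauer", distortion=(0.25, 0.45))
depth = depth.as_euclidean()  # no longer fails the projection check
depth = depth.as_planar()     # idem
```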
- New feature : Better handling of the distortion coefficient for the equidistant projection: both `float` and `list` are accepted.
```python
# Python code snippet showing how to use it.
import torch
from aloscene import Depth

x = torch.rand(size=(1, 1, 20, 20))
depth1 = Depth(x, projection="equidistant", distortion=[0.5])
depth2 = Depth(x, projection="equidistant", distortion=0.5)
```
- New feature : Three different implementations of focus blur augmentation.
```python
import torch
import aloscene
from alodataset.transforms import RandomFocusBlur, RandomFocusBlurV2, RandomFocusBlurV3

frame = aloscene.Frame(torch.rand((3, 300, 300)))
blurred_frame1 = RandomFocusBlur()(frame)
blurred_frame2 = RandomFocusBlurV2()(frame)
blurred_frame3 = RandomFocusBlurV3()(frame)
```
- New feature : Motion blur augmentation from optical flow
```python
import torch
import aloscene
from alodataset.transforms import RandomFlowMotionBlur
from alonet.raft.raft import RAFT

## Motion blur from RAFT flow
flow_model = RAFT(weights="raft-things")
flow_model = flow_model.eval()

frame_t0_t1 = aloscene.Frame(torch.ones((2, 3, 300, 300)), names=tuple("TCHW"))
frame_t0 = frame_t0_t1[0]
frame_t1 = frame_t0_t1[1]

blurred_t1 = RandomFlowMotionBlur(flow_model=flow_model)(frame_t1, p_frame=frame_t0)
blurred_t1.get_view().render()

## Motion blur from ground-truth optical flow
flow = aloscene.Flow(torch.ones((2, 300, 300)))
blurred_t1 = RandomFlowMotionBlur()(frame_t1, flow=flow)
blurred_t1.get_view().render()
```
- New feature : Random corner masking augmentation
```python
import torch
import aloscene
from alodataset.transforms import RandomCornersMask

frame = aloscene.Frame(torch.ones((3, 300, 300)))
randomly_masked_frame = RandomCornersMask()(frame)
```
- Fix bug : `CameraIntrinsic` initialization with a 4x4 shape is now possible using `__init__` (see the sketch below).
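A minimal sketch of the 4x4 initialization, assuming `CameraIntrinsic` is imported from `aloscene`; the identity matrix is just a placeholder for real intrinsics.

```python
import torch
from aloscene import CameraIntrinsic

K = torch.eye(4)                # placeholder 4x4 intrinsic matrix
intrinsic = CameraIntrinsic(K)  # a 4x4 shape is now accepted by __init__
```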
- Fix bug : Fixed DETR export to ONNX & TensorRT.
- New feature : Added a title to frames displayed with `get_view()`. A `title` argument was added to the `get_view()` method so you can directly set a title for your display.

```python
frames.get_view(title="test").render()
```
- Typing by @Ardorax in #212
- Warn about the gcc/g++ version required for CUDA ops by @Aurelien-VB in #230
- Fix euclidean depth in `as_points3d` and better handle points behind the camera by @anhtu293 in #311
- Delete GitHub Actions by @Ardorax in #313
Full Changelog: v0.3.0...v0.4.0