Browse the Repository | Released Assets
- Validate allowed keyword arguments.
- List all allowed keyword arguments in the function specs.
Browse the Repository | Released Assets
- [videocapture] fixed an issue where invoking `Evision.VideoCapture.waitAny/{1,2}` reports a NIF error, `"cv::VideoCapture::waitAny not loaded"`.
Browse the Repository | Released Assets
- [nerves-build] use fwup v1.10.1
- [precompiled] support the `armv6-linux-gnueabihf` target
- [precompiled] precompiled binaries are now built with Erlang/OTP NIF version 2.16, and they are compatible with NIF version 2.16 and later.
- [model_zoo] use permanent URLs for all models.
- [experimental] support the `aarch64-windows-msvc` target
- [nerves-build] added `rpi5` and `srhub`
- [model_zoo] added PP-OCR V3 text detection models
- [model_zoo] added image classification mobilenet v2 models
Browse the Repository | Released Assets
- Allow implicit casting to `Evision.Feature2D` from the following types: `Evision.AKAZE`, `Evision.AffineFeature`, `Evision.AgastFeatureDetector`, `Evision.BRISK`, `Evision.FastFeatureDetector`, `Evision.GFTTDetector`, `Evision.KAZE`, `Evision.MSER`, `Evision.ORB`, `Evision.SimpleBlobDetector`, `Evision.XFeatures2D.BEBLID`, `Evision.XFeatures2D.BoostDesc`, `Evision.XFeatures2D.BriefDescriptorExtractor`, `Evision.XFeatures2D.DAISY`, `Evision.XFeatures2D.FREAK`, `Evision.XFeatures2D.LATCH`, `Evision.XFeatures2D.LUCID`, `Evision.XFeatures2D.MSDDetector`, `Evision.XFeatures2D.StarDetector`, `Evision.XFeatures2D.TBMR`, `Evision.XFeatures2D.TEBLID`, `Evision.XFeatures2D.VGG`.
- [experimental] Support compiling for iOS. Precompiled binaries are also available for iOS, but they are not tested yet, and they require a few more steps to use. Please see this guide for more information.
Browse the Repository | Released Assets
- Detect and use the env vars `HTTP_PROXY` and `HTTPS_PROXY` when downloading precompiled binaries.
- Updated to OpenCV 4.9.0. Some APIs may have changed, please see OpenCV's release note for more information.
- Use embedded `:evision_windows_fix` instead of `:dll_loader_helper`.
- Updated CUDA versions for precompiled binaries. Now we only have `EVISION_CUDA_VERSION=118` and `EVISION_CUDA_VERSION=121`.
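As a sketch, the proxy variables mentioned above can be exported in the shell before fetching and compiling deps. The proxy address below is a hypothetical placeholder, and `mix deps.get`/`mix compile` are the usual Mix workflow steps, not evision-specific commands:

```shell
# Hypothetical proxy endpoint; replace with your actual proxy address.
export HTTP_PROXY="http://127.0.0.1:8888"
export HTTPS_PROXY="http://127.0.0.1:8888"

# evision picks these up when downloading precompiled binaries,
# e.g. during `mix deps.get && mix compile`.
echo "proxy in use: ${HTTPS_PROXY}"
```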
Browse the Repository | Released Assets
- [compatibilities] Make it compatible with newer versions of Kino. Thanks to @jonatanklosko for the contribution.
Browse the Repository | Released Assets
- Updated to OpenCV 4.8.0. Some APIs have changed, please see OpenCV's release note for more information.
- Use manylinux2014 to build precompiled binaries for x86_64-linux-gnu (with and without contrib, except for the CUDA ones). As a result, only glibc 2.17 or later is required.
Browse the Repository | Released Assets
- [Evision.DNN] Added support for passing `Nx.Tensor` or `Evision.Mat` as the `bboxes` input argument in:
  - `Evision.DNN.nmsBoxes/{4,5}`

    ```elixir
    iex> Evision.DNN.nmsBoxes([{0,1,2,3}], [1], 0.4, 0.3)
    [0]
    iex> Evision.DNN.nmsBoxes(Nx.tensor([[0,1,2,3]]), [1], 0.4, 0.3)
    [0]
    iex> Evision.DNN.nmsBoxes(Evision.Mat.literal([[0,1,2,3]], :f64), [1], 0.4, 0.3)
    [0]
    ```

  - `Evision.DNN.nmsBoxesBatched/{5,6}`

    ```elixir
    iex> Evision.DNN.nmsBoxesBatched([{0,1,2,3}], [1], [1], 0.4, 0.3)
    [0]
    iex> Evision.DNN.nmsBoxesBatched(Nx.tensor([[0,1,2,3]]), [1], [1], 0.4, 0.3)
    [0]
    iex> Evision.DNN.nmsBoxesBatched(Evision.Mat.literal([[0,1,2,3]], :f64), [1], [1], 0.4, 0.3)
    [0]
    ```

  - `Evision.DNN.softNMSBoxes/{4,5}`

    ```elixir
    iex> Evision.DNN.softNMSBoxes([{0,1,2,3}], [1], 0.4, 0.3)
    {[1.0], [0]}
    iex> Evision.DNN.softNMSBoxes(Nx.tensor([[0,1,2,3]]), [1], 0.4, 0.3)
    {[1.0], [0]}
    iex> Evision.DNN.softNMSBoxes(Evision.Mat.literal([[0,1,2,3]], :s32), [1], 0.4, 0.3)
    {[1.0], [0]}
    ```
Browse the Repository | Released Assets
- [smartcell] Fixed an issue where `outputBlob` was embedded in a list for CRNN and MobileNetV1 models.
Browse the Repository | Released Assets
- [py_src] Fixed incorrect typespec for `Scalar`. Thanks @tusqasi and @Nicd.
- [smartcell] Fixed PPResnet-based models.
- [smartcell] Fixed invalid charset URLs as they were removed in the upstream repo. Pinned CRNN model URLs to commit 12817b80.
Browse the Repository | Released Assets
- [py_src] `ArgInfo.has_default` is now set to `true` if `a.defval` is `f"{a.tp}()"`. Fixed #174.
- [ci-win-precompile-core] Removed the line that deletes the `priv/x64` directory. It should have been removed in the last version, because we now add `priv/x64` to the DLL search path instead of copying all OpenCV DLL files to the `priv` directory.
Browse the Repository | Released Assets
- [Evision.Constant] Constant values are all relocated to the `Evision.Constant` module. To use them, either do

  ```elixir
  Evision.Constant.cv_IMREAD_ANY()
  ```

  or

  ```elixir
  import Evision.Constant
  cv_IMREAD_ANY()
  ```

- [Evision] `Evision.__enabled_modules__/0` => `Evision.enabled_modules/0`. The result will now be computed using `HAVE_OPENCV_{MODULE_NAME}` macros.
- [Evision.Mat] fixed `Evision.Mat.update_roi/3`.
- [py_src] fixed incorrect typespecs.
- [py_src] `VideoCaptureAPIs` should be a single number instead of a list of numbers.
- [c_src] check if we can use an existing atom via `enif_make_existing_atom` before calling `enif_make_atom`.
- OpenCV contrib modules.
  - For users who prefer precompiled binaries, almost all modules in opencv_contrib are included in the precompiled archives (except for the CUDA-related ones, please see the next bullet point), and this will be the new default for precompiled users. However, we do provide precompiled binaries that only include core modules. Please see the next paragraph.
  - For all users, to use core modules only, please set the env var `EVISION_ENABLE_CONTRIB=false`.
- CUDA 11 + cuDNN 8 support with precompiled binaries. Note that the CUDA 11 and cuDNN 8 runtime libraries/DLL files are not included in the precompiled archive. Please follow the installation guide on NVIDIA's website.
  - For precompiled-binary users, please set `EVISION_ENABLE_CUDA` to `true`. Besides that, there are 3 CUDA versions to choose from: `EVISION_CUDA_VERSION=111`, `EVISION_CUDA_VERSION=114` and `EVISION_CUDA_VERSION=118`. Please choose the one that matches your local CUDA version, and set the `EVISION_CUDA_VERSION` env var correspondingly.
  - For users who prefer compiling from source, you'll only need to set `EVISION_ENABLE_CUDA` to `true`, and OpenCV will detect and use (if possible) your local CUDA/cuDNN runtime.
  - Lastly, if `EVISION_ENABLE_CUDA` is `true` while `EVISION_ENABLE_CONTRIB` is `false`, CUDA-related modules will not be compiled/downloaded.
- Please use v0.1.27, as the precompiled binaries for targets x86_64-windows-msvc-contrib-cuda* and x86_64-linux-gnu-contrib-cuda* were incorrect.
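As a sketch, the environment-variable combinations described above could be set like this before compiling; the values shown are examples, and you should pick the `EVISION_CUDA_VERSION` that matches your installed CUDA toolkit:

```shell
# Core modules only, no opencv_contrib:
export EVISION_ENABLE_CONTRIB=false

# Alternatively, for CUDA-enabled precompiled binaries
# (contrib must stay enabled, per the note above):
#   export EVISION_ENABLE_CONTRIB=true
#   export EVISION_ENABLE_CUDA=true
#   export EVISION_CUDA_VERSION=118   # one of 111, 114, 118

echo "contrib enabled: ${EVISION_ENABLE_CONTRIB}"
```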
Browse the Repository | Released Assets
- [smartcell] fixed a typo in SFace.
Browse the Repository | Released Assets
- [smartcell] fixed charset loading when initialising FP16/INT8 CRNN models. #144
- [smartcell] fixed OpenCL target label.
- [smartcell] register the model zoo smart cell (`Evision.SmartCell.Zoo`) on startup. Thanks to @josevalim.
- [smartcell] make `:kino` and `:progress_bar` optional dependencies.
- [ci] added one more step to make sure it compiles without optional deps. Thanks to @josevalim.
- [smartcell] hide all FP16 models of CRNN because they were not supported until opencv/opencv #22337, which was after the release date of OpenCV 4.6.0. See more on opencv/opencv#18735 (comment).
- [smartcell] hide `CRNN CH (INT8)` and `CRNN EN (INT8)` because OpenCV 4.6.0 seemed to have problems loading/parsing them, even with the `demo.py` script in the official opencv_zoo repo.
Browse the Repository | Released Assets
- [smartcell] OpenCV Model Zoo: `Evision.SmartCell.Zoo`.
Browse the Repository | Released Assets
- [py_src] fixed functions in dnn that `return *this`. For this part, the original code (as in opencv-python) would cause a new object to be allocated in C++:

  ```cpp
  TextDetectionModel_DB retval;
  retval = self.setSomeValue(...)
  return pyopencv_from(retval);
  ```

  Notice that the address of the object has changed (because it's a new one) after calling `m.setBinaryThreshold`:

  ```python
  >>> import cv2
  >>> m = cv2.dnn_TextDetectionModel_DB("DB_IC15_resnet18.onnx")
  >>> m
  < cv2.dnn.TextDetectionModel_DB 0x1064cf210>
  >>> m.setBinaryThreshold(0.5)
  < cv2.dnn.TextDetectionModel_DB 0x11ecda7f0>
  ```
Browse the Repository | Released Assets
- [Precompiled] fixed incorrect checksum for `x86_64-linux-gnu`.
Browse the Repository | Released Assets
- [py_src/c_src] Added a `has_default` field to `ArgInfo`.
Browse the Repository | Released Assets
- [precompile] Fixed `Mix.Tasks.Compile.EvisionPrecompiled.read_checksum_map/1`.
- [py_src] Fixed code generation for derived classes in namespace `cv::dnn`.
- [test] added a test for `Evision.DNN.DetectionModel`.
Browse the Repository | Released Assets
- [py_src] Fixed a code generation bug when all the input arguments of a function are optional.
- [example] `Req.get!` should only raise on 4xx and 5xx. Thanks @wojtekmach
- [example] Added two examples:
  - finding and drawing contours in an image.
  - extracting a sudoku puzzle from an image.
- [erlang] Structurised/recordized all `#reference`s that have their own Erlang module.
- [erlang] Download precompiled binaries using `evision_precompiled.erl`.
- [erlang] Generate typespecs.
Browse the Repository | Released Assets
- [deps] `:kino` will be an optional dependency if we use `if` before `defmodule`. This reverts the changes in v0.1.15. Thanks @josevalim for helping me figure out why using `if` before `defmodule` would solve the problem. More details can be found here.
- [config.exs] Added configurable parameters related to rendering `Evision.Mat` in Kino. (They are optional and can also be adjusted at runtime.)

  ```elixir
  config :evision, kino_render_image_encoding: :png
  config :evision, kino_render_image_max_size: {8192, 8192}
  config :evision, kino_render_tab_order: [:image, :raw, :numerical]
  ```
- [Evision.Mat] Added a few functions related to Kino.Render:

  | Function | Description |
  |---|---|
  | `Evision.Mat.kino_render_tab_order/0` | Get preferred order of Kino.Layout tabs for `Evision.Mat` in Livebook. |
  | `Evision.Mat.set_kino_render_tab_order/1` | Set preferred order of Kino.Layout tabs for `Evision.Mat` in Livebook. |
  | `Evision.Mat.kino_render_image_max_size/0` | Get the maximum allowed image size to render in Kino. |
  | `Evision.Mat.set_kino_render_image_max_size/1` | Set the maximum allowed image size to render in Kino. |
  | `Evision.Mat.kino_render_image_encoding/0` | Get preferred image encoding when rendering in Kino. |
  | `Evision.Mat.set_kino_render_image_encoding/1` | Set preferred image encoding when rendering in Kino. |
Browse the Repository | Released Assets
- [mix compile] Suppress logs if `evision.so` is already present when compiling from source.
- [Precompile] Added precompile target `aarch64-windows-msvc`.
- [deps] `:kino` should be a required dependency.
Browse the Repository | Released Assets
- [Precompile] Linux: removed GTK support in precompiled binaries. (This change only affects users on Linux.) This means functions in the `Evision.HighGui` module will return errors if you are using precompiled binaries. This follows the convention in `opencv-python`. Workarounds for this:
  - compile `evision` from source so that OpenCV will try to use the GUI backends supported on your system.
  - use `Evision.Wx`. It is still in development, but basic functions like `imshow/2` are available. However, it requires Erlang to be compiled with wxWidgets.
  - use Livebook with `:kino >= 0.7`. `evision` has built-in support for `Kino.Render`, which can automatically give a visualised result in Livebook. This requires `:kino >= 0.7`.
- [Evision.Nx] Module `Evision.Nx` is now removed. Functions in `Evision.Nx` were moved to `Evision.Mat` in v0.1.13. Many thanks to @zacky1972 and @josevalim for their contributions to this module in the very early days of the development.

  | Old | New |
  |---|---|
  | `Evision.Nx.to_mat/{1,2}` | `Evision.Mat.from_nx/{1,2}` |
  | `Evision.Nx.to_mat/5` | `Evision.Mat.from_binary/5` |
  | `Evision.Nx.to_mat_2d/1` | `Evision.Mat.from_nx_2d/1` |
  | `Evision.Nx.to_nx/1` | `Evision.Mat.to_nx/1` |
- [Evision.Wx] implemented `imshow/2`, `destroyWindow/1` and `destroyAllWindows/0`.
- [SmartCell] Added SmartCells. They are optional, and `:kino >= 0.7` will be required to use them. If you'd like to use smartcells, please add `:kino` to `deps` in the `mix.exs` file:

  ```elixir
  defp deps do
    [
      # ...
      {:kino, "~> 0.7"},
      # ...
    ]
  end
  ```

  And then please register smartcells with `:kino` by invoking `Evision.SmartCell.register_smartcells()`. `Evision.SmartCell.available_smartcells/0` will return all available smartcells. (Optional step) It's also possible to register only some of these smartcells, for example,

  ```elixir
  Evision.SmartCell.register_smartcells([
    Evision.SmartCell.ML.TrainData,
    Evision.SmartCell.ML.SVM
  ])
  ```
Browse the Repository | Released Assets
- [c_src] Specialised function `evision_to` [with `Tp_=cv::UMat`].
- [Evision.Backend] ensure that an `Evision.Mat` is returned from `reject_error/1`.
- [c_src] `parseSequence` will only handle tuples.
- [Evision.Mat] `Evision.Mat.quicklook` will use an alternative escape sequence to avoid having a dedicated function in the NIF. Thanks to @akash-akya and @kipcole9 (vix#68). ST means either BEL (hex code 0x07) or `ESC \\`.
- [nx-integration] Functions in `Evision.Nx` are now moved to `Evision.Mat`.

  | Old | New |
  |---|---|
  | `Evision.Nx.to_mat/{1,2}` | `Evision.Mat.from_nx/{1,2}` |
  | `Evision.Nx.to_mat/5` | `Evision.Mat.from_binary/5` |
  | `Evision.Nx.to_mat_2d/1` | `Evision.Mat.from_nx_2d/1` |
  | `Evision.Nx.to_nx/1` | `Evision.Mat.to_nx/1` |

  As of v0.1.13, calls to these old functions will be forwarded to the corresponding new ones. In the next release (v0.1.14), `Evision.Nx` will be removed.
- [Evision.Mat] `Evision.Mat.transpose` will use `cv::transposeND` if possible.
- [Precompile] Try to compile OpenCV with gtk3 support.
- [test] Added a test for `Evision.warpPerspective`.
- [example] Added an example for `Evision.warpPerspective`.
- [example] Added some examples for `Evision.warpPolar`.
- [example] Added a QRCode encoding and decoding example.
- [docs] Added a cheatsheet.
Browse the Repository | Released Assets
- [Evision.QRCodeEncoder.Params] Renamed `Evision.QRCodeEncoder.Params.qrcodeencoder_params/0` to `Evision.QRCodeEncoder.Params.params/0`.
- Function guards should also allow `Nx.Tensor` when the corresponding input argument is `Evision.Mat.maybe_mat_in()`.
- [Evision.Mat] `Evision.Mat.quicklook/1` should also check that the number of channels is one of `[1, 3, 4]` when `dims == 2`.
- [c_src] `evision_cv_mat_broadcast_to` should call `enif_free((void *)dst_data);` if `void * tmp_data = (void *)enif_alloc(elem_size * count_new_elem);` failed.
- [py_src] Fixed the template of the simple call constructor.
- [Docs] Example Livebooks are now included in the docs as extras.
- [Evision.Mat] `Evision.Mat.roi/{2,3}` now supports Elixir `Range`.
- [Evision.Mat] Implemented the Access behaviour.
  - `Access.fetch/2` examples:

    ```elixir
    iex> img = Evision.imread("test/qr_detector_test.png")
    %Evision.Mat{
      channels: 3,
      dims: 2,
      type: {:u, 8},
      raw_type: 16,
      shape: {300, 300, 3},
      ref: #Reference<0.809884129.802291734.78316>
    }

    # Same behaviour as Nx.
    # Also, img[0] gives the same result as img[[0]].
    # For this example, they are both equivalent to img[[0, :all, :all]].
    iex> img[[0]]
    %Evision.Mat{
      channels: 3,
      dims: 2,
      type: {:u, 8},
      raw_type: 16,
      shape: {1, 300, 3},
      ref: #Reference<0.809884129.802291731.77296>
    }

    # Same as img[[0..100, 50..200, :all]].
    # However, currently we only support ranges with step size 1.
    #
    # **IMPORTANT NOTE**
    #
    # Also, please note that we are using Elixir.Range here,
    # and Elixir.Range is **inclusive**, i.e. [start, end],
    # while cv::Range `{integer(), integer()}` is `[start, end)`.
    # The difference can be observed in the `shape` field.
    iex> img[[0..100, 50..200]]
    %Evision.Mat{
      channels: 3,
      dims: 2,
      type: {:u, 8},
      raw_type: 16,
      shape: {101, 151, 3},
      ref: #Reference<0.809884129.802291731.77297>
    }

    iex> img[[{0, 100}, {50, 200}]]
    %Evision.Mat{
      channels: 3,
      dims: 2,
      type: {:u, 8},
      raw_type: 16,
      shape: {100, 150, 3},
      ref: #Reference<0.809884129.802291731.77297>
    }

    # For this example, the result is the same as `Evision.extractChannel(img, 0)`.
    iex> img[[:all, :all, 0]]
    %Evision.Mat{
      channels: 1,
      dims: 2,
      type: {:u, 8},
      raw_type: 0,
      shape: {300, 300},
      ref: #Reference<0.809884129.802291731.77298>
    }

    iex> img[[:all, :all, 0..1]]
    %Evision.Mat{
      channels: 2,
      dims: 2,
      type: {:u, 8},
      raw_type: 8,
      shape: {300, 300, 2},
      ref: #Reference<0.809884129.802291731.77299>
    }

    # When the index is out of bounds:
    iex> img[[:all, :all, 42]]
    {:error, "index 42 is out of bounds for axis 2 with size 3"}

    # It works the same way for an Evision.Mat of any dimensions:
    iex> mat = Evision.Mat.ones({10, 10, 10, 10, 10}, :u8)
    iex> mat[[1..7, :all, 2..6, 3..9, :all]]
    %Evision.Mat{
      channels: 1,
      dims: 5,
      type: {:u, 8},
      raw_type: 0,
      shape: {7, 10, 5, 7, 10},
      ref: #Reference<0.3015448455.3766878228.259075>
    }
    ```
  - `Access.get_and_update/3` examples:

    ```elixir
    iex> mat = Evision.Mat.zeros({5, 5}, :u8)
    iex> Evision.Nx.to_nx(mat)
    #Nx.Tensor<
      u8[5][5]
      Evision.Backend
      [
        [0, 0, 0, 0, 0],
        [0, 0, 0, 0, 0],
        [0, 0, 0, 0, 0],
        [0, 0, 0, 0, 0],
        [0, 0, 0, 0, 0]
      ]
    >
    iex> {old, new} = Evision.Mat.get_and_update(mat, [1..3, 1..3], fn roi -> {roi, Nx.broadcast(Nx.tensor(255, type: roi.type), roi.shape)} end)
    iex> Evision.Nx.to_nx(new)
    #Nx.Tensor<
      u8[5][5]
      Evision.Backend
      [
        [0, 0, 0, 0, 0],
        [0, 255, 255, 255, 0],
        [0, 255, 255, 255, 0],
        [0, 255, 255, 255, 0],
        [0, 0, 0, 0, 0]
      ]
    >
    ```
Browse the Repository | Released Assets
In v0.1.10, an invalid checksum file was pushed to hex.pm. Please read the changelog, especially the breaking changes in v0.1.10. Changelog for v0.1.10.
- [Precompile] `Mix.Tasks.Evision.Fetch` should always download and overwrite existing files.
Browse the Repository | Released Assets
Invalid checksum file was pushed to hex.pm, please use v0.1.11 instead.
- Say goodbye to the bang(!) version functions! Thanks to @josevalim, who wrote me this `Errorize` module back in Feb 2022; in v0.1.10 this module will be removed. There are two main reasons for this:
  - I've managed to structurise all `#reference`s that have their own modules in #101.
  - After generating function specs, dialyzer seems to be really upset about these bang(!) version functions, and would emit a few thousand warnings.
- [Precompile] Include the NIF version in the precompiled tarball filename: `"evision-nif_#{nif_version}-#{target}-#{version}"`.
Return value changed if the first return type of the function is
bool-
If the function only returns a
bool, the updated return value will simple betrueorfalse.# before iex> :ok = Evision.imwrite("/path/to/image.png", img) iex> :error = Evision.imwrite("/path/to/image.png", invalid_img) # after iex> true = Evision.imwrite("/path/to/image.png", img) iex> false = Evision.imwrite("/path/to/image.png", invalid_img)
-
If the first return type is
bool, and there is another value to return:# before iex> frame = Evision.VideoCapture.read(capture) # has a frame available iex> :error = Evision.VideoCapture.read(capture) # cannot read / no more available frames # after iex> frame = Evision.VideoCapture.read(capture) # has a frame available iex> false = Evision.VideoCapture.read(capture) # cannot read / no more available frames
-
If the first return type is
bool, and there are more than one value to return:# before iex> {val1, val2} = Evision.SomeModule.some_function(arg1) # when succeeded iex> :error = Evision.SomeModule.some_function(capture) # when failed # after iex> {val1, val2} = Evision.SomeModule.some_function(arg1) # when succeeded iex> false = Evision.SomeModule.some_function(capture) # when failed
-
- `std::string` and `cv::String` will be wrapped in a binary term instead of a list. For example,

  ```elixir
  # before
  iex> {'detected text', _, _} = Evision.QRCodeDetector.detectAndDecode(qr, img)

  # after
  iex> {"detected text", _, _} = Evision.QRCodeDetector.detectAndDecode(qr, img)
  ```
- Structurised all `#reference`s that have their own module. A list of modules that are now wrapped in structs can be found in the appendix section.
- [Evision.DNN] As it's not possible to distinguish `std::vector<uchar>` and `String` in Elixir, `Evision.DNN::readNet*` functions that load a model from an in-memory buffer will be renamed to `Evision.DNN::readNet*Buffer`. For example,

  ```elixir
  @spec readNetFromONNX(binary()) :: Evision.DNN.Net.t() | {:error, String.t()}
  def readNetFromONNX(onnxFile)

  @spec readNetFromONNXBuffer(binary()) :: Evision.DNN.Net.t() | {:error, String.t()}
  def readNetFromONNXBuffer(buffer)
  ```
- [Evision.Backend] Raise a better error message for callbacks that haven't been implemented in `Evision.Backend`. Thanks to @josevalim. An example of the updated error message:

  ```elixir
  iex> Evision.Backend.slice(1,2,3,4,5)
  ** (RuntimeError) operation slice is not yet supported on Evision.Backend. Please use another backend like Nx.BinaryBackend or Torchx.Backend. To use Torchx.Backend, :torchx should be added to your app's deps. Please see https://github.com/elixir-nx/nx/tree/main/torchx for more information on how to install and use it. To convert the tensor to another backend, please use Evision.Nx.to_nx(tensor, Backend.ModuleName) for example, Evision.Nx.to_nx(tensor, Nx.BinaryBackend) or Evision.Nx.to_nx(tensor, Torchx.Backend). Pull request would be more than welcomed if you'd like to implmenent this function and make contributions.
      (evision 0.1.10-dev) lib/evision/backend.ex:815: Evision.Backend.slice/5
      iex:1: (file)
  ```
- [Docs] Improved cross references in inline docs. For example,

  ```elixir
  # before
  @doc """
  ...
  @see setCVFolds
  ...
  """
  def getCVFolds(self) do
  ```

  ```elixir
  # after
  @doc """
  ...
  @see `setCVFolds/2`
  ...
  """
  def getCVFolds(self) do
  ```

  In this way, you can navigate to the referenced function in the generated HTML docs.
- [Docs] Included `retval` and `self` in the `Returns` section.
- [Spec] Function specs for all Elixir functions, including generated ones.
- [Evision.Mat] Added `Evision.Mat.roi/{2,3}`.

  ```elixir
  iex> img = Evision.imread("test/qr_detector_test.png")
  %Evision.Mat{
    channels: 3,
    dims: 2,
    type: {:u, 8},
    raw_type: 16,
    shape: {300, 300, 3},
    ref: #Reference<0.3957900973.802816029.173984>
  }

  # Mat operator()( const Rect& roi ) const;
  iex> sub_img = Evision.Mat.roi(img, {10, 10, 100, 200})
  %Evision.Mat{
    channels: 3,
    dims: 2,
    type: {:u, 8},
    raw_type: 16,
    shape: {200, 100, 3},
    ref: #Reference<0.3957900973.802816020.173569>
  }

  # Mat operator()( Range rowRange, Range colRange ) const;
  iex> sub_img = Evision.Mat.roi(img, {10, 100}, {20, 200})
  %Evision.Mat{
    channels: 3,
    dims: 2,
    type: {:u, 8},
    raw_type: 16,
    shape: {90, 180, 3},
    ref: #Reference<0.3957900973.802816020.173570>
  }

  iex> sub_img = Evision.Mat.roi(img, :all, {20, 200})
  %Evision.Mat{
    channels: 3,
    dims: 2,
    type: {:u, 8},
    raw_type: 16,
    shape: {300, 180, 3},
    ref: #Reference<0.3957900973.802816020.173571>
  }

  # Mat operator()(const std::vector<Range>& ranges) const;
  iex> sub_img = Evision.Mat.roi(img, [{10, 100}, {10, 100}])
  %Evision.Mat{
    channels: 3,
    dims: 2,
    type: {:u, 8},
    raw_type: 16,
    shape: {90, 90, 3},
    ref: #Reference<0.3957900973.802816020.173567>
  }

  iex> sub_img = Evision.Mat.roi(img, [{10, 100}, :all])
  %Evision.Mat{
    channels: 3,
    dims: 2,
    type: {:u, 8},
    raw_type: 16,
    shape: {90, 300, 3},
    ref: #Reference<0.3957900973.802816020.173568>
  }
  ```
- [Evision.Mat] Added `Evision.Mat.quicklook/1`. This function will check the value of `:display_inline_image_iterm2` in the application config. If it is `true`, it will detect whether the current session is running in iTerm2 (by checking the environment variable `LC_TERMINAL`). If both are `true`, we next check whether the image is a 2D image and whether its size is within the limits. The maximum size can be set in the application config, for example,

  ```elixir
  config :evision, display_inline_image_iterm2: true
  config :evision, display_inline_image_max_size: {8192, 8192}
  ```

  If it passes all the checks, it will be displayed as an inline image in iTerm2.
List of modules that are now wrapped in structs.
`Evision.AKAZE`, `Evision.AffineFeature`, `Evision.AgastFeatureDetector`, `Evision.Algorithm`, `Evision.AlignExposures`, `Evision.AlignMTB`, `Evision.AsyncArray`, `Evision.BFMatcher`, `Evision.BOWImgDescriptorExtractor`, `Evision.BOWKMeansTrainer`, `Evision.BOWTrainer`, `Evision.BRISK`, `Evision.BackgroundSubtractor`, `Evision.BackgroundSubtractorKNN`, `Evision.BackgroundSubtractorMOG2`, `Evision.CLAHE`, `Evision.CUDA`, `Evision.CUDA.BufferPool`, `Evision.CUDA.DeviceInfo`, `Evision.CUDA.Event`, `Evision.CUDA.GpuMat`, `Evision.CUDA.HostMem`, `Evision.CUDA.Stream`, `Evision.CUDA.TargetArchs`, `Evision.CalibrateCRF`, `Evision.CalibrateDebevec`, `Evision.CalibrateRobertson`, `Evision.CascadeClassifier`, `Evision.CirclesGridFinderParameters`, `Evision.DISOpticalFlow`, `Evision.DMatch`, `Evision.DNN.ClassificationModel`, `Evision.DNN.DetectionModel`, `Evision.DNN.DictValue`, `Evision.DNN.KeypointsModel`, `Evision.DNN.Layer`, `Evision.DNN.Model`, `Evision.DNN.Net`, `Evision.DNN.SegmentationModel`, `Evision.DNN.TextDetectionModel`, `Evision.DNN.TextDetectionModelDB`, `Evision.DNN.TextDetectionModelEAST`, `Evision.DNN.TextRecognitionModel`, `Evision.DenseOpticalFlow`, `Evision.DescriptorMatcher`, `Evision.Detail.AffineBasedEstimator`, `Evision.Detail.AffineBestOf2NearestMatcher`, `Evision.Detail.BestOf2NearestMatcher`, `Evision.Detail.BestOf2NearestRangeMatcher`, `Evision.Detail.Blender`, `Evision.Detail.BlocksChannelsCompensator`, `Evision.Detail.BlocksCompensator`, `Evision.Detail.BlocksGainCompensator`, `Evision.Detail.BundleAdjusterAffine`, `Evision.Detail.BundleAdjusterAffinePartial`, `Evision.Detail.BundleAdjusterBase`, `Evision.Detail.BundleAdjusterRay`, `Evision.Detail.BundleAdjusterReproj`, `Evision.Detail.CameraParams`, `Evision.Detail.ChannelsCompensator`, `Evision.Detail.DpSeamFinder`, `Evision.Detail.Estimator`, `Evision.Detail.ExposureCompensator`, `Evision.Detail.FeatherBlender`, `Evision.Detail.FeaturesMatcher`, `Evision.Detail.GainCompensator`, `Evision.Detail.GraphCutSeamFinder`, `Evision.Detail.HomographyBasedEstimator`, `Evision.Detail.ImageFeatures`, `Evision.Detail.MatchesInfo`, `Evision.Detail.MultiBandBlender`, `Evision.Detail.NoBundleAdjuster`, `Evision.Detail.NoExposureCompensator`, `Evision.Detail.NoSeamFinder`, `Evision.Detail.PairwiseSeamFinder`, `Evision.Detail.SeamFinder`, `Evision.Detail.SphericalProjector`, `Evision.Detail.Timelapser`, `Evision.Detail.VoronoiSeamFinder`, `Evision.FaceDetectorYN`, `Evision.FaceRecognizerSF`, `Evision.FarnebackOpticalFlow`, `Evision.FastFeatureDetector`, `Evision.Feature2D`, `Evision.FileNode`, `Evision.FileStorage`, `Evision.Flann.Index`, `Evision.FlannBasedMatcher`, `Evision.GFTTDetector`, `Evision.GeneralizedHough`, `Evision.GeneralizedHoughBallard`, `Evision.GeneralizedHoughGuil`, `Evision.HOGDescriptor`, `Evision.KAZE`, `Evision.KalmanFilter`, `Evision.KeyPoint`, `Evision.LineSegmentDetector`, `Evision.ML.ANNMLP`, `Evision.ML.Boost`, `Evision.ML.DTrees`, `Evision.ML.EM`, `Evision.ML.KNearest`, `Evision.ML.LogisticRegression`, `Evision.ML.NormalBayesClassifier`, `Evision.ML.ParamGrid`, `Evision.ML.RTrees`, `Evision.ML.SVM`, `Evision.ML.SVMSGD`, `Evision.ML.StatModel`, `Evision.ML.TrainData`, `Evision.MSER`, `Evision.MergeDebevec`, `Evision.MergeExposures`, `Evision.MergeMertens`, `Evision.MergeRobertson`, `Evision.OCL`, `Evision.OCL.Device`, `Evision.ORB`, `Evision.Parallel`, `Evision.PyRotationWarper`, `Evision.QRCodeDetector`, `Evision.QRCodeEncoder`, `Evision.QRCodeEncoder.Params`, `Evision.SIFT`, `Evision.Samples`, `Evision.Segmentation.IntelligentScissorsMB`, `Evision.SimpleBlobDetector`, `Evision.SimpleBlobDetector.Params`, `Evision.SparseOpticalFlow`, `Evision.SparsePyrLKOpticalFlow`, `Evision.StereoBM`, `Evision.StereoMatcher`, `Evision.StereoSGBM`, `Evision.Stitcher`, `Evision.Subdiv2D`, `Evision.TickMeter`, `Evision.Tonemap`, `Evision.TonemapDrago`, `Evision.TonemapMantiuk`, `Evision.TonemapReinhard`, `Evision.Tracker`, `Evision.TrackerDaSiamRPN`, `Evision.TrackerDaSiamRPN.Params`, `Evision.TrackerGOTURN`, `Evision.TrackerGOTURN.Params`, `Evision.TrackerMIL`, `Evision.TrackerMIL.Params`, `Evision.UMat`, `Evision.UsacParams`, `Evision.Utils.Nested.OriginalClassName`, `Evision.Utils.Nested.OriginalClassName.Params`, `Evision.VariationalRefinement`, `Evision.VideoCapture`, `Evision.VideoWriter`
Browse the Repository | Released Assets
- `Mix.Tasks.Compile.EvisionPrecompiled`: use `File.cp_r/2` instead of calling `cp -a` via `System.cmd/3`.
- Fixed TLS warnings when downloading the precompiled tarball file. Thanks to @kipcole9!
- Only include `evision_custom_headers/evision_ml.hpp` if the `HAVE_OPENCV_ML` macro is defined.
- Support parsing `RefWrapper<T> (&value)[N]` from a list or tuple. (#99) See the function in `c_src/evision.cpp`:

  ```cpp
  bool parseSequence(ErlNifEnv *env, ERL_NIF_TERM obj, RefWrapper<T> (&value)[N], const ArgInfo& info)
  ```

  ```elixir
  # `RotatedRect` has to be a tuple, {centre, size, angle}
  Evision.boxPoints!({{224.0, 262.5}, {343.0, 344.0}, 90.0})

  # while `Point`/`Size` can be either a list, `[x, y]`, or a tuple, `{x, y}`
  Evision.boxPoints!({[224.0, 262.5], [343.0, 344.0], 90.0})
  ```
- Fixed the mapping from a type to the corresponding function guard in `py_src/helper.py`. (#99)
- Display the `RotatedRect` type as `{centre={x, y}, size={s1, s2}, angle}` in docs.
Browse the Repository | Released Assets
- `CMake` and `make` (`nmake` on Windows) will not be used to download and deploy precompiled binaries for Elixir users. This means that `evision` can be downloaded and deployed once Erlang and Elixir are properly installed on the system.
Browse the Repository | Released Assets
- `EVISION_PREFER_PRECOMPILED` is set to `true` by default. `:evision` will try to use precompiled binaries if available; otherwise, it will fall back to building from source.
- Precompiled binary filenames changed:

  ```
  arm64-apple-darwin => aarch64-apple-darwin
  amd64-windows-msvc => x86_64-windows-msvc
  ```
- `cv::VideoCapture` will be wrapped in a struct. For example:

  ```elixir
  iex> cap = Evision.VideoCapture.videoCapture!("test/videocapture_test.mp4")
  %Evision.VideoCapture{
    fps: 43.2,
    frame_count: 18.0,
    frame_width: 1920.0,
    frame_height: 1080.0,
    isOpened: true,
    ref: #Reference<0.3650318819.3952214034.37793>
  }
  iex> frame = Evision.VideoCapture.read!(cap)
  %Evision.Mat{
    channels: 3,
    dims: 2,
    type: {:u, 8},
    raw_type: 16,
    shape: {1080, 1920, 3},
    ref: #Reference<0.3650318819.3952214042.38343>
  }
  ```
- `Evision.Mat.empty/0` will also return an `Evision.Mat` struct (it was returning `#Reference<some random numbers>`).

  ```elixir
  iex> Evision.Mat.empty!()
  %Evision.Mat{
    channels: 1,
    dims: 0,
    type: {:u, 8},
    raw_type: 0,
    shape: {},
    ref: #Reference<0.2351084001.2568618002.207930>
  }
  ```
- Raise `RuntimeError` for all unimplemented `:nx` callbacks.

  ```elixir
  raise RuntimeError, "not implemented yet"
  ```
- Elixir functions that have the same name and arity will be grouped together now. This should massively reduce the number of warnings emitted by the Elixir compiler.
- Only generate the corresponding binding code:
  - only generate binding code for Elixir when compiling `:evision` using `mix`;
  - only generate binding code for Erlang when compiling `:evision` using `rebar`.

  It's possible to generate Erlang and Elixir bindings at the same time. However, currently it's only possible to do so when compiling evision using `mix`.

  ```shell
  # default value is `elixir` when compiling evision using `mix`
  # default value is `erlang` when compiling evision using `rebar`
  #
  # expected format is a comma-separated string
  export EVISION_GENERATE_LANG="erlang,elixir"
  ```
-
Better inline docs.
-
Inline docs will have a section for
Positional Argumentsand a section forKeyword Arguments. For example,@doc """ ### Positional Arguments - **bboxes**: vector_Rect2d. - **scores**: vector_float. - **score_threshold**: float. - **nms_threshold**: float. ### Keyword Arguments - **eta**: float. - **top_k**: int. Performs non maximum suppression given boxes and corresponding scores. Python prototype (for reference): ``` NMSBoxes(bboxes, scores, score_threshold, nms_threshold[, eta[, top_k]]) -> indices ``` """ @doc namespace: :"cv.dnn" def nmsBoxes(bboxes, scores, score_threshold, nms_threshold, opts)
- If a function (same name and arity) has multiple variants, the inline docs will show each of them in a section `## Variant VAR_INDEX`. For example,

  ````elixir
  @doc """
  #### Variant 1:

  ##### Positional Arguments

  - **dx**: UMat.
  - **dy**: UMat.
  - **threshold1**: double.
  - **threshold2**: double.

  ##### Keyword Arguments

  - **edges**: UMat.
  - **l2gradient**: bool.

  \\overload

  Finds edges in an image using the Canny algorithm with custom image gradient.

  \\f$=\\sqrt{(dI/dx)^2 + (dI/dy)^2}\\f$ should be used to calculate the image gradient magnitude ( L2gradient=true ), or whether the default \\f$L\\_1\\f$ norm \\f$=|dI/dx|+|dI/dy|\\f$ is enough ( L2gradient=false ).

  Python prototype (for reference):
  ```
  Canny(dx, dy, threshold1, threshold2[, edges[, L2gradient]]) -> edges
  ```

  #### Variant 2:

  ##### Positional Arguments

  - **image**: UMat.
  - **threshold1**: double.
  - **threshold2**: double.

  ##### Keyword Arguments

  - **edges**: UMat.
  - **apertureSize**: int.
  - **l2gradient**: bool.

  Finds edges in an image using the Canny algorithm @cite Canny86 .

  The function finds edges in the input image and marks them in the output map edges using the Canny algorithm. The smallest value between threshold1 and threshold2 is used for edge linking. The largest value is used to find initial segments of strong edges. See <http://en.wikipedia.org/wiki/Canny_edge_detector>

  \\f$=\\sqrt{(dI/dx)^2 + (dI/dy)^2}\\f$ should be used to calculate the image gradient magnitude ( L2gradient=true ), or whether the default \\f$L\\_1\\f$ norm \\f$=|dI/dx|+|dI/dy|\\f$ is enough ( L2gradient=false ).

  Python prototype (for reference):
  ```
  Canny(image, threshold1, threshold2[, edges[, apertureSize[, L2gradient]]]) -> edges
  ```
  """
  @doc namespace: :cv
  def canny(image, threshold1, threshold2, opts)
      when (is_reference(image) or is_struct(image)) and is_number(threshold1) and is_number(threshold2) and
             is_list(opts) and (opts == [] or is_tuple(hd(opts))),
      do: # variant 2

  def canny(dx, dy, threshold1, threshold2)
      when (is_reference(dx) or is_struct(dx)) and (is_reference(dy) or is_struct(dy)) and
             is_number(threshold1) and is_number(threshold2),
      do: # variant 1
  ````
- Better integration with `:nx`.

  ```elixir
  iex> t = Nx.tensor([[[0,0,0], [255, 255, 255]]], type: :u8)
  #Nx.Tensor<
    u8[1][2][3]
    [
      [
        [0, 0, 0],
        [255, 255, 255]
      ]
    ]
  >
  iex> mat = Evision.imread!("test.png")
  %Evision.Mat{
    channels: 3,
    dims: 2,
    type: {:u, 8},
    raw_type: 16,
    shape: {1, 2, 3},
    ref: #Reference<0.2067356221.74055707.218654>
  }
  iex> mat = Evision.Mat.channel_as_last_dim!(mat)
  %Evision.Mat{
    channels: 1,
    dims: 3,
    type: {:u, 8},
    raw_type: 0,
    shape: {1, 2, 3},
    ref: #Reference<0.2067356221.74055698.218182>
  }
  iex> result = Evision.Mat.add!(t, mat)
  %Evision.Mat{
    channels: 1,
    dims: 3,
    type: {:u, 8},
    raw_type: 0,
    shape: {1, 2, 3},
    ref: #Reference<0.2067356221.74055698.218184>
  }
  iex> Evision.Nx.to_nx!(result)
  #Nx.Tensor<
    u8[1][2][3]
    Evision.Backend
    [
      [
        [255, 255, 255],
        [255, 255, 255]
      ]
    ]
  >
  ```
- Implemented property setters for `cv::Ptr<>` wrapped types. For example,

  ```elixir
  iex> k = Evision.KalmanFilter.kalmanFilter!(1, 1)
  #Reference<0.382162378.457572372.189094>
  iex> Evision.KalmanFilter.get_gain!(k) |> Evision.Nx.to_nx!
  #Nx.Tensor<
    f32[1][1]
    Evision.Backend
    [
      [0.0]
    ]
  >
  iex> Evision.KalmanFilter.set_gain!(k, Evision.Mat.literal!([1.0], :f32))
  #Reference<0.382162378.457572372.189094>
  iex> Evision.KalmanFilter.get_gain!(k) |> Evision.Nx.to_nx!
  #Nx.Tensor<
    f32[1][1]
    Evision.Backend
    [
      [1.0]
    ]
  >
  ```
- More detailed error messages for property getters/setters. For example,
  - When setting a property of type `A` with a value of type `B`, and there is no known conversion from `B` to `A`, an error-tuple will be returned (the bang variant raises instead):

    ```elixir
    iex> k = Evision.KalmanFilter.kalmanFilter!(1, 1)
    iex> Evision.KalmanFilter.set_gain(k, :p)
    {:error, "cannot assign new value, mismatched type?"}
    iex> Evision.KalmanFilter.set_gain!(k, :p)
    ** (RuntimeError) cannot assign new value, mismatched type?
        (evision 0.1.7) lib/generated/evision_kalmanfilter.ex:175: Evision.KalmanFilter.set_gain!/2
        iex:7: (file)
    ```
  - For property getters/setters, if the `self` passed in is of a different type than what is expected, an error-tuple will be returned:

    ```elixir
    iex> mat = Evision.Mat.literal!([1.0], :f32)
    %Evision.Mat{
      channels: 1,
      dims: 2,
      type: {:f, 32},
      raw_type: 5,
      shape: {1, 1},
      ref: #Reference<0.1499445684.3682467860.58544>
    }
    iex> Evision.KalmanFilter.set_gain(mat, mat)
    {:error, "cannot get `Ptr<cv::KalmanFilter>` from `self`: mismatched type or invalid resource?"}
    iex> Evision.KalmanFilter.set_gain!(mat, mat)
    ** (RuntimeError) cannot get `Ptr<cv::KalmanFilter>` from `self`: mismatched type or invalid resource?
        (evision 0.1.7) lib/generated/evision_kalmanfilter.ex:175: Evision.KalmanFilter.set_gain!/2
        iex:2: (file)
    ```
- `evision_##NAME##_getp` (in `c_src/erlcompat.hpp`) should just return true or false. Returning an `ERL_NIF_TERM` (`enif_make_badarg`) in the macro when `enif_get_resource` fails would prevent the caller from returning an error-tuple with a detailed error message.
- Improved the quality of the generated inline docs. Also displays which variable(s) will be returned (when applicable) in the `##### Return` section.
- Added `Evision.Mat.literal/{1,2,3}` to create an `Evision.Mat` from list literals.

  Creating an `Evision.Mat` from the empty list literal (`[]`) is the same as calling `Evision.Mat.empty()`.

  ```elixir
  iex> Evision.Mat.literal!([])
  %Evision.Mat{
    channels: 1,
    dims: 0,
    type: {:u, 8},
    raw_type: 0,
    shape: {},
    ref: #Reference<0.1204050731.2031747092.46781>
  }
  ```

  By default, the shape of the Mat will stay as is.

  ```elixir
  iex> Evision.Mat.literal!([[[1,1,1],[2,2,2],[3,3,3]]], :u8)
  %Evision.Mat{
    channels: 1,
    dims: 3,
    type: {:u, 8},
    raw_type: 0,
    shape: {1, 3, 3},
    ref: #Reference<0.512519210.691404819.106300>
  }
  ```

  `Evision.Mat.literal/3` will return a valid 2D image if the keyword argument `as_2d` is set to `true` and the list literal can be represented as a 2D image.

  ```elixir
  iex> Evision.Mat.literal!([[[1,1,1],[2,2,2],[3,3,3]]], :u8, as_2d: true)
  %Evision.Mat{
    channels: 3,
    dims: 2,
    type: {:u, 8},
    raw_type: 16,
    shape: {1, 3, 3},
    ref: #Reference<0.512519210.691404820.106293>
  }
  ```
- Added `Evision.Mat.channel_as_last_dim/1`.

  This function does the opposite of `Evision.Mat.last_dim_as_channel/1`.

  If the number of channels of the input Evision.Mat is greater than 1, this function converts the input Evision.Mat with dims `dims=list(int())` to a 1-channel Evision.Mat with dims `[dims | channels]`.

  If the number of channels of the input Evision.Mat is equal to 1,

  - if dims == shape, then nothing happens;
  - otherwise, a new Evision.Mat with dims `[dims | channels]` will be returned.

  For example,

  ```elixir
  iex> mat = Evision.imread!("test.png")
  %Evision.Mat{
    channels: 3,
    dims: 2,
    type: {:u, 8},
    raw_type: 16,
    shape: {1, 2, 3},
    ref: #Reference<0.2067356221.74055707.218654>
  }
  iex> mat = Evision.Mat.channel_as_last_dim!(mat)
  %Evision.Mat{
    channels: 1,
    dims: 3,
    type: {:u, 8},
    raw_type: 0,
    shape: {1, 2, 3},
    ref: #Reference<0.2067356221.74055698.218182>
  }
  ```
- Automatically displays a tabbed output in Livebook if the type of the evaluated result is `Evision.Mat`.

  This is an optional feature. To enable it, `:kino` should be added to `deps`, e.g.,

  ```elixir
  defp deps do
    [
      # ...
      {:kino, "~> 0.7"},
      # ...
    ]
  end
  ```

  Now, with `:kino` >= v0.7 available, a tabbed output will be shown in Livebook if the evaluated result is an `Evision.Mat`.

  A `Raw` tab will always be the first one, e.g.,

  ```elixir
  %Evision.Mat{
    channels: 1,
    dims: 3,
    type: {:u, 8},
    raw_type: 0,
    shape: {1, 2, 3},
    ref: #Reference<0.3310236255.1057357843.168932>
  }
  ```

  For 2D images (`dims == 2`), the second tab will be `Image`, which displays the image.

  For all `Evision.Mat`, the last tab will be `Numerical`, which shows the numbers behind the scenes. Of course, for a large `Evision.Mat`, only part of the data will be shown. An example output in this tab:

  ```elixir
  #Nx.Tensor<
    u8[1][2][3]
    Evision.Backend
    [
      [
        [1, 2, 3],
        [1, 2, 3]
      ]
    ]
  >
  ```
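  Outside of the automatic tabbed output, an `Evision.Mat` can also be rendered in Livebook by hand. A minimal sketch (not part of the release itself), assuming `:kino` >= 0.7 is available and `test.png` exists:

  ```elixir
  # Sketch: render an Evision.Mat in Livebook manually by encoding it to PNG
  # and wrapping the resulting binary in Kino.Image.
  mat = Evision.imread!("test.png")

  # Evision.imencode! returns the encoded image as a binary,
  # which Kino.Image.new/2 can display directly.
  png = Evision.imencode!(".png", mat)
  Kino.Image.new(png, :png)
  ```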
Browse the Repository | Released Assets
- `Evision.imencode/{2,3}` will now return the encoded image as a binary instead of a list.
- `cv::Mat` will be wrapped in a struct. For example:

  ```elixir
  iex> Evision.imread!("path/to/image.png")
  %Evision.Mat{
    channels: 3,
    dims: 2,
    type: {:u, 8},
    raw_type: 16,
    shape: {512, 512, 3},
    ref: #Reference<0.2992585850.4173463580.172624>
  }
  ```

  This should close #76.
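  Since the encoded image is now a plain binary, it can be handed directly to `File.write!/2` or anything else that accepts iodata. A minimal sketch (the file name `in.png` is an assumed input):

  ```elixir
  # Sketch: re-encode an image as JPEG and write the binary straight to disk,
  # with no list-to-binary conversion needed anymore.
  mat = Evision.imread!("in.png")

  jpg = Evision.imencode!(".jpg", mat)
  true = is_binary(jpg)

  File.write!("out.jpg", jpg)
  ```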
Browse the Repository | Released Assets
- Always use `Evision.Mat.from_binary_by_shape/3` for `Evision.Nx.to_mat`.
- Check `cv::Mat::type()` when fetching the shape of a Mat. The number of channels will be included as the last dim of the shape if and only if `cv::Mat::type()` did not encode any channel information.
- Fixed `Evision.Mat.transpose`: should call `shape!` instead of `shape`. Thanks to @kipcole9 ! #77
- Added `Evision.Mat.last_dim_as_channel/1`. This function converts a tensor-like `Mat` to a "valid 2D image" whose `channels` equals `3` or `1`.
- Added `Evision.Nx.to_mat/2`. This function converts an `Nx.Tensor` to a `Mat`. The second argument indicates the wanted/actual shape of the tensor.
- Added more Mat functions:
  - `Evision.Mat.as_shape/2`
  - `Evision.Mat.size/1`
  - `Evision.Mat.channels/1`
  - `Evision.Mat.depth/1`
  - `Evision.Mat.raw_type/1`
  - `Evision.Mat.isSubmatrix/1`
  - `Evision.Mat.isContinuous/1`
  - `Evision.Mat.elemSize/1`
  - `Evision.Mat.elemSize1/1`
  - `Evision.Mat.total/{1,2,3}`
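  These mirror the corresponding `cv::Mat` accessors. A quick sketch of inspecting a loaded image with a few of them (the bang variants and the input file `test.png` are assumptions for illustration):

  ```elixir
  # Sketch: query basic properties of a Mat with the new accessor functions.
  mat = Evision.imread!("test.png")

  channels = Evision.Mat.channels!(mat)   # 3 for a BGR image
  depth    = Evision.Mat.depth!(mat)      # depth code, 0 (CV_8U) for u8 data
  raw      = Evision.Mat.raw_type!(mat)   # full type code, e.g. 16 (CV_8UC3)
  per_elem = Evision.Mat.elemSize!(mat)   # bytes per element, channels included
  total    = Evision.Mat.total!(mat)      # number of elements (rows * cols)
  ```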
- Added OpenCV types:
  - `Evision.cv_cn_shift/0`, `Evision.cv_depth_max/0`, `Evision.cv_mat_depth_mask/0`, `Evision.cv_maketype/2`
  - `Evision.cv_8U/0`, `Evision.cv_8UC/1`, `Evision.cv_8UC1/0`, `Evision.cv_8UC2/0`, `Evision.cv_8UC3/0`, `Evision.cv_8UC4/0`
  - `Evision.cv_8S/0`, `Evision.cv_8SC/1`, `Evision.cv_8SC1/0`, `Evision.cv_8SC2/0`, `Evision.cv_8SC3/0`, `Evision.cv_8SC4/0`
  - `Evision.cv_16U/0`, `Evision.cv_16UC/1`, `Evision.cv_16UC1/0`, `Evision.cv_16UC2/0`, `Evision.cv_16UC3/0`, `Evision.cv_16UC4/0`
  - `Evision.cv_16S/0`, `Evision.cv_16SC/1`, `Evision.cv_16SC1/0`, `Evision.cv_16SC2/0`, `Evision.cv_16SC3/0`, `Evision.cv_16SC4/0`
  - `Evision.cv_32S/0`, `Evision.cv_32SC/1`, `Evision.cv_32SC1/0`, `Evision.cv_32SC2/0`, `Evision.cv_32SC3/0`, `Evision.cv_32SC4/0`
  - `Evision.cv_32F/0`, `Evision.cv_32FC/1`, `Evision.cv_32FC1/0`, `Evision.cv_32FC2/0`, `Evision.cv_32FC3/0`, `Evision.cv_32FC4/0`
  - `Evision.cv_64F/0`, `Evision.cv_64FC/1`, `Evision.cv_64FC1/0`, `Evision.cv_64FC2/0`, `Evision.cv_64FC3/0`, `Evision.cv_64FC4/0`
  - `Evision.cv_16F/0`, `Evision.cv_16FC/1`, `Evision.cv_16FC1/0`, `Evision.cv_16FC2/0`, `Evision.cv_16FC3/0`, `Evision.cv_16FC4/0`
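  These constants follow OpenCV's type encoding (from `opencv2/core/hal/interface.h`): the low `CV_CN_SHIFT` (= 3) bits hold the depth code and the bits above hold `channels - 1`. A minimal sketch of the arithmetic in plain Elixir (not the generated functions themselves):

  ```elixir
  import Bitwise

  # OpenCV type encoding: type = depth | ((channels - 1) << CV_CN_SHIFT)
  cv_cn_shift = 3
  cv_mat_depth_mask = (1 <<< cv_cn_shift) - 1   # 0b111

  cv_maketype = fn depth, cn ->
    (depth &&& cv_mat_depth_mask) ||| ((cn - 1) <<< cv_cn_shift)
  end

  # depth codes: CV_8U = 0, CV_32F = 5, CV_64F = 6
  IO.inspect(cv_maketype.(0, 3))  # CV_8UC3  -> 16, the raw_type of a 3-channel u8 image
  IO.inspect(cv_maketype.(5, 1))  # CV_32FC1 -> 5
  IO.inspect(cv_maketype.(6, 4))  # CV_64FC4 -> 30
  ```

  This also explains the `raw_type: 16` seen in the `%Evision.Mat{}` examples for 3-channel u8 images.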
Browse the Repository | Released Assets
- Default to `Evision.Backend` for `Evision.Nx.to_nx/2`.
- Fixed a class inheritance issue in `py_src/class_info.py`.
- Fixed a missing comma in the example livebooks' `Mix.install`. Thanks to @dbii.
- Added decision tree and random forest example.
Browse the Repository | Released Assets
- Fixed issues in restoring files from the precompiled package on macOS and Linux.
  - Paths are now quoted.
  - Use `cp -RPf` on macOS and `cp -a` on Linux.
- Fixed `destroyAllWindows` in NIF. It was generated as `erlang:destroyAllWindows/1` but it should be `erlang:destroyAllWindows/0`.
Browse the Repository | Released Assets
- Fixed transpose.
- Added x86_64 musl compilation CI test.
- Added a few musl precompilation targets:
  - `x86_64-linux-musl`
  - `aarch64-linux-musl`
  - `armv7l-linux-musleabihf`
  - `riscv64-linux-musl`
Browse the Repository | Released Assets
- Use OpenCV 4.6.0 by default.
- Deprecated the use of the `EVISION_PRECOMPILED_VERSION` environment variable. The version information will be implied by the tag:

  ```elixir
  def deps do
    [
      {:evision, "~> 0.1.1", github: "cocoa-xu/evision", tag: "v0.1.1"}
    ]
  end
  ```

  The value of the environment variable `EVISION_PREFER_PRECOMPILED` decides whether the precompiled artefacts will be used or not. From the next version (>= 0.1.2), `evision` will set `EVISION_PREFER_PRECOMPILED` to `true` by default.
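  Until then, opting in to the precompiled artefacts explicitly could look like this (a sketch; the variable must be set before `evision` is compiled):

  ```shell
  # Prefer precompiled binaries (this becomes the default from v0.1.2 onwards)
  export EVISION_PREFER_PRECOMPILED=true
  ```

  Then run `mix deps.get` and `mix compile` as usual.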
- Implemented a few `Nx` backend callbacks (the remaining ones will be implemented in the next release).
First release.