
Ianvs v0.1.0 release

1. Release the Ianvs distributed synergy AI benchmarking framework.

a) Release test environment management and configuration.
b) Release test case management and configuration.
c) Release test story management and configuration.
d) Release the open-source test case generation tool: use hyperparameter enumeration to fill in one configuration file and generate multiple test cases.
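The test case generation idea in d) can be sketched as a Cartesian product over hyperparameter lists. This is a minimal illustration, not Ianvs's actual implementation; the function and grid names are hypothetical:

```python
from itertools import product

def enumerate_test_cases(hyperparameters):
    """Expand a dict of hyperparameter lists into one test case per combination."""
    names = list(hyperparameters)
    return [dict(zip(names, values))
            for values in product(*(hyperparameters[n] for n in names))]

# One configuration file's worth of settings (hypothetical values):
grid = {"learning_rate": [0.1, 0.01], "momentum": [0.5, 0.9], "epochs": [10]}
cases = enumerate_test_cases(grid)
print(len(cases))  # 2 * 2 * 1 = 4 test cases from a single config
```

Each generated dict then drives one benchmark run, so a single file yields a full sweep.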

2. Release the PCB-AoI public dataset.

Release the PCB-AoI public dataset, along with its preprocessing and baseline algorithm projects. Ianvs is the first open-source site to host this dataset.

3. Support two new paradigms in test environments and test cases.

a) Test environments and test cases supporting the single-task learning paradigm.
b) Test environments and test cases supporting the incremental learning paradigm.

4. Release PCB-AoI benchmark cases based on the two new paradigms.

a) Release PCB-AoI benchmark cases based on single-task learning, including leaderboards and test reports.
b) Release PCB-AoI benchmark cases based on incremental learning, including leaderboards and test reports.

Ianvs v0.2.0 release

This version of Ianvs supports the following functions of unstructured lifelong learning:

1. Support lifelong learning across the entire lifecycle, including task definition, task assignment, unknown task recognition, and unknown task handling, with each module decoupled.

  • Support unknown task recognition, with usage examples based on semantic segmentation tasks.
  • Support multi-task joint inference, with usage examples based on object detection tasks.
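One simple way to realize unknown task recognition is to flag samples the current model is not confident about. This is a hedged sketch of that idea only; Ianvs's recognition module is pluggable, and the function name and threshold criterion here are hypothetical:

```python
def recognize_unknown(confidences, threshold=0.5):
    """Split sample indices into known/unknown by model confidence.

    Samples below the threshold are treated as belonging to an unknown
    task and routed to the unknown-task-handling module.
    """
    known = [i for i, c in enumerate(confidences) if c >= threshold]
    unknown = [i for i, c in enumerate(confidences) if c < threshold]
    return known, unknown

known, unknown = recognize_unknown([0.9, 0.3, 0.7, 0.1])
print(known, unknown)  # [0, 2] [1, 3]
```

Because the modules are decoupled, a different criterion (e.g., a learned out-of-distribution detector) can replace this threshold rule without touching the rest of the pipeline.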

2. Provide classic lifelong learning test metrics and support visualization of test results.

  • Support lifelong learning system metrics such as BWT and FWT.
  • Support visualization of lifelong learning results.
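BWT (backward transfer) and FWT (forward transfer) are standard lifelong learning metrics computed from the task accuracy matrix R, where R[i][j] is the accuracy on task j after training through task i. The sketch below follows the common definitions; the example numbers are made up for illustration:

```python
def bwt(R):
    """Backward transfer: average accuracy change on earlier tasks
    after the final task is learned (negative values mean forgetting)."""
    T = len(R)
    return sum(R[T - 1][j] - R[j][j] for j in range(T - 1)) / (T - 1)

def fwt(R, b):
    """Forward transfer: average accuracy on each task *before* training it,
    relative to a baseline b[j] (e.g., a randomly initialized model)."""
    T = len(R)
    return sum(R[j - 1][j] - b[j] for j in range(1, T)) / (T - 1)

# Hypothetical 3-task accuracy matrix and random-init baseline:
R = [[0.80, 0.20, 0.10],
     [0.70, 0.90, 0.30],
     [0.60, 0.85, 0.95]]
b = [0.10, 0.10, 0.10]
print(round(bwt(R), 3))    # -0.125  (some forgetting of tasks 0 and 1)
print(round(fwt(R, b), 3)) #  0.15   (earlier tasks help later ones)
```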

3. Provide real-world datasets and rich examples for lifelong learning testing, to better evaluate the effectiveness of lifelong learning algorithms in real environments.

  • Provide the cloud-robotics dataset on the Ianvs website.
  • Provide cloud-robotics semantic segmentation examples.

Ianvs v0.3.0 release

What's New in v0.3.0

Ianvs v0.3.0 brings powerful new LLM-related features, including comprehensive (1) LLM testing and benchmarking tools, (2) advanced cloud-edge collaborative inference paradigms, and (3) innovative algorithms tailored for large model optimization.

1. Support for LLM Testing and Benchmarks

Ianvs now supports robust testing for both locally deployed LLMs and public LLM APIs (e.g., OpenAI). This release introduces three specialized benchmarks for evaluating LLM capabilities in diverse scenarios.
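Supporting both local models and public APIs usually comes down to scoring any `prompt -> completion` callable against a set of test cases. This is a minimal sketch of that pattern, not Ianvs's actual benchmark interface; the function names, stub model, and substring-match scoring rule are hypothetical:

```python
def run_benchmark(model, cases):
    """Score a model callable on (prompt, expected_answer) pairs.

    'model' is any function str -> str, so the same harness works for a
    locally deployed LLM and for an OpenAI-style API client wrapper.
    """
    correct = sum(1 for prompt, expected in cases
                  if expected.lower() in model(prompt).lower())
    return correct / len(cases)

# A stub standing in for a real LLM backend:
echo_model = lambda prompt: "The answer is 4."
cases = [("What is 2+2?", "4"), ("What is 3+3?", "6")]
print(run_benchmark(echo_model, cases))  # 0.5
```

Real benchmarks typically replace the substring check with task-specific scoring (exact match, ROUGE, an LLM judge), but the callable abstraction stays the same.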

2. Enhanced Cloud-Edge Collaborative Inference

This release introduces new paradigms and algorithms for collaborative inference, optimizing cloud-edge cooperation and improving performance.
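A common form of cloud-edge collaborative inference is query routing: a small edge model answers when it is confident, and hard examples are escalated to a larger cloud model. The sketch below illustrates that pattern under assumed interfaces; the function signatures and threshold are hypothetical, not Ianvs's API:

```python
def collaborative_infer(sample, edge_model, cloud_model, threshold=0.85):
    """Route a query between edge and cloud by edge-model confidence.

    edge_model:  sample -> (label, confidence); cheap, runs locally.
    cloud_model: sample -> label; expensive, used only for hard examples.
    Returns (label, where_it_was_answered).
    """
    label, confidence = edge_model(sample)
    if confidence >= threshold:
        return label, "edge"        # confident: answer locally, save bandwidth
    return cloud_model(sample), "cloud"  # hard example: escalate

# Stub models for illustration:
edge = lambda x: ("cat", 0.9) if x == "easy" else ("dog", 0.4)
cloud = lambda x: "wolf"
print(collaborative_infer("easy", edge, cloud))  # ('cat', 'edge')
print(collaborative_infer("hard", edge, cloud))  # ('wolf', 'cloud')
```

The threshold trades accuracy against cloud traffic and latency, which is exactly the axis such benchmarks measure.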

3. Support for New Large Model Algorithms

Ianvs includes new algorithms to improve LLM performance and usability in various scenarios:

  • Personalized LLM Agent Algorithm: This algorithm supports single-task learning using the pretrained Bloom model, enabling personalized LLM operations. Explore the example and review the documentation.

  • Multimodal Large Model Joint Learning Algorithm: A joint learning algorithm for multimodal understanding with the pretrained RFNet model. Try the example and learn more in the documentation.

  • Unseen Task Processing Algorithm: Supports lifelong learning with pretrained models to handle unseen tasks effectively. Access the example and gain insights from the background documentation.