59 changes: 59 additions & 0 deletions content/docs/Konveyor/Kai/GettingStarted/_index.md
---
title: "Getting started with Konveyor AI"
date: 2025-12-13T14:58:52-06:00
draft: false
---
The Getting started section walks you through the
prerequisites, persistent volume requirements, installation, and
workflows that help you decide how you want to use Konveyor AI.

## Prerequisites

This section lists the prerequisites required to successfully use the
generative AI features in the Konveyor AI Visual Studio (VS) Code
extension.

Before you install Konveyor AI, you must:

- Install Java v17 or later

- Install Maven v3.9.9 or later

- Install Git and add it to the `$PATH` variable

- Install the Konveyor Operator 8.0.0

The Konveyor Operator is mandatory if you plan to enable the Solution
Server. The Solution Server provides context to the large language model (LLM) when it generates code changes. After you install the Operator, log in to the `konveyor-ai` project and enable the Solution Server in the Tackle custom resource (CR).

- Create an API key for an LLM.

You must enter the provider value and model name in the Tackle CR to
enable generative AI configuration in the Konveyor VS Code plugin.


| LLM Provider (Tackle CR value) | Large language model examples for Tackle CR configuration |
|--------------------------------|--------------------------------|
| OpenShift AI platform | Models deployed in OpenShift AI |
| OpenAI (openai) | `gpt-4`, `gpt-4o`, `gpt-4o-mini` |
| Azure OpenAI (azure_openai) | `gpt-4`, `gpt-35-turbo` |
| Amazon Bedrock (bedrock) | `anthropic.claude-3-5-sonnet-20241022-v2:0`, `meta.llama3-1-70b-instruct-v1:0` |
| Google Gemini (google) | `gemini-2.0-flash-exp`, `gemini-1.5-pro` |
| Ollama (ollama) | `llama3.1`, `codellama`, `mistral` |


**Note:** The availability of public LLM models is determined by the respective LLM provider.
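The Tackle CR fragment below sketches how the Solution Server and the provider and model values from the table might be wired together. The field names under `spec` are assumptions for illustration only; confirm the exact schema against the Tackle CRD shipped with your Konveyor Operator version.

```yaml
# Illustrative Tackle CR fragment. The spec field names are assumed,
# not authoritative; check your Konveyor Operator's Tackle CRD.
apiVersion: tackle.konveyor.io/v1alpha1
kind: Tackle
metadata:
  name: tackle
  namespace: konveyor-ai
spec:
  kai_solution_server_enabled: true   # enable the Solution Server (assumed field)
  kai_llm_provider: "openai"          # provider value from the table (assumed field)
  kai_llm_model: "gpt-4o-mini"        # model name from the table (assumed field)
```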

## Persistent Volume

The Solution Server component requires a backend database to store code changes from previous analyses.

If you plan to enable the Solution Server, you must create a `5Gi` `RWO` (ReadWriteOnce) persistent volume for the Konveyor AI database.
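A minimal sketch of the claim follows, assuming your cluster has a default storage class and that the database binds to the claim by name; the claim name here is hypothetical.

```yaml
# Illustrative PersistentVolumeClaim for the Konveyor AI database.
# The claim name is an assumption; match it to what your Konveyor
# installation expects.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: kai-db
  namespace: konveyor-ai
spec:
  accessModes:
    - ReadWriteOnce   # RWO, as required by the database
  resources:
    requests:
      storage: 5Gi
```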


## Installation

You can install the Konveyor AI extension from the [Konveyor AI release page in GitHub](https://github.com/konveyor/editor-extensions/releases/tag/v0.2.0).

You can use the Konveyor VS Code plug-in to perform analysis and
optionally enable Konveyor AI (KAI) generative AI capabilities. Use these capabilities to fix code issues before migrating the application to target technologies.
---
title: "How to use Konveyor AI"
date: 2025-12-15T14:58:52-06:00
draft: false
---
You can opt to use Konveyor AI features to request a code fix suggestion after running a static code analysis of an application. Konveyor AI augments the manual changes made to code throughout your organization in different migration waves and creates a context that is shared with a large language model (LLM).

The LLM suggests code resolutions based on the issue description, context, and previous examples of code changes to resolve issues.

To make code changes by using the LLM, you must enable the generative AI option.

You can use Konveyor AI in one of three ways after enabling generative AI in VS Code:

* Use the LLM to generate code fix suggestions.

* Use the LLM along with the Solution Server.

* Use the LLM with the agent mode.

The configurations that you complete before you request code fixes depend on how you prefer to request code resolutions.

**NOTE:** If you make any configuration change after enabling the generative AI settings in the extension, you must restart the extension for the change to take effect.

To use the LLM for code fix suggestions:

- Enable the generative AI option in the Konveyor plugin extension
settings.

- Activate the LLM provider in the `provider-settings.yaml` file.

- Start the RPC server to run the analysis and get code fix suggestions for the identified issues.

To use the Solution Server to provide an additional context for the LLM:

- Create a secret for your LLM key in the Kubernetes cluster.

- Enable the Solution Server in the Tackle custom resource (CR).

- Configure the LLM base URL and model in the Tackle CR.

- Enable the generative AI option in the Konveyor plugin extension
settings.

- Add the Solution Server configuration in the `settings.json` file.

- Configure the profile settings and activate the LLM provider in the `provider-settings.yaml` file.
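The first item in the Solution Server list above, creating a secret for your LLM key, can be sketched as the following manifest. The secret name and data key are hypothetical; use the names that your Tackle CR or Solution Server configuration expects.

```yaml
# Illustrative Secret holding the LLM API key (name and key are assumptions).
apiVersion: v1
kind: Secret
metadata:
  name: kai-api-keys
  namespace: konveyor-ai
type: Opaque
stringData:
  OPENAI_API_KEY: "sk-..."  # replace with your provider's API key
```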

To use the agent mode for code fix suggestions:

- Enable the generative AI and the agent mode in the Konveyor plugin extension settings.

- Configure the profile settings and activate the LLM provider in the `provider-settings.yaml` file.
---
title: "Generating code fix suggestion example"
date: 2025-12-14T14:58:52-06:00
draft: false
---
This example walks you through generating code fixes for a Java application that must be migrated to the target technology `quarkus`. To generate resolutions for issues in the code, the example uses the agent mode (Agentic AI) with `my-model`, a large language model (LLM) that you deployed in OpenShift AI.

## Procedure

1. Open the `my-Java` project in Visual Studio (VS) Code.

2. Download the Konveyor AI extension from the [Konveyor AI release page in GitHub](https://github.com/konveyor/editor-extensions/releases/tag/v0.2.0).

3. Open the Command Palette:

    1. Press `Ctrl+Shift+P` on Windows and Linux systems.

    2. Press `Cmd+Shift+P` on Mac systems.

4. Type `Preferences: Open Settings (UI)` in the Command Palette to
open the VS Code settings and select `Extensions > Konveyor AI`.

5. Select `Gen AI:Agent Mode`.

6. In the Konveyor AI extension, click `Open Analysis View`.

7. Type `Konveyor: Manage Analysis Profile` in the Command Palette to open
the analysis profile page.

8. Configure the following fields:

1. **Profile Name**: Type a profile name

2. **Target Technologies**: `quarkus`

3. **Custom Rules**: Select custom rules if you want to include
them while running the analysis. By default, Konveyor AI enables
**Use Default Rules** for `quarkus`.

9. Close the profile manager.

10. Type `Konveyor: Open the Gen AI model provider configuration file` in the
Command Palette.

11. Configure the following in the `provider-settings` file and close
it:
```yaml
models:
openshift-example-model: &active
environment:
OPENAI_API_KEY: "<Server's OPENAI_API_KEY>"
      CA_BUNDLE: "<Server's CA bundle path>"
provider: "ChatOpenAI"
args:
model: "my-model"
configuration:
baseURL: "https://<serving-name>-<data-science-project-name>.apps.konveyor-ai.example.com/v1"
```
You must change the `provider-settings` configuration if you plan to
use a different LLM provider.
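For example, to switch the extension to Amazon Bedrock, move the `&active` anchor to a Bedrock block. The block below is a sketch: the environment variable names follow common AWS conventions, and the `ChatBedrock` provider name is an assumption to verify against your extension version.

```yaml
models:
  openshift-example-model:
    environment:
      OPENAI_API_KEY: "<Server's OPENAI_API_KEY>"
    provider: "ChatOpenAI"
    args:
      model: "my-model"
  bedrock-example-model: &active  # the &active anchor now selects this block
    environment:
      AWS_ACCESS_KEY_ID: "<your access key ID>"
      AWS_SECRET_ACCESS_KEY: "<your secret access key>"
      AWS_DEFAULT_REGION: "us-east-1"
    provider: "ChatBedrock"       # assumed provider name; verify for your version
    args:
      model: "meta.llama3-1-70b-instruct-v1:0"
```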

12. Type `Konveyor AI: Open Analysis View` in the Command Palette.

13. Click **Start** to start the Konveyor AI RPC server.

14. Select the profile you configured.

15. Click **Run Analysis** to scan the Java application.

Konveyor AI identifies the issues in the code.

16. Click the solutions icon in an issue
to request suggestions to resolve the issue.

Konveyor AI streams the issue description, a preview of the code
changes that resolve the issue, and the file(s) in which the changes
are to be made.

You can review the code changes in the editor and accept or reject
the changes. If you accept the changes, Konveyor AI creates a new
file with the accepted code changes.

17. Click **Continue** to allow Konveyor AI to run a follow-up analysis.

This round of analysis detects lint issues, compilation issues, or
diagnostic issues that may have occurred when you accepted the
suggested code change.

    Repeat the review and accept or reject the resolutions. Konveyor AI
    continues to run iterations of the analysis, if you allow it, until
    all issues are resolved.
12 changes: 12 additions & 0 deletions content/docs/Konveyor/Kai/IDESettings/_index.md
---
title: "Using Konveyor AI in IDE"
date: 2025-12-06T14:58:52-06:00
draft: false
---
You must configure the following settings in the Konveyor extension:

- Visual Studio Code IDE settings.

- Profile settings that provide context before you request a code fix for a particular application.


---
title: "Configuring the LLM provider settings"
date: 2025-12-05T14:58:52-06:00
draft: false
---
After you install the Konveyor extension in Visual Studio (VS) Code, you must provide your large language model (LLM) credentials to activate the Konveyor AI settings in VS Code.

Konveyor AI settings are applied to all AI-assisted analysis that you perform by using the Konveyor extension. The extension settings can be broadly categorized into debugging and logging, Konveyor AI settings, analysis settings, and Solution Server settings.

## Prerequisites

In addition to the overall prerequisites:

- You completed the Solution Server configuration in the Tackle custom
  resource (CR) if you opt to use the Solution Server.

## Procedure

1. Go to the Konveyor AI settings in one of the following ways:

1. Click `Extensions > Konveyor Extension for VSCode > Settings`

    2. Press `Ctrl+Shift+P` (Windows/Linux) or `Cmd+Shift+P` (Mac) to
       open the Command Palette and enter
       `Preferences: Open Settings (UI)`. Go to `Extensions > Konveyor` to
       open the settings page.

2. Configure the settings described in the following table:

| Settings | Description |
|----------------|----------------------------------------------------|
| Log level | Set the log level for the Konveyor binary. The default log level is `debug`. The log level increases or decreases the verbosity of logs. |
| Analyzer path | Specify a custom Konveyor binary path. If you do not provide a path, the Konveyor extension uses the default path to the binary. |
| Auto Accept on Save | This option is enabled by default. When you accept the changes suggested by the LLM, the updated code is saved automatically in a new file. Disable this option if you want to manually save the new file after accepting the suggested code changes.|
| Gen AI:Enabled | This option is enabled by default. It enables you to get code fixes by using Konveyor AI with a large language model.|
| Gen AI: Agent mode | Enable the experimental Agentic AI flow for analysis. Konveyor runs an automated analysis of a file to identify issues and suggest resolutions. After you accept the solutions, Konveyor AI makes the changes in the code and re-analyzes the file. |
| Gen AI: Excluded diagnostic sources | Add diagnostic sources in the `settings.json` file. The issues generated by such diagnostic sources are excluded from the automated Agentic AI analysis.|
| Cache directory| Specify the path to a directory in your filesystem to store cached responses from the LLM. |
| Trace directory| Configure the absolute path to the directory that contains the saved LLM interaction.|
| Trace enabled| Enable to trace Konveyor communication with the LLM model. Traces are stored in the `trace` directory that you configured. |
| Demo mode | Enable to run Konveyor AI in demo mode that uses the LLM responses saved in the `cache` directory for analysis. |
| Solution Server:URL | Edit the Solution Server configuration. |
| Debug:Webview | Enable debug level logging for Webview message handling in VS Code.|

Solution server configuration:

* `enabled`: Enter a boolean value. Set `true` to connect the Solution Server client (the Konveyor extension) to the Solution Server.

* `url`: Configure the URL of the Solution Server endpoint.

* `auth`: The authentication settings allow you to configure a list of options to authenticate to the Solution Server.

    * `enabled`: Set to `true` to enable authentication. If you enable authentication, you must configure the Solution Server realm.

    * `insecure`: Set to `true` to skip SSL certificate verification when clients connect to the Solution Server. Set to `false` to allow only secure connections to the Solution Server.

    * `realm`: Enter the name of the Keycloak realm for the Solution Server. If you enabled authentication for the Solution Server, you must configure a Keycloak realm to allow clients to connect to the Solution Server. An administrator can configure SSL for the realm.
---
title: "Configuring the Konveyor AI profile settings"
date: 2025-12-03T14:58:52-06:00
draft: false
---
You can use the Visual Studio (VS) Code plugin to run an analysis to discover issues in the code. You can optionally enable Konveyor AI to get AI-assisted code suggestions.

To generate code changes using the Konveyor extension, you must
configure a profile that contains all the necessary configurations, such as source and target technologies and the API key to connect to your chosen large language model (LLM).

## Prerequisites

- You completed the Solution Server configuration in the Tackle custom
  resource (CR) if you opt to use the Solution Server.

- You opened a Java project in your VS Code workspace.

## Procedure

1. Open the `Konveyor View Analysis` page in either of the following
ways:

1. Click the book icon on the `Konveyor: Issues` pane of the
Konveyor extension.

    2. Press `Ctrl+Shift+P` or `Cmd+Shift+P` to open the Command
       Palette and enter `Konveyor: Open Analysis View`.

2. Click the settings button on the `Konveyor View Analysis` page to
configure a profile for your project. The `Get Ready to Analyze`
pane lists the following basic configurations required for an
analysis:

| Profile settings | Description |
|--------------------------|--------------|
| Select profile | Create a profile that you can reuse for multiple analyses. The profile name is part of the context provided to the LLM for analysis. |
| Configure label selector | A label selector filters rules for analysis based on the source or target technology. Specify one or more target or source technologies (for example, cloud-readiness). The Konveyor extension uses this configuration to determine the rules that are applied to a project during analysis. If you mentioned a new target or source technology in your custom rule, you can type that name to create and add the new item to the list. You must configure either target or source technologies before running an analysis. |
| Set rules | Enable default rules and select the custom rules that you want Konveyor to use for an analysis. You can use the custom rules in addition to the default rules. |
| Configure generative AI | This option opens the `provider-settings.yaml` file that contains API keys and other parameters for all supported LLMs. By default, Konveyor AI is configured to use the OpenAI LLM. To change the model, move the `&active` anchor to the desired block. Modify this file with the required arguments, such as the model and API key, to complete the setup. See [Configuring the LLM provider settings](../LLMSettings/ref_llm-provider-configurations.md). |


## Verification

After you complete the profile configuration, close the `Get Ready to Analyze` pane. You can verify that your configuration works by running an analysis.

---
title: "Configuring the Solution Server settings"
date: 2025-12-04T14:58:52-06:00
draft: false
---
You need a Keycloak realm and the Solution Server URL to connect the
Konveyor extension with the Solution Server.

## Prerequisites

- The Solution Server URL is available.

- An administrator configured the Keycloak realm for the Solution
Server.

## Procedure

1. Press `Ctrl+Shift+P` or `Cmd+Shift+P` to open the Command Palette
   and enter `Preferences: Open User Settings (JSON)`.

2. In the `settings.json` file, press `Ctrl+Space` to enable
   auto-completion for the Solution Server configurable fields.

3. Modify the following configuration as necessary:

```json
{
  "konveyor.solutionServer": {
    "url": "https://konveyor.apps.konveyor-ai.example.com/hub/services/kai/api",
    "enabled": true,
    "auth": {
      "enabled": true,
      "insecure": true,
      "realm": "tackle"
    }
  }
}
```

**NOTE:** When you enable Solution Server authentication for the first time, you must enter the `username` and `password` in the VS Code search bar.

**TIP:** Enter `Konveyor: Restart Solution Server` in the Command Palette to restart the Solution Server.