
Custom function to start LLM on demand (vimscript/lua compatibility) #120

@jchwenger

Hi there,

This is likely more a vimscript question than one specifically for this plug-in, but I thought this might be worth asking nonetheless.

Below are two versions of the invocation script for the plug-in (using vim-plug and ollama). The first, commented-out version works fine, but when I wrap the same setup in a function, so that I can call it whenever I want suggestions, it no longer works, and I'm not sure why; there is no error message so far... The goal is to have a way to turn the plug-in on and off on demand (see also the sketch after the snippet).

" LLM (neovim) {{{
" https://github.com/huggingface/llm.nvim
if has('nvim')

  " lua << EOF
  " require('llm').setup({
  "   api_token = nil, -- cf Install paragraph
  "   model = 'llama3.2:1b-text-fp16', -- the model ID, behavior depends on backend
  "   backend = 'ollama', -- backend ID, "huggingface" | "ollama" | "openai" | "tgi"
  "   url = 'http://localhost:11434', -- the http url of the backend
  "   tokens_to_clear = { '<|begin_of_text|>','<|end_of_text|>' }, -- tokens to remove from the model's output
  "   -- parameters that are added to the request body, values are arbitrary, you can set any field:value pair here it will be passed as is to the backend
  "   request_body = {
  "     options = { -- depending on backend: parameters = {
  "       num_predict = 1,
  "       temperature = 0.2,
  "       top_p = 0.95,
  "     },
  "   },
  "   -- set this if the model supports fill in the middle
  "   fim = {
  "     enabled = false,
  "     -- prefix = "<fim_prefix>",
  "     -- middle = "<fim_middle>",
  "     -- suffix = "<fim_suffix>",
  "   },
  "   debounce_ms = 150,
  "   accept_keymap = '<Tab>',
  "   dismiss_keymap = '<S-Tab>',
  "   tls_skip_verify_insecure = false,
  "   -- llm-ls configuration, cf llm-ls section
  "   lsp = {
  "     bin_path = nil,
  "     host = nil,
  "     port = nil,
  "     cmd_env = { LLM_LOG_LEVEL = 'DEBUG' }, -- or cmd_env = nil to set the log level of llm-ls
  "     version = '0.5.3',
  "   },
  "   context_window = 8192, -- cf Tokenizer paragraph
  "   enable_suggestions_on_startup = true, -- max number of tokens for the context window
  "   enable_suggestions_on_files = '*', -- pattern matching syntax to enable suggestions on specific files, either a string or a list of strings
  "   disable_url_path_completion = false, -- cf Backend
  " })
" EOF
  " " ↑ EOF must be at the beginning of the line

  function! LLMStart(name = "llama3.2:1b-text-fp16")
    echom "starting llm: " . a:name
    let g:llm_global_lm_name = a:name
    lua << EOF
    require('llm').setup({
      api_token = nil, -- cf Install paragraph
      model = vim.g.llm_global_lm_name, -- the model ID, behavior depends on backend
      backend = 'ollama', -- backend ID, "huggingface" | "ollama" | "openai" | "tgi"
      url = 'http://localhost:11434', -- the http url of the backend
      tokens_to_clear = { '<|begin_of_text|>','<|end_of_text|>' }, -- tokens to remove from the model's output
      -- parameters added to the request body; any field:value pair set here is passed as-is to the backend
      request_body = {
        options = { -- depending on backend: parameters = {
          num_predict = 1,
          temperature = 0.2,
          top_p = 0.95,
        },
      },
      -- set this if the model supports fill in the middle
      fim = {
        enabled = false,
        -- prefix = "<fim_prefix>",
        -- middle = "<fim_middle>",
        -- suffix = "<fim_suffix>",
      },
      debounce_ms = 150,
      accept_keymap = '<Tab>',
      dismiss_keymap = '<S-Tab>',
      tls_skip_verify_insecure = false,
      -- llm-ls configuration, cf llm-ls section
      lsp = {
        bin_path = nil,
        host = nil,
        port = nil,
        cmd_env = { LLM_LOG_LEVEL = 'DEBUG' }, -- sets the log level of llm-ls (or cmd_env = nil to leave it unset)
        version = '0.5.3',
      },
      context_window = 8192, -- max number of tokens for the context window (cf Tokenizer paragraph)
      enable_suggestions_on_startup = true,
      enable_suggestions_on_files = '*', -- pattern matching syntax to enable suggestions on specific files, either a string or a list of strings
      disable_url_path_completion = false, -- cf Backend
    })
EOF
  " ↑ EOF must be at the beginning of the line
      echom "started llm"
    endfunction

endif
" }}}

I guess I could achieve roughly the same effect by toggling the auto-suggestions (see the mapping sketch below), but another use case would be loading a particular model on demand, without having to open my vimrc and change the name there.
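
By toggling the auto-suggestions I mean something like the mapping below, assuming I am reading the README right and LLMToggleAutoSuggest is the command for that:

" toggle the plug-in's auto-suggestions on demand
nnoremap <leader>lt :LLMToggleAutoSuggest<CR>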

Thanks in advance!
