
405 for POST queries from browser, even with CORS ALLOW ORIGIN * #1134

Open
@scenaristeur

Description

I've been debugging this for 4 hours.
LocalAI is running on a laptop CPU,
and I want to POST to the API from a Vue.js app.
Everything works fine from Node:

const axios = require("axios");



console.log("\nHEALTH");
axios
  .get("http://localhost:8080/readyz")
  .then(function (response) {
    // if the request succeeds
    console.log("readyz:", response.data);
  })
  .catch(function (error) {
    // if the request fails
    console.log(error);
  })
  .finally(function () {
    // in all cases
  });

console.log("\nModels");
axios
  .get("http://localhost:8080/v1/models")
  .then(function (response) {
    // if the request succeeds
    console.log("models:", response.data);
  })
  .catch(function (error) {
    // if the request fails
    console.log(error);
  })
  .finally(function () {
    // in all cases
  });

  console.log("\nText Completion");
  axios.post('http://localhost:8080/v1/chat/completions', {
    model: 'ggml-gpt4all-j',
    "messages": [{"role": "user", "content": "Say this is a test!"}],
    "temperature": 0.7
  })
  .then(function (response) {
    console.log(response.data);
    console.log(response.data.choices[0].message);
  })
  .catch(function (error) {
    console.log(error);
  });

  console.log("\n Image Generation");
  axios.post('http://localhost:8080/v1/images/generations', {
    "prompt": "A cute baby sea otter",
    "size": "256x256"
  })
  .then(function (response) {
    console.log(response.data);
  })
  .catch(function (error) {
    console.log(error);
  });

This gives me the chat-completion response and the image generation output:

HEALTH

Models

Text Completion

 Image Generation
readyz: OK
models: {
  object: 'list',
  data: [
    { id: 'animagine-xl', object: 'model' },
    { id: 'text-embedding-ada-002', object: 'model' },
    { id: 'camembert-large', object: 'model' },
    { id: 'stablediffusion', object: 'model' },
    {
      id: 'thebloke__vigogne-2-7b-chat-ggml__vigogne-2-7b-chat.ggmlv3.q8_0.bin',
      object: 'model'
    },
    { id: 'camembert-large', object: 'model' },
    { id: 'ggml-gpt4all-j', object: 'model' }
  ]
}
{
  object: 'chat.completion',
  model: 'ggml-gpt4all-j',
  choices: [ { index: 0, finish_reason: 'stop', message: [Object] } ],
  usage: { prompt_tokens: 0, completion_tokens: 0, total_tokens: 0 }
}
{
  role: 'assistant',
  content: "I'm sorry, I don't understand what you mean. Can you please provide more context or clarify your question?"
}
{
  data: [
    {
      embedding: null,
      index: 0,
      url: 'http://localhost:8080/generated-images/b643464075870.png'
    }
  ],
  usage: { prompt_tokens: 0, completion_tokens: 0, total_tokens: 0 }
}
But in the browser, only the GET requests work: nothing for the POST, whether with basic fetch or with axios.

With axios, I get a 405 CORS error, even with "CORS_ALLOW_ORIGINS=*" set in .env and in docker-compose.
With fetch, I get a 422 Unprocessable Entity error.
Has anyone succeeded in doing this?
Could someone give me a simple browser example for getting a chat completion and an image generation from a browser page?
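
For reference, here is the kind of minimal browser code I'm trying to get working. This is just a sketch, assuming the body has to go through JSON.stringify with an explicit Content-Type: application/json header (maybe the missing header is what triggers the 422 from fetch?), and assuming the 405 from axios comes from the CORS preflight (OPTIONS) request being rejected — I'm not sure about either.

// Minimal browser sketch using plain fetch (no axios).
// Assumption: the request body must be JSON.stringify'd and sent with
// an explicit "Content-Type: application/json" header.
const BASE_URL = "http://localhost:8080"; // wherever LocalAI is reachable

async function chatCompletion() {
  const response = await fetch(`${BASE_URL}/v1/chat/completions`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "ggml-gpt4all-j",
      messages: [{ role: "user", content: "Say this is a test!" }],
      temperature: 0.7,
    }),
  });
  if (!response.ok) throw new Error(`HTTP ${response.status}`);
  const data = await response.json();
  console.log(data.choices[0].message);
}

async function imageGeneration() {
  const response = await fetch(`${BASE_URL}/v1/images/generations`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      prompt: "A cute baby sea otter",
      size: "256x256",
    }),
  });
  if (!response.ok) throw new Error(`HTTP ${response.status}`);
  const data = await response.json();
  console.log(data.data[0].url);
}

chatCompletion().catch(console.error);
imageGeneration().catch(console.error);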

Labels

bug (Something isn't working), kind/documentation (Improvements or additions to documentation), kind/question (Further information is requested)
