
Commit 1695578

authored May 16, 2024
✍️ GPT-4o in under 3 minutes (#20)
- Created new blog post entry "✍️ GPT-4o in under 3 minutes"
- Fixed youtube video missing base url
1 parent aa69b33 commit 1695578

File tree

2 files changed (+98, -2 lines)

 

src/_includes/blog.html

Lines changed: 2 additions & 2 deletions
@@ -19,8 +19,8 @@ <h3 class="text-4xl lg:text-6xl font-bold blog-title">{{ title }}</h3>
   <div class="blog-content">
     {% if video %}
-    <iframe width="560" height="315" src="{{ video }}" title="YouTube video player" frameborder="0"
-      allow="autoplay; clipboard-write; encrypted-media; picture-in-picture"
+    <iframe width="560" height="315" src="https://www.youtube.com/embed/{{ video }}" title="YouTube video player"
+      frameborder="0" allow="autoplay; clipboard-write; encrypted-media; picture-in-picture"
       referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>
     {% endif %}

Lines changed: 96 additions & 0 deletions
@@ -0,0 +1,96 @@
---
title: "GPT-4o in under 3 minutes"
date: 2024-05-16
description: |
  A quick introduction to how to use the newly released GPT-4o model.
tags:
- openai
- ai
video: "4GPXwaWyxX8?si=m60rvMjzd7D9Qq5o"
---
Now that [GPT-4o](https://openai.com/index/hello-gpt-4o/) has been released, let’s look at how we can implement a custom GPT-4o assistant.

To start, go to an empty directory, create an `npm` project, and install the dependencies:
- `npm init -y`
- `npm install openai`
Because I’m using ES module `import` syntax and running the code directly with Node, I’m adding `"type": "module"` to my package.json:
```json
{
  "name": "tutorial-gpt-4o",
  "type": "module",
  "version": "1.0.0",
  ...
}
```
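
Side note: if you’d rather skip `"type": "module"`, the `openai` package can also be loaded with CommonJS `require` instead. A minimal sketch of that alternative:

```js
// CommonJS alternative: no "type": "module" needed in package.json
const OpenAI = require("openai");

const openai = new OpenAI({ apiKey: "YOUR-API-KEY" });
```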
Now that all of our preparation is done, it is time to create our script.

```js
// We import the OpenAI library
import { OpenAI } from "openai";

// We set our key in a variable
// Pssss.. you should use an environment variable
const key = "YOUR-API-KEY";

async function main() {
  // The OpenAI constructor accepts an `apiKey` string parameter
  const openai = new OpenAI({ apiKey: key });

  console.log("Writing message");

  // We call the chat completions endpoint
  const chat = await openai.chat.completions.create({
    // This is where we assign GPT-4o
    model: "gpt-4o",
    messages: [{
      // Every message has a `role` and `content`
      role: "user", content: "What color is the sky?"
    }]
  });

  // Finally we log our response
  console.log("Response:", chat.choices[0].message.content);
}

// Don't forget to call your function!
main();
```
If you run `node index.js`, you’ll get an output similar to:
> Writing message
> Response: Hello! I'm just a computer program, so I don't have feelings, but thanks for asking. How can I assist you today?
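
Since hard-coding the key is only fine for a quick test, here is a minimal variant that reads it from an environment variable instead (assuming you have exported `OPENAI_API_KEY` in your shell):

```js
import { OpenAI } from "openai";

// Pass the key from the environment instead of hard-coding it
// Run with: OPENAI_API_KEY="sk-..." node index.js
const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
```

If I remember the library’s defaults correctly, the client also falls back to `OPENAI_API_KEY` when you omit `apiKey` entirely, so `new OpenAI()` works too.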
## Deep diving into the types
Now let’s analyze the code in detail, as some parts may be a little confusing:
```js
messages: [{
  role: "user", content: "What color is the sky?"
}]
```
Every `message` object in the array has two values. The first one is `role`, which represents _who_ is sending the message.
We are using `user`, which tells the model that it has to reply. We can also use `system`, which carries the core instructions for the model, but the model will still wait for a follow-up message coming from `user`.

A working example would be the following:
```js
messages: [
  { role: "system", content: "You are a meteorologist" },
  { role: "user", content: "Why does it rain?" }
]
```

By setting the `system` instructions, the model now knows that, for every query it receives, it needs to remember that it _is a meteorologist_.
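
For completeness, here is a sketch of how that `messages` array slots into the same `create()` call from the script above (reusing the earlier `openai` client):

```js
// Same request as before, now with a system message steering the model
const chat = await openai.chat.completions.create({
  model: "gpt-4o",
  messages: [
    { role: "system", content: "You are a meteorologist" },
    { role: "user", content: "Why does it rain?" }
  ]
});

console.log(chat.choices[0].message.content);
```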
The other confusing element could be the value that we receive:
```js
chat.choices[0].message.content
```

Every time we call the completions endpoint it returns an array of choices. By default, **we will always receive a single choice**, so unless you manually change this (because you want to compare results), you _should always take element 0 of the array_.
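
If you do want several answers to compare, the chat completions endpoint takes an `n` parameter for the number of choices to generate. A quick sketch of that case (the rest of this post sticks with the default of one):

```js
// Ask for three alternative answers and compare them
const chat = await openai.chat.completions.create({
  model: "gpt-4o",
  n: 3,
  messages: [{ role: "user", content: "What color is the sky?" }]
});

// Each element of `choices` carries its own `message`
chat.choices.forEach((choice, i) => {
  console.log(`Choice ${i}:`, choice.message.content);
});
```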
`message` is a wrapper with three values: `content`, which is the answer; `role`, which we just saw before and which will be `assistant` in this case; and finally `tool_calls`, which we won’t cover here (you can see it in [the documentation](https://platform.openai.com/docs/guides/function-calling)).
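
Put together, the object you get back looks roughly like this (a sketch of the shape, with the answer text shortened, not verbatim API output):

```js
// Approximate shape of chat.choices[0].message
const message = {
  role: "assistant",                        // the model is the one replying
  content: "The sky usually looks blue...", // the answer we logged earlier
  tool_calls: undefined                     // only present when the model calls a tool
};
```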
And that’s it! You can now start experimenting with your model.
