Merged
6 changes: 3 additions & 3 deletions README.md
Original file line number Diff line number Diff line change
@@ -8,7 +8,7 @@
## Quick Start

This short tutorial will set you up to start using Apify SDK in a minute or two.
- If you want to learn more, proceed to the [Apify Platform](https://docs.apify.com/sdk/js/docs/guides/apify-platform)
+ If you want to learn more, proceed to the [Apify Platform](https://docs.apify.com/sdk/js/docs/concepts/actor-lifecycle)
guide that will take you step by step through running your Actor on Apify's platform.

Apify SDK requires [Node.js](https://nodejs.org/en/) 16 or later. Add Apify SDK to any Node.js project by running:
@@ -33,7 +33,7 @@
await Actor.setValue('OUTPUT', {
await Actor.exit();
```

- > You can also install the [`crawlee`](https://npmjs.org/crawlee) module, as it now provides the crawlers that were previously exported by Apify SDK. If you don't plan to use crawlers in your Actors, then you don't need to install it. Keep in mind that neither `playwright` nor `puppeteer` are bundled with `crawlee` in order to reduce install size and allow greater flexibility. That's why we manually install it with NPM. You can choose one, both, or neither. For more information and example please check [`documentation.`](https://docs.apify.com/sdk/js/docs/guides/apify-platform#running-crawlee-code-as-an-actor)
+ > You can also install the [`crawlee`](https://npmjs.org/crawlee) module, as it now provides the crawlers that were previously exported by Apify SDK. If you don't plan to use crawlers in your Actors, then you don't need to install it. Keep in mind that neither `playwright` nor `puppeteer` are bundled with `crawlee` in order to reduce install size and allow greater flexibility. That's why we manually install it with NPM. You can choose one, both, or neither. For more information and example please check [`documentation.`](https://docs.apify.com/sdk/js/docs/concepts/actor-lifecycle#running-crawlee-code-as-an-actor)

## Support

@@ -42,7 +42,7 @@
For questions, you can ask on [Stack Overflow](https://stackoverflow.com/questio

## Upgrading

- Visit the [Upgrading Guide](https://docs.apify.com/sdk/js/docs/upgrading) to find out what changes you might want to make, and, if you encounter any issues, join our [Discord server](https://discord.gg/jyEM2PRvMU) for help!
+ Visit the [Upgrading Guide](https://docs.apify.com/sdk/js/docs/upgrading/upgrading-to-v3) to find out what changes you might want to make, and, if you encounter any issues, join our [Discord server](https://discord.gg/jyEM2PRvMU) for help!

## Contributing

4 changes: 2 additions & 2 deletions docs/02_concepts/02_request_storage.mdx
@@ -53,7 +53,7 @@
const crawler = new CheerioCrawler({
});
```

- To see more detailed example of how to use the request queue with a crawler, see the [Puppeteer Crawler](/docs/examples/puppeteer-crawler) example.
+ To see more detailed example of how to use the request queue with a crawler, see the [Puppeteer Crawler](/docs/guides/puppeteer-crawler) example.

## Request list

@@ -86,7 +86,7 @@
const crawler = new PuppeteerCrawler({
});
```

- To see more detailed example of how to use the request list with a crawler, see the [Puppeteer with proxy](/docs/examples/puppeteer-with-proxy) example.
+ To see more detailed example of how to use the request list with a crawler, see the [Puppeteer with proxy](/docs/guides/puppeteer-with-proxy) example.

## Which one to choose?

2 changes: 1 addition & 1 deletion docs/03_guides/call_actor.mdx
@@ -11,7 +11,7 @@
and sends it to your email using the [`apify/send-mail`](https://apify.com/apify

To make the example work, you'll need an [Apify account](https://console.apify.com/).
Go to the [Settings - Integrations](https://console.apify.com/account?tab=integrations) page to obtain your API token
- and set it to the [`APIFY_TOKEN`](/docs/guides/environment-variables#APIFY_TOKEN) environment variable,
+ and set it to the [`APIFY_TOKEN`](/docs/concepts/environment-variables#apify_token) environment variable,
or run the script using the Apify CLI. If you deploy this actor to the Apify platform, you can do things like set
up a scheduler to run your actor early in the morning.

2 changes: 1 addition & 1 deletion docs/03_guides/puppeteer_with_proxy.mdx
@@ -9,7 +9,7 @@
import CrawlSource from '!!raw-loader!roa-loader!./puppeteer_with_proxy.ts';

This example demonstrates how to load pages in headless Chrome / Puppeteer over [Apify Proxy](https://docs.apify.com/proxy).

- To make it work, you'll need an Apify account with access to the proxy. Visit the [Apify platform introduction](/docs/guides/apify-platform) to find how to log into your account from the SDK.
+ To make it work, you'll need an Apify account with access to the proxy. Visit the [Apify platform introduction](/docs/concepts/actor-lifecycle) to find how to log into your account from the SDK.

:::tip

2 changes: 1 addition & 1 deletion src/actor.ts
@@ -201,7 +201,7 @@
export interface ApifyEnv {
* Defines the path to a local directory where KeyValueStore, Dataset, and RequestQueue
* store their data. Typically, it is set to ./storage. If omitted, you should define the
* APIFY_TOKEN environment variable instead. See more info on combination of this and
-  * APIFY_TOKEN [here](https://docs.apify.com/sdk/js/docs/guides/environment-variables#combinations-of-apify_local_storage_dir-and-apify_token)(CRAWLEE_STORAGE_DIR)
+  * APIFY_TOKEN [here](https://docs.apify.com/sdk/js/docs/concepts/environment-variables#combinations-of-apify_local_storage_dir-and-apify_token)(CRAWLEE_STORAGE_DIR)
*/
localStorageDir: string | null;

Expand Down
11 changes: 2 additions & 9 deletions website/docusaurus.config.js
@@ -47,17 +47,10 @@
module.exports = {
items: [
{
type: 'doc',
- docId: 'overview',
+ docId: 'introduction/introduction',
label: 'Docs',
position: 'left',
- activeBaseRegex: 'guides|overview',
- },
- {
- type: 'doc',
- docId: '/examples',
- label: 'Examples',
- position: 'left',
- activeBaseRegex: 'examples',
+ activeBaseRegex: 'guides|overview|introduction',
},
{
to: 'reference',
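The `activeBaseRegex` change above controls which routes keep the "Docs" navbar item highlighted in Docusaurus. A quick sketch in plain Node of how the updated pattern matches the new `introduction` routes while the old one did not (the paths below are hypothetical examples, not taken from this PR):

```javascript
// Docusaurus treats activeBaseRegex as an unanchored regular expression
// tested against the current route path.
const oldRegex = new RegExp('guides|overview');
const newRegex = new RegExp('guides|overview|introduction');

const path = '/docs/introduction/introduction';
console.log(oldRegex.test(path)); // false — old pattern misses the new docId
console.log(newRegex.test(path)); // true
console.log(newRegex.test('/docs/guides/apify-platform')); // true — guides still match
```

This is why the regex had to grow alongside the `docId` change: without the extra `introduction` alternative, the navbar item would lose its active state on the new landing page.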
2 changes: 1 addition & 1 deletion website/versioned_docs/version-1.3/examples/call_actor.md
@@ -10,7 +10,7 @@
sends it to your email using the [`apify/send-mail`](https://apify.com/apify/sen

To make the example work, you'll need an [Apify account](https://my.apify.com/). Go to the
[Account - Integrations](https://my.apify.com/account#/integrations) page to obtain your API token and set it to the
- [`APIFY_TOKEN`](/docs/guides/environment-variables#APIFY_TOKEN) environment variable, or run the script using the Apify CLI. If you deploy this actor
+ [`APIFY_TOKEN`](/docs/concepts/environment-variables#apify_token) environment variable, or run the script using the Apify CLI. If you deploy this actor
to the Apify Cloud, you can do things like set up a scheduler to run your actor early in the morning.

To see what other actors are available, visit the [Apify Store](https://apify.com/store).
@@ -5,7 +5,7 @@
id: puppeteer-with-proxy
---

This example demonstrates how to load pages in headless Chrome / Puppeteer over [Apify Proxy](https://docs.apify.com/proxy). To make it work, you'll
- need an Apify account with access to the proxy. Visit the [Apify platform introduction](/docs/guides/apify-platform) to find how to log into your
+ need an Apify account with access to the proxy. Visit the [Apify platform introduction](/docs/concepts/actor-lifecycle) to find how to log into your
account from the SDK.

> To run this example on the Apify Platform, select the `apify/actor-node-puppeteer-chrome` image for your Dockerfile.
6 changes: 3 additions & 3 deletions website/versioned_docs/version-1.3/guides/request_storage.md
@@ -5,7 +5,7 @@
id: request-storage
---

The Apify SDK has several request storage types that are useful for specific tasks. The requests are stored either on local disk to a directory defined by the
- `APIFY_LOCAL_STORAGE_DIR` environment variable, or on the [Apify platform](/docs/guides/apify-platform) under the user account identified by the API token defined by the `APIFY_TOKEN` environment variable. If neither of these variables is defined, by default Apify SDK sets `APIFY_LOCAL_STORAGE_DIR` to `./apify_storage` in the current working directory and prints a warning.
+ `APIFY_LOCAL_STORAGE_DIR` environment variable, or on the [Apify platform](/docs/concepts/actor-lifecycle) under the user account identified by the API token defined by the `APIFY_TOKEN` environment variable. If neither of these variables is defined, by default Apify SDK sets `APIFY_LOCAL_STORAGE_DIR` to `./apify_storage` in the current working directory and prints a warning.

Typically, you will be developing the code on your local computer and thus set the `APIFY_LOCAL_STORAGE_DIR` environment variable. Once the code is ready, you will deploy it to the Apify platform, where it will automatically set the `APIFY_TOKEN` environment variable and thus use cloud storage. No code changes are needed.

@@ -51,7 +51,7 @@
const crawler = new Apify.CheerioCrawler({
});
```

- To see more detailed example of how to use the request queue with a crawler, see the [Puppeteer Crawler](/docs/examples/puppeteer-crawler) example.
+ To see more detailed example of how to use the request queue with a crawler, see the [Puppeteer Crawler](/docs/guides/puppeteer-crawler) example.

## Request list

@@ -84,7 +84,7 @@
const crawler = new Apify.PuppeteerCrawler({
});
```

- To see more detailed example of how to use the request list with a crawler, see the [Puppeteer with proxy](/docs/examples/puppeteer-with-proxy) example.
+ To see more detailed example of how to use the request list with a crawler, see the [Puppeteer with proxy](/docs/guides/puppeteer-with-proxy) example.

## Which one to choose?

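The request-storage pages touched above all describe the same resolution rule: local storage if `APIFY_LOCAL_STORAGE_DIR` is set, cloud storage if only `APIFY_TOKEN` is set, otherwise a local default plus a warning. A minimal sketch of that documented fallback, assuming a made-up helper name (`resolveStorage` is illustrative, not the SDK's actual code; the v1/v2 default `./apify_storage` is used, v3 docs use `./storage`):

```javascript
// Illustrative reimplementation of the documented storage fallback (not SDK source).
// Returns which storage backend the SDK would use for a given environment.
function resolveStorage(env) {
    if (env.APIFY_LOCAL_STORAGE_DIR) {
        // Explicit local directory wins.
        return { type: 'local', dir: env.APIFY_LOCAL_STORAGE_DIR, warning: false };
    }
    if (env.APIFY_TOKEN) {
        // Token without a local dir means Apify platform (cloud) storage.
        return { type: 'cloud', dir: null, warning: false };
    }
    // Neither variable set: default to ./apify_storage and print a warning.
    return { type: 'local', dir: './apify_storage', warning: true };
}

console.log(resolveStorage({ APIFY_LOCAL_STORAGE_DIR: './my_storage' }));
// { type: 'local', dir: './my_storage', warning: false }
console.log(resolveStorage({}));
// { type: 'local', dir: './apify_storage', warning: true }
```

This mirrors the docs' deployment story: develop locally with `APIFY_LOCAL_STORAGE_DIR`, then deploy to the platform where `APIFY_TOKEN` is set automatically, with no code changes.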
2 changes: 1 addition & 1 deletion website/versioned_docs/version-2.3/examples/call_actor.md
@@ -11,7 +11,7 @@
and sends it to your email using the [`apify/send-mail`](https://apify.com/apify

To make the example work, you'll need an [Apify account](https://my.apify.com/).
Go to the [Account - Integrations](https://my.apify.com/account#/integrations) page to obtain your API token
- and set it to the [`APIFY_TOKEN`](/docs/guides/environment-variables#APIFY_TOKEN) environment variable,
+ and set it to the [`APIFY_TOKEN`](/docs/concepts/environment-variables#apify_token) environment variable,
or run the script using the Apify CLI. If you deploy this actor to the Apify Cloud, you can do things like set
up a scheduler to run your actor early in the morning.

@@ -6,7 +6,7 @@
title: Puppeteer with proxy
This example demonstrates how to load pages in headless Chrome / Puppeteer
over [Apify Proxy](https://docs.apify.com/proxy).
To make it work, you'll need an Apify account with access to the proxy.
- Visit the [Apify platform introduction](/docs/guides/apify-platform) to find
+ Visit the [Apify platform introduction](/docs/concepts/actor-lifecycle) to find
how to log into your account from the SDK.

> To run this example on the Apify Platform, select the `apify/actor-node-puppeteer-chrome` image for your Dockerfile.
6 changes: 3 additions & 3 deletions website/versioned_docs/version-2.3/guides/request_storage.md
@@ -4,7 +4,7 @@
title: Request Storage
---

The Apify SDK has several request storage types that are useful for specific tasks. The requests are stored either on local disk to a directory defined by the
- `APIFY_LOCAL_STORAGE_DIR` environment variable, or on the [Apify platform](/docs/guides/apify-platform) under the user account identified by the API token defined by the `APIFY_TOKEN` environment variable. If neither of these variables is defined, by default Apify SDK sets `APIFY_LOCAL_STORAGE_DIR` to `./apify_storage` in the current working directory and prints a warning.
+ `APIFY_LOCAL_STORAGE_DIR` environment variable, or on the [Apify platform](/docs/concepts/actor-lifecycle) under the user account identified by the API token defined by the `APIFY_TOKEN` environment variable. If neither of these variables is defined, by default Apify SDK sets `APIFY_LOCAL_STORAGE_DIR` to `./apify_storage` in the current working directory and prints a warning.

Typically, you will be developing the code on your local computer and thus set the `APIFY_LOCAL_STORAGE_DIR` environment variable. Once the code is ready, you will deploy it to the Apify platform, where it will automatically set the `APIFY_TOKEN` environment variable and thus use cloud storage. No code changes are needed.

@@ -50,7 +50,7 @@
const crawler = new Apify.CheerioCrawler({
});
```

- To see more detailed example of how to use the request queue with a crawler, see the [Puppeteer Crawler](/docs/examples/puppeteer-crawler) example.
+ To see more detailed example of how to use the request queue with a crawler, see the [Puppeteer Crawler](/docs/guides/puppeteer-crawler) example.

## Request list

@@ -83,7 +83,7 @@
const crawler = new Apify.PuppeteerCrawler({
});
```

- To see more detailed example of how to use the request list with a crawler, see the [Puppeteer with proxy](/docs/examples/puppeteer-with-proxy) example.
+ To see more detailed example of how to use the request list with a crawler, see the [Puppeteer with proxy](/docs/guides/puppeteer-with-proxy) example.

## Which one to choose?

2 changes: 1 addition & 1 deletion website/versioned_docs/version-3.0/examples/call_actor.mdx
@@ -11,7 +11,7 @@
and sends it to your email using the [`apify/send-mail`](https://apify.com/apify

To make the example work, you'll need an [Apify account](https://console.apify.com/).
Go to the [Settings - Integrations](https://console.apify.com/account?tab=integrations) page to obtain your API token
- and set it to the [`APIFY_TOKEN`](/docs/guides/environment-variables#APIFY_TOKEN) environment variable,
+ and set it to the [`APIFY_TOKEN`](/docs/concepts/environment-variables#apify_token) environment variable,
or run the script using the Apify CLI. If you deploy this actor to the Apify Cloud, you can do things like set
up a scheduler to run your actor early in the morning.

@@ -9,7 +9,7 @@
import CrawlSource from '!!raw-loader!./puppeteer_with_proxy.ts';

This example demonstrates how to load pages in headless Chrome / Puppeteer over [Apify Proxy](https://docs.apify.com/proxy).

- To make it work, you'll need an Apify account with access to the proxy. Visit the [Apify platform introduction](/docs/guides/apify-platform) to find how to log into your account from the SDK.
+ To make it work, you'll need an Apify account with access to the proxy. Visit the [Apify platform introduction](/docs/concepts/actor-lifecycle) to find how to log into your account from the SDK.

:::tip

6 changes: 3 additions & 3 deletions website/versioned_docs/version-3.0/guides/request_storage.mdx
@@ -7,7 +7,7 @@
import ApiLink from '@site/src/components/ApiLink';
import { CrawleeApiLink } from '@site/src/components/CrawleeLinks';

The Apify SDK has several request storage types that are useful for specific tasks. The requests are stored either on local disk to a directory defined by the
- `APIFY_LOCAL_STORAGE_DIR` environment variable, or on the [Apify platform](/docs/guides/apify-platform) under the user account identified by the API token defined by the `APIFY_TOKEN` environment variable. If neither of these variables is defined, by default Apify SDK sets `APIFY_LOCAL_STORAGE_DIR` to `./storage` in the current working directory and prints a warning.
+ `APIFY_LOCAL_STORAGE_DIR` environment variable, or on the [Apify platform](/docs/concepts/actor-lifecycle) under the user account identified by the API token defined by the `APIFY_TOKEN` environment variable. If neither of these variables is defined, by default Apify SDK sets `APIFY_LOCAL_STORAGE_DIR` to `./storage` in the current working directory and prints a warning.

Typically, you will be developing the code on your local computer and thus set the `APIFY_LOCAL_STORAGE_DIR` environment variable. Once the code is ready, you will deploy it to the Apify platform, where it will automatically set the `APIFY_TOKEN` environment variable and thus use cloud storage. No code changes are needed.

@@ -53,7 +53,7 @@
const crawler = new CheerioCrawler({
});
```

- To see more detailed example of how to use the request queue with a crawler, see the [Puppeteer Crawler](/docs/examples/puppeteer-crawler) example.
+ To see more detailed example of how to use the request queue with a crawler, see the [Puppeteer Crawler](/docs/guides/puppeteer-crawler) example.

## Request list

@@ -86,7 +86,7 @@
const crawler = new PuppeteerCrawler({
});
```

- To see more detailed example of how to use the request list with a crawler, see the [Puppeteer with proxy](/docs/examples/puppeteer-with-proxy) example.
+ To see more detailed example of how to use the request list with a crawler, see the [Puppeteer with proxy](/docs/guides/puppeteer-with-proxy) example.

## Which one to choose?

2 changes: 1 addition & 1 deletion website/versioned_docs/version-3.1/examples/call_actor.mdx
@@ -11,7 +11,7 @@
and sends it to your email using the [`apify/send-mail`](https://apify.com/apify

To make the example work, you'll need an [Apify account](https://console.apify.com/).
Go to the [Settings - Integrations](https://console.apify.com/account?tab=integrations) page to obtain your API token
- and set it to the [`APIFY_TOKEN`](/docs/guides/environment-variables#APIFY_TOKEN) environment variable,
+ and set it to the [`APIFY_TOKEN`](/docs/concepts/environment-variables#apify_token) environment variable,
or run the script using the Apify CLI. If you deploy this Actor to the Apify Cloud, you can do things like set
up a scheduler to run your Actor early in the morning.

@@ -9,7 +9,7 @@
import CrawlSource from '!!raw-loader!roa-loader!./puppeteer_with_proxy.ts';

This example demonstrates how to load pages in headless Chrome / Puppeteer over [Apify Proxy](https://docs.apify.com/proxy).

- To make it work, you'll need an Apify account with access to the proxy. Visit the [Apify platform introduction](/docs/guides/apify-platform) to find how to log into your account from the SDK.
+ To make it work, you'll need an Apify account with access to the proxy. Visit the [Apify platform introduction](/docs/concepts/actor-lifecycle) to find how to log into your account from the SDK.

:::tip

6 changes: 3 additions & 3 deletions website/versioned_docs/version-3.1/guides/request_storage.mdx
@@ -7,7 +7,7 @@
import ApiLink from '@site/src/components/ApiLink';
import { CrawleeApiLink } from '@site/src/components/CrawleeLinks';

The Apify SDK has several request storage types that are useful for specific tasks. The requests are stored either on local disk to a directory defined by the
- `APIFY_LOCAL_STORAGE_DIR` environment variable, or on the [Apify platform](/docs/guides/apify-platform) under the user account identified by the API token defined by the `APIFY_TOKEN` environment variable. If neither of these variables is defined, by default Apify SDK sets `APIFY_LOCAL_STORAGE_DIR` to `./storage` in the current working directory and prints a warning.
+ `APIFY_LOCAL_STORAGE_DIR` environment variable, or on the [Apify platform](/docs/concepts/actor-lifecycle) under the user account identified by the API token defined by the `APIFY_TOKEN` environment variable. If neither of these variables is defined, by default Apify SDK sets `APIFY_LOCAL_STORAGE_DIR` to `./storage` in the current working directory and prints a warning.

Typically, you will be developing the code on your local computer and thus set the `APIFY_LOCAL_STORAGE_DIR` environment variable. Once the code is ready, you will deploy it to the Apify platform, where it will automatically set the `APIFY_TOKEN` environment variable and thus use cloud storage. No code changes are needed.

@@ -53,7 +53,7 @@
const crawler = new CheerioCrawler({
});
```

- To see more detailed example of how to use the request queue with a crawler, see the [Puppeteer Crawler](/docs/examples/puppeteer-crawler) example.
+ To see more detailed example of how to use the request queue with a crawler, see the [Puppeteer Crawler](/docs/guides/puppeteer-crawler) example.

## Request list

@@ -86,7 +86,7 @@
const crawler = new PuppeteerCrawler({
});
```

- To see more detailed example of how to use the request list with a crawler, see the [Puppeteer with proxy](/docs/examples/puppeteer-with-proxy) example.
+ To see more detailed example of how to use the request list with a crawler, see the [Puppeteer with proxy](/docs/guides/puppeteer-with-proxy) example.

## Which one to choose?
