documentation/DCP-documentation/advanced_configuration.md (+8 -5)
@@ -4,6 +4,7 @@ We've tried very hard to make Distributed-CellProfiler light and adaptable, but
Below is a non-comprehensive list of places where you can adapt the code to your own purposes.
***
## Changes you can make to Distributed-CellProfiler outside of the Docker container
* **Location of ECS configuration files:** By default these are placed into your bucket with a prefix of 'ecsconfigs/'.
@@ -29,14 +30,16 @@ This value can be modified in run.py .
* **Distributed-CellProfiler version:** Use at least CellProfiler version 4.2.4, and set the DOCKERHUB_TAG in config.py to `bethcimini/distributed-cellprofiler:2.1.0_4.2.4_plugins`.
* **Custom model:** If using a [custom user-trained model](https://cellpose.readthedocs.io/en/latest/models.html) generated using Cellpose, add the model file to S3.
We use the following structure to organize our files on S3.
```text
└── <project_name>
    └── workspace
        └── model
            └── custom_model_filename
```
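
For example, a locally trained model can be copied into that location with the AWS CLI (a minimal sketch; the bucket name, project name, and filename are placeholders):

```bash
# Hypothetical bucket/project names -- substitute your own.
aws s3 cp custom_model_filename \
    s3://<bucket-name>/projects/<project_name>/workspace/model/custom_model_filename
```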
* **RunCellpose module:**
  * Inside RunCellpose, select the "custom" Detection mode.
    In "Location of the pre-trained model file", enter the mounted bucket path to your model,
    e.g. **/home/ubuntu/bucket/projects/<project_name>/workspace/model/**
  * In "Pre-trained model file name", enter your custom_model_filename.
```
UPLOAD_FLAGS='--acl bucket-owner-full-control --metadata-directive REPLACE' # Examples of flags that may be necessary
```
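
If set, these flags are appended to the AWS CLI call that uploads your output back to S3. Conceptually, that looks something like the following (a sketch, not DCP's exact upload code; the bucket and prefix are placeholders):

```bash
# Hypothetical upload call showing where UPLOAD_FLAGS is applied.
aws s3 cp output/ s3://<bucket-name>/projects/<project_name>/output/ --recursive $UPLOAD_FLAGS
```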
## Permissions setup
If you are reading from a public bucket, no additional setup is necessary.
Note that, depending on the configuration of that bucket, you may not be able to mount the public bucket, in which case you will need to set `DOWNLOAD_FILES='True'`.
If you are reading from a non-public bucket or writing to a bucket that is not yours, you will need additional permissions setup.
Often, access to someone else's AWS account is handled through a role that can be assumed.
Learn more about AWS IAM roles [here](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html).
Your collaborator will define the access limits of the role within their AWS IAM.
You will also need to define role limits within your AWS IAM so that when you assume the role (giving you access to your collaborator's resource), that role also has the appropriate permissions to run DCP.
### In your AWS account
In AWS IAM, for the role that has external bucket access, you will need to add all of the DCP permissions described in [Step 0](step_0_prep.md).
You will also need to edit the trust relationship for the role so that ECS and EC2 can assume the role.
A template is as follows:
```json
{
    "Version": "2012-10-17",
    "Statement": [
@@ -80,6 +89,7 @@ A template is as follows:
```
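
Once the trust relationship is saved, you can sanity-check that the role is assumable from your account with the AWS CLI (a sketch; the role ARN is a placeholder):

```bash
# Hypothetical role ARN -- substitute the collaborator role you were given.
aws sts assume-role \
    --role-arn arn:aws:iam::123456789012:role/ExampleCollaboratorRole \
    --role-session-name dcp-permissions-test
```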
### In your DCP instance
DCP reads your AWS_PROFILE from your [control node](step_0_prep.md#the-control-node).
On your control node, edit your AWS CLI configuration files for assuming that role as follows:
@@ -95,4 +105,4 @@ In `~/.aws/credentials`, copy in the following text block at the bottom of the f
documentation/DCP-documentation/overview.md (+2)
@@ -3,6 +3,7 @@
**How do I run CellProfiler on Amazon?** Use Distributed-CellProfiler!
Distributed-CellProfiler is a series of scripts designed to help you run a Dockerized version of CellProfiler on [Amazon Web Services](https://aws.amazon.com/) (AWS) using AWS's file storage and computing systems.
* Data is stored in S3 buckets.
* Software is run on "Spot Fleets" of computers (or instances) in the cloud.
@@ -12,6 +13,7 @@ Docker is a software platform that packages software into containers.
A container holds the software that you want to run as well as everything needed to run it (e.g. your software source code, operating system libraries, and dependencies).
Dockerizing a workflow has many benefits, including:
* Ease of use: Dockerized software doesn't require the user to install anything themselves.
* Reproducibility: You don't need to worry about results being affected by the versions of your software or its dependencies, as those are fixed.
documentation/DCP-documentation/overview_2.md (+12 -7)
@@ -1,4 +1,4 @@
# What happens in AWS when I run Distributed-CellProfiler?
The steps for actually running the Distributed-CellProfiler code are outlined in the repository [README](https://github.com/DistributedScience/Distributed-CellProfiler/blob/master/README.md), and details of the parameters you set in each step are on their respective Documentation pages ([Step 1: Config](step_1_configuration.md), [Step 2: Jobs](step_2_submit_jobs.md), [Step 3: Fleet](step_3_start_cluster.md), and optional [Step 4: Monitor](step_4_monitor.md)).
We'll give an overview of what happens in AWS at each step here and explain what AWS does automatically once you have it set up.
@@ -8,6 +8,7 @@ We'll give an overview of what happens in AWS at each step here and explain what
**Step 1**:
In the Config file you set quite a number of specifics that are used by EC2, ECS, SQS, and in making Dockers.
When you run `$ python3 run.py setup` to execute the Config, it does three major things:
* Creates task definitions.
These are found in ECS.
They define the configuration of the Dockers and include the settings you gave for **CHECK_IF_DONE_BOOL**, **DOCKER_CORES**, **EXPECTED_NUMBER_FILES**, and **MEMORY**.
@@ -25,6 +26,7 @@ In the Config file you set the number and size of the EC2 instances you want.
This information, along with account-specific configuration in the Fleet file, is used to start the fleet with `$ python3 run.py startCluster`.
**After these steps are complete, a number of things happen automatically**:
* ECS puts Docker containers onto EC2 instances.
29
31
If there is a mismatch within your Config file and the Docker container is larger than the instance, it will not be placed.
ECS will keep placing Dockers onto an instance until it is full, so if you accidentally create instances that are too large, you may end up with more Dockers placed on an instance than intended.
@@ -59,6 +61,7 @@ Read more about this and other configurations in [Step 1: Configuration](step_1_
## How do I determine my configuration?
To some degree, you determine the best configuration for your needs through trial and error.
* Looking at the resources your software uses on your local computer when it runs your jobs can give you a sense of roughly how much hard drive and memory space each job requires, which can help you determine your group size and what machines to use.
63
66
* Prices of different machine sizes fluctuate, so the choice of which type of machines to use in your spot fleet is best determined at the time you run it.
64
67
How long a job takes to run and how quickly you need the data may also affect how much you're willing to bid for any given machine.
@@ -67,12 +70,14 @@ However, you're also at a greater risk of running out of hard disk space.
Keep an eye on all of the logs the first few times you run any workflow and you'll get a sense of whether your resources are being utilized well or if you need to do more tweaking.
## What does this look like on AWS?
The following five services are the primary AWS resources that Distributed-CellProfiler interacts with.
After you have finished [preparing for Distributed-CellProfiler](step_0_prep), you do not need to directly interact with any of these services outside of Distributed-CellProfiler.
If you would like a granular view of what Distributed-CellProfiler is doing while it runs, you can open each console in a separate tab in your browser and watch their individual behaviors, though this is not necessary, especially if you run the [monitor command](step_4_monitor.md) and/or have DS automatically create a Dashboard for you (see [Configuration](step_1_configuration.md)).
documentation/DCP-documentation/passing_files_to_DCP.md (+12 -11)
@@ -4,12 +4,13 @@ Distributed-CellProfiler can be told what files to use through LoadData.csv, Bat
## Metadata use in DCP
Distributed-CellProfiler requires metadata and grouping in order to split jobs.
This means that, unlike a generic CellProfiler workflow, the inclusion of metadata and grouping is NOT optional for pipelines you wish to use in Distributed-CellProfiler.
- If using LoadData, this means ensuring that your input CSV has some metadata to use for grouping and "Group images by metadata?" is set to "Yes".
- If using batch files or file lists, this means ensuring that the Metadata and Groups modules are enabled, and that you are extracting metadata from file and folder names _that will also be present in your remote system_ in the Metadata module in your CellProfiler pipeline.
You can pass additional metadata to CellProfiler by selecting `Add another extraction method`, setting the method to `Import from file`, and setting the Metadata file location to `Default Input Folder`.
Metadata of either type can be used for grouping.
## Load Data
@@ -25,14 +26,14 @@ Some users have reported issues with using relative paths in the PathName column
You can create this CSV yourself via your favorite scripting language.
We maintain a script for creating LoadData.csv from Phenix metadata XML files called [pe2loaddata](https://github.com/broadinstitute/pe2loaddata).
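
As a minimal sketch of the format (hypothetical column names and paths, following CellProfiler's `FileName_`/`PathName_`/`Metadata_` column conventions):

```text
FileName_OrigDNA,PathName_OrigDNA,Metadata_Plate,Metadata_Well
A01_ch1.tiff,/home/ubuntu/bucket/projects/<project_name>/images/plate1,plate1,A01
A02_ch1.tiff,/home/ubuntu/bucket/projects/<project_name>/images/plate1,plate1,A02
```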
You can also create the LoadData.csv in a local copy of CellProfiler using the standard input modules of Images, Metadata, NamesAndTypes and Groups.
More written and video information about using the input modules can be found [here](broad.io/CellProfilerInput).
After loading in your images, use the `Export`->`Image Set Listing` command.
You will then need to replace the local paths with the paths where the files can be found in S3, which is hardcoded to `/home/ubuntu/bucket`.
If your files are nested in the same structure, this can be done with a simple find and replace in any text editing software.
(e.g. Find '/Users/eweisbar/Desktop' and replace with '/home/ubuntu/bucket')
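
This can also be scripted; a minimal sketch with `sed` (the local path is just the example above, and the CSV filename is a placeholder):

```bash
# Swap the local path prefix for the mounted bucket path in place.
sed -i 's|/Users/eweisbar/Desktop|/home/ubuntu/bucket|g' load_data.csv
```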
More detail: The [Dockerfile](https://github.com/DistributedScience/Distributed-CellProfiler/blob/master/worker/Dockerfile) is the first script to execute in the Docker.
It creates the `/home/ubuntu/` folder and then executes [run_worker.sh](https://github.com/DistributedScience/Distributed-CellProfiler/blob/master/worker/run-worker.sh) from that point.
run_worker.sh makes `/home/ubuntu/bucket/` and uses S3FS to mount your S3 bucket at that location. (If you set `DOWNLOAD_FILES='True'` in your [config](step_1_configuration.md), then the S3FS mount is bypassed but files are downloaded locally to the `/home/ubuntu/bucket` path so that the paths are the same as if it were S3FS mounted.)
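
Conceptually, that mount step resembles the following (a sketch, not the exact run_worker.sh invocation; the bucket name is a placeholder):

```bash
# Create the mount point, then mount the bucket with S3FS.
mkdir -p /home/ubuntu/bucket
s3fs <bucket-name> /home/ubuntu/bucket -o iam_role=auto -o allow_other
```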
@@ -53,7 +54,7 @@ To use a batch file, your data needs to have the same structure in the cloud as
### Creating batch files
To create a batch file, load all your images into a local copy of CellProfiler using the standard input modules of Images, Metadata, NamesAndTypes and Groups.
More written and video information about using the input modules can be found [here](broad.io/CellProfilerInput).
Put the `CreateBatchFiles` module at the end of your pipeline and ensure that it is selected.
Add a path mapping and edit the `Local root path` and `Cluster root path`.
@@ -71,8 +72,8 @@ Note that if you do not follow our standard file organization, under **#not proj
## File lists
You can also simply pass a list of absolute file paths (not relative paths) with one file per row in `.txt` format.
These must be the absolute paths that Distributed-CellProfiler will see, i.e. relative to the root of your bucket (which will be mounted as `/bucket`).
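
A minimal sketch of such a `.txt` file list (hypothetical paths):

```text
/bucket/projects/<project_name>/images/plate1/A01_ch1.tiff
/bucket/projects/<project_name>/images/plate1/A01_ch2.tiff
/bucket/projects/<project_name>/images/plate1/A02_ch1.tiff
```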