If you would like to contribute, please read the OpenTelemetry core Collector contributing guidelines before you begin your work on the contrib Collector.
To manually test your changes, follow these steps to build and run the contrib Collector locally. Ensure that you execute these commands from the root of the repository:
- Build the Collector:

  ```sh
  make otelcontribcol
  ```

- Run the contrib Collector with a local configuration file:

  ```sh
  ./bin/otelcontribcol_<os>_<arch> --config otel-config.yaml
  ```

  The actual name of the binary depends on your platform. For example, on Linux x64, use `./bin/otelcontribcol_linux_amd64`. Replace `otel-config.yaml` with the appropriate configuration file as needed.
- Verify that your changes are reflected in the contrib Collector's behavior by testing it against the provided configuration.

- Lint your changes:

  - For the entire project:

    ```sh
    make golint
    ```

  - For specific components (e.g., the Elasticsearch exporter):

    ```sh
    cd exporter/elasticsearchexporter/
    make lint
    ```

- Run the unit tests:

  - Run tests for the whole project from the project root:

    ```sh
    make gotest
    ```

  - Alternatively, run tests for the affected components. For example, to run the Elasticsearch exporter tests:

    ```sh
    cd exporter/elasticsearchexporter/
    make test
    ```

There are two autogenerated changelogs for this repository:
- `CHANGELOG.md` is intended for users of the collector and lists changes that affect the behavior of the collector.
- `CHANGELOG-API.md` is intended for developers who are importing packages from the collector codebase.

They are autogenerated from `.yaml` files in the `./.chloggen` directory.
- Create an entry file using `make chlog-new`. This generates a file based on your current branch (e.g. `./.chloggen/my-branch.yaml`).
- Fill in all fields in the new file.
- Run `make chlog-validate` to ensure the new file is valid.
- Commit and push the file.
During the collector release process, all `./.chloggen/*.yaml` files are transcribed into `CHANGELOG.md` and `CHANGELOG-API.md` and then deleted.
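For illustration, a changelog entry file might look like the sketch below. The component, note text, and issue number are placeholder values; the authoritative schema is the template that `make chlog-new` generates for you.

```yaml
# One of: breaking, deprecation, new_component, enhancement, bug_fix
change_type: bug_fix

# The name of the component being changed (placeholder value)
component: processor/tailsampling

# A brief description of the change (placeholder text)
note: Fix evaluation of AND policies

# One or more tracking issues related to the change (placeholder number)
issues: [12345]

# Which changelog(s) the entry belongs in: 'user' and/or 'api'
change_logs: [user]
```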
Pull requests that contain user-facing changes will require a changelog entry. Keep in mind the following types of users:
- Those who are consuming the telemetry exported from the collector
- Those who are deploying or otherwise managing the collector or its configuration
- Those who are depending on APIs exported from collector packages
- Those who are contributing to the repository
Changes that affect the first two groups should be noted in CHANGELOG.md. Changes that affect the third or fourth groups should be noted in CHANGELOG-API.md.
If a changelog entry is not required, start your pull request title with `[chore]`, or ask a maintainer or approver to add the `Skip Changelog` label to the pull request.
Examples
Changelog entry required:
- Changes to the configuration of the collector or any component
- Changes to the telemetry emitted from and/or processed by the collector
- Changes to the prerequisites or assumptions for running a collector
- Changes to an API exported by a collector package
- Meaningful changes to the performance of the collector
Judgement call:
- Major changes to documentation
- Major changes to tests or test frameworks
- Changes to developer tooling in the repo
No changelog entry:
- Typical documentation updates
- Refactorings with no meaningful change in functionality
- Most changes to tests
- Chores, such as enabling linters, or minor changes to the CI process
The title of your pull request should contain the component type and name in brackets, plus a short statement describing your change. For instance:

```
[processor/tailsampling] fix AND policy
```
Alternatively, if you have already written a changelog entry, you can set your PR title to `as per changelog` and a
GitHub Action will automatically generate the PR title and description from your changelog entry YAML file(s). This
avoids duplicating effort between the changelog entry and the PR description.
When linking to an open issue, if your PR is meant to close said issue, please prefix the issue reference with one of the
following keywords: `Resolves`, `Fixes`, or `Closes`. More information on this functionality (and more keyword options) can be found
here.
This will automatically close the issue once your PR has been merged.
See issue-triaging.md for more information on the issue triaging process.
In order to facilitate proper label usage and to empower Code Owners, you can add labels to issues via comments. To add a label through a comment, post a new comment on an issue starting with `/label`, followed by a space-separated list of your desired labels. Supported labels come from the table below, or correspond to a component defined in the CODEOWNERS file.
The following general labels are supported:
| Label | Label in Comment |
|---|---|
| `arm64` | `arm64` |
| `good first issue` | `good-first-issue` |
| `help wanted` | `help-wanted` |
| `discussion needed` | `discussion-needed` |
| `needs triage` | `needs-triage` |
| `os:mac` | `os:mac` |
| `os:windows` | `os:windows` |
| `waiting for author` | `waiting-for-author` |
| `waiting-for-code-owners` | `waiting-for-code-owners` |
| `bug` | `bug` |
| `priority:p0` | `priority:p0` |
| `priority:p1` | `priority:p1` |
| `priority:p2` | `priority:p2` |
| `priority:p3` | `priority:p3` |
| `Stale` | `stale` |
| `never stale` | `never-stale` |
| `Skip Changelog` | `skip-changelog` |
To delete a label, prepend the label with `-`. Note that you must make a new comment to modify labels; you cannot edit an existing comment.

Example label comment:

```
/label receiver/prometheus help-wanted -exporter/prometheus
```
PR authors can rerun failed GitHub Actions workflows by commenting `/rerun` on the pull request. This will automatically rerun all failed workflow runs for the PR's latest commit.

Example rerun comment:

```
/rerun
```
Members of the triagers, approvers, or maintainers teams can approve pending GitHub Actions workflow runs for outside contributors by commenting `/workflow-approve` on the pull request. This will approve all workflow runs with an `action_required` conclusion for the PR's latest commit.

Example approve comment:

```
/workflow-approve
```
In order to ensure compatibility with different operating systems, code should be portable. Below are some guidelines to follow when writing portable code:
- Avoid using platform-specific libraries, features, etc. Opt for portable, multi-platform solutions.

- Avoid hard-coding platform-specific values. Use environment variables or configuration files for storing platform-specific values.

  For example, avoid using a hard-coded file path:

  ```go
  filePath := "C:\Users\Bob\Documents\sampleData.csv"
  ```

  Instead, an environment variable or configuration file can be used:

  ```go
  filePath := os.Getenv("DATA_FILE_PATH")
  ```

  or

  ```go
  filePath := Configuration.Get("data_file_path")
  ```

- Be mindful of:

  - Standard file systems and file paths, such as forward slashes (`/`) instead of backslashes (`\`) on Windows. Use the `path/filepath` package when working with file paths.
  - Consistent line ending formats, such as Unix (LF) or Windows (CRLF).

- Test your implementation thoroughly on different platforms if possible and fix any issues.

By following the above guidelines, you can write code that is more portable and easier to maintain across different platforms.
See the 'Donating new components' document.
When introducing new metrics, attributes or entity attributes to components, ensure that Semantic Conventions' compatibility guidelines are taken into account.
Follow these steps to contribute additional metrics to existing receivers:

- Read the instructions here on how to fork, build, and create PRs. The only difference is to change the repository name from `opentelemetry-collector` to `opentelemetry-collector-contrib`.
- Edit the `metadata.yaml` of your metrics receiver to add the new metrics, e.g. `redisreceiver/metadata.yaml`.
- Generate the new metrics on top of the updated YAML file:
  - Run `cd receiver/redisreceiver`
  - Run `go generate ./...`
- Review the changed files and merge the changes into your forked repo.
- Create a PR from the GitHub web console following the instructions above.
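As an illustrative sketch, a new metric entry in `metadata.yaml` might look like the following. The metric name, description, and unit are hypothetical; follow the schema used by the existing entries in the receiver's `metadata.yaml`:

```yaml
metrics:
  redis.example.operations:  # hypothetical metric name
    enabled: true
    description: Number of example operations performed.
    unit: "{operations}"
    sum:
      value_type: int
      monotonic: true
      aggregation_temporality: cumulative
```

Running `go generate ./...` afterwards regenerates the receiver's typed metric builders and documentation from this file.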
Below are some recommendations that apply to typical components. These are not rigid rules and there are exceptions but in general try to follow them.
- Avoid introducing batching, retries or worker pools directly on receivers and exporters. Typically, these are general cases that can be better handled via processors (that also can be reused by other receivers and exporters).
- When implementing exporters try to leverage the exporter helpers from the core repo, see exporterhelper package. This will ensure that the exporter provides zPages and a standard set of metrics.
`replace` statements in `go.mod` files can be automatically inserted by running `make crosslink`. For more information on the `crosslink` tool, see the README here.
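For illustration, a `replace` statement inserted by `crosslink` takes roughly this form (the module path and relative path shown are examples, not a specific generated entry):

```
replace github.com/open-telemetry/opentelemetry-collector-contrib/pkg/pdatautil => ../../pkg/pdatautil
```

This points an intra-repository dependency at the local checkout instead of a published version, so changes across modules can be developed together.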
See the OpenTelemetry membership guide for information on how to become a member of the OpenTelemetry organization and the different roles available. In addition to the roles listed there we also have a Collector-specific role: code owners.
A Code Owner is responsible for a component within Collector Contrib, as indicated by the CODEOWNERS file. That responsibility includes maintaining the component, triaging and responding to issues, and reviewing pull requests.
Sometimes a component may be in need of a new or additional Code Owner. A few reasons this situation may arise would be:
- The existing Code Owners are actively looking for more help.
- A previous Code Owner stepped down.
- An existing Code Owner has become unresponsive.
Code Ownership does not have to be a full-time job. If you can find a couple hours to help out on a recurring basis, please consider pursuing Code Ownership.
If you would like to help and become a Code Owner you must meet the following requirements:
- Be a member of the OpenTelemetry organization.
- (Code Owner Discretion) It is best to have resolved an issue related to the component, contributed directly to the component, and/or reviewed component PRs. How much interaction with the component is required before becoming a Code Owner is up to the existing Code Owners.
Code Ownership is ultimately up to the judgement of the existing Code Owners and Collector Contrib Maintainers. Meeting the above requirements is not a guarantee to be granted Code Ownership.
To become a Code Owner, open a PR with the following changes:
- Add your GitHub username to the active codeowners entry in the component's `metadata.yaml` file.
- Run the command `make update-codeowners`.
  - Note: A GitHub personal access token must be configured for this command to work.
  - If this command is unsuccessful, manually update the component's row in the CODEOWNERS file, and then run `make generate` to regenerate the component's README header.
Be sure to tag the existing Code Owners, if any, within the PR to ensure they receive a notification.
Contributors who are unable to meet the responsibilities of their role are encouraged to move to emeritus. In case of long temporary absences, contributors are encouraged to let maintainers know on the CNCF Slack (e.g. on the #otel-collector-dev channel or privately via DM) and to mark themselves as 'Busy' on GitHub. In the event that a contributor becomes inactive without prior notice, the maintainers will attempt to contact the contributor via GitHub, the CNCF Slack, or other available communication channels (such as email or through a coworker) to confirm their status.
If the contributor has not replied to maintainer communications after two weeks, they may be removed from the GitHub review auto-assignment. If the contributor does not respond within a period of two months, they may be moved to emeritus status at the discretion of the maintainers, following a majority vote among the maintainers (possibly excluding the contributor in question).
The OpenTelemetry community strives to foster and maintain a high-trust community. As a result, the rules below are more discretionary than strictly procedural.
It's highly encouraged for Code Owners who know they will be unavailable for a prolonged period of time (1+ months) to inform other Code Owners for their components in advance. If a Code Owner expects they may be unavailable for a long, undetermined period of time, they should consider moving themselves to emeritus status, and may request to be made active again once they can devote time to maintaining the component.
If a Code Owner has not replied to communications from a maintainer or another Code Owner after two weeks, they may be moved to emeritus status following the majority vote of other Code Owners and with the agreement of a maintainer. If a majority cannot be reached because of unresponsive Code Owners, the active Code Owners can move the unresponsive Code Owners to emeritus status after a 6 week period with no reply, following a majority vote of known-active Code Owners and the agreement of a maintainer.
If a component is seen as at risk of being unmaintained by maintainers, the maintainers may reach out to Code Owners to ensure they are still active. If none of a component's Code Owners respond to communication after a two week period, maintainers may add a new Code Owner to the component at their discretion. Similar to the policy in the preceding paragraph, the unresponsive Code Owners may be removed if a response has not been received after an additional four weeks. This is to ensure the ongoing maintenance of components within the repository.
Following the steps outlined in the documentation for the unmaintained stability status, if no code owners are responsive for the documented period of time and there is not another contributor available to become a Code Owner, the component may be marked as unmaintained. In this situation, all existing code owners will be moved to emeritus status and the component will be open for new Code Owners.
Code Owners who are moved to emeritus status without their direct involvement are welcome to request to be moved back to an active status.
When adding or modifying the Makefiles in this repository, consider the following design guidelines.
Make targets are organized according to whether they apply to the entire repository, or only to an individual module.
The Makefile SHOULD contain "repo-level" targets. (i.e. targets that apply to the entire repo.)
Likewise, Makefile.Common SHOULD contain "module-level" targets. (i.e. targets that apply to one module at a time.)
Each module should have a Makefile at its root that includes Makefile.Common.
Module-level targets SHOULD NOT act on nested modules. For example, running make lint at the root of the repo will
only evaluate code that is part of the go.opentelemetry.io/collector module. This excludes nested modules such as
go.opentelemetry.io/collector/component.
Each module-level target SHOULD have a corresponding repo-level target. For example, make golint will run make lint
in each module. In this way, the entire repository is covered. The root Makefile contains some "for each module" targets
that can wrap a module-level target into a repo-level target.
Whenever reasonable, targets SHOULD be implemented as module-level targets (and wrapped with a repo-level target). However, there are many valid justifications for implementing a standalone repo-level target.
- The target naturally applies to the repo as a whole. (e.g. Building the collector.)
- Interaction between modules would be problematic.
- A necessary tool does not provide a mechanism for scoping its application. (e.g. `porto` cannot be limited to a specific module.)
- The "for each module" pattern would result in incomplete coverage of the codebase. (e.g. A target that scans all files, not just `.go` files.)
The default module-level target (i.e. running make in the context of an individual module), should run a substantial set of module-level
targets for an individual module. Ideally, this would include all module-level targets, but exceptions should be made if a particular
target would result in unacceptable latency in the local development loop.
The default repo-level target (i.e. running make at the root of the repo) should meaningfully validate the entire repo. This should include
running the default common target for each module as well as additional repo-level targets.