Description
Requirements
We are a small team within a large company (~20k employees) aiming to deploy a feature-flag solution across the company. Our goal is to centralize feature flagging across hundreds of teams, each with about a dozen users, where each team may have hundreds of flags spread over many different projects and several environments (e.g. dev, stage, prod).
Our internal solution works, but go-feature-flag is a promising replacement that would reduce our technical debt tremendously, at the cost of a few medium-sized architectural changes on our side that we are happy to make. After some experimentation, we've found two simple but important hurdles to using go-feature-flag at the scale described below.
Use case
Members of any company team can create a project at any point for any tool within their domain of work. They expect to be able to add flags of any name to their projects and to access them through any client/server provider. Different projects within a team, or across different teams, may therefore use the same flag name, even though these are categorically different flags.
- Each project has a unique UUID.
- Each project will always exist in a Dev, Stage, AND Prod environment (all three).
- Each Dev, Stage, and Prod environment for a given project has a unique UUID.
- The users have access to both these UUIDs from our front end to use in their providers.
- In the back end, the Dev, Stage, and Prod data each exist in their respective AWS cluster, in an S3 bucket.
- The S3 bucket is shared by all projects in that environment, but each project has a special unique (hashed name) file that is generated automatically for that project, and all flags for that project exist in that unique file.
Features
With this setup and scale, two limitations of go-feature-flag stand in the way of what could otherwise be a seamless integration into this system:
Accept multiple/all files in a given bucket (or folder in bucket)
Each time a user creates a project, a file is made for them to store their flags. This is done so that no two projects share a file, for both security and maintainability reasons.
The current S3 bucket retriever is designed for a single file within a bucket. For each new file, a new retriever needs to be added to values.yaml. This is not ideal, as it would mean adding

retriever:
  kind: s3
  bucket: bucket-name
  item: new-project-file.yaml

to the relay-proxy configuration every time, then propagating the change to dev/stage/prod and approving the MR. This is difficult to automate and not very maintainable. A better solution would be the ability to provide a list of files, such as
retriever:
  kind: s3
  bucket: bucket-name
  item: [ file1.yaml, file2.yaml, file3.yaml, file4.yaml, ..., fileN.yaml ]
or
retriever:
  kind: s3
  bucket: bucket-name
  item:
    - file1.yaml
    - file2.yaml
    - file3.yaml
    - ...yaml
    - fileN.yaml
or, better yet, being able to access all files with a wildcard such as item: *.yaml or item: folder1/folder2/*.yaml. I'm not sure how the back end of go-feature-flag is configured, but a bucket.list() could fetch all items and then parse the matching files, treating each one as its own retriever.
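As a rough illustration of that idea, here is a minimal sketch using the AWS SDK for Go v2; the bucket name, the prefix, and the notion of turning each matching key into one retriever item are assumptions for this proposal, not existing go-feature-flag behaviour.

package main

import (
	"context"
	"fmt"
	"log"
	"strings"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

// listFlagFiles returns every *.yaml key under the given prefix, paginating
// through the bucket listing so large buckets are handled as well.
func listFlagFiles(ctx context.Context, client *s3.Client, bucket, prefix string) ([]string, error) {
	var keys []string
	paginator := s3.NewListObjectsV2Paginator(client, &s3.ListObjectsV2Input{
		Bucket: aws.String(bucket),
		Prefix: aws.String(prefix),
	})
	for paginator.HasMorePages() {
		page, err := paginator.NextPage(ctx)
		if err != nil {
			return nil, err
		}
		for _, obj := range page.Contents {
			if strings.HasSuffix(aws.ToString(obj.Key), ".yaml") {
				keys = append(keys, aws.ToString(obj.Key))
			}
		}
	}
	return keys, nil
}

func main() {
	ctx := context.Background()
	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		log.Fatal(err)
	}
	client := s3.NewFromConfig(cfg)

	// "bucket-name" and the prefix are placeholders from the examples above.
	keys, err := listFlagFiles(ctx, client, "bucket-name", "folder1/folder2/")
	if err != nil {
		log.Fatal(err)
	}
	// In the proposed design, each key found here would become the item of
	// one S3 retriever instead of being hard-coded in the relay-proxy config.
	for _, k := range keys {
		fmt.Println(k)
	}
}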
Similar functionality to make it easier to add several files to the other retrievers would also be welcome, for consistency.
Handle flags with the same name from different retrievers.
Adding many retrievers (in the S3 case above, files) where users can define any flag name they want introduces collision concerns when two different users in two different projects use the same flag name. Let's look at an example:
Imagine a file named 6054e20a-d705-4d28-bcde-96dc7afd0a6c.yaml containing
flag1:
  variations:
    is-admin: true
    not-admin: false
  defaultRule:
    variation: not-admin
and a second file named 49dd101b-91bb-4400-adde-7f3dd9e39950.yaml containing
flag1:
  variations:
    enabled: true
    disabled: false
  defaultRule:
    variation: enabled
These two flags belong to different projects and are accessible by different users, but by chance have the same name. go-feature-flag would not know which flag1 to read from, and currently raises errors in this situation.
What feels like a simple solution to this would be a retriever key defined in the OpenFeature EvaluationContext of a provider, which go-feature-flag would treat with special weighting:
{
  "targetingKey": "someUniqueTargetingKey",
  // ...
  "gofeatureflag": {
    "retriever": "6054e20a-d705-4d28-bcde-96dc7afd0a6c"
    // ...
  }
}
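For illustration, here is a provider-side sketch of how such a context might be built with the OpenFeature Go SDK (import path depends on the SDK version); the gofeatureflag.retriever attribute is the proposed addition, not something the provider understands today.

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/open-feature/go-sdk/openfeature"
)

func main() {
	// The GO Feature Flag provider would normally be registered here; this
	// sketch only shows how the proposed retriever hint would be passed.
	client := openfeature.NewClient("example-app")

	// Hypothetical: the "gofeatureflag" -> "retriever" attribute is the
	// proposal above, not an existing field.
	evaluationCtx := openfeature.NewEvaluationContext(
		"someUniqueTargetingKey",
		map[string]interface{}{
			"gofeatureflag": map[string]interface{}{
				"retriever": "6054e20a-d705-4d28-bcde-96dc7afd0a6c",
			},
		},
	)

	// The evaluation call itself is unchanged from the one shown further below.
	flagValue, err := client.BooleanValue(context.TODO(), "flag1", true, evaluationCtx)
	if err != nil {
		log.Println(err)
	}
	fmt.Println(flagValue)
}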
Since retriever would be a special key to go-feature-flag, specifying a retriever in the EvaluationContext would cause go-feature-flag to look only in the retriever 6054e20a-d705-4d28-bcde-96dc7afd0a6c for its flags. One way this could be done without larger changes to how go-feature-flag stores flags in memory would be to name flags after their retriever to begin with.
For both of the above flags, with or without specifying a retriever key in the EvaluationContext, the request would look normal to the provider:
flagResponse, err := client.BooleanValue(context.TODO(), "flag1", true, evaluationCtx)
but behind the scenes these flags would actually be called 6054e20a-d705-4d28-bcde-96dc7afd0a6c_flag1 and 49dd101b-91bb-4400-adde-7f3dd9e39950_flag1, named after the retriever each flag comes from. If the retriever key is not specified in the EvaluationContext, all retrievers are scanned for a flag called flag1 (or rather, a flag ending in _flag1 as shown), and any collision would trigger whatever the current behaviour is in such a case. If retriever IS defined as a key in the EvaluationContext, then only UUID_flag1 is searched explicitly.
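A minimal sketch of that proposed lookup, assuming flags are kept in a single map keyed by <retrieverUUID>_<flagName>; the names and data structures here are illustrative, not go-feature-flag's actual internals.

package main

import (
	"fmt"
	"strings"
)

// Flag is a stand-in for go-feature-flag's internal flag representation.
type Flag struct {
	Name string
}

// resolveFlag sketches the proposed lookup: flags are stored under
// "<retrieverUUID>_<flagName>" keys; if a retriever UUID is supplied via the
// EvaluationContext only that key is checked, otherwise every retriever is
// scanned by suffix. This is not go-feature-flag's actual implementation.
func resolveFlag(flags map[string]Flag, flagName, retrieverID string) (Flag, error) {
	if retrieverID != "" {
		if f, ok := flags[retrieverID+"_"+flagName]; ok {
			return f, nil
		}
		return Flag{}, fmt.Errorf("flag %q not found in retriever %q", flagName, retrieverID)
	}
	var matches []Flag
	for key, f := range flags {
		if strings.HasSuffix(key, "_"+flagName) {
			matches = append(matches, f)
		}
	}
	switch len(matches) {
	case 0:
		return Flag{}, fmt.Errorf("flag %q not found", flagName)
	case 1:
		return matches[0], nil
	default:
		// Collision between retrievers: fall back to whatever the current behaviour is.
		return Flag{}, fmt.Errorf("flag %q is ambiguous across retrievers", flagName)
	}
}

func main() {
	flags := map[string]Flag{
		"6054e20a-d705-4d28-bcde-96dc7afd0a6c_flag1": {Name: "flag1"},
		"49dd101b-91bb-4400-adde-7f3dd9e39950_flag1": {Name: "flag1"},
	}
	// With a retriever hint the lookup is unambiguous.
	fmt.Println(resolveFlag(flags, "flag1", "6054e20a-d705-4d28-bcde-96dc7afd0a6c"))
	// Without one, the two flag1 entries collide.
	fmt.Println(resolveFlag(flags, "flag1", ""))
}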
I guess this could be extended to giving retrievers a name field during configuration that has to be unique, but that's beyond the scope of this issue.
Conclusion
Overall, we're looking for a solution for use cases where new teams/projects may come and go, and where the sources for the retrievers can therefore vary as well. Being able to read all files in a bucket, and somehow scan for new ones being added, might be all that's needed for such a use case, but flag naming collisions between teams would still need to be addressed. Any suggestions, ideas, or modifications pertaining to such a use case are greatly appreciated, and our team is happy to work with the community to develop a solution if it's agreed to be a beneficial addition to go-feature-flag.