What it means to contribute to the ecosystem #51
Replies: 3 comments
I agree this question is critical. I think it would be great to try to create a shortlist of these kinds of organizations to gauge interest in being involved in some kind of collective fund / new institution. The three listed here could be great starting points; I would also add (from the related thread here, #57) scholarly / academic organizations, other peer-production projects, etc.

I think it is plausible that if one or more large organizations adopted the preference signals (ideally in unison), some actors in the AI space might legitimately pay into a commons fund, but at first it will likely be a small amount and it will definitely be unstable (as expected and mentioned in the framework docs). My hunch, though, is that it will be best to have some support in the framework for transitioning from preference signals to a formal contract. An exemplar to look to here might be the voluntary Google-to-Wikimedia payments, which later led to the creation of Wikimedia Enterprise. For the immediate future, this might mean trying to get some voluntary payments off the ground via organizational adoption of preference signals; later on, the organizations that adopted the signals might put their heads together to offer a similar "enterprise program".

Having adopted Preference Signals first could give the creator orgs slightly more "soft leverage", especially if the signals acquire some brand and norms value. Of course, this would not give them as much "hard leverage" as simply keeping more content off the public web, putting it behind paywalls, adding data poisoning / obfuscation, etc. But many of these orgs won't take their content off the web anyway (think of Wikimedia projects or open-access academic papers), so this at least provides some path forward.
Down the line, I would love to see full integration with some thinking on (1) collective-level data valuation by AI labs (related thread here: #33) and (2) detailed governance plans for the institution that would receive the commons fund, but I think this may overcomplicate things at this stage. One other question this raises:
Addendum: I mean that including any specific choices about either data valuation techniques or specific governance practices might not fit within the Preference Signals framework, but it does seem like figuring these things out to a fairly high level of detail will be necessary to actually receive a payment from an AI developer and have that money flow to an organization to be spent on the commons. So the data valuation details and the institutional governance details do need to be figured out roughly in unison with the preference signals framework. One idea that would involve updating the framework might be to let creators associate their preference signal with a collective identifier, to simplify some of the "back and forth": an AI developer could make a payment associated with "Commons Collective 123", send that collective the payment, and then the collective decides what to do with it.
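To make the collective-identifier idea concrete, here is a purely hypothetical sketch of how payment routing could work if signals carried such a field. The field names (`signal`, `collective`), the example identifiers, and the routing logic are all my assumptions for illustration; none of this is part of the actual Preference Signals framework.

```python
def route_payment(signals, payments):
    """Aggregate per-work payments by the collective named in each signal.

    Works whose signal names no collective are paid out individually,
    keyed by their own work ID.
    """
    totals = {}
    for work_id, amount in payments.items():
        recipient = signals[work_id].get("collective", work_id)
        totals[recipient] = totals.get(recipient, 0) + amount
    return totals


# Hypothetical signal records: two works point at the same collective,
# one has no collective and would be paid directly.
signals = {
    "article-1": {"signal": "ecosystem-contribution",
                  "collective": "commons-collective-123"},
    "article-2": {"signal": "ecosystem-contribution",
                  "collective": "commons-collective-123"},
    "photo-9":   {"signal": "ecosystem-contribution"},
}
payments = {"article-1": 40, "article-2": 60, "photo-9": 25}

print(route_payment(signals, payments))
# {'commons-collective-123': 100, 'photo-9': 25}
```

The point of the sketch is that the AI developer only needs one payment relationship per collective rather than one per creator; the internal split is the collective's governance problem, which is exactly where the data valuation and governance questions above come back in.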
The word "ecosystem" seems misleading, both in the proposed signals document and here, since the actual ecosystem of e.g. the CC licenses is already known: the commons. Wording like "bundle the interests of many creators", "ecosystem fund", and "Do we need new institutions? How should payments flow?" is, IMHO, intrusively implying a universal consent of creators to establishing such machinery. This consent does not exist, and many creators have already made it clear that they have no sympathy for redefining what CC has come to mean to them for the benefit of AI corporations.

The machinery would of course be very welcome from the POV of those AI corporations: drop some petty cash into a "fund", get to claim all creators "consented" to having their work ingested, avoid all further legal hassles, profit, and get invited to events to tell people how much you care about the "commons". Plus, of course, a few people would be more than willing to seize another opportunity to "serve" on the boards of new institutions, and get to discuss and decide about policies and about where the money will flow.

Essentially, this is a disrespectful concept that fantasizes about creators "consenting" to being exploited. It's like selling compulsory collectivization as a dream come true. You can create something like that and see which creators would consider participating. Doing it under an established umbrella that has "Commons" in its name, though, looks like carelessly destroying a reputation that took many years to build.
The discussions so far reveal a core tension: individual creators and orgs lack the bargaining power to meaningfully influence how AI developers use their content, even with preference signals.
I want to double-click on the proposed idea of ecosystem contributions, because I believe it starts to address this problem.
Ecosystem contributions are fundamentally more powerful than direct individual contributions, because they bundle the interests of many creators into collective value propositions that AI developers can't easily ignore or undervalue. When AI companies engage with individual creators one-by-one, they can effectively "divide and conquer" the commons, offering minimal compensation to each creator while extracting enormous aggregate value from the combination of all their data. But when creators organize collectively as an ecosystem, AI developers must account for the true value they're extracting and reciprocate at a scale that actually sustains the ecosystem as a public good.
I'm curious what people think an ecosystem fund should look like in practice. How should existing organizations be involved (like Wikimedia Foundation, Internet Archive, or Common Crawl)? Do we need new institutions? How should payments flow? How do we ensure the fund stays aligned with creator interests?
This proposal aligns closely with RadicalxChange's work on "data bargaining" and trusted data intermediaries, which tries to rebalance power between information producers and AI developers across the entire economy. Here is a memo on its relevance to CC Signals: https://docs.google.com/document/d/15ONBx0kR4aaT9rty0jfkz0PqBKSQ7H9sefNKwVwJBzU/edit?usp=sharing