Add AI policy for contributors #1211

@tseli0s

Description

Describe the feature

Add a file, or a section in the README, stating clear "do"s and "don't"s for AI-generated code contributed to XLibre.
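To make the idea concrete, here is a hypothetical sketch of what such a README or CONTRIBUTING section could look like. The wording and rules below are purely illustrative assumptions, not an actual or proposed XLibre policy:

```markdown
## AI-generated contributions

- **Disclose** any use of LLMs or other generative AI tools in your
  pull request description.
- **You are responsible** for every line you submit: you must
  understand it, have tested it, and be able to vouch for its
  licensing under the project's license.
- **Do not** submit code whose provenance you cannot verify or that
  may reproduce copyrighted material requiring attribution.
```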

A handful of other important projects (for example, Linux and Loupe) have also discussed the matter. The latter now provides an official policy: https://discourse.gnome.org/t/loupe-no-longer-allows-generative-ai-contributions/27327 (I am not sure whether Linux also came up with something or whether they're still just discussing it).

Inevitably, AI-generated code will slip into any open source project as time goes on and the AI boom continues. It is better to make the XLibre project's stance on the issue clear early on rather than face unexpected surprises down the road.

It should be implemented because

Making it clear if and how code generated by LLMs or other generative AI software is accepted can improve quality checks and build trust between contributors and the XLibre developers. There are also numerous broader concerns about AI use that could be discussed separately (such as the massive water usage of training a model, or humans being replaced by AI; this is not really the place to discuss the ethics of AI, though).

What are the alternatives?

Simply ignoring whether AI was used in a contribution and judging only whether the code does the job. The main concern there would be copyright: AIs might reproduce code or algorithms protected by copyright, or under licenses that require attribution, but that could be addressed separately.

Additional context

No response

Metadata

Labels: documentation (Improvements or additions to documentation)
