Describe the feature
Add a file or a section in the README stating clear "do"s and "don't"s for AI-generated code contributed to XLibre.
A handful of other important projects (for example, Linux and Loupe) have also discussed the matter. The latter now provides an official policy: https://discourse.gnome.org/t/loupe-no-longer-allows-generative-ai-contributions/27327 (I am not sure whether Linux has settled on anything yet or is still discussing it).
Inevitably, AI-generated code will slip into any open source project as time goes on and the AI boom continues. It is better to make the XLibre project's stance on the issue clear early on than to face unexpected surprises down the road. A rough sketch of what such a README section could look like follows below.
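As an illustration only, here is a minimal sketch of the kind of section this issue is asking for. The heading and the specific rules below are hypothetical and meant to show the shape of such a policy, not to propose its actual content; the points are drawn from the concerns raised in this issue (disclosure, quality checks, and copyright/attribution):

```markdown
## AI-generated contributions

- **Do** disclose in your pull request whether any part of it was produced
  by an LLM or other generative AI tool, and name the tool used.
- **Do** review, test, and fully understand generated code before
  submitting it; you remain responsible for your entire contribution.
- **Don't** submit generated code whose provenance you cannot vouch for:
  models may reproduce code protected by copyright or by licenses that
  require attribution.
```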
It should be implemented because
Making it clear whether and how code generated by LLMs or other generative AI tools is accepted can improve quality checks and build trust between contributors and the XLibre developers. There are also numerous broader concerns about AI in general (such as the massive water usage of training and running models, or humans being replaced by AI), but those can be discussed separately; this is not really the place to debate the ethics of AI.
What are the alternatives?
Simply ignoring whether AI was used in a contribution and judging only whether the code does the job. The main concern with that approach is copyright: AIs might reproduce code or algorithms protected by copyright or by licenses that require attribution, but that could be worked around.
Additional context
No response
Extra fields
- I have checked the existing issues
- I have read the Contributing Guidelines
- I'd like to work on this issue