If you use a Large Language Model or other generative AI tool when writing your proposal, do so responsibly, following the Netherlands Code of Conduct for Research Integrity. This code of conduct is based on five core values: honesty, diligence, transparency, independence, and responsibility. See: Netherlands Code of Conduct for Research Integrity | NWO.
Large Language Models such as ChatGPT and Bard can undermine these values, as argued in the links below.
Your proposal will be read by colleagues from your field, who spend a great deal of time, on a voluntary basis, evaluating proposals and giving invaluable feedback. Such human interaction is a key part of the Lorentz Center’s methodology and contributes to high-quality, impactful workshops. We therefore expect the same human commitment from workshop organizers.
Living guidelines on the responsible use of generative AI in research (EU)
Generative AI and Research Integrity (by Mark Dingemanse – Radboud University)