AI policy
Position on the use of generative Artificial Intelligence (AI)
In the context of our editorial focus on scholarship in the humanities, the editorial committee of Acta Academica considers generative AI, specifically Large Language Models (LLMs), a technology that is parasitical on the labour of others and inherently limited in its capacity to produce novel research contributions. We consider questioning, thinking, and writing from a situated, embodied – and therefore inimitable – perspective to be an essential element of humanities scholarship. We are therefore deeply opposed to outsourcing any of these faculties to a disembodied device that can only offer a dull (and at times faulty) aggregation of perspectives.
That being said, we realise that LLMs can mimic human effort to such a degree that detecting this dissimulation takes considerable time and effort. We have faith in the ability of our peer reviewers ultimately to eliminate submissions that do not make a novel contribution to a particular field. Nevertheless, we respectfully ask all prospective contributors not to waste our time, or that of our reviewers, by expecting us to read and engage with “scholarship” that cannot be attributed to any scholar.
We ask that you please accompany your submission with a declaration stating that you have taken note of our position on the use of LLMs and that you are willing to adhere to it. Should you nevertheless make limited use of AI, we would expect you to provide a detailed declaration that describes how the tool was used (including the prompts), gives the name, creator and version of the AI tool, and specifies the date on which it was consulted. The editorial committee reserves the discretion to reject any submission if the specified use of AI is deemed to undermine the scholarly objectives of the journal.