AI Tools Usage Policy

BETA-BAREKENG : Journal of Mathematics and Computer Science recognizes that artificial intelligence (AI) tools may assist authors in certain stages of manuscript preparation. However, the use of such tools must remain ethical, transparent, limited, and fully supervised by human authors to preserve originality, accountability, and research integrity.

1. Introduction

BETA-BAREKENG : Journal of Mathematics and Computer Science acknowledges the growing use of artificial intelligence tools in scholarly communication, including writing support, language improvement, coding assistance, visualization, and literature exploration.

This policy is intended to ensure that the use of AI tools remains consistent with academic integrity, transparency, originality, and author accountability.

2. Definition of AI Tools

For the purpose of this policy, AI tools include software, systems, or platforms that use artificial intelligence, machine learning, natural language processing, or generative models to produce, revise, summarize, translate, analyze, or modify content.

Examples include, but are not limited to:

  • Generative AI and large language models such as ChatGPT, Gemini, or Claude
  • Writing support tools such as Grammarly or DeepL Write
  • AI-assisted coding or analysis tools for Python, R, MATLAB, or similar environments
  • AI-supported visualization or diagram-generation tools
  • AI-assisted literature search or citation support tools

3. Acceptable Use of AI Tools

Authors may use AI tools in a limited and responsible manner for tasks such as:

  • correcting grammar, spelling, punctuation, and language clarity;
  • improving readability and sentence flow;
  • assisting with reference formatting;
  • conducting preliminary keyword identification or literature exploration;
  • generating preliminary code snippets for analysis or visualization, provided the code is fully checked and validated by the authors;
  • creating basic diagrams or visualizations, provided that copyright and accuracy are fully respected.

Any use of AI must remain under meaningful human oversight, and the authors must verify the accuracy and appropriateness of all outputs.

4. Prohibited Use of AI Tools

AI tools must not be used to:

  • generate the entire manuscript or substantial scholarly arguments without critical human intellectual contribution;
  • fabricate, falsify, or manipulate data, results, proofs, citations, references, or experimental records;
  • produce mathematical claims, theorems, proofs, algorithms, or interpretations without rigorous validation by the authors;
  • paraphrase or summarize existing literature in a way that constitutes plagiarism or disguised copying;
  • generate peer review reports or editorial decisions on behalf of humans;
  • create misleading authorship, false citations, or inaccurate source attributions.

5. Responsibilities of Authors

Authors remain fully and solely responsible for the entire content of their submitted manuscript, including all parts that were supported, revised, or suggested by AI tools.

  • Authors must verify the accuracy, originality, and validity of all AI-assisted content.
  • Authors must check for hallucinations, conceptual errors, false references, and invalid reasoning.
  • Authors must ensure that AI-assisted text does not infringe copyright or publication ethics.
  • Authors accept full accountability for any errors or misconduct arising from AI use.

6. Authorship and AI

AI tools, chatbots, and language models cannot be listed as authors or co-authors. Authorship is reserved exclusively for human individuals who meet the journal’s authorship criteria.

AI tools must also not be cited as if they were accountable scholarly authors.

7. Disclosure Requirements

Authors must disclose the use of generative AI or AI-assisted technologies whenever such tools are used beyond routine spelling and grammar checking.

The disclosure should include:

  • the name of the tool;
  • the version, if available;
  • the provider or developer;
  • the specific purpose of use; and
  • a statement confirming that the authors reviewed, edited, and validated the output.

Suggested disclosure statement:
“During the preparation of this manuscript, the author(s) used [Tool Name, Version, Provider] for [specific purpose]. After using this tool, the author(s) reviewed and edited the content as necessary and take full responsibility for the content of the publication.”

8. Location of Disclosure in Manuscripts

Disclosure of AI use should appear in the manuscript in one of the following places, depending on the purpose:

  • Methods section — if AI was used for coding, analysis, modeling, or data-related procedures;
  • Acknowledgments section — if AI was used mainly for language support, translation, or formatting;
  • Dedicated statement section — authors are encouraged to include a section titled “Statement on AI Tool Usage” immediately before the references.

9. Journal Internal AI Threshold

As an internal editorial safeguard, the journal expects that AI-assisted textual contribution should not exceed 25% of the manuscript’s narrative content.

This threshold is a journal-specific operational rule intended to preserve human intellectual contribution and originality. It is not an official percentage standard issued by COPE or Elsevier.

Even when AI-assisted content is below 25%, authors must still disclose relevant use and remain fully accountable. Conversely, content above this threshold may trigger editorial concern, clarification requests, revision requirements, or rejection, depending on the extent and nature of the use.

Important: this percentage does not replace editorial judgment. The journal evaluates AI use based on context, disclosure, originality, scholarly integrity, and meaningful human contribution.

10. Use of AI by Editors and Reviewers

Editors and reviewers must preserve manuscript confidentiality at all times. Submitted manuscripts, peer review reports, and editorial correspondence must not be uploaded to public generative AI tools if doing so could compromise confidentiality, proprietary content, or personal data.

Generative AI must not be used by editors or reviewers to make scholarly judgments, peer review evaluations, or editorial decisions in place of human assessment.

Limited internal administrative use of privacy-preserving tools may be permitted for screening or workflow support, but final decisions remain entirely a human responsibility.

11. Consequences of Non-Compliance

Failure to comply with this policy may result in:

  • request for clarification or correction;
  • mandatory revision of the manuscript;
  • rejection before or during peer review;
  • retraction after publication in serious cases; or
  • notification to relevant institutions or funding bodies where warranted.

12. Appeals and Policy Updates

Authors who disagree with an editorial decision related to AI use may submit a written appeal to the Editor-in-Chief, supported by a clear explanation and relevant evidence.

This policy may be revised periodically in response to technological developments and evolving standards in scholarly publishing ethics. Authors are responsible for consulting the latest version of this policy before submission.