Those are a few examples of unwanted outcomes. But people are also already using AI for nefarious ends, such as creating deepfakes and spreading disinformation. While AI-edited or AI-generated videos and images have intriguing use cases—such as filling in for voice actors after they leave a show or pass away—generative AI has also been used to make deepfake porn that grafts famous faces onto adult performers, and to defame everyday individuals. And AI has been used to flood the web with disinformation, though fact-checkers have turned to the technology to fight back.
As AI systems grow more powerful, they will rightly invite more scrutiny. Government use of software in areas such as criminal justice is often flawed or secretive, and corporations like Meta have begun confronting the downsides of their own life-shaping algorithms. More powerful AI has the potential to create worse problems, for example by perpetuating historical biases and stereotypes against women or Black people. Civil-society groups and even the tech industry itself are now exploring rules and guidelines on the safety and ethics of AI.
But the hype around generative models suggests we still haven’t learned our lesson when it comes to AI. We need to calm down; understand how it works and when it doesn’t; and then roll out this tool in a careful, considered manner, mitigating concerns as they’re raised. AI has real potential to better—and even extend—our lives, but to truly reap the benefits of machines getting smarter, we’ll need to get smarter about machines.
Tom Simonite is a former senior editor who edited WIRED’s business coverage. He previously covered artificial intelligence and once trained an artificial neural network to generate seascapes. Earlier, Simonite was San Francisco bureau chief at MIT Technology Review, and wrote and edited technology coverage at New Scientist magazine in London.
Generative AI has had a significant impact on how content such as text, music, and art is created and used. Using this technology, however, raises copyright issues and considerable legal uncertainty. AI-driven tools are developing faster than the law can keep pace, so many questions remain unsettled. For example, it could be argued that using content to build datasets in an educational setting often qualifies as "fair use" under US copyright law or as fair dealing in Hong Kong; publishers and copyright owners, however, retain the right to challenge such use and to seek compensation for intellectual property infringement through the courts. If you use AI-generated content without checking whether it draws on copyrighted works, you risk copyright infringement. AI tools can also infringe copyright in existing works by generating outputs that closely resemble them.
Given the uncertainty surrounding copyright and AI, as well as the need for clarification on other topics related to the use of AI tools, it is crucial to be aware of the potential risks and take measures to protect ourselves and our works. Here are some recommended guidelines and best practices for utilizing AI in academic and scholarly fields.
Led by The Chinese University of Hong Kong (CUHK) and five other partnering higher education institutions in Hong Kong, this project builds a collaborative community of seasoned educators and technical experts to provide practitioners with the support and resources they need to leverage AI tools for innovative pedagogies.
“The Quick Start Guide provides an overview of how ChatGPT works and explains how it can be used in higher education. It also raises some of the main challenges and ethical implications of AI in higher education and offers practical steps that higher education institutions can take.” (UNESCO, 2023)
AI's impact on the creative landscape and copyright laws is a global concern. Cases in South Korea, the United States, and China mentioned in the article highlight the evolving legal landscape and its implications for copyright protection. (Copyright Agent, January 2024)
GenAI has made a significant impact on higher education. Ithaka S+R has been cataloging GenAI applications specifically useful for teaching, learning and research in the higher education context. The content will be continuously updated to reflect the latest developments. (ITHAKA S+R)
Background: The recent development of Large Language Models (LLMs) and Generative AI (GenAI) presents new challenges and opportunities in scholarly communication. This has resulted in diverse policies of journals, publishers, and funders around the use of AI tools. Research studies, including surveys, suggest that researchers are already using AIs at a significant scale to create or edit manuscripts and peer review reports. Yet AI accuracy, effectiveness, and reproducibility remain uncertain. This toolkit aims to promote responsible and transparent use of AI by editors, authors, and reviewers, with links to examples of current policies and practices. As AIs are fast evolving, we will monitor and update these recommendations as new information becomes available. Please contact us and share any opinions, policies, and examples that could help us improve this guide.
We strongly recommend that editors, journals, and publishers develop policies related to the use of AI in their publishing practices and publish those policies on the journal’s website. For instance, policies for authors should be listed in the journal’s ‘Instructions to Authors’ or ‘Submission Guidelines’, while policies for reviewers should be in the journal’s ‘Reviewer Guidelines’. The policies should be clearly communicated to authors and reviewers through email communication and in the online submission system. They should also include information on how parties should raise concerns about possible policy infringement, the consequences any party might face in case of infringement, and possible means to appeal journal decisions regarding these policies. Additionally, the policies should be supplemented with educational resources or links to information on the responsible use of AI (see, for example, the Guidelines on the responsible use of generative AI in research developed by the European Research Area Forum). Editors should also consider announcing the release or update of AI policies through published editorials.
We acknowledge that there may be disciplinary or operational difficulties (e.g., submission system limitations) that could affect the development and implementation of AI policies. However, clear checks and declaration forms can be created to collect information on AI use and any potential conflicts of interest associated with that use (e.g., disclosing if the AI used was developed by the publisher).
We strongly recommend that all information regarding AI use for a particular manuscript be declared in appropriate manuscript sections, a separate AI declaration section (see an example here), or through the use of publication facts labels (see our detailed recommendations below). Finally, we strongly recommend monitoring adherence to the journal’s AI policy and providing regular reports on that adherence and any policy infringements.
Authorship/Contributionship: We strongly recommend that AIs should not be listed as co-authors on publications. Editors should consider the World Association of Medical Editors (WAME), the International Committee of Medical Journal Editors (ICMJE), or the STM association (STM) materials for guidance on this topic. For instance, the ICMJE states that AIs cannot be authors: “because they cannot be responsible for the accuracy, integrity, and originality of the work, and these responsibilities are required for authorship”.
Citations and literature review: We strongly recommend that AI outputs should not be cited as primary sources for backing up specific claims. Research has shown that citations and information provided by AIs can be inaccurate or fabricated. Authors and reviewers should be reminded to always read and verify the information they cite, as they remain the responsible party for the information they present. Additionally, AI outputs may not be reproducible at a later time, so editors should consider whether authors need to capture and time stamp any outputs they mention or cite (see an example of authors showcasing ChatGPT responses on a specific topic, and the sketch below).
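As a purely illustrative sketch of what "capturing and time stamping" an output could look like in practice (the helper name archive_ai_output, the file name, and the record fields are our own assumptions, not part of any journal policy), an author could keep a small local log of the exchanges they intend to mention:

```python
# Illustrative sketch only: the file name, record fields, and helper name are
# assumptions made for this example, not a format prescribed by any journal.
import json
from datetime import datetime, timezone


def archive_ai_output(prompt: str, response: str, tool: str,
                      path: str = "ai_output_log.json") -> dict:
    """Append one prompt/response exchange, with a UTC timestamp, to a local JSON log."""
    record = {
        "timestamp_utc": datetime.now(timezone.utc).isoformat(),
        "tool": tool,          # name and version of the AI tool, as declared in the manuscript
        "prompt": prompt,
        "response": response,  # pasted verbatim from the chatbot or API
    }
    try:
        with open(path, "r", encoding="utf-8") as f:
            log = json.load(f)
    except FileNotFoundError:
        log = []
    log.append(record)
    with open(path, "w", encoding="utf-8") as f:
        json.dump(log, f, ensure_ascii=False, indent=2)
    return record


# Example: record the exchange an author plans to mention or cite.
archive_ai_output(
    prompt="Summarise the main limitations of the cited study.",
    response="<output pasted from the chatbot>",
    tool="chat assistant (version as used by the authors)",
)
```

A log of this kind could then be shared as supplementary material, giving readers a dated record of what the tool actually produced.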
Data collection, cleaning, analysis, and interpretation: We strongly recommend that any use of AI for data collection, cleaning, analysis, or interpretation be disclosed in the Methods section of the manuscript or in an AI declaration section (see an example here). These statements should ideally be accompanied by appropriate robustness and reliability indicators, as well as steps to ensure their reproducibility.
Data or code generation: We strongly recommend that any use of AI for data or code generation be disclosed in the Methods section of the manuscript, in the data and code declaration sections that some journals have, or in an AI declaration section (see an example here). Editors should be aware that generated data or code can be an excellent resource for educational purposes, but could also be misused to create fake data for hypothesis testing or other analyses. Furthermore, it might be difficult to distinguish between authors generating (part of) the code or data using an AI and then editing the AI output themselves, versus creating the code or collecting the data themselves and then using an AI for editing. One simple form of disclosure is sketched below.
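As a minimal, hypothetical illustration of such a disclosure travelling with shared code (the header wording and the clean_measurements function are assumptions made for this sketch, not a format required by any journal or publisher):

```python
# Hypothetical example of an AI-use disclosure kept alongside shared code.
# The header wording and the clean_measurements function are illustrative
# assumptions, not a format required by any journal or publisher.
"""
AI-use disclosure
-----------------
The first draft of `clean_measurements` was generated with a code assistant
(tool and version as declared in the manuscript's Methods section). The
authors reviewed, tested, and edited the code and remain responsible for it.
"""
import math


def clean_measurements(values: list[float]) -> list[float]:
    """Drop non-finite entries (NaN, +/-inf) and return the remaining values sorted."""
    return sorted(v for v in values if math.isfinite(v))


if __name__ == "__main__":
    print(clean_measurements([3.2, float("nan"), 1.5, float("inf")]))  # -> [1.5, 3.2]
```

A note of this kind complements, rather than replaces, the declaration in the manuscript itself, and keeps the provenance of the code visible to anyone who later reuses it.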
Visualisation – creation of tables, figures, images, videos, or other outputs: We strongly recommend that any use of AI for visualisation be disclosed in the Methods section of the manuscript and in the captions or legends of those outputs. AI-generated visualisations may require additional checks to ensure their validity, as well as steps to ensure their reproducibility. Editors should consider an example of a policy banning the use of AI for these purposes, and an example of a retraction of a paper due to a “nonsensical” image.
Writing, language, and style editing: We strongly recommend specifying whether and how authors should declare their use of AI for writing, language, or style editing. Editors can consider recommending that authors declare such use in the Acknowledgements section or in an AI declaration section (see an example here). Alternatively, editors could specify that such use, like the use of spell-checking software, does not need to be declared. Editors should be aware that it might be difficult to distinguish between an author generating (part of) a text using an AI and then editing it, versus writing the initial draft and then editing it using an AI. For a helpful overview of publishers’ policies related to this issue, see this Scholarly Kitchen post (compiled in spring 2024).
Other research uses: We strongly recommend disclosing any AI use, including uses not covered in the sections above. Such use should be disclosed in an appropriate section (e.g., the Acknowledgements, the Methods section, or an AI declaration section). For example, authors might choose to employ self-check AI tools to check adherence to research reporting recommendations, or the integrity of images, code, or data.
Peer Review: We strongly recommend specifying whether reviewers are allowed to use AI tools during peer review, and considering a distinction between using AI for language or style editing of the reviewer’s written comments and using AI to create the review comments themselves. Furthermore, journals should specify whether and how they will use tools that check if part(s) of a review were written by an AI, what the potential consequences of such findings are, and what authors should do when they suspect AI has been used for those purposes (for instance, see a case of an author who raised concerns about review reports being written by AI, and COPE’s discussion document on Artificial intelligence (AI) in decision making). Any AI peer review policy should be highlighted in review invitation emails and in the online submission platform. We are aware that several journals, publishers, and funding agencies have prohibited the use of AI tools by peer reviewers (e.g., Royal Society, National Institutes of Health, Elsevier) due to potential risks of bias, confidentiality concerns, and the tools’ unproven accuracy, effectiveness, and reliability. However, such bans are hard to implement, and it is not clear what repercussions, if any, their breach will carry. For additional considerations regarding the use of AI in peer review, see here.
Editorial Work: We strongly recommend that any use of AI by the editor or editorial staff be disclosed on the journal’s website and in communications with authors and reviewers, including use of any screening tools that detect if (parts of) manuscripts or review reports were generated or edited by AI. We also recommend that editors, journals, and publishers consider declaring any checks performed by AI using publication facts labels. Finally, the journal’s AI policies should include information on how parties should raise concerns about possible AI policy infringement, consequences that any party might face in case of infringement, and possible means to appeal journal decisions regarding these policies.
Before submitting a manuscript, we strongly recommend authors check the journal or publisher policies on the use of AI in scholarly communication (for example, by checking the journal website or directly contacting the journal). When a journal does not have an AI policy, or when that policy does not cover the specific aspects of AI use relevant to them, we strongly recommend authors check and follow the EASE recommendations on how AI use should be declared. If the journal’s policies do not align with the authors’ own AI use, we recommend that authors do not deceive the journal, but rather contact the journal and ask for explanations or exceptions, or consider another journal as an outlet for their work. Authors should be aware that journals or their co-authors might use tools that detect if part(s) of the manuscript were generated or edited by an AI tool. As a self-checklist, authors might also consider using such tools.
When authors suspect infringement of the journal’s AI policy, or that review reports or an editor’s comments were generated by AI, we advise that authors contact the editor with a clear description of that suspicion. Authors might also consider running the comments through tools that detect AI involvement and including the reports of such tools in their communication as possible evidence of AI use (see an example of a researcher who raised concerns about receiving AI-written reports, as well as COPE’s discussion document on Artificial intelligence (AI) in decision making). If authors’ follow-up with the editor and the editorial office receives no response, we advise authors to contact the journal’s publisher or society with a copy of their previous communication to the editor and editorial office, or, if those do not exist, to contact the Committee on Publication Ethics (COPE) or the STM Integrity Hub.
When considering a peer review invitation from a journal, reviewers should check the journal or publisher policies on the use of AI in peer review (for example, by checking the journal website and the review invitation email). When the journal does not have an AI policy, or when that policy does not cover the specific aspects of AI use relevant to them, we strongly recommend reviewers check and follow the EASE recommendations on how AI use should be declared and used in peer review. If the journal’s policies do not align with the reviewer’s AI use, we recommend that reviewers do not deceive the journal, but rather contact the journal and ask for explanations or exceptions, or consider declining the review invitation. Reviewers should be aware that journals or authors might run their review reports through tools that detect if part(s) of the report were created or edited by an AI. As a self-checklist, reviewers might also consider using such tools.
When reviewers suspect infringement of the journal’s AI policy, or that a manuscript or another reviewer’s comments were generated by an AI tool, we advise that reviewers contact the editor with a clear description of their suspicion. Reviewers might also consider running those outputs through tools that detect AI use and including the reports of such tools in their communication as possible evidence of AI use (see an example of a researcher who raised concerns about receiving AI-written reports, as well as COPE’s discussion document on Artificial intelligence (AI) in decision making). If reviewers’ follow-up with the editor and the editorial office receives no response, we advise they contact the journal’s publisher or society with a copy of their communication to the editor and editorial office, or, if those do not exist, contact the Committee on Publication Ethics (COPE) or the STM Integrity Hub.
Choosing the right AI writing tool is all about finding one that fits your style. You'll need a tool that lets you tweak things here and there so your articles really sound like they're coming from you, not a robot.
For example, you might give the tool a prompt like: "Please write a blog post paragraph for this header. Make it really easy to read, use simple words, and make it sound like a human wrote it. Don't use 'delve' or 'it's like having', and don't make it cheeky."
And here's a cool fact: when more people share and talk about your article on social media, it can help your article do better in search results. It's a win-win. You get more eyes on your work, and your article gets a boost in visibility.
For example, if your article is about healthy eating, the AI might suggest questions like, "What are some easy healthy meals?" or "How does eating well help your body?" Answer these in a simple way to help your readers understand better.
From picking the right tool to giving your article new life on social media, each step is a building block to a better writing experience. Using an AI writer to generate text doesn't mean losing your personal touch; it's about making your job easier and your articles better.
Meet Millie Pham - an SEO content marketer and video editor who loves exploring the latest tech and AI tools. She provides honest reviews and demystifies the world of AI, SEO, and blogging, making these complex topics accessible and easy to understand for everyone. Her work has been featured on Marin Software, jobillico, Nicereply, and other sites.