Large Language Models (LLMs) and generative Artificial Intelligence (AI) are pervasive; they are integrated into many areas of the internet and are being deployed across a wide variety of sectors. It is recognized that “[r]esponsible AI use has the potential to help solve urgent challenges while making our world more prosperous, productive, innovative, and secure.”[1] On the other hand, “irresponsible use could exacerbate societal harms such as fraud, discrimination, bias, and disinformation . . . stifle competition; and pose risks to national security.”[2] With AI developing so rapidly, and regulation and compliance struggling to catch up, it often falls to users to use these tools responsibly.
Genuine concerns regarding confidentiality arise, especially with publicly accessible AI. Data fed into an AI, especially data used to train it, could be considered to be in the public domain.[3] This can raise issues concerning intellectual property, security, and research integrity, among others. Furthermore, even if the AI is not public, the security of the system and the possibility of data breaches must be accounted for.[4] This raises concerns for intellectual property, industrial security, export control, controlled unclassified information, and personally identifiable information. It is important to follow existing organizational policies pertaining to security and information technology, especially if a dedicated AI policy is not available, and to develop AI-specific internal policies. When using generative AI, do not enter any confidential information or any information that you do not want in the public domain.[5] One simple safeguard, sketched below, is to screen prompts for obvious identifiers before they ever leave your institution.
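As an illustration of that last point, the short Python sketch below screens a draft prompt for obvious identifiers before it is sent to a public generative AI tool. The patterns and placeholder labels are assumptions for demonstration only; they are not a complete detector of confidential information and are no substitute for your institution’s own policies.

```python
import re

# Illustrative patterns only: what counts as "confidential" must come from
# your own institutional policy. These regexes are demonstration assumptions,
# not a complete or endorsed PII detector.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each match with a labeled placeholder before the text is submitted."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text

if __name__ == "__main__":
    draft = "Contact the PI at jdoe@example.edu or 808-555-0123."
    print(redact(draft))
    # Prints: Contact the PI at [REDACTED EMAIL] or [REDACTED PHONE].
```

Even with such a filter in place, the safest rule remains the one above: if information is confidential, do not enter it into a public AI tool at all.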
Scientific and academic integrity can be called into question with the use of AI. The line between acceptable and excessive use of AI in scholarly and academic work is blurry.[6] Relying too heavily on AI can result in unoriginal work. In addition, generative AI is only as good as the data available to it; relying on generative AI without fact-checking risks false or inaccurate results, or even “hallucinations” that have no basis in reality.[7] While not specific to the U.S., the European Commission’s Living Guidelines on the Responsible Use of Generative AI in Research rest on key principles applicable to any AI user: reliability (using the AI resource reliably), honesty (being transparent about its use), respect (for all participants, including society at large), and accountability (taking responsibility for the output).[8] Thorough review and fact-checking of generative-AI output is imperative; otherwise, you could end up like the lawyer who submitted a brief containing citations invented by ChatGPT.[9] Remaining aware of AI’s limitations and potential biases, and fact-checking its output, is key to using generative AI responsibly.[10] Transparency about the use of AI tools is also crucial: disclose which AI system was used and in what manner.[11]
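To make that fact-checking step concrete, the sketch below pulls reference-like strings out of a generative-AI draft so that a human reviewer can verify each one against the primary source. The regular expressions and the sample citation are illustrative assumptions; no pattern matcher can decide whether a citation is real, only flag what must be checked.

```python
import re

# Hedged sketch: surface reference-like strings for manual verification.
# These patterns are demonstration assumptions and will miss many citation forms.
URL_RE = re.compile(r"https?://\S+")
CASE_RE = re.compile(r"\b\d+\s+F\.\s?(?:Supp\.\s?)?\d*d?\s+\d+\b")  # e.g., 925 F. 3d 1339

def verification_checklist(ai_output: str) -> list[str]:
    """Return every reference-like string found, deduplicated, for human review."""
    found = [u.rstrip(".,;)") for u in URL_RE.findall(ai_output)]
    found += CASE_RE.findall(ai_output)
    return sorted(set(found))

if __name__ == "__main__":
    draft = ("As held in Varghese v. China S. Airlines, 925 F. 3d 1339 (11th Cir. 2019); "
             "see also https://example.org/report.")
    for item in verification_checklist(draft):
        print("VERIFY AGAINST PRIMARY SOURCE:", item)
```

The point of the exercise is the checklist, not the code: every citation in AI-generated text must be traced to a real source before the work product goes out the door.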
These principles and recommendations, and more, are summarized in the Fact Sheet: Biden-Harris Administration Announces Key Actions to Advance Tech Accountability and Protect the Rights of the American Public.[12] While these core principles, drawn from the Blueprint for an AI Bill of Rights, describe protections consumers should be entitled to (such as data privacy, notice and explanation, and safe and effective systems) and actions the government is taking, they also provide guidance for those generating information with AI tools.
Generative AI systems can have substantial impacts on productivity and efficiency. However, the responsible and thoughtful use of these systems is imperative.
Learn more – 2024 SRAI Annual Meeting
Interested in learning more about the ethical considerations of AI use in research administration? Join Emily Njus for a live discussion on the topic Monday, Oct. 28, during the 2024 SRAI Annual Meeting.
Attain Partners – Research Administration Experts
If your institution is struggling with developing policies around research integrity, Attain Partners can help. Our firm is focused on strategy, technology, and compliance and understands the principles of research integrity and the importance of robust misconduct identification processes. Our assessments identify gaps in existing policies and procedures, offer meaningful recommendations for improvement, and ensure the resulting processes align with best practices and relevant regulations.
Ms. Emily Njus, JD is a Senior Consultant with Attain Partners’ Research Enterprise practice, specializing in research contract review and negotiation, and is based out of Kailua, Hawai’i. Ms. Njus has over 15 years of compliance and research administration experience in both higher education and non-profit hospital system settings. Prior to consulting, she was the manager of pre-award research administration at a mid-sized state institution of higher education, where she oversaw a central-office team of pre-award professionals and students in submitting the University’s sponsored funding requests. Her experience also includes the facilitation of contract negotiations with outside entities and University general counsel for all research-related contractual matters. At the university, she developed a pre-award training program and served as the University’s Alternative Facility Security Officer. Ms. Njus has taught contracts courses to first-year law students through the University’s Academic Success Program.
[1] Executive Order 14110, 88 FR 75191 (November 1, 2023).
[2] Id. at 75193.
[3] Guidance on Use of Artificial Intelligence-Based Tools in Practice Before the United States Patent and Trademark Office, 89 FR 25609 (April 11, 2024).
[4] Id. at 25617.
[5] Danielle Braff, Approach with Caution, ABA Journal, June-July 2024, at 12.
[6] See generally Ian Bogost, The First Year of AI College Ends in Ruin, The Atlantic (May 16, 2023), https://www.theatlantic.com/technology/archive/2023/05/chatbot-cheating-college-campuses/674073/; and Ian Bogost, AI Cheating Is Getting Worse, The Atlantic (Aug. 19, 2024), https://www.theatlantic.com/technology/archive/2024/08/another-year-ai-college-cheating/679502/.
[7] See, e.g., Karen Weise & Cade Metz, When A.I. Chatbots Hallucinate, N.Y. Times (May 1, 2023).
[8] Living Guidelines on the Responsible Use of Generative AI in Research, European Commission, https://european-research-area.ec.europa.eu/news/living-guidelines-responsible-use-generative-ai-research-published (last visited Sept. 4, 2024).
[9] Benjamin Weiser, Here’s What Happens When Your Lawyer Uses ChatGPT, N.Y. Times (May 27, 2023), https://www.nytimes.com/2023/05/27/nyregion/avianca-airline-lawsuit-chatgpt.html.
[10] Smita Rajmohan, The Legal Issues to Consider when Adopting AI: Learn how to protect your corporate data and intellectual property, IEEE Spectrum (May 21, 2024), https://spectrum.ieee.org/legal-issues-to-consider-ai.
[11] Id.
[12] Fact Sheet: Biden-Harris Administration Announces Key Actions to Advance Tech Accountability and Protect the Rights of the American Public