
Artificial intelligence (AI) has shifted from being a distant idea to something many of us interact with every day. While conversations about AI often focus on major institutional investments, there are many practical ways to use the tools we already have to make our work easier. Most of us rely on AI without thinking about it when we use voice assistants, fitness trackers, targeted ads, or curated playlists. The real opportunity now is to look at how these familiar technologies can support our workflows and help us work more efficiently in research administration.
Understanding AI in Context
For research administrators, AI is most useful when we stop thinking of it as a technical concept and view it as a set of tools that can simplify everyday work. Rather than focusing on algorithms, it helps to look at how AI can support tasks we already manage: interpreting dense policy guidance, coordinating with faculty, and navigating compliance requirements.
AI shines in areas where the work is repetitive, data-heavy, or requires quick synthesis. It can scan a solicitation and identify deadlines or key requirements, summarize a long email thread, help refine a justification, or draft an SOP. These tasks still require human judgment, but AI can significantly reduce the time required to reach a strong first draft.
AI can also improve consistency across teams. Research administration requires balancing sponsor rules, institutional policies, and evolving regulations, which means that even well-established processes can vary from person to person. AI-generated templates, checklists, and summaries help standardize routine activities and provide staff with a reliable starting point, especially for those new to the field.
Beyond operational tasks, AI can function as a thought partner. When administrators need to reframe guidance for different audiences, build training materials, or brainstorm improvements to a workflow, AI can offer alternative perspectives and help spark creative problem-solving. It does not replace expertise, but rather supports it by removing some of the early, time-consuming steps.
By treating AI as a practical tool rather than a complex technology, research administrators can more easily recognize where it fits into day-to-day responsibilities and how it can elevate both efficiency and the quality of work.
Using AI Responsibly
With a new opportunity comes responsibility. Research administrators regularly handle confidential proposal materials, unpublished research data, personally identifiable information, and sensitive institutional processes. When incorporating AI into daily work, avoid entering protected, proprietary, or confidential information into public tools. AI should not be used to process sensitive research topics or content that could compromise compliance obligations.
Many institutions are beginning to roll out approved AI tools that offer built-in safeguards around data protection, intellectual property, and compliance. These options give research administrators a safer way to explore AI without introducing unnecessary risk. Tools like Microsoft Copilot can provide data protection and ensure that generated content remains the property of the institution while still delivering the capabilities people expect from widely available generative AI platforms.
As institutional policies continue to evolve, research administrators should stay informed about what tools are approved and what guardrails are in place. Here are a few helpful questions to keep in mind:
- Is the data I am using appropriate for an AI environment?
- How transparent is the tool’s process?
- Who is accountable if the AI-generated content contains an error?
- What institutional policies govern the use of generative AI?
Recognizing AI’s Limitations
Even advanced AI tools have well-known limitations. They may produce confident but incorrect statements, misinterpret negative or exception-based language, or generate inconsistent results due to inherent randomness. AI may also reflect bias present in its training data. These limitations reinforce the importance of human review and sound judgment, especially in a field like research administration, where accuracy and compliance are essential.
Getting Started: Practical Tips
For those beginning to explore AI in their workflows, it helps to start small. Summarizing text, rephrasing emails, drafting routine documents, or turning complex policy guidance into accessible language are all good entry points. Clear prompting leads to stronger results. Effective techniques include:
- Providing context instead of offering vague requests. Example: “Draft a procedure for prior approvals for federal grants that includes roles, timelines, and documentation requirements.”
- Using role-based or scenario-specific instructions. Example: “You are a research administrator. Write a follow-up email to a PI reminding them about their late effort certification.”
- Breaking complex tasks into steps. Example: Step 1: “Summarize the NIH policy on prior approval for foreign components.” Step 2: “Draft language for a PI training handout based on that summary.”
- Requesting examples or specific output formats. Example: “Provide three examples of subrecipient monitoring activities in bulleted format.”
These approaches help guide AI tools toward outputs that are accurate, relevant, and ready for refinement.
Moving Forward
AI will not replace research administrators, but it will increasingly enhance their ability to work efficiently, accurately, and strategically. By understanding AI’s capabilities, limitations, and ethical considerations—and by using approved tools thoughtfully—research administrators can streamline their workloads and focus more fully on the human-centered, decision-driven aspects of the profession.
About the Author

Kathleen Halley-Octa is a Manager within the Attain Research practice of Attain Partners. Kathleen is an experienced research administrator specializing in pre-award administration, training development, and eRA system implementation. Prior to joining Attain, she served as Director of the Office of Research & Sponsored Projects for the College of Education & Human Development at Georgia State University, where she oversaw both pre- and post-award operations for the college’s research portfolio. While at GSU, she co-founded the Access to Careers in Research Administration Program, a cohort-based training program for graduate students preparing for careers in research administration. Previously, she worked as a departmental administrator in the George W. Woodruff School of Mechanical Engineering at the Georgia Institute of Technology and later ran Georgia Tech’s Research Admin Training Program. She has been an active member of NCURA since 2011 and has presented at both regional and national conferences.