Alabama Colleges Grapple with AI Policies & Classroom Integration
Across Alabama, colleges and universities are grappling with how generative artificial intelligence is reshaping academic integrity and teaching practices. Calhoun Community College and Athens State University are among the institutions refining their policies on student AI use as the models continue to evolve and attract widespread attention.
The challenge lies in defining where academic dishonesty begins in an era when AI tools can produce sophisticated prose and working code. Some uses are clearly off-limits at most colleges, but elsewhere the line remains blurry. At Calhoun Community College, current guidelines sort AI use into three categories: restricted, limited, and integrated. The college’s code of conduct, for example, explicitly prohibits using AI to automatically generate written assignments, while other departments, such as Computer Information Systems (CIS), actively encourage AI integration, reflecting a nuanced approach to the technology.
Jeremy Blevins, chair of Calhoun’s CIS department, points to a split in how departments view the technology. While some focus primarily on preventing AI-driven plagiarism, CIS prioritizes preparing students for a workforce in which using AI effectively will be expected. With North Alabama’s cybersecurity sector growing, the department aims to give students hands-on experience with the technology while instilling clear ethical guidelines for its use. Blevins stresses that curricula should track industry demands so students graduate with directly applicable skills rather than only broad theory. He also acknowledges the difficulty of keeping academic programs current with fast-moving commercial technology, balancing instruction in current trends against the temptation to overhaul the curriculum for every new AI development.
In practical application, CIS classes at Calhoun are already integrating AI in innovative ways. A network security course, for example, utilizes AI to generate various code examples, allowing students to identify vulnerabilities and understand “bad code” even if they are not yet proficient in multiple programming languages. Another assignment involves students crafting resumes and cover letters for job postings, then using AI to enhance these documents. This exercise helps them understand how Applicant Tracking Systems (ATS)—which often employ AI-assisted filtering—evaluate submissions, teaching students to optimize their applications for modern recruitment processes. Beyond formal assignments, Blevins notes that students can ethically leverage AI for personal learning, such as asking models to explain complex concepts or create study quizzes.
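By way of illustration, the following is a minimal, hypothetical sketch of the kind of intentionally flawed snippet an AI model might generate for such a review exercise. The code, the find_user function names, and the SQL injection flaw are illustrative assumptions, not material from Calhoun’s coursework.

```python
# Hypothetical "bad code" of the sort an AI model might produce for students to critique.
# The flaw: user input is concatenated straight into the SQL statement,
# which leaves the query open to SQL injection.
import sqlite3

def find_user(conn: sqlite3.Connection, username: str):
    cursor = conn.cursor()
    # Vulnerable: a username like "x' OR '1'='1" would return every row.
    query = "SELECT id, email FROM users WHERE name = '" + username + "'"
    cursor.execute(query)
    return cursor.fetchall()

def find_user_safely(conn: sqlite3.Connection, username: str):
    cursor = conn.cursor()
    # The fix students would be expected to propose: a parameterized query lets
    # the driver handle escaping, so input can never alter the query's structure.
    cursor.execute("SELECT id, email FROM users WHERE name = ?", (username,))
    return cursor.fetchall()
```

In a classroom setting, students who have never written SQL can still be asked to explain why the first version is dangerous and how the second closes the hole, which mirrors the goal Blevins describes: recognizing bad code without being fluent in every language involved.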
The integration of AI comes with significant caveats, however. Blevins cautions students against accepting AI-generated information uncritically, invoking the principle of “garbage in, garbage out” (GIGO): because AI models are trained on human-produced content, they can perpetuate its biases and errors. Models are also prone to “hallucinating,” producing responses that are factually incorrect or illogical. He stresses the importance of verification, advising students to “trust but verify” by checking sources, a phrase he credits to Ronald Reagan.
Instructors, too, are adapting. Blevins and his colleagues have learned to identify tell-tale signs of AI-generated work, particularly in written assignments. Often, the grammar is “too good,” the language overly precise, or the technical terminology beyond what a community college student would typically employ. Similarly, submissions that significantly exceed a student’s demonstrated skill level are often flagged as potentially AI-assisted. To address these challenges and deepen understanding of ethical AI use, Calhoun instructors have participated in specialized training at Auburn University’s Biggio Center. The Alabama Community College System (ACCS) confirms that its member institutions are developing AI policies tailored to their specific student populations and industry needs, with further professional development sessions planned to support faculty in navigating this evolving technological landscape.
Independent AI consultant Randy Sparkman offers practical advice for schools navigating policy development. He suggests that institutions need not start from scratch, but rather adapt existing computer use policies. He also advocates for a collaborative, community-driven approach, recommending that schools establish committees of interested faculty and staff to collectively determine sensible AI guidelines. Ultimately, Sparkman underscores the importance of cultivating AI literacy among all stakeholders, from educators to students, to ensure a responsible and effective integration of this powerful technology.