As A Student When You Ask AI A Question Can You Believe Its Answer?

Two Australian academics provide advice on how students should engage AI to harness its capacity to help them learn. (Image credit: 334313935 © Ivan Usenko | Dreamstime.com)

How many of us understand what lies ahead as artificial intelligence (AI) and artificial general intelligence (AGI) become predominant? How many of us can claim to be AI savvy? If you ask a chatbot a question, can you recognize whether its response is accurate?

Universities are increasingly finding that students' digital literacy doesn't translate to AI literacy. Students today are digital natives. They were born into an era of smartphones, tablets, laptops and virtual reality devices. They are addicted to small screens and apps, but can they recognize when an AI is telling the truth or hallucinating? The answer to that last question appears to be "No!"

Afrooz Purarjomandlangrudi and Amir Ghapanchi are academics at Victoria University in Australia. In University World News, they have written a guide for students when using AI, providing a dozen literacy skills needed to master core competencies, tool literacy, academic integrity and ethics, and learning and workflow integration.

I confess to using AI tools because they make it easier to do background research before writing the articles I post to this blog. In looking at Purarjomandlangrudi and Ghapanchi’s skill sets, I see much of what I have incorporated into my interactions with AI.

For me, the critical step is to know something about a subject before asking an AI for assistance. For a university student, that could mean going to the library to find books and journal articles. Students may not do this anymore; their preferred second line of background gathering may be a Google or Bing search.

I have also found that not all AIs are equal. Back in February 2024, on this site, I asked three AIs, "Why did Russia invade Ukraine?" The answers revealed differences stemming from each AI being trained on different datasets. An AI's ability to analyze and predict sequences of words comes from exposure to vast quantities of information, much of it scraped from the Internet. Not all human knowledge resides on the Internet. That's why the library should never be overlooked.

Another observation from using AIs in my research is that most of the answers they provide mirror what you ask. When you read that mirrored response, does it accurately reflect your question?

If you are new to using AI, a word of advice: treat any answer to a question as a hypothesis until you can confirm its accuracy using authoritative, verifiable third-party sources.

Ask an AI what you need to do to become AI-savvy, and it recommends:

  • Asking well-informed questions that are concise, clear, specific, and nuanced. Remember the reporter's questioning mantra: begin every question with one of the five Ws plus how: who, what, where, when, why, or how.
  • Recognizing that AI responses need to be tested using criteria to determine accuracy (correctness), relevance (direct response to prompt), completeness (all aspects covered), and clarity (easy to read).
  • Being aware of AI’s ability to read social cues gleaned from interacting with you and feed you answers it expects you want to see. Also, be aware of other forms of bias reflected in interactions with AI. For example, a polished AI response may be an effective way for the technology to hide inaccuracies and hallucinations.
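The four testing criteria above can be sketched as a simple checklist. This is a minimal illustration of the idea, not a real accuracy test: the reader still supplies the yes/no verdicts after checking the response against sources, and the function names are hypothetical.

```python
# Minimal sketch: a manual checklist for evaluating an AI response.
# The four criteria come from the list above; the scoring is illustrative.

CRITERIA = {
    "accuracy": "Are the facts correct and verifiable?",
    "relevance": "Does it directly answer the prompt?",
    "completeness": "Are all aspects of the question covered?",
    "clarity": "Is it easy to read and well organized?",
}

def evaluate_response(judgments: dict[str, bool]) -> tuple[int, list[str]]:
    """Return a score out of 4 and the criteria that failed.

    `judgments` maps each criterion name to the reader's own
    yes/no verdict after checking the response against sources.
    """
    failed = [name for name in CRITERIA if not judgments.get(name, False)]
    return len(CRITERIA) - len(failed), failed

# Example: a response that reads well but contains an unverified claim.
score, failed = evaluate_response(
    {"accuracy": False, "relevance": True, "completeness": True, "clarity": True}
)
print(score, failed)  # → 3 ['accuracy']
```

The point of writing it down this way is that a polished, readable response can score full marks on clarity while still failing on accuracy, which is exactly the bias the bullet list warns about.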

Hallucinations arise from bad training data and bad algorithms. Would you recognize an AI hallucination?

Typical examples include faked facts, fake citations with broken links, unsourced quotes and unverifiable content, erroneous explanations, and misidentified images and objects. Every fact should be backed by two sources, another good reporter's rule of thumb.
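One concrete check from that list, flagging citations so each can be verified by hand, can be sketched in a few lines. The URL and DOI patterns here are deliberately simplified and will not catch every citation format.

```python
import re

# Minimal sketch: pull URL- and DOI-like citations out of an AI response
# so each one can be checked manually against the actual source.
URL_PATTERN = re.compile(r"https?://[^\s)\]>\"']+")
DOI_PATTERN = re.compile(r"\b10\.\d{4,9}/[^\s)\]>\"']+")

def extract_citations(text: str) -> list[str]:
    """Return every URL- or DOI-like string found in the text."""
    matches = URL_PATTERN.findall(text) + DOI_PATTERN.findall(text)
    # Strip trailing sentence punctuation that the greedy patterns pick up.
    return [m.rstrip(".,;") for m in matches]

response = (
    "See Smith (2021), https://example.com/study, "
    "and the review at 10.1234/fake.5678 for details."
)
print(extract_citations(response))
# → ['https://example.com/study', '10.1234/fake.5678']
```

Note that a citation matching a pattern proves nothing about its validity: a hallucinated reference can look perfectly well formed, which is why each extracted item still needs to be opened and read.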

Purarjomandlangrudi and Ghapanchi’s four categories, encompassing 12 AI literacy skills, provide excellent guidance for students attempting to integrate the technology responsibly:

Category 1: Core Technical Competencies

Skill 1: Prompt engineering
Students must learn to craft effective prompts that elicit useful responses from AI systems. This involves providing context, specifying desired formats, and iterating for better results. Compare these two requests:

“Tell me about climate change” versus “Explain the relationship between ocean acidification and climate change, focusing on peer-reviewed research from the last five years, with clear topic sentences for each major point.”
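The difference between the two requests can be made mechanical with a small template that forces a prompt to carry context, evidence requirements, and a desired format. The helper and its field names are hypothetical, not part of any real API.

```python
# Minimal sketch of a structured prompt template. The fields mirror the
# advice above: context, specificity, and desired output format.
def build_prompt(topic: str, focus: str, evidence: str, fmt: str) -> str:
    return (
        f"Explain {topic}, focusing on {focus}. "
        f"Base the answer on {evidence}. "
        f"Format: {fmt}."
    )

prompt = build_prompt(
    topic="the relationship between ocean acidification and climate change",
    focus="mechanisms and measurable impacts",
    evidence="peer-reviewed research from the last five years",
    fmt="clear topic sentences for each major point",
)
print(prompt)
```

Whether or not a student ever scripts their prompts, the template makes the habit explicit: if one of the four fields is empty, the prompt is probably too vague.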

Skill 2: AI fact-checking and source verification
Students need systematic approaches for verifying AI-generated information against reliable sources, treating AI outputs like Wikipedia entries: useful starting points that require independent verification by finding and reading original research, government reports and academic papers.

Skill 3: AI output evaluation and quality assessment
Students must develop judgement skills for assessing the quality, completeness and relevance of AI responses, recognizing when AI responses are superficial and don’t meet academic standards for depth and critical thinking.

Skill 4: Iterative collaboration techniques
Students must learn to build complexity through successive interactions with an AI rather than expecting comprehensive responses from single prompts. That means asking follow-up questions to guide the AI to increasingly sophisticated analyses.

Category 2: AI Systems Knowledge and Tool Literacy

Skill 5: Algorithm awareness and bias recognition
Students need to know how AI systems are trained and the implications of training data limitations. This includes recognizing potential biases that perpetuate harmful stereotypes or outdated information.
For example, when using AI to research historical events, students should recognize that the training data may reflect dominant cultural narratives and overlook marginalized voices.

Skill 6: AI tool selection and comparison
Students must develop skills for evaluating and selecting appropriate AI tools for different academic tasks: understanding the strengths and limitations of various AI models, knowing when to use general-purpose versus specialized AI applications, and making informed decisions about AI services. For instance, students might choose Claude for writing feedback, ChatGPT for research assistance, and specialized tools like Elicit for literature reviews.

Skill 7: Multimodal AI literacy
Students increasingly encounter AI-generated content beyond text, including images, videos and data visualizations. For example, students studying contemporary politics should be able to distinguish AI-generated protest images and synthetic audio clips from real events.

Category 3: Academic Integrity and Ethics

Skill 8: Ethical AI use and attribution
Students must develop frameworks for when and how to acknowledge AI assistance in academic work. For example, students might use AI to brainstorm essay topics and receive draft feedback while ensuring final arguments and analysis represent their own critical thinking.

Skill 9: Data privacy and security awareness
Students must understand the implications of sharing academic work, research data, intellectual property and confidential research with AI platforms. For example, students should not share unpublished findings with public-use AI systems; instead, they should use only institutional tools with appropriate privacy protections.

Skill 10: AI accessibility and inclusion awareness
Students should understand how AI tools can support diverse learning needs while recognizing potential barriers and biases that might disadvantage certain populations. For example, AI writing assistants might benefit non-native English speakers while creating advantages for those with premium AI access, requiring thoughtful consideration of fairness.

Category 4: Learning and Workflow Integration

Skill 11: AI-human workflow integration
Students must learn to design academic workflows that recognize where AI excels (brainstorming, initial drafts) versus areas requiring human judgment (critical analysis, original argumentation). For example, students might use AI for initial literature searches and organizing draft content, but rely on human input to analyze sources and develop original arguments.

Skill 12: Meta-learning with AI
Students should learn to use AI as a tool for understanding and improving their own learning processes. For example, students struggling with statistical concepts might use AI to generate practice problems with solutions and receive feedback on problem-solving approaches, while ensuring genuine understanding of the subject rather than dependency on the AI.

A recent issue of The Economist looks at AI superintelligence, describing it as producing “unprecedented upheaval.” A sentient AI that accelerates the progress of education, rather than dumbing down students who depend on the technology as a crutch, would be a welcome addition to the arsenal of academic learning tools.