GenAI has been observed to produce content of questionable quality, and at times outright misinformation. This raises ethical concerns about the responsible use of AI technology and the potential impact of false or misleading information. To address this, students must acquire the skills needed to discern the reliability of information generated by GenAI technologies; otherwise, AI tools may be misused for academic dishonesty, and students may also struggle to differentiate between AI-generated and human-authored text (Chan, 2023).
Students should understand how to assess the accuracy, reliability, and biases of content produced by AI systems. They also need to be aware of the risks and challenges associated with misusing or misinterpreting AI-generated information, which can have significant consequences for individuals, society, and decision-making processes.
Meanwhile, the more data an algorithm is fed, the more accurate and personalized its generated content becomes. However, this also means that personal data is being used, which is itself a cause for concern. Companies often limit access to personal information, yet an AI system can be queried in dozens of different ways, and if such data falls into the wrong hands it could be used for malicious purposes such as identity theft, cyberattacks, and social engineering scams. Therefore, when responsibly inquiring about, or co-working with, generative AI, students also need to consider data privacy and security issues. Students should protect, rather than disclose, personal data during the data input process, and ensure the security of AI applications (Chan & Lee, 2023). Further discussion of privacy, security, and safety in AI for education can be found in the literature, for example Nguyen et al. (2023).
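As one illustration of protecting personal data during the input process, a prompt can be screened for obvious identifiers before it is submitted to a GenAI service. The sketch below is a minimal, assumed example: the regular-expression patterns, placeholder labels, and the `redact` helper are illustrative choices, not an exhaustive PII detector or any tool prescribed by the sources cited above.

```python
import re

# Illustrative patterns for common personal identifiers (assumptions, not
# a complete PII taxonomy): emails, phone-like digit runs, and long IDs.
PATTERNS = {
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "[PHONE]": re.compile(r"\b\+?\d[\d\s-]{7,}\d\b"),
    "[ID]": re.compile(r"\b\d{6,}\b"),  # e.g. student ID numbers
}

def redact(prompt: str) -> str:
    """Replace likely personal identifiers with placeholder tokens
    before the prompt is sent to a generative AI service."""
    for placeholder, pattern in PATTERNS.items():
        prompt = pattern.sub(placeholder, prompt)
    return prompt

prompt = "Summarise feedback for alice@uni.edu, student ID 20230456."
print(redact(prompt))  # → Summarise feedback for [EMAIL], student ID [ID].
```

A real deployment would need far more robust detection (named-entity recognition, locale-specific formats), but even a simple pre-submission filter makes the habit concrete: personal data is masked on the student's side before it ever reaches the AI application.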