Journal article
The Security of Using Large Language Models: A Survey with Emphasis on ChatGPT
IEEE/CAA Journal of Automatica Sinica, Vol. 12(1), pp. 1-26
01/2025
Abstract
ChatGPT is a powerful artificial intelligence (AI) language model that has demonstrated significant improvements in various natural language processing (NLP) tasks. However, like any technology, it presents potential security risks that need to be carefully evaluated and addressed. In this survey, we provide an overview of the current state of research on the security of using ChatGPT, covering aspects of bias, disinformation, ethics, misuse, attacks, and privacy. We review and discuss the literature on these topics and highlight open research questions and future directions. Through this survey, we aim to contribute to the academic discourse on AI security, enriching the understanding of potential risks and mitigations. We anticipate that this survey will be valuable for various stakeholders involved in AI development and usage, including AI researchers, developers, policy makers, and end-users.
Details
- Title
- The Security of Using Large Language Models: A Survey with Emphasis on ChatGPT
- Creators
- Wei Zhou - Swinburne University of Technology; Xiaogang Zhu - University of Adelaide; Qing-Long Han - Swinburne University of Technology; Lin Li - Southern Cross University; Xiao Chen - University of Newcastle Australia; Sheng Wen - Swinburne University of Technology; Yang Xiang - Swinburne University of Technology
- Publication Details
- IEEE/CAA Journal of Automatica Sinica, Vol. 12(1), pp. 1-26
- Publisher
- IEEE
- Identifiers
- 991013285342202368
- Academic Unit
- Faculty of Science and Engineering
- Language
- English
- Resource Type
- Journal article