Taeksoo Kwon and Connor Kim
Adv. Artif. Intell. Mach. Learn., 4 (4):3125-3134
Taeksoo Kwon : University of California, Irvine
Connor Kim : Brigham Young University
DOI: https://dx.doi.org/10.54364/AAIML.2024.44179
Article History: Received on: 11-Sep-24, Accepted on: 21-Dec-24, Published on: 28-Dec-24
Corresponding Author: Taeksoo Kwon
Email: henryk@algorix.io
Citation: Taeksoo Kwon, Connor Hunjoon Kim. (2024). Efficacy of Utilizing Large Language Models to Detect Public Threat Posted Online. Adv. Artif. Intell. Mach. Learn., 4 (4):3125-3134
This paper examines the efficacy of utilizing large language models (LLMs) to detect public threats posted online. Amid rising concern over the spread of threatening rhetoric and advance notices of violence, automated content analysis techniques may aid in early identification and moderation. Custom data collection tools were developed to gather post titles from a popular Korean online community, yielding a dataset of 500 non-threat examples and 20 threats. Several LLMs (GPT-3.5, GPT-4, PaLM 2) were prompted to classify each post as either "threat" or "safe." Results indicate promising performance, with GPT-4 achieving the highest F1 score of 0.960, followed by PaLM 2 (0.934) and GPT-3.5 (0.726). All models demonstrated high recall for threat detection, while precision varied. This study highlights the potential of LLMs for automating threat detection in online communities, particularly in non-English contexts. However, it also underscores the need for careful model selection, prompt engineering, and consideration of cost-effectiveness in real-world applications. Future research directions include improving multilingual capabilities and refining prompts for more reliable threat detection.
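To make the classification setup concrete, the sketch below illustrates how a single post title might be sent to a model and labeled "threat" or "safe," with F1 computed against human labels. This is an illustrative sketch only, not the authors' actual pipeline: the prompt wording, the classify_title and evaluate helpers, and the use of the OpenAI Python SDK and scikit-learn's f1_score are assumptions introduced here for clarity.

```python
# Illustrative sketch; the paper's real prompts and collection tools are not reproduced.
# Assumes the OpenAI Python SDK (>=1.0) and scikit-learn are installed, and that post
# titles have already been collected into (title, gold_label) pairs.
from openai import OpenAI
from sklearn.metrics import f1_score

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def classify_title(title: str, model: str = "gpt-4") -> str:
    """Ask the model to label a single post title as 'threat' or 'safe'."""
    response = client.chat.completions.create(
        model=model,
        temperature=0,
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a content moderator. Reply with exactly one word: "
                    "'threat' if the post announces or threatens violence, otherwise 'safe'."
                ),
            },
            {"role": "user", "content": title},
        ],
    )
    answer = response.choices[0].message.content.strip().lower()
    return "threat" if "threat" in answer else "safe"


def evaluate(dataset: list[tuple[str, str]], model: str = "gpt-4") -> float:
    """Compute F1 for the 'threat' class over a labeled list of (title, label) pairs."""
    gold = [label for _, label in dataset]
    pred = [classify_title(title, model) for title, _ in dataset]
    return f1_score(gold, pred, pos_label="threat")
```

In practice, a study like this would run the same evaluation loop once per model (GPT-3.5, GPT-4, PaLM 2) over the full set of 520 labeled titles and compare the resulting precision, recall, and F1 scores.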