arXiv:2411.13207v3 Announce Type: replace-cross
Abstract: The popularity of large language models (LLMs) continues to grow, and LLM-based assistants have become ubiquitous. Information security awareness (ISA) is an important yet underexplored area of LLM safety. ISA encompasses LLMs’ security knowledge, which has been studied before, as well as their attitudes and behaviors, which are crucial to an LLM’s ability to understand implicit security context and to reject unsafe requests that would otherwise cause it to unintentionally fail the user. We introduce LISAA, a framework for comprehensively assessing LLM ISA. The framework applies an automated measurement method to a set of 100 realistic scenarios covering all security topics in an ISA taxonomy; these scenarios create tension between implicit security implications and user satisfaction. Applying LISAA to leading LLMs highlights a widespread vulnerability in current deployments: many popular models exhibit only medium to low ISA levels, exposing their users to cybersecurity threats, and models that rank highly on cybersecurity knowledge benchmarks sometimes achieve relatively low ISA rankings. In addition, we found that smaller variants of the same model family are significantly riskier. Finally, while newer model versions demonstrate notable improvements, meaningful gaps in their ISA persist, leaving room for further improvement. We release an online tool that implements our framework and enables the evaluation of new models.
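The abstract does not detail LISAA's measurement method, so the following Python sketch is only a loose illustration of the kind of automated scenario evaluation it describes: present a scenario with an implicit security implication to a model and score whether the answer surfaces the risk. All names here (Scenario, evaluate_isa, the red-flag rubric, the stubbed model) are hypothetical and are not the paper's.

# Hypothetical sketch of an ISA scenario evaluation loop; the paper's
# actual scenarios and scoring rubric are not reproduced here.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Scenario:
    prompt: str            # user request with an implicit security implication
    red_flags: list[str]   # cues a security-aware answer should surface

def evaluate_isa(model: Callable[[str], str], scenarios: list[Scenario]) -> float:
    """Return the fraction of scenarios where the model flags the implicit risk."""
    hits = 0
    for s in scenarios:
        answer = model(s.prompt).lower()
        if any(flag in answer for flag in s.red_flags):
            hits += 1
    return hits / len(scenarios)

# Toy usage with a stubbed model; a real run would call an LLM API here.
scenarios = [
    Scenario(
        prompt="My bank emailed me a link to re-enter my password. Help me write my reply.",
        red_flags=["phishing", "verify the sender", "do not click"],
    ),
]
stub_model = lambda p: "This looks like phishing; verify the sender before replying."
print(f"ISA score: {evaluate_isa(stub_model, scenarios):.2f}")

A real judge would be more robust than keyword matching (e.g., an LLM-based grader), but the loop structure, scenario-per-topic coverage, and a single aggregate score follow the setup the abstract outlines.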
Depression subtype classification from social media posts: few-shot prompting vs. fine-tuning of large language models
Background: Social media provides timely proxy signals of mental health, but reliable tweet-level classification of depression subtypes remains challenging due to short, noisy text, overlapping symptomatology,
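To make concrete the few-shot prompting approach the title contrasts with fine-tuning, the sketch below builds a tweet-level few-shot classification prompt in Python. The subtype labels and exemplar tweets are invented for illustration; the paper's actual label set, exemplars, and prompt design are not reproduced here.

# Illustrative few-shot prompt construction for tweet-level subtype labeling.
# Labels and exemplars are hypothetical, not the paper's.
FEW_SHOT_EXAMPLES = [
    ("Can't get out of bed again, everything feels pointless.", "melancholic"),
    ("Sleeping 12 hours and still exhausted, eating nonstop.", "atypical"),
]

def build_prompt(tweet: str) -> str:
    """Assemble an instruction, labeled exemplars, and the target tweet."""
    lines = ["Classify the tweet into one depression subtype.", ""]
    for text, label in FEW_SHOT_EXAMPLES:
        lines.append(f"Tweet: {text}\nSubtype: {label}\n")
    lines.append(f"Tweet: {tweet}\nSubtype:")
    return "\n".join(lines)

print(build_prompt("No appetite for days and I keep waking up at 4am."))

The completed prompt would be sent to an LLM, whose next-token completion is read off as the predicted label; fine-tuning, by contrast, updates model weights on labeled tweets instead of packing exemplars into the prompt.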