Background: The digital health literacy instrument (DHLI) was developed in 2017 to measure individuals’ ability to access, understand, evaluate, and apply online health information. Since then, digital health has shifted from desktop-based internet use to mobile devices, and the range of health apps has expanded rapidly. Additionally, heightened privacy and data security requirements have increased the complexity of the user competencies needed to engage with digital health tools. These developments underscore the need to update the original DHLI. Objective: This study aimed to create an updated version of the DHLI (DHLI 2.0) that reflects current digital health practices and to examine its reliability and validity by exploring associations with user characteristics. Additionally, we aimed to develop a short-form version to facilitate broader use in research and practice. Methods: The instrument was iteratively updated and pilot-tested to retain the original theoretical framework while reflecting current digital health practices, devices, and emerging challenges such as mobile use and data security. Several items were reworded, and a new 2-item subscale on digital safety was added. The full DHLI 2.0 comprises 24 items across 8 skill domains. A 16-item short form was developed by iteratively removing 1 or 2 items per subscale based on the “α if item deleted” criterion, while retaining the same subscale structure as the full form. Data to validate the new version of the instrument were collected in June 2024 through an online survey among members of a representative citizen panel in Friesland, a province in the Netherlands (N=2728). Sociodemographics, internet and health-related internet use, general health literacy (measured with the Single Item Literacy Screener), self-reported health, and health care use were assessed.
Internal consistency was evaluated using Cronbach α, and construct validity was assessed via Spearman ρ correlations with related constructs. Results: Internal consistency was high for both the full (α=0.94) and short-form (α=0.90) scales. Most subscales showed satisfactory to excellent reliability (α=0.71–0.93), while “Securing privacy” and “Using security measures” demonstrated moderate reliability (α=0.65–0.66). The DHLI 2.0 total scores were approximately normally distributed (skewness −0.5; kurtosis 0.4). As expected, digital health literacy was negatively correlated with age (ρ=−0.39; P<.001) and positively correlated with education (ρ=0.22; P<.001), income (ρ=0.27; P<.001), and time spent online (ρ=0.32; P<.001); it was negatively correlated with Single Item Literacy Screener scores (ρ=−0.42; P<.001), on which higher scores indicate lower general health literacy. Conclusions: The DHLI 2.0 provides an updated, reliable, and valid measure of digital health literacy covering 8 key domains, including data security. The 16-item short form offers a concise alternative suitable for research and possibly practical applications in health and eHealth contexts.
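The short-form construction above relies on the standard “α if item deleted” criterion: at each step, drop the item whose removal most improves Cronbach α. As a minimal illustrative sketch (synthetic data and function names are hypothetical, not the authors’ code), the procedure looks like this:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)          # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)      # variance of the sum score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def drop_by_alpha_if_deleted(items: np.ndarray, n_drop: int) -> list[int]:
    """Iteratively remove the item whose deletion yields the highest
    'alpha if item deleted'; returns the column indices that are kept."""
    kept = list(range(items.shape[1]))
    for _ in range(n_drop):
        # alpha of the remaining scale after deleting each candidate item
        alphas = [cronbach_alpha(items[:, [c for c in kept if c != j]])
                  for j in kept]
        kept.pop(int(np.argmax(alphas)))
    return kept

# Hypothetical example: 3 items driven by one latent trait plus 1 noise item.
rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 1))
responses = np.hstack([latent + 0.3 * rng.normal(size=(200, 3)),
                       rng.normal(size=(200, 1))])
print(drop_by_alpha_if_deleted(responses, n_drop=1))  # noise item is dropped
```

In practice, the abstract notes the authors applied this per subscale (removing 1 or 2 items each) rather than over the full 24-item pool, so the subscale structure of the full form is preserved.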
Assessing nurses’ attitudes toward artificial intelligence in Kazakhstan: psychometric validation of a nine-item scale
Background: Artificial intelligence (AI) is increasingly integrated into healthcare, yet the attitudes and knowledge of nurses, who are the key mediators of AI implementation, remain underexplored.


