This paper introduces and validates two measures (AI Dread and AI Controllability Concern) to explain public perceptions of AI risks. Using data from Canada and Japan, the authors find that trust in scientists, conspiracy thinking, and job impact concerns are significant predictors of AI attitudes across both contexts.
Artificial intelligence has spurred important innovation, affecting politics, the economy, and society in unpredictable ways. How then do citizens perceive AI and its risks? We propose that perceived dread and controllability concerns are central to understanding public opinion about AI and its associated risks. In this paper, we introduce a theoretical framework outlining these dimensions and validate novel measures -- the AI Dread and AI Controllability Concern Measures -- using data from two distinct cases (Canada and Japan). Findings reveal a multidimensional structure of AI attitudes, with trust in scientists, conspiracy thinking, and job impact concerns being key cross-national predictors. We encourage researchers to adopt these two measures in their work on AI.