I am concerned about the risk of malicious use of AI systems, whether through exploitation of vulnerabilities or misuse of the systems' capabilities to the detriment of humanity. While AI poses other risks, I think the risk of malicious use is more immediate: malicious actors can exploit these systems easily, especially in the current early stage where model security practices and regulations have not yet been established.
There seems to be little emphasis on the fact that AI systems are software, and as with all software, they can be hacked and unsuspecting users can be exploited. User education on the vulnerabilities of AI systems, and on the precautions users should take to avoid exposing their data, seems to be lacking. The attack surface is growing as AI systems are adopted into real-world applications, and there is a need to build user awareness similar to cybersecurity awareness.
I reckon existential risk and other future risks from AI systems get more attention because of their potential impact. However, current LLMs don't really offer any information that a dedicated person could not find online to orchestrate the theorized catastrophes.
I may be biased towards current risks and problems, as the need to solve the problems causing human suffering today is glaring at me every day. I could update these views as systems become more agentic and as our understanding of the inner workings of neural networks improves.
You might be interested in the information security sphere, which some in the EA community focus on, especially in the context of safe AI. This 80,000 Hours podcast from 2022 is a good overview: https://80000hours.org/podcast/episodes/nova-dassarma-information-security-and-ai-systems/