MIT published its list of cyberthreats back in January.
- More huge data breaches
- Ransomware in the cloud
- The weaponization of AI
- Cyber-physical attacks
- Mining cryptocurrencies
- Hacking elections
Out of these, hacking elections is probably less interesting in the commercial space, but “the weaponization of AI” is a worrying trend.
MIT reported back in 2016 on the use of AI techniques to better distinguish real attacks from poorly worded emails and other content that triggers false positives. An overly sensitive security system can breed resentment among users by impeding their ability to conduct business. (Local governments and citizens in England were in a fix when first-generation filters decided that anything to do with the Essex, Sussex, or Wessex county councils should be blocked as pornography, purely on a substring match of ‘sex’!)
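The council anecdote can be sketched in a few lines. This is a minimal illustration, not any real filter's implementation: the blocklist and function names are hypothetical, and it contrasts a naive substring match with a word-boundary match that avoids flagging innocent place names.

```python
import re

# Hypothetical blocklist for illustration only.
BLOCKED_WORDS = ["sex"]

def naive_filter(text: str) -> bool:
    """First-generation approach: flag any substring match at all."""
    lowered = text.lower()
    return any(word in lowered for word in BLOCKED_WORDS)

def word_boundary_filter(text: str) -> bool:
    """Flag only whole-word matches, using regex word boundaries."""
    lowered = text.lower()
    return any(
        re.search(rf"\b{re.escape(word)}\b", lowered)
        for word in BLOCKED_WORDS
    )

# The naive filter blocks an innocent council name;
# the word-boundary version does not.
print(naive_filter("Essex County Council"))          # True
print(word_boundary_filter("Essex County Council"))  # False
```

Even the word-boundary version is crude; the point of the AI techniques mentioned above is to go beyond such pattern matching and judge content in context.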
In the ongoing arms race between the bad guys and system administrators, this is a timely reminder of the human dimension of cybersecurity. You need to educate people in your organization about safe behaviors online, teach them how to spot potentially hazardous items (clickbait, spear phishing, and suspicious attachments), and show them how to deal with those items.
Dealing with these items should not happen only at the individual level. Do you have a reporting mechanism people can use to alert your security operations team to potential threats? Spotting an emerging pattern of attempted attacks is a valuable early warning and an opportunity to take preemptive or corrective action. This becomes even more important as AI-crafted messages look ever more like legitimate content, making it much more likely that automated systems will not spot or react to them the first time around.