An ever-present threat to any country’s national security is that of cyberattacks. There are always hackers who want to use technology for malicious purposes, not to mention the long list of adversaries that a country can accumulate over the years. In a national security context, the cybersecurity hazard increases greatly, as does the potential harm, because what is at stake is the sensitive data of millions of citizens, companies, directors, senior officials and members of the government, state papers and more.
Unfortunately, not all governments take this peril as seriously as they should, and the efforts towards creating cyber-defense strategies lack, in most countries, budget, personnel and even real field knowledge. Given this absence of real policies, Artificial Intelligence may well be seen as a good starting point on which to build the walls that keep out any possible threats.
Governments hold the sensitive data of millions of unaware citizens, yet their cyber-defense strategies leave much to be desired. According to an article published by the McKinsey Global Institute, “Many countries have yet to clarify their cyber-defense strategies across all dimensions of cybersecurity or to impose a single governance structure.” That lack of clarity can result in a confused response to crises and inefficient use of limited resources.
According to McKinsey’s article, an efficient national cybersecurity strategy should be centralized and properly designed: “a single organization should have overall responsibility for cybersecurity, bringing operational activity and policy together with clear governance arrangements and a single stream of funding.”
In fact, most advanced countries have begun to introduce AI within their departments in an effort to develop effective strategies against cyberattacks. Applications are being developed, for example, to use AI in crime fighting, surveillance and the military, as well as in politics.
This strategy should account for many varied forms of cyberattack, as not all of them are obvious. A particularly worrisome issue that emerged in recent months is how the electoral processes in both the UK and the USA were potentially impacted by the manipulative use of platforms such as Facebook. It is even believed that such efforts influenced election results, and there are also concerns that large-scale fraud targeting other areas of the state could occur through the use of these kinds of technologies.
If a relatively small and privately owned organization such as Cambridge Analytica could interfere in the general elections of the USA, other countries with better-funded teams could do much more harm. Nonetheless, using privately owned platforms such as Facebook to sway elections is just the tip of the iceberg from a national security perspective.
As we can see, the threats posed by cybersecurity issues are not insignificant. As Inside Big Data reported in late 2018, “there were 5.99 billion recorded malware attacks in the first half of 2018, which doubled the number in 2017 over the same period.”
Governments, and society as a whole, should find ways to reduce these numbers and to ensure that cybersecurity is robust. AI solutions such as machine learning can become critical in the near future, as cyberattacks on national databases have increased in recent years. These solutions can help countries develop a successful backstop against such attacks if properly integrated into a joint national security strategy.
Examples can be found across the globe: the United Kingdom’s National Cyber Security Centre (NCSC) and Estonia’s Cyber Security Council are two of the most advanced models. Both have added AI-based software to their many other countermeasures, and both are widely regarded as leading nationwide strategies.
AI and machine learning solutions are being developed to add extra layers of defense to these networks. The idea is that by feeding massive volumes of data into a system, machine learning can occur and the system can learn how to spot certain types of issues and potential threats. This can lead to alerts being generated that help organisations better protect themselves against attacks. However, this is clearly not a “plug and play” situation: huge amounts of data are needed for the artificial intelligence to do its job in the first place, and that takes time. Equally, threats change over time, as hackers learn new ways to get past security systems.
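To make that idea concrete, here is a minimal, hypothetical sketch of such an approach using an unsupervised anomaly detector from scikit-learn on synthetic data. The feature layout, thresholds and alerting logic are illustrative assumptions, not a description of any particular national system.

```python
# Minimal sketch: fit an unsupervised model on a large volume of "normal"
# network records, then flag unusual new records for analysts to review.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Stand-in for historical telemetry: rows are connections, columns are
# assumed numeric features (e.g. bytes sent, duration, failed logins).
historical_traffic = rng.normal(loc=0.0, scale=1.0, size=(100_000, 5))

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(historical_traffic)

# New observations arriving from monitored networks.
new_traffic = np.vstack([
    rng.normal(0.0, 1.0, size=(10, 5)),   # looks like normal activity
    rng.normal(8.0, 1.0, size=(2, 5)),    # deliberately unusual records
])

# predict() returns -1 for suspected anomalies, 1 for normal-looking records.
labels = detector.predict(new_traffic)
for i, label in enumerate(labels):
    if label == -1:
        print(f"alert: record {i} flagged for review")
```

The point of the sketch is the workflow rather than the specific algorithm: a large baseline of data is needed before the model can say anything useful, which is exactly why this is not “plug and play”.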
This latter threat is somewhat mitigated by the fact that hackers tend to build new attacks on top of the techniques behind previous ones. This means that using machine learning to identify new attacks based on what was learned from previous attacks definitely has merit, at least to some degree. It has the potential to identify threats faster and more efficiently, which brings important benefits in terms of minimising the impact of attacks: pinpointing the start of a problem can lead to remedial or protective action being taken before other parts of a system are affected, or before other organisations are attacked. It is believed that this could also stop IT teams from having to deal with the smaller issues, freeing them up to handle the bigger challenges, such as anticipating potential new types of attack.
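A hedged sketch of that second, supervised angle follows: a classifier is trained on records labeled during previous incidents and then used to score new traffic, on the assumption that variants reuse old attack patterns. The data here is synthetic and the feature layout is invented purely for illustration.

```python
# Minimal sketch: learn from traffic labeled in past incidents, then score
# new traffic so attacks that resemble earlier ones can be flagged early.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Synthetic stand-in for past incident data: benign rows centred at 0,
# malicious rows shifted, labels taken from earlier investigations.
benign = rng.normal(0.0, 1.0, size=(5_000, 8))
malicious = rng.normal(1.5, 1.0, size=(500, 8))
X = np.vstack([benign, malicious])
y = np.array([0] * len(benign) + [1] * len(malicious))

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=1
)

clf = RandomForestClassifier(n_estimators=200, random_state=1)
clf.fit(X_train, y_train)

# Score held-out traffic; in practice, high-probability rows would trigger
# alerts before other parts of a system, or other organisations, are hit.
scores = clf.predict_proba(X_test)[:, 1]
print(f"flagged {np.sum(scores > 0.9)} of {len(scores)} records as high risk")
```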
A challenge that remains to be solved is how to get AI to solve problems in the same ways that humans can. To date, AI has been programmed to solve specific problems and to learn from the past (machine learning). However, finding ways to get machines to think like humans is complex, and it is an issue that has not yet been resolved. While these narrower types of problems can be tackled, the main difficulty lies with artificial general intelligence, and it is thought that we are many years away from solutions that implement this type of AI. That said, new techniques are starting to be developed that help to address these kinds of problems.
For example, deep learning is being applied to help machines understand how different types of decisions were made in areas that affect society and its functioning. These include decisions relating to criminal justice and to whether a person or a business should receive financing. At the same time, a solution called “transfer learning” is proving helpful: a machine is initially trained to deal with a particular activity, and what it has learned is then applied to a different but similar type of activity.
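As a rough illustration of that transfer-learning pattern, the sketch below reuses the feature-extraction layers of a small network imagined as already trained on one detection task, and retrains only a new final layer for a related task. The architecture, feature count and task names are assumptions for the example, not a real deployment.

```python
# Minimal transfer-learning sketch: freeze the shared feature layers of a
# model trained on "task A" and fine-tune a new head for a similar "task B".
import torch
import torch.nn as nn

N_FEATURES = 20  # assumed number of traffic features

# Base model, imagined as already trained on task A (e.g. phishing detection).
base = nn.Sequential(
    nn.Linear(N_FEATURES, 64), nn.ReLU(),
    nn.Linear(64, 32), nn.ReLU(),   # shared feature extractor
    nn.Linear(32, 1),               # task-A head
)

# Freeze the shared layers so their learned representations are kept.
feature_extractor = base[:4]
for p in feature_extractor.parameters():
    p.requires_grad = False

# New head for task B (e.g. a related class of malicious traffic),
# trained on a much smaller labeled dataset.
model_b = nn.Sequential(feature_extractor, nn.Linear(32, 1))
optimizer = torch.optim.Adam(
    (p for p in model_b.parameters() if p.requires_grad), lr=1e-3
)
loss_fn = nn.BCEWithLogitsLoss()

# Synthetic stand-in data for task B (real labels would come from analysts).
X_b = torch.randn(256, N_FEATURES)
y_b = torch.randint(0, 2, (256, 1)).float()

for _ in range(10):                 # brief fine-tuning loop
    optimizer.zero_grad()
    loss = loss_fn(model_b(X_b), y_b)
    loss.backward()
    optimizer.step()
```

The appeal for cybersecurity is that a new but related threat does not require starting from scratch: only the small new head needs training data.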
These types of AI solutions could help address the threats posed by cybersecurity issues. However, AI’s potential is a double-edged sword: what works for organizations and governments also works for those who want to hurt them. Nor does the ongoing race amongst companies help.
The demand for the latest, shiny AI software is overwhelming and, as a result, AI solutions are being developed at a pace that developers can barely keep up with. This means that risks may be taken and corners cut, producing unpolished solutions that may themselves create cybersecurity risks. Because, yes, when we talk about software, there are always cybersecurity risks, and AI is no different at all.
Artificial intelligence (AI) offers tremendous opportunities for the world in general and for the development of national cybersecurity strategies in particular. It has led to a whole array of different solutions for fostering growth in productivity, increasing efficiency and, above all, providing essential tools to streamline processes within governments and their public institutions.
Governments need to make sure that cybersecurity is considered a top priority and also need to ensure that AI is developed responsibly. If such a step is not taken, malicious use, bias or
privacy violations could lead to the public losing any confidence it may have had in such technologies. Finding ways to strengthen security protections will be particularly important in this regard.
Aghiath Chbib – Established executive with close to two decades of proven success driving business development and sales across Europe, the Middle East, and North Africa. Expert knowledge of cybersecurity, lawful interception, digital forensics, blockchain, data protection, data and voice encryption, and data centers. Detail-oriented, diplomatic, highly ethical thought leader and change agent equipped with the ability to close multi-million-dollar projects allowing for rapid market expansion. Business-minded professional adept at cultivating and maintaining strategic relationships with senior government officials, business leaders, and stakeholders. Passionate entrepreneur with an extensive professional network comprised of hundreds of customers with access to major security system integrators and resellers.