11
Cyberterrorism in the Cloud: Through a Glass Darkly

Barry Cartwright1, George R. S. Weir2, and Richard Frank1

1Simon Fraser University, Burnaby, BC, Canada

2University of Strathclyde, Glasgow, UK

11.1 Introduction

A 2002 article in the Washington Post carried the headline: “Cyber‐Attacks by Al Qaeda Feared; Terrorists at Threshold of Using Internet as Tool of Bloodshed, Experts Say” (Gellman 2002). In the same year, a Global Information Assurance Certification Paper appeared, bearing the title “Ghosts in the Machine: The Who, Why, and How of Attacks on Information Security” (Barker 2002). Indeed, from the late 1990s onward – shortly after the emergence of the World Wide Web – governments, law enforcement agencies, and nongovernmental observers alike were sounding early alarm bells regarding the prospects of information security breaches and terrorist attacks in cyberspace (Luiijf 2014). As Thomas (2003) observed in his article about Al Qaeda's love for the internet, “people are afraid of things that are invisible and things that they don't understand.”

Much the same can be said for the oft‐stated concerns about vulnerabilities in the Cloud. If you ask the average person on the street (or even the average first‐year university student) to explain the origin of the term cloud computing, many will respond by pointing upward and saying that “it's in the Cloud,” or by saying that it involves satellite technology (and therefore, that “it's in the clouds”), or by opining that it's called cloud computing because of Apple's iCloud. Few, if any, will know that it is called cloud computing because computer engineers have for decades drawn a picture of a cloud in the center of their flow charts and diagrams, with the cloud representing the internet, surrounded by the servers, databases, corporate local area networks (LANs), wireless networks, mobile devices, and other digital devices too numerous to mention, all of which are connected to the internet, or the Cloud (Weinberger 2015). But while users (or observers) of cloud services might feel that it is all very mysterious, and might worry about unknown vulnerabilities in the Cloud, it could be said that the Cloud has so far proven to be less vulnerable to attack than in‐house information technology (IT) services, perhaps because it involves comparatively new and sophisticated technology (Beazer 2016). Moreover, because of the financial resources required to implement and operate a cloud host, such platforms tend to be well defended.

The precise origins of the term cloud computing are a bit up in the air, with some claiming that it first appeared in a New York Times article in 2001, wherein the internet was described as a “cloud of computers”; others claiming that it originated at a 1996 meeting at Compaq in Houston; and still others pointing out that telecom engineers were putting a cloud in the center of their diagrams as far back as the 1970s and 1980s. However, the term cloud computing did not truly enter the common lexicon until a 2006 conference, at which Eric Schmidt of Google stated that Google's services belonged “in a cloud somewhere” (Fogarty 2012). Nowadays, despite not thinking about it or understanding exactly how cloud technology works, billions of people make daily use of cloud‐based services such as YouTube, Facebook, and Flickr (Gayathri et al. 2012). That said, if cyberterrorism and cloud computing are mentioned in the same sentence, it is likely to evoke a fear‐based response.

In this chapter, we will explore the degree to which cyberterrorism and cloud computing are interrelated (or not, as the case may be). To accomplish this, we will first consider whether there have been any meaningful incidents of cyberterrorism to date, and offer a definition against which future incidents purporting to be representative of cyberterrorism can be measured. We will then consider how terrorists make use of cyberspace, and ask whether there is anything that renders cloud‐based services more vulnerable or more amenable to terrorist activities in cyberspace. We will also explore the nexus between cyberlaw and cyberterrorism, paying attention to jurisdictional issues and the problems that invariably crop up when dealing with politically charged, transnational events. Finally, we will consider future directions that cyberterrorism might take, and whether the Cloud might be a facilitator or a target of such attacks.

11.2 What Is Terrorism?

According to the Oxford English Dictionary, the term terrorism (terrorisme) first appeared in 1795, when it was used by Thomas Paine to describe the “reign of terror” carried out by the political leaders of France during the French Revolution. Since then, it has been applied to a vast range of seemingly unconnected scenarios, from being used by G. J. Adler in 1854 to describe the manner in which the “terrorism of a narrow‐minded clique” of academics in New York contributed to the subjugation of university students, to the “social terrorism” committed by trade unions in the United Kingdom in 1863, to being used by an American newspaper in 1935 to describe the 28 “terrorism suspects” who were arrested in connection with a coal miners' strike in the United States (www.oed.com).

It is only since the late 1960s that the term terrorism has been widely used to describe politically motivated attacks by disenfranchised or disenchanted fringe groups, who deliberately set out to inflict maximal physical destruction and/or casualties on a civilian population, ostensibly for the purpose of creating terror in the general population, but with the underlying intention of sending a powerful message to their political leaders. The concept of terrorism as we know it today sprang into the public imagination as the world watched in July 1968, when “Palestinian terrorists” hijacked El Al flight 426 en route from Rome to Tel Aviv and diverted it to Algiers with 10 crew members and 38 passengers on board (Jenkins and Johnson 1975), and again in September 1972, when eight “Arab terrorists” kidnapped the members of the Israeli team at the Olympics in Munich, leading to the death of 11 Israelis and four of the hostage‐takers (Binder 1972). The 9/11 attack on the World Trade Center in New York, orchestrated by Al Qaeda, serves as a more recent example of this type of terrorism.

Sometimes, terrorist activities are orchestrated by political leaders, in an effort to instill terror in their own civilian population for the purpose of maintaining or restoring social order, and ultimately, to protect the power and privilege of the leaders. To illustrate, we need look no further than Bashar al‐Assad and his bombings of and chemical attacks on the Syrian people. Al‐Assad insists that the Syrian regime and its Russian allies are working together on the front lines, busily fighting ISIS “terrorists” (TASS 2017), and has even invited the US to join his “fight against terrorism” (Solomon 2017). At the same time, al‐Assad has claimed that “the West, mainly the United States, is hand‐in‐glove with the terrorists” (Knox and Hodge 2017). On the other hand, British Foreign Secretary Boris Johnson has accused al‐Assad of being an “arch terrorist” following the chemical attacks in the province of Idlib (Chaplain 2017), while a Spanish court is presently investigating “Syrian ‘state terrorism’ by the Assad regime” in connection with the kidnapping, torture, and death of a truck driver whose sister lives in Spain (Jones 2017).

Thus, we can say that terms such as terrorism and terrorist tend to be applied in a highly subjective fashion. Terrorists are typically portrayed as the very personification of evil, while those who designate others as terrorists are portrayed as the rightfully elected upholders of the law – as the defenders of life, liberty, and freedom. Wherever possible, leaders of the “civilized world” refer to insurgents as “unlawful combatants,” thereby conferring legitimacy onto their own actions, while denying legitimacy to those who are fighting back against superior and in many cases overwhelming military force (van Baarda 2009). As Howard Becker (1963) pointed out in his renowned work on labelling theory, deviance is not so much the quality of the act itself, but rather, the consequence of rules and sanctions being applied to the offender by rule makers and rule enforcers. To extrapolate from this, the likelihood of an act being defined as terrorism – or a “freedom fighter” being labeled successfully as an “unlawful combatant” – depends very much on who commits the act, who believes that they are being harmed, and who has the power to impose (or to deflect) the label.

To look at it from a different angle, one person's terrorist might well be regarded as another person's freedom fighter. Consider for the moment that the United States and its allies are wont to characterize Al Qaeda and its various subsidiaries as terrorists. On the other hand, Al Qaeda and its various subsidiaries are just as wont to characterize the United States and its allies as terrorists. Indeed, in a 1998 interview with ABC‐TV, Osama bin Laden stated, “the worst terrorists are the Americans” (National Commission on Terrorist Attacks upon the United States 2004). To follow this line of thinking, the inhabitants of Afghanistan and Iraq might legitimately ask, “Who is invading whose territories by land, sea and air? Who is bombing and killing whose civilians in the greatest number?” These are uncomfortable questions, often glossed over or ignored in the discourse on terrorism (Jarvis et al. 2016).

Arguably, cyberterrorism – to the degree that it exists – is simply a contemporary manifestation of the sort of asymmetrical warfare employed and enjoyed by weaker forces throughout history when confronted with seemingly overwhelming military might. Often, insurgents or guerrillas are accused by the superior forces of not fighting fairly, or of not playing by the agreed‐upon rules of military conflict (Svete 2009). History is replete with examples of asymmetrical warfare, including hijackings, suicide bombings, and improvised explosive devices, not to mention guerrilla attacks that appear suddenly from the mountains, forests, or jungles, and then disappear just as suddenly when the superior forces get their boots on the ground (Sexton 2011). American forces learned about the effectiveness of this type of asymmetrical warfare during the Vietnam War, much to their chagrin. So did English forces when confronting William Wallace and his much smaller and more lightly armed group of Scottish rebels, back in the thirteenth century. While Wallace was much reviled by the English and, once captured, was put to a gruesome death by his English captors, he became a national hero of Scotland (Stevens 2013) and the subject of the internationally renowned film, Braveheart. Terrorism and asymmetrical warfare are not necessarily part and parcel of each other, but terrorism is often employed as a tactic by the weaker forces (Heickerö 2014).

This is not to suggest that the notion of terrorism should be dismissed out of hand or treated lightly. In truth, innocent civilians are maimed or killed all too frequently in terrorist attacks, usually while going about their routine daily activities, such as walking or commuting to and from work, attending sporting events, going out for dinner and drinks, or simply engaging in some leisurely sightseeing. Just as the general population has been impacted by technology, specifically the rise of computers, the internet, and telecommunications, so too have terrorists. They are able to take advantage of the anonymity, speed, and safety of the internet to carry out their activities. However, we should stop and ask ourselves how many cyberterrorist attacks have been perpetrated to date, who the perpetrators of the main attacks have been, and whether these attacks (if any) have resulted in the killing of innocent civilians or in significant damage to physical infrastructure.

11.3 Defining Cyberterrorism

The term cyber terrorism was first coined in 1982 by Barry Collin, a research fellow at the Institute for Security and Intelligence in the United States, who at that time simply defined it as “the convergence of cybernetics and terrorism” (Awan 2014; Luiijf 2014). To date, however, no universally agreed‐upon definition of cyberterrorism has emerged (Archer 2014). When it comes down to it, there is not even agreement among the experts as to whether it should be written as cyber terrorism (two words), cyberterrorism (one word), or cyber‐terrorism (hyphenated).

One of the challenges in arriving at a precise definition of cyberterrorism is that incidents offered to elucidate the concept are often intertwined with elements of cyberwarfare and cyberespionage. The much‐referenced case of the 2009 Stuxnet worm attack on Iran's nuclear enrichment facilities, for example, has sometimes been mistaken for (or misconstrued as) an incident of cyberterrorism, or if not, then presented as a dire warning about the direction in which cyberterrorism could be heading (cf. Awan 2014; Helms et al. 2012).

With Stuxnet, it could be argued that the three ingredients of cyberterrorism, cyberwarfare, and cyberespionage were all present to one degree or another. The Stuxnet attack was purportedly carried out by Israel, possibly with assistance from the United States, although neither country has ever acknowledged responsibility. If this attribution is correct, however, then the incident would more accurately be classified as an act of cyberwarfare, committed by one or more countries against another country and aimed at reducing the targeted country's ability to wage war. The Stuxnet worm was carried into the air‐gapped nuclear enrichment facility at Natanz on infected Universal Serial Bus (USB) jump drives carried by unsuspecting engineers. Stuxnet targeted the programmable logic controllers (PLCs) that ran the centrifuges at the facility, causing them to spin outside their safe operating speeds and, in many cases, destroying them (Kenney 2015; Wattanajantra 2012). The prelude to the attack, namely the infiltration of the enemy's military infrastructure and the collection of enough secret, insider information to plan the strike, would actually be more consistent with cyberespionage than cyberterrorism (Rid 2011). Nevertheless, Stuxnet has often been mobilized as an example of cyberterrorism, despite the lack of evidence that the political leaders, nuclear engineers, or Iranian populace experienced any significant degree of fear, panic, or terror as a consequence, or suffered any casualties. Apart from that, it might be politically inexpedient to suggest that the governments of the United States and Israel would be willing to engage in terrorist activities (or unprovoked cyberwarfare) against other countries.

The absence of physical casualties invariably presents a challenge when it comes to defining cyberterrorism and enumerating the dangers that it supposedly presents. Those who warn against the dire consequences of cyberterrorism are hard pressed to come up with concrete examples of incidents where lives have been threatened or lost, or populations have truly been terrorized. The question also remains as to whether the computer has to be the deadly weapon, or whether it is sufficient for the computer to be a “facilitator” (Awan 2012). That said, some of the examples discussed later in this chapter – e.g. the 2015 cyberattack on France's TV5Monde and the 2015 cyberattack on the Ukrainian power grid – do come considerably closer than Stuxnet to approximating what a future cyberterrorist attack might look like.

For the moment, we can say that for an act to qualify as cyberterrorism, it should be politically, religiously, or ideologically motivated; it should take place in cyberspace; it should involve the use of a computer, computer system, or computer network, either as a weapon used to commit the act or as a target of the act (ideally both); and it should involve civilian casualties or damage to critical infrastructure. At a minimum, to qualify as cyberterrorism, the act should cause genuine terror and large‐scale, lasting damage, well beyond the sort of fright, inconvenience, or expense associated with the various quasi‐cyberterrorist incidents reported to date (Ayres and Maglaras 2016; Cohen 2014).

11.4 Cyberterrorism vs. Terrorist Use of Cyberspace

When wielded by terrorists, the computer can be either a facilitator or a deadly weapon in a cyberterrorism event, but not all terrorist uses of the computer are considered cyberterrorist events. A cyberterrorist event is one that delivers terror to members of the public, through cyberspace or other digital means, and is usually politically motivated. If it meets the definition of terrorism, then it is assumed that such an event is perpetrated by a terrorist. However, a terrorist can use cyberspace for many other purposes. Researchers from the Institute for Security Technology Studies examined dozens of websites from terrorist/extremist organizations and found that terrorist uses of cyberspace fall into six categories (Conway 2005; McPherson 2004):

  • Cyberspace allows terrorists to spread radical messages through websites that deliver their group's propaganda to anyone who will listen.
  • Cyberspace allows for international communication with and recruitment of new members to the cause in a very passive and inexpensive fashion. A group puts up a website, and motivated visitors stumble across the site and, if so inclined, reach out to the terrorist group, after which communication is taken offline and the visitor is recruited to join the violent Jihad.
  • Group members disseminate and seek further training material through instructional videos or websites, as well as propaganda magazines created by the terrorist groups (such as the ISIS‐authored magazine Dabiq).
  • Cyberspace allows the group to solicit funds from supporters internationally, a process now facilitated by digital cryptocurrencies such as Bitcoin, which enable peer‐to‐peer transfers that disregard international borders and/or financial laws (Fanusie 2017).
  • Cyberspace allows members of terrorist organizations to communicate within the group, to share resources, and to provide moral and financial support to each other.
  • Cyberspace allows terrorist organizations to conduct targeting exercises, intelligence gathering, and online surveillance of potential targets, using open source intelligence tools that may be as simple as Google Maps.

In short, terrorist organizations make extensive use of the internet, just like non‐terrorist users, because of the simplicity of information gathering and the ease of transferring financial resources.

While the examples listed above show terrorists using cyberspace in support of terrorism, none of them constitutes a pure cyberterrorism event. Some have argued that using computers for recruitment, propaganda, and the dissemination of information subsequently used in terrorist attacks rises to the level of cyberterrorism, but others have insisted that, to meet the definition, computer technology and cyberspace must actually be used to inflict civilian casualties or, at a minimum, to cause significant damage to critical infrastructure (Bearse 2015). As noted earlier, the more widely accepted stance is that while such cyber activities evidently support terrorism, an actual cyberterrorism event must cause damage similar in effect to that caused by a traditional terrorist act.

11.5 Cyberterrorism in the Cloud

11.5.1 The Cloud Context

Understanding the nature and operation of the Cloud is a critical element in appreciating its putative role in terrorist exploits. The Cloud facilitates a variety of different services, applications, and resources. A useful perspective on cloud characteristics is provided by the US National Institute of Standards and Technology (NIST) (Mell and Grance 2011). This account includes a description of typical service and deployment models, which are best understood in relation to the essential characteristics attributed to the Cloud. These characteristics are: (i) on‐demand self‐service access to services and facilities; (ii) network access supported from a range of heterogeneous clients; (iii) pooling of resources to service multiple clients without locational constraints; (iv) elasticity of provision to achieve quick changes in scale and service access according to demand; and (v) service usage being automatically measured to facilitate resource management, and to provide insight on provision and customer billing (Mell and Grance 2011).

The three common service models of the Cloud are outlined next. First, with Software‐as‐a‐Service (SaaS), the end user purchases access to remote software services that are implemented on the cloud service provider's infrastructure. These services extend from access to data storage, through hosting of websites and database systems, to provision of web service components such as RESTful applications (Shaikh et al. 2008), containers (Richardson and Ruby 2007), and other microservices (Sill 2016). Second, with Platform‐as‐a‐Service (PaaS), customers have remote access to a software computing platform running on the cloud service provider's infrastructure, on which they can deploy and run their own programs. Finally, with Infrastructure‐as‐a‐Service (IaaS), a greater degree of flexibility is afforded to the customer, whereby they purchase access to a virtual hardware platform on which they may install proprietary software, including their own choice of operating system and applications (Mell and Grance 2011, p. 3).
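To make the division of responsibility among these service models more concrete, the following toy sketch (in Python, purely for illustration) encodes which layers of the computing stack the customer manages under SaaS, PaaS, and IaaS. The layer names and the exact split shown are simplifying assumptions on our part, not part of the NIST definition.

```python
# Illustrative only: a toy encoding of the NIST service models (Mell and Grance 2011).
# The layer names and the customer/provider split are a simplification for exposition.

STACK = ["application", "data", "runtime", "operating_system",
         "virtual_hardware", "physical_hardware", "network"]

# Layers the *customer* manages under each service model; everything else
# is managed by the cloud service provider.
CUSTOMER_MANAGED = {
    "SaaS": [],                                   # customer only uses the hosted software
    "PaaS": ["application", "data"],              # customer deploys code onto the provider's platform
    "IaaS": ["application", "data", "runtime",
             "operating_system"],                 # customer installs its own OS and software
}

def describe(model: str) -> str:
    """Return a one-line summary of who manages what under a given model."""
    customer = CUSTOMER_MANAGED[model]
    provider = [layer for layer in STACK if layer not in customer]
    managed = ", ".join(customer) if customer else "nothing (consumption only)"
    return f"{model}: customer manages {managed}; provider manages {', '.join(provider)}"

if __name__ == "__main__":
    for model in ("SaaS", "PaaS", "IaaS"):
        print(describe(model))
```

Reading the output from SaaS down to IaaS makes the trade-off plain: each step hands more of the stack to the customer, and with it more flexibility and more responsibility for securing what they install.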

11.5.2 How Terrorists Might Use the Cloud

Given the many attractions of using cloud‐based services, we may consider how terrorists could seek to gain advantage from such deployment. To simplify the context, we will focus on two varieties of cloud usage. In the first of these, the cloud service is employed solely as a data repository. This is our repository scenario. The second variety of cloud usage requires the service as a means of computation. This is our application scenario. In the following discussion, we consider the plausibility of these scenarios as a basis for terrorist activity in the Cloud.

The scope for significant terrorist advantage in the repository scenario may seem slight, but it nevertheless has some potential. Aside from the obvious appeal of the service provider's secure backup and the data resilience afforded by offsite file storage, an organization may benefit from the use of a remote file‐exchange service. This requires only our repository scenario and at least one registered user account (to be shared across all operatives). Potentially, the cloud storage facility serves as a central distribution point for advice, forged documents, extremist propaganda, and information pertaining to planning and recruitment. Because the cloud service acts as a data drop, there is an additional advantage: no direct contact or communication between operatives is required.

Clearly, greater opportunity exists within the application scenario. In principle, the terrorist can seek to use the benefits of any available software, but aside from the general benefits from cloud usage, this offers little advantage over conventional networked computing facilities. Indeed, there are many examples of state‐sponsored agencies deploying conventional network resources to further their objectives (cf. Al‐Rawi 2014). Our application scenario seems more appealing as a launch point for exploits against targets that are opposed to the beliefs of the terrorists. We should consider what the nature of such terrorist exploits might be.

Although our focus is the potential for terrorist use of cloud facilities, the scope for cloud‐based exploits seems to be limited to the distribution of propaganda and conventional hacking activities. The former may be achieved through web hosting, blogs, and e‐mail distribution, with each of these employing cloud‐based services as the distribution platform. Given the increasing focus in many quarters on obstructing terrorism, such applications are likely to be speedily detected and curtailed through intervention by the cloud service provider. This leaves hacking‐type activities as a basis for terrorism‐related cloud deployment.

The beliefs that motivate terrorists may differ radically between individual hackers, hacking groups, and state‐sponsored agents, but the motivation makes no difference to the means available to further their goals. The technical activities that may be directed toward these goals, such as denial of service, social engineering, and network intrusion, are usually accomplished through malware as a basis for the creation of botnets and distributed denial of service (DDoS) attacks; e‐mail as a basis for spam, phishing, and social engineering attacks; web services as a basis for spoofed websites and social engineering; and remote network access as a basis for network intrusion through Trojan malware or software vulnerabilities. In turn, while some of these technical ingredients may be situated in the Cloud (such as e‐mail or web services), others have nothing to gain from being cloud‐based. Indeed, denial of service attacks and network intrusion often rely upon malware infection and hijacked systems as a launch point for their related exploits (cf. Alomari et al. 2012).

The main prospects for cloud‐based activities with respect to the terrorist‐related objectives (set out in Table 11.1) are the hosting of web and e‐mail services. Such services can facilitate the distribution of propaganda, disinformation, and malware, as well as the hosting of spoofed websites in support of social engineering exploits. From a protagonist perspective, the advantages of deploying such resources in the Cloud are no more than the standard cloud benefits of cost, reliability, resiliency, and extensibility (discussed later). Furthermore, such illicit use of cloud services would quickly be traced and disabled by the service provider, since it breaches the standard contractual conditions of use.

Table 11.1 Exploit objectives and constituent technologies.

Objective | Likely exploit | Likely technical means
Data theft | Social engineering, malware | Phishing, malware, software vulnerabilities
Financial fraud | Social engineering, malware | Phishing, malware, software vulnerabilities
Service disruption | Social engineering, network attack | DDoS, malware, software vulnerabilities
Infrastructure damage | Social engineering, network attack | DDoS, phishing, software vulnerabilities
Propaganda | Spam, network attack | Spam, software vulnerabilities
Disinformation | Spam, network attack | Spam, software vulnerabilities

As an extension to the idea of terrorist attacks on critical infrastructure, cloud installations may themselves become the target of network‐based extremist action. There is some basis for considering the possibility of denial of service in the context of software‐defined networks (SDNs), as found in some cloud configurations (cf. Yan and Yu 2015). Even here, the prospect of such attacks is mitigated by rapid recovery through the reinstancing that is a feature of cloud‐based services. A related downside to this rapid‐recovery mechanism is that cloud forensic readiness may be inadequate to capture evidence that could be used by investigators to pinpoint the culprit in the event of such malevolent action. Fortunately, there are mechanisms available to ensure that evidence of user activity is acquired and securely logged for post‐event analysis (Nasreldin et al. 2017; Weir and Aßmuth 2017), and that data can be protected against such attacks (Weir et al. 2017). Nevertheless, we cannot discount the possibility that, as they extend to more critical functions, cloud services may themselves become the target of cyberterrorist activities.
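By way of illustration, the following minimal Python sketch shows one generic way such forensic readiness can be achieved: an append‐only, hash‐chained audit log, in which each record commits to the one before it, so that tampering with or deleting earlier evidence becomes detectable. It is a toy model of the general idea, not the specific mechanisms described by Nasreldin et al. (2017) or Weir and Aßmuth (2017); the tenant and event names are hypothetical.

```python
import hashlib
import json
import time

# Illustrative sketch of a hash-chained audit log for post-event analysis.
# Each entry stores the hash of the previous entry, so editing or removing
# any earlier record breaks the chain and is detectable on verification.

class AuditLog:
    def __init__(self):
        self.entries = []
        self._last_hash = "0" * 64  # genesis value

    def record(self, user: str, action: str) -> None:
        entry = {
            "timestamp": time.time(),
            "user": user,
            "action": action,
            "prev_hash": self._last_hash,
        }
        payload = json.dumps(entry, sort_keys=True).encode()
        entry["hash"] = hashlib.sha256(payload).hexdigest()
        self._last_hash = entry["hash"]
        self.entries.append(entry)

    def verify(self) -> bool:
        """Recompute the chain; returns False if any entry was altered or removed."""
        prev = "0" * 64
        for entry in self.entries:
            if entry["prev_hash"] != prev:
                return False
            payload = {k: v for k, v in entry.items() if k != "hash"}
            digest = hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()
            if digest != entry["hash"]:
                return False
            prev = entry["hash"]
        return True

if __name__ == "__main__":
    log = AuditLog()
    log.record("tenant-42", "created virtual machine vm-007")   # hypothetical events
    log.record("tenant-42", "uploaded object backup-2017.tar")
    print("log intact:", log.verify())        # True
    log.entries[0]["action"] = "something else"
    print("after tampering:", log.verify())   # False
```

In practice, such a log would be written to storage outside the tenant's control, with its latest hash periodically anchored elsewhere (for example, by signing it), but the chain itself captures the essential property: evidence acquired before an incident remains verifiable afterwards.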

11.6 The Benefits of the Cloud to Cyberterrorists

Many reasons for cloud adoption are common to all prospective users, whether their ambitions are commercial, academic, or more nefarious. As indicated in Weir and Aßmuth (2017), the principal benefits are:

  • Cost
  • Reliability
  • Resilience
  • Technical extensibility

Specifically, cloud services can prove to be cost effective, with the reduced requirement to purchase and maintain local facilities. The reliability of cloud provisioning may be guaranteed through service‐level contracts. Resilience is addressed through fast reinstatement of any failed service, while the extensible nature of the cloud offering ensures that changes in the customer's demands are easily accommodated. Finally, cloud services are usually backed by large, stable organizations that have the financial capacity and human resources needed to defend their infrastructure against physical or virtual attacks.

Even with these resources at their disposal, cloud services can be attacked. These privately owned services present both a target and an opportunity. Since the infrastructure supports multiple organizations that live on that service, bringing down the infrastructure takes all of those organizations offline, making it a very tempting target. As a general rule, these services are superbly defended, both against bandwidth‐exhaustion attacks (such as DDoS) and against application‐level attacks (using viruses or malware) – so much so that they are even being used by the US military. Given the massive financial and knowledge‐based resources behind these online services, attacks against them have been rare, although they are increasing (Raywood 2017). One such attack used a cross‐site scripting (XSS) vulnerability to crash an Azure cloud‐hosted website and then attacked its troubleshooting system to escape the sandbox, thus gaining access to the underlying cloud infrastructure and compromising everything running on it (Dale 2016). Attackers have also used these services in other ways. For example, cloud services have been used to host botnet command‐and‐control servers, or copyright‐infringement sites (such as The Pirate Bay, which has been hosted in the Cloud), because the resilient cloud infrastructure allows them to operate on a very stable and relatively inexpensive platform. Provided that the monthly subscription fee is paid, these platforms are available for use by anyone. Thus, it is entirely conceivable that terrorists or malicious state actors have successfully co‐opted existing cloud services and have made, or are in the process of making, preparations for cyberterrorist attacks.

Of course, there are further considerations that may attract terrorists to the Cloud. For instance, they may seek a software platform that permits them to obscure their identity and location. When using the Cloud, the cloud service provider is effectively an intermediary between the terrorist agency and any target. Prospective targets may trace the origin of any cloud‐based exploit back to the cloud service provider, but not beyond (that is to say, not without engaging directly with the cloud service provider and the corresponding jurisdiction). This reduces the likelihood that an individual agent behind a terrorist activity will be identified. Furthermore, the true location of the agent is concealed. This introduces scope for plausible deniability for the perpetrator.
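The limits of this kind of tracing can be illustrated with a small, hypothetical Python sketch: given an attacking IP address and the address ranges a provider publishes, an investigator can establish that the traffic came from that provider's infrastructure, but learns nothing about the tenant behind it without the provider's cooperation. The provider name and address ranges below are placeholders (documentation‐only TEST‐NET blocks), not real allocations.

```python
import ipaddress

# Hypothetical, illustrative address ranges said to be published by a cloud provider;
# real providers publish their own (much larger) lists.
PROVIDER_RANGES = {
    "ExampleCloud": ["198.51.100.0/24", "203.0.113.0/24"],  # TEST-NET blocks as placeholders
}

def attribute(ip: str) -> str:
    """Return the provider whose published range contains `ip`, or 'unknown origin'.

    Attribution stops here: mapping the address to a specific customer account
    requires records held only by the provider (and the relevant jurisdiction).
    """
    address = ipaddress.ip_address(ip)
    for provider, ranges in PROVIDER_RANGES.items():
        if any(address in ipaddress.ip_network(block) for block in ranges):
            return f"traffic originates from {provider} infrastructure"
    return "unknown origin"

if __name__ == "__main__":
    print(attribute("203.0.113.77"))   # resolves only as far as the provider
    print(attribute("192.0.2.10"))     # outside the published ranges
```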

This motivation for deploying terrorist activity via a cloud service is less plausible if the target of an exploit is a foreign government or major institution. In such cases, there is a real prospect that international security or law enforcement services will approach the cloud service provider to reveal the true source and recorded identity of the agency behind the exploit. While there may be no major impact upon the terrorist organization, the cloud service provision is likely to be terminated once the service provider is apprised of the customer's behavior. Since the registration and payment details for the customer may not have been genuine, termination of the cloud service may simply be a minor inconvenience until such time as a suitable replacement cloud service is identified and contracted.

A scenario of this type came to light recently, in which alleged Russian agents used servers rented from a UK company to launch several criminal exploits: phishing attacks on the German parliament, diverting traffic meant for a Nigerian government website, and targeting Apple devices. To conceal their identities, the culprits used “bogus identities, virtual private networks, and hard‐to‐trace payment systems” (Vallance 2017).

11.6.1 The Challenges of the Cloud for Cyberterrorists

The damage caused by traditional terrorism is usually obvious: physical damage to some (critical) infrastructure, death of civilians, or the downing of an airplane, for example. While the repercussions of these actions cannot be predicted, the target (or intended target) and the identity of the perpetrator are usually clear. However, the same cannot be said for cyberterrorism, as there are three significant challenges to taking terrorism online.

First, the damage inflicted by cyberterrorism often cannot be targeted as precisely, because cyberweapons cannot be controlled to the same degree as physical weapons (Heckman et al. 2015). With traditional terrorism, for example, a bomb placed at a specific location can reasonably be expected to detonate in that location, whereas a cyberweapon (such as a virus or worm) can cause friendly fire casualties by damaging unintended, untargeted systems while missing the intended target. Stuxnet caused friendly fire victims when it escaped into the wild and roamed around the world in search of similar targets, eventually infecting over 100 000 machines (Lindsay 2013), including Chevron's corporate networks in the United States. Just as most software contains some sort of logic error (i.e. a bug), the same holds true for malware, even malware of the caliber of Stuxnet, meaning that unintended consequences can and do occur. Any malware wielded by cyberterrorists would be similarly uncontrollable – a trait that might actually be desirable within the context of cyberterrorism, given cyberterrorism's thirst for maximal collateral damage.

Second, cloud systems are being implemented privately by large organizations that are of sufficient size to support such an infrastructure. For example, the US Department of Defense is de‐siloing its existing segregated data stores and consolidating them into cloud infrastructure, with the many associated business benefits (such as centralized security, cost efficiencies, standardized security assessment and authorization, and outsourced monitoring and support) (Verge 2015; ViON/Hitachi Data Systems Federal 2015). While these benefits are certainly attractive to businesses and governments, they at the same time pose an increasing challenge for cyberterrorists, in that the attack surface is significantly decreased. Instead of being able to take advantage of security holes in many disparate and possibly unpatched systems, attackers are increasingly faced with a single, unified set of security standards that by design decreases the number of vulnerabilities they might attempt to exploit (Serbu 2017).

Third, outside of the more common cloud‐like environments, such as Amazon Web Services (AWS) and Microsoft Azure, a malicious actor can co‐opt unprotected computers belonging to members of the public to build a custom cloud of computers (a botnet) that could be used in much the same way as an Amazon cloud platform. To accomplish this, the malicious user would need software tools capable of infecting vulnerable computers. This posed a challenge in the past, but such tools are now commonly available for purchase or rent on the internet or the dark web (Dupont 2017). Some software of this nature is even available for free, provided the user splits any financial gain with the author of the malware (Dimov and Juzenaite 2017). This arrangement, called malware‐as‐a‐service, allows any malicious user, including a would‐be cyberterrorist, to assemble custom cloud‐like computing facilities that are capable of carrying out cyber or cyberterrorist attacks. Such attacks could take other cloud platforms offline, in order to disable large organizations that are running on those platforms, or could overwhelm their internet connections, leading to a failure of critical infrastructure. Thus, using malware, aspiring cyberterrorists can cheaply and anonymously create the computing platforms needed to launch massive cyberterrorist attacks from abroad. With the proliferation of internet‐enabled small devices (the Internet of Things [IoT]), which are usually designed and manufactured with cost as the priority and security as an afterthought, ever‐more‐capable platforms can now be assembled with even less effort. One recent example is the IoT botnet Mirai, which brought down large sections of the internet across the east coast of North America in October 2016 (The Economist 2016; Kolias et al. 2017). What made this attack notable was both its severity and the fact that it was carried out using insecure IoT devices.

11.7 Cyberlaw and Cyberterrorism

Presently, there are no international laws or conventions that deal effectively with cyberterrorism (Fidler 2016). This is not entirely surprising, given that nation states have thus far been unable to come up with an agreed‐on definition of what constitutes hate speech, let alone what constitutes cyberterrorism. Indeed, this lack of agreement was the primary reason that hate speech was not included in the 2001 European Convention on Cybercrime (Garland and Chakraborti 2012). The subsequent EU Council Framework Decision on Combating Terrorism of 2002/2008 attempted to address this deficiency by setting general ground rules for member states when interdicting the use of computers and computer systems for the purposes of disseminating racist and xenophobic materials, and/or for the making of racist and xenophobic threats (Seiber 2010). Also, Clean IT, a European internet policing research initiative, sought to shut down websites that disseminate terrorist information (Rediker 2015).

As observed at various junctures in this chapter, terrorist organizations do indeed make wide use of the internet, albeit for much the same purposes as other organizations, including those of mainstream political parties: for propaganda, information sharing, planning, coordination, recruitment, and fund‐raising. In other words, they use the internet for getting their message out and trying to increase their number of followers (Argomaniz 2015). But when we start talking about which message or messages should be permitted in cyberspace, and which message or messages should be suppressed, it quickly boils down to an issue of freedom of speech, a cornerstone of most Western‐style democracies. Where does censorship start, where does it end, and who gets to be the censor? Do Western‐style democracies truly wish to move in the direction of the Great Firewall of China or the Supreme Council of Cyberspace in Iran (Spinello 2017), where the political elite get to determine what their citizens can and cannot be exposed to?

Before turning briefly to the subject of cyberfatwas, we should bear in mind that the majority of fatwas published in cyberspace concern religious rulings (or scholarly legal opinions on Islamic law), and simply provide legal and spiritual guidance on aspects of everyday life, such as social norms and acceptable behavior (Weimann 2011). On the other hand, an appreciable number of cyberfatwas, or calls to cyberjihad, could quite easily be regarded as hate speech or terrorist propaganda. In many cases, these jihadist cyberfatwas clearly state that it is acceptable to wage war on noncombatant civilians, the preferred target of terrorists, and in particular, to wage war on civilians of the Christian or Jewish faiths (Weimann 2011). With their unprecedented use of the internet in general, and social media in particular, Al Qaeda and ISIS have taken such cyber‐facilitated tactics and strategies to new heights (Fidler 2016). Essentially, organizations like Al Qaeda and ISIS mobilize the internet (and cyberjihad) in an effort to inspire “lone wolf” terrorism, by tapping into the sense of disillusionment and resentment experienced by many Muslims throughout the world (Haykel 2016). It has been estimated that the UK government annually identifies and removes upwards of 15 000 items from the internet that are deemed to meet the government's definition of jihadist propaganda (Awan 2017).

Article 10 of the European Convention on Human Rights guarantees freedom of expression, which includes the right to hold opinions and to freely receive or impart such opinions without political interference (Rediker 2015). That said, Germany, France, and the United Kingdom tend to adopt comparatively hardline approaches toward hate speech, and extremist speech in particular, whereas many analogous forms of extremist or hate speech seem to enjoy greater latitude in other European countries (Garland and Chakraborti 2012). The differences become even more pronounced when comparing Europe with North America (Spinello 2017). To illustrate, Jayda Fransen of the far‐right Britain First organization was criminally charged in the United Kingdom for using threatening and abusive speech, while Donald Trump, the President of the United States, apparently felt quite comfortable when he retweeted some of Fransen's (allegedly illegal) anti‐Islam videos to his many followers on Twitter (Weaver et al. 2017). On the Canadian front, when presented with an alleged case of criminal harassment on social media, a Toronto court judge ruled at considerable length that any number of distasteful or unpopular expressions are constitutionally protected in Canada, up to and including some forms of false news and hate propaganda (McDougall 2015). Whether we label it as terrorist speech, extremist speech, or hate speech, we should be reminded that such labels are often in the eye of the beholder and are most likely determined subjectively, on the basis of who is best positioned to affix the negative label and who is best positioned to deflect or resist that label.

The legal discourse pertaining to cyberterrorism involves issues other than the right to freedom of speech and where that freedom might reasonably end, two of those being the identification of the actual perpetrator (attribution) and the question of legal jurisdiction. A case in point is the 2015 cyberattack on France's TV5Monde (the French international broadcaster), wherein TV screens were switched to display jihadist messages and an image of the “CyberCaliphate,” ostensibly in retaliation for the French army's involvement in Syria and Iraq. In addition to taking over the TV screens, the hackers were able to block broadcasts and hack into the broadcaster's website and social media accounts (Chrisafis and Gibbs 2015; Fidler 2016). This might arguably be construed as an incident of cyberterrorism, in that it was intended to instill terror in the French populace and, at the same time, deliver a political message to the French government. As is the case with almost all such purported incidents of cyberterrorism, however, there was no damage to critical infrastructure, and there were no civilian casualties.

Initially, it was assumed by the French government that an ISIS‐linked terrorist group had successfully targeted the TV station, because the caption included the name of ISIS and the picture was consistent with other ISIS propaganda on the internet (Chrisafis and Gibbs 2015). However, subsequent investigations revealed that the hacks appeared to originate from a Kremlin‐linked group in Russia, perhaps with the objective of supporting Russia's Syrian ally, Bashar al‐Assad. This group of Russian hackers, known variously as APT28 or Pawn Storm, had previously attempted to hack into NATO computers and into the computers of the White House, and was thought to have targeted pro‐Ukrainian activists and Russian dissidents (Lichfield 2015). In this particular case, it seems highly unlikely that investigators will ever prove conclusively who orchestrated the attack on France's TV5Monde and, if the perpetrators were indeed Russian, even more unlikely that French or European authorities will ever succeed in persuading the Russian government to extradite them to face trial in France. Russia is the only European country so far that has refused to sign the European Convention on Cybercrime, insisting that certain sections of the Convention would violate Russian sovereignty and national security (Ruvic 2017).

There are, of course, international laws that deal with real‐world terrorism, for example the UN Convention for the Suppression of Unlawful Seizure of Aircraft and the UN Protocol for the Suppression of Unlawful Acts of Violence at Airports Serving International Civil Aviation, which could be invoked in the event of a cyberterrorist attack on an aircraft's onboard computer or an air traffic control system. Another example is the UN Convention for the Suppression of Acts of Nuclear Terrorism, which could be invoked in the event of an attack on the computerized control system of a nuclear power plant (Seiber 2010). This latter convention on nuclear terrorism might well have been applied in the case of the Stuxnet attack on the Iranian nuclear facilities, except for the fact that nobody has been able to prove who orchestrated the attack. And if the Stuxnet attack was orchestrated by the United States and/or Israel, as has been widely speculated (cf. Kenney 2015), then it seems doubtful that either country would ever consent to extraditing one or more of its citizens to face trial in an Iranian court of law (or any court of law, for that matter).

As seen here, when it comes to cyberterrorism and jurisdictional issues, it can be difficult to determine in which state the act originated and, assuming that the state is inclined to investigate, difficult for the state to ascertain whether the act originated within its own borders. Even if the state is able to conclude that the act originated within its own borders (and thus within its legal jurisdiction), this does not necessarily prove that the act was committed by one of its own nationals (Tehrani and Manap 2013). Given the widespread proliferation of botnets and proxy servers, cyberattacks can originate from just about anywhere on the face of the earth, with the identity of the original perpetrators hidden from the view of all but the most skilled and determined of investigators. And as with the Stuxnet attack and the attack on TV5Monde, the act may not be against the law in the country in which it originated; it may be state‐sponsored or, at a minimum, state‐sanctioned (cf. Tehrani and Manap 2013).

11.8 Conclusion: Through a Glass Darkly

In his presentation to a 1996 cyberlaw conference at the University of Chicago, Frank Easterbrook – who was at that time a senior lecturer in the Law School at the University of Chicago and a circuit court judge for the US Court of Appeals – remarked that we were no more likely to see a law course on cyberlaw than we were to see a course on “the law of the horse.” While we may have laws regulating the sale of horses and the licensing of race horses, not to mention laws against the theft of horses and the fixing of horse races, he felt it unlikely that these could or ever would be gathered into a unified law course. Judge Easterbrook further opined that the lawyers and politicians who drafted and promulgated laws knew little about computers and even less about the direction in which computer technology might be headed (Easterbrook 1996). This notion of the law of the horse was subsequently elaborated upon by Lawrence Lessig, a law professor at Stanford University. According to Lessig, social norms, market forces, the architecture of the internet, and protocol (or the power of code) would likely prove more effective in regulating cyberspace than any new, cyber‐specific laws. As Lessig pointed out, we have an abundance of existing laws that regulate activity in the real world, any number of which could be used to regulate activity in cyberspace (Lessig 1999). To express it differently, theft is theft, and fraud is fraud, whether it takes place in the real world or in cyberspace. We already have laws against theft and fraud, so why not enforce them? And why should we think that such laws would be any more enforceable in cyberspace if we recast them as cybertheft and cyberfraud?

Clearly, one lesson to be taken from Judge Easterbrook's speech is that judges, law professors, and any like‐minded crystal‐ball readers should exercise considerable caution when it comes to predicting what the future might hold. Since Judge Easterbrook's 1996 speech at the cyberlaw conference, there have been any number of cyber‐specific laws, such as the 2000 Children's Internet Protection Act in the United States and the 2014 Protecting Canadians from Online Crime Act in Canada (Cartwright 2017). We have also seen the introduction of the 2001 European Convention on Cybercrime, as well as the subsequent EU Council Framework Decision on Combating Terrorism (Seiber 2010). Moreover, many law schools around the world now offer courses on cyberlaw, including the Law School at the University of Chicago (where Judge Easterbrook was teaching), which offers courses on cybercrime and electronic commerce law; the Faculty of Law at the University of Ottawa, which offers courses on the regulation of internet communication and the regulation of cyber commerce; and the Faculties of Law at the University of Birmingham and the University of Leeds, both of which offer a course on technology and the law (i.e. cyberlaw).

Nevertheless, there is considerable merit to the notion set forth in the law of the horse that cyberspace and computer technology are changing so quickly that any new cyberlaw initiated today would be obsolete by the time it went through all the drafts and committees, circulated through the various legislative bodies for reading and amendment, and was finally promulgated and enforceable. With all that is going on in the fields of computer technology and cybercommunications, and the seemingly endless capacity of the computer generation to move rapidly from one innovation to the next, it is easy to lose sight of the fact that Facebook did not appear on the horizon until 2004, Twitter did not appear until 2006 (Fidler 2016), and the iPhone was not launched by Steve Jobs and Apple until 2007 (Price 2017). Nowadays, Facebook, Twitter, and iPhones have become integral parts of the terrorist toolkit in cyberspace (Awan 2017; Ayres and Maglaras 2016). But how could lawmakers (or law professors) back in the 1990s have even been aware of, let alone accurately predicted, such developments?

The European Convention on Cybercrime is a case in point. The Convention began taking shape in 1997 but was not open for signature until 2001 and did not come into force until 2004 (Clough 2012). Although it was intended from the outset to apply internationally, only 55 countries had signed it as of 2017. The United States did not get around to ratifying the Convention until 2007, and Canada did not ratify it until 2015. It is noteworthy that Russia, China, Brazil, and India have never signed or ratified the Convention. These four countries are among the world leaders when it comes to malicious websites, the hosting of botnets, and phishing attacks (Kigerl 2012). Russia is, of course, thought to be behind the 2015 cyberattack on the Ukrainian power grid and the 2015 cyberattack on France's TV5Monde (Chrisafis and Gibbs 2015; Fidler 2016). China is widely believed to be the world's leader in cyberespionage, both against nation states and against commercial enterprises (Segal 2013; Wattanajantra 2012). If the main transgressors are unwilling to sign the Convention and enforce its provisions, then what force and effect can the Convention realistically be expected to have?

The same can be said for the UN‐sponsored Comprehensive Convention on International Terrorism, which has been the subject of negotiation since 1996, a year earlier than negotiations commenced on the European Convention on Cybercrime. The provisions of the draft text of the Comprehensive Convention on International Terrorism are sufficiently broad as to cover cyberterrorism (Fidler 2016), but the process had mostly been drifting sideways, until it was recently revived at the instigation of India, following the deadly terrorist attack in Dhaka in July 2016 (Anam 2017; Haider 2016). But again, how likely is it that the nation states that are known to sponsor or at least sanction cyberterrorism are going to become signatories to the Convention, or enforce its provisions?

While we may not have witnessed any bona fide cyberterrorist attacks as of yet, this could simply be attributable to the fact that it has thus far proven difficult for terrorists to achieve the level of civilian casualties and damage to critical infrastructure in cyberspace that they can achieve by using more tried‐and‐true methods in the real world. To express it differently, terrorists want the biggest bang for their buck, just like everybody else. However, there is no question that terrorist organizations are keenly interested in cybertechnology and everything it has to offer. We should bear in mind that at any given moment, there are reportedly hundreds (if not thousands) of ISIS‐ and Al Qaeda‐inspired computer science students around the world actively attempting to acquire the requisite knowledge to mount more sophisticated cyberterrorist attacks (Heickerö 2014). Thus it may be a question of when we will start to see cyberattacks that approximate the level of destruction associated with real‐world terrorism, rather than if we will ever see such attacks (Archer 2014).

For now, however, terrorist organizations will continue to use cyberspace for the purposes outlined earlier in this chapter: recruitment, coordination, fund‐raising, propaganda, and intelligence gathering. Where possible, they will continue to engage in disruptive activities such as DDoS attacks and network disruption, like the attack on France's TV5Monde (although this was thought in hindsight to be orchestrated by Russia, not by a terrorist organization). While cloud technology may be better equipped than conventional technology to deflect such attacks, due to its relative sophistication and enhanced protective measures, we cannot entirely discount the possibility of terrorist attacks on the Cloud and, in particular, on the cloud‐connected components of the IoT, some of which are very poorly secured (Tzezana 2017).

On rare occasions, terrorists may succeed in mounting cyberattacks on critical infrastructure, along the lines of the attack on Iran's nuclear enrichment facilities and the more recent attack on the Ukrainian power grid (again thought to be orchestrated, respectively, by Israel and the United States, and by Russia, rather than by terrorist organizations). The malicious code for destructive attacks of this nature is certainly out there in cyberspace and is accessible to any terrorist organization that has the requisite knowledge and determination to mobilize the technology. And for the foreseeable future, it can be anticipated that governments and law enforcement agencies will continue to struggle with jurisdictional issues, the complexity of cyberspace itself, and the seemingly never‐ending task of bringing noncompliant nations on side. In fact, terrorist organizations – which see themselves as engaged in asymmetrical warfare against much larger and more powerful entities – appear to revel in the jurisdictional issues, the complexities of cyberspace, and the seeming befuddlement of governments and law enforcement agencies around the world when it comes to dealing with cybercrime. One thing we can say for certain is that terrorist organizations will not play the game according to the rules and that, wherever possible, they will act in an unpredictable manner.

References

  1. Alomari, E., Manickam, S., Gupta, B.B. et al. (2012). Botnet‐based distributed denial of service (DDoS) attacks on web servers. International Journal of Computer Applications 49 (7): 24–32.
  2. Al‐Rawi, A.K. (2014). Cyber warriors in the Middle East: the case of the Syrian electronic Army. Public Relations Review 40 (3): 420–428. https://doi.org/10.1016/j.pubrev.2014.04.005.
  3. Anam, Tahmima. (2017). Under the Shadow of Terrorism in Dhaka. The New York Times. https://www.nytimes.com/2017/08/09/opinion/under‐the‐shadow‐of‐terrorism‐in‐dhaka.html.
  4. Archer, E.M. (2014). Crossing the Rubicon: understanding cyber terrorism in the European context. The European Legacy: Toward New Paradigms 19 (5): 606–621. https://doi.org/10.1080/10848770.2014.943495.
  5. Argomaniz, J. (2015). European Union responses to terrorist use of the internet. Cooperation and Conflict 50 (2): 250–268. https://doi.org/10.1177/0010836714545690.
  6. Awan, Imran. (2012). Cyber threats and cyber terrorism: The internet as a tool for extremism.
  7. Awan, I. (2014). Debating the term cyber‐terrorism: issues and problems. Internet Journal of Criminology https://doi.org/10.1007/978‐1‐4939‐0962‐9_6.
  8. Awan, I. (2017). Cyber‐extremism: Isis and the power of social media. Social Science and Public Policy 54 (2): 138–149. https://doi.org/10.1007/s12115‐017‐0114‐0.
  9. Ayres, N. and Maglaras, L.A. (2016). Cyberterrorism targeting the general public through social media. Security and Communication Networks 9: 2864–2875. https://doi.org/10.1002/sec.1568.
  10. Barker, Cary. (2002). Ghosts in the machine: The who, why, and how of attacks on information security. GIAC Security Essentials Certification (GSEC), 1–38. SANS Institute Reading Room. https://www.sans.org/reading‐room/whitepapers/awareness/ghosts‐machine‐who‐why‐attacks‐information‐security‐914.
  11. Bearse, R.S. (2015). Protecting critical information infrastructure from terrorist attacks and other threats: strategic challenges for NATO and its partner countries. In: Terrorist Use of Cyberspace and Cyber Terrorism: New Challenges and Responses (ed. M.N. Ogun), 29–44. Amsterdam: IOS Press.
  12. Beazer, Daniel. (2016). Silver Linings While Clouds Gather. Scmagazine.com, 18–21.
  13. Becker, H.S. (1963). Outsiders; Studies in the Sociology of Deviance. London: Free Press of Glencoe.
  14. Binder, David. (1972). 9 Israelis on Olympic team killed with 4 Arab captors as police fight band that disrupted Munich games. The New York Times. http://www.nytimes.com/learning/general/onthisday/big/0905.html.
  15. Cartwright, B. (2017). Cyberbullying and “the law of the horse”: a Canadian viewpoint. Journal of Internet Law 20 (10): 14–26.
  16. Chaplain, Chloe. (2017). Boris Johnson labels Syrian leader Bashar al‐Assad a “terrorist” over chemical attack and warns US “could strike again.” Evening Standard. www.standard.co.uk/news/world/boris‐johnson‐labels‐syrian‐leader‐bashar‐alassad‐a‐terrorist‐over‐chemical‐attack‐and‐warns‐us‐a3515891.htm.
  17. Chrisafis, Angelique and Gibbs, Samuel. (2015). French media groups to hold emergency meeting after Isis attack: Culture minister calls talks after television network TV5Monde is taken over by individuals claiming to belong to Islamic State. The Guardian. https://www.theguardian.com/world/2015/apr/09/french‐tv‐network‐tv5monde‐hijacked‐by‐pro‐isis‐hackers.
  18. Clough, J. (2012). The Council of Europe Convention on cybercrime: defining “crime” in a digital world. Criminal Law Forum 23 (4): 363–391. https://doi.org/10.1007/s10609‐012‐9183‐3.
  19. Cohen, D. (2014). Cyber terrorism: case studies. In: Cyber Crime and Cyber Terrorism Investigator's Handbook (ed. B. Akhgar, A. Staniforth and F. Bosco), 165–175. Waltham, MA: Syngress.
  20. Conway, Maura. (2005). Terrorist “use” of the Internet and fighting back. Presented at Cybersafety: Safety and Security in a Networked World: Balancing Cyber‐Rights and Responsibilities, Oxford England. https://www.oii.ox.ac.uk/archive/downloads/research/cybersafety/papers/maura_conway.pdf.
  21. Dale, Chris. (2016). Azure 0day cross‐site scripting with sandbox escape. https://pen‐testing.sans.org/blog/2016/08/19/azure‐0day‐cross‐site‐scripting‐with‐sandbox‐escape (accessed 14 December 2017).
  22. Dimov, Daniel and Juzenaite, Rasa. (2017). Malware‐as‐a‐service. http://resources.infosecinstitute.com/malware‐as‐a‐service (accessed 14 December 2017).
  23. Dupont, B. (2017). Bots, cops and corporations: on the limits of enforcement and the promise of polycentric regulation as a way to control large‐scale cybercrime. Crime, Law and Social Change 67 (1): 97–116. https://doi.org/10.1007/s10611‐016‐9649‐z.
  24. Easterbrook, F.H. (1996). Cyberspace and the law of the horse. University of Chicago Legal Forum 1996: 207–216.
  25. Fanusie, Yaya. (2017). Will a new generation of terrorists turn to bitcoin? Thecipherbrief.Com (11 June). https://www.thecipherbrief.com/article/tech/will‐a‐new‐generation‐of‐terrorists‐turn‐to‐bitcoin.
  26. Fidler, D.P. (2016). Cyberspace, terrorism and international law. Journal of Conflict and Security Law 21 (3): 475–493. https://doi.org/10.1093/jcsl/krw013.
  27. Fogarty, Kevin. (2012). Where did “cloud” come from? Users love it, many in IT hate it; cloud changed the relationship between the two forever. IT World. https://www.itworld.com/article/2726701/cloud‐computing/where‐did‐‐cloud‐‐come‐from‐.html.
  28. Garland, J. and Chakraborti, N. (2012). Divided by a common concept? Assessing the implications of different conceptualizations of hate crime in the European Union. European Journal of Criminology 9 (1): 38–51. https://doi.org/10.1177/1477370811421645.
  29. Gayathri, K.S., Thomas, T., and Jayasudha, J. (2012). Security issues of media sharing in social cloud. Procedia Engineering 38: 3806–3815. https://doi.org/10.1016/j.proeng.2012.06.43.
  30. Gellman, Barton. (2002). Cyber‐attacks by Al Qaeda feared: Terrorists at threshold of using Internet as tool of bloodshed, experts warn. The Washington Post, A1.
  31. Haider, Suhasini. (2016). Delhi hopes UN will push global terror convention. The Hindu. http://www.thehindu.com/news/national/Delhi‐hopes‐UN‐will‐push‐global‐terror‐convention/article14467324.ece.
  32. Haykel, B. (2016). ISIS and Al Qaeda – what are they thinking? Understanding the adversary. Annals of the American Academy of Political and Social Science 668 (1): 71–81. https://doi.org/10.1177/0002716216672649.
  33. Heckman, K.E., Stech, F.J., Thomas, R.K. et al. (2015). Cyber Denial, Deception and Counter Deception: A Framework for Supporting Active Cyber Defense. New York, NY: Springer.
  34. Heickerö, R. (2014). Cyber terrorism: electronic Jihad. Strategic Analysis 38 (4): 554–565. https://doi.org/10.1080/09700161.2014.918435.
  35. Helms, R., Constanza, S.E., and Johnson, N. (2012). Crouching Tiger or phantom dragon? Examining the global discourse on cyber‐terror. Security Journal 25 (1): 57–75. https://doi.org/10.1057/sj.2011.6.
  36. Jarvis, L., Macdonald, S., and Whiting, A. (2016). Analogy and Authority in Cyberterrorism Discourse: an analysis of global news media coverage. Global Society 30 (4): 605–623. https://doi.org/10.1080/13600826.2016.1158699.
  37. Jenkins, Brian M. and Johnson, Janera. (1975). International terrorism: a chronology, 1968–1974: a report prepared for Department of State and Defense Advanced Research Projects Agency (No. R‐1597‐DOS/ARPA). Santa Monica, CA: Rand Corporation. http://www.dtic.mil/docs/citations/ADA008354.
  38. Jones, Sam. (2017). Spanish court to investigate Syrian “state terrorism” by Assad regime. Theguardian.Com. https://www.theguardian.com/world/2017/mar/27/spanish‐court‐syria‐state‐terrorism‐assad‐regime‐mrs‐ah.
  39. Kenney, M. (2015). Cyber‐Terrorism in a Post‐Stuxnet World, 110–128. Philadelphia, PA: Foreign Policy Research Institute. Retrieved from www.sciencedirect.com.proxy.lib.sfu.ca/science/article/pii/S0030438714000787.
  40. Kigerl, A. (2012). Routine activity theory and the determinants of high cybercrime countries. Social Science Computer Review 30 (4): 470–486. https://doi.org/10.1177/0894439311422689.
  41. Knox, Patrick and Hodge, Mark. (2017). Killer cocktail: What was the Syria chemical attack and what has Donald Trump said about Bashar al‐Assad? Here's everything you need to know. The Sun. www.thesun.co.uk/news/3267810/syria‐chemical‐weapons‐trump‐assad‐russia‐iran‐france.
  42. Kolias, C., Kambourakis, G., Stavrou, A., and Voas, J. (2017). DDoS in the IoT: Mirai and other botnets. Computer 50 (7): 80–84. https://doi.org/10.1109/MC.2017.201.
  43. Lessig, L. (1999). The law of the horse: what Cyberlaw might teach. Harvard Law Review 113 (2): 501–549.
  44. Lichfield, John. (2015). TV5Monde hack: “Jihadist” cyber attack on French TV station could have Russian link. The Independent. www.independent.co.uk/news/world/europe/tv5monde‐hack‐jihadist‐cyber‐attack‐on‐french‐tv‐station‐could‐have‐russian‐link‐10311213.html.
  45. Lindsay, J.R. (2013). Stuxnet and the limits of cyber warfare. Security Studies 22 (3): 365–404. https://doi.org/10.1080/09636412.2013.816122.
  46. Luiijf, E. (2014). Definitions of cyber terrorism. In: Cyber Crime and Cyber Terrorism Investigator's Handbook (ed. B. Akhgar, A. Staniforth and F. Bosco), 10–17. Waltham, MA: Syngress.
  47. McDougall, G. G. (2015). Crouch v. Snell, 2015 NSSC 340 (Case/Court Decisions).
  48. McPherson, Andrew. (2004). Examining the cyber capabilities of Islamic terrorist groups. Hanover, NH: Institute for Security, Technology and Society. http://www.ists.dartmouth.edu/library/164.pdf.
  49. Mell, Peter M. and Grance, Timothy. (2011). The NIST Definition of Cloud Computing. Special Publication (NIST SP) No. 800–145. National Institute of Standards and Technology. http://nvlpubs.nist.gov/nistpubs/Legacy/SP/nistspecialpublication800‐145.pdf.
  50. Nasreldin, M., Aslan, H., Rasslan, M., et al. (2017). Evidence acquisition in cloud forensics. Presented at the IEEE International Conference on New Paradigms in Electronics & Information Technology (PEIT ‘17), Alexandria, Egypt.
  51. National Commission on Terrorist Attacks upon the United States. (2004). The 9/11 Commission report: final report of the National Commission on Terrorist Attacks upon the United States, 1–585. https://www.9‐11commission.gov/report/911Report.pdf.
  52. Price, Rob. (2017). The first iPhone went on sale 10 years ago today – here's how Steve Jobs announced it. Business Insider UK. http://uk.businessinsider.com/watch‐steve‐jobs‐first‐iphone‐10‐years‐ago‐legendary‐keynote‐macworld‐sale‐2017‐6.
  53. Raywood, Dan. (2017). Attacks on the cloud increase by 300%. Infosecurity Magazine. https://www.infosecurity‐magazine.com/news/attacks‐cloud‐increase‐300.
  54. Rediker, E. (2015). The incitement of terrorism on the internet: legal standards, enforcement, and the role of the European Union. Michigan Journal of International Law 36 (2): 321–351.
  55. Richardson, L. and Ruby, S. (2007). RESTful Web Services. Sebastopol, CA: O'Reilly Media Inc.
  56. Rid, T. (2011). Cyber war will not take place. Journal of Strategic Studies 35 (1): 5–32. https://doi.org/10.1080/01402390.2011.608939.
  57. Ruvic, Dado. (2017). Russia prepares new UN anti‐cybercrime convention‐report. Rt.Com. https://www.rt.com/politics/384728‐russia‐has‐prepared‐new‐international.
  58. Segal, A. (2013). The code not taken: China, the United States, and the future of cyber espionage. Bulletin of the Atomic Scientists 69 (5): 38–45. https://doi.org/10.1177/0096340213501344.
  59. Sieber, U. (2010). Instruments of international law: against terrorist use of the internet. In: A War on Terror? The European Stance on a New Threat, Changing Laws and Human Rights Implications (ed. M. Wade and A. Maljevic), 171–220. New York, London: Springer. Retrieved from https://link‐springer‐com.proxy.lib.sfu.ca/content/pdf/bfm%3A978‐0‐387‐89291‐7%2F1.pdf.
  60. Serbu, Jared. (2017). DoD in discussions with vendors to simplify cloud security rules. Federal News Radio. https://federalnewsradio.com/cloud‐computing/2017/10/dod‐in‐discussions‐with‐vendors‐to‐simplify‐cloud‐security‐rules.
  61. Sexton, E. (2011). Asymmetrical warfare. In: The SAGE Encyclopedia of Terrorism (ed. G. Martin), 71–72. Thousand Oaks, CA: SAGE Publications. Retrieved from http://sk.sagepub.com.proxy.lib.sfu.ca/reference/download/terrorism2ed/n49.pdf.
  62. Shaikh, S.A., Chivers, H., Nobles, P. et al. (2008). Network reconnaissance. Network Security 11: 12–16. https://doi.org/10.1016/S1353‐4858(08)70129‐6.
  63. Sill, A. (2016). The design and architecture of microservices. IEEE Cloud Computing 3 (5): 76–80.
  64. Solomon, Erika. (2017). Assad says US welcome to join “fight against terrorism.” Financial Times. https://www.ft.com/content/6f06a3b4‐efa5‐11e6‐ba01‐119a44939bb6.
  65. Spinello, R.A. (2017). Cyberethics: Morality and Law in Cyberspace, 6e. Burlington, MA: Jones & Bartlett Learning.
  66. Stevens, D.A. (2013). William Wallace: the man behind the legend. Saber and Scroll 2 (2): 46–53.
  67. Svete, U. (2009). Asymmetrical warfare and modern digital media: an old concept changed by new technology? In: The Moral Dimension of Asymmetrical Warfare: Counter‐Terrorism, Democratic Values and Military Ethics (ed. T. van Baarda and D.E.M. Verweij), 381–398. Leiden, Boston: Martinus Nijhoff Publishers.
  68. TASS. (2017). Assad praises Russia for succeeding in fighting terrorists together. Russia Beyond. https://www.rbth.com/news/2017/03/13/assad‐praises‐russia‐for‐succeeding‐in‐fighting‐terrorists‐together_718898.
  69. Tehrani, P.M. and Manap, N.A. (2013). A rational jurisdiction for cyber terrorism. Computer Law & Security Review 29 (6): 689–701. https://doi.org/10.1016/j.clsr.2013.07.009.
  70. The Economist. (2016). The internet of stings: An electronic tsunami crashes down on a solitary journalist. https://www.economist.com/news/science‐and‐technology/21708220‐electronic‐tsunami‐crashes‐down‐solitary‐journalist‐internet.
  71. Thomas, T.L. (2003). Al Qaeda and the internet: the danger of “Cyberplanning.” Parameters 23 (1): 112–123.
  72. Tzezana, R. (2017). High‐probability and wild‐card scenarios for future crimes and terror attacks using the internet of things. Foresight 19 (1): 1–14. https://doi.org/10.1108/FS‐11‐2016‐0056.
  73. Vallance, Chris. (2017). Russian Fancy Bear hackers' UK link revealed. BBC Radio 4, PM. London: BBC. http://www.bbc.com/news/technology‐42056555.
  74. van Baarda, T. (2009). The moral dimension of asymmetrical warfare – an introduction. In: The Moral Dimension of Asymmetrical Warfare: Counter‐Terrorism, Democratic Values and Military Ethics (ed. T. van Baarda and D.E.M. Verweij), 1–28. Leiden, Boston: Martinus Nijhoff Publishers.
  75. Verge, Jason. (2015). Defense Department warming to commercial cloud servers: DOD getting more comfortable working with the big commercial cloud vendors. DataCenterKnowledge (5 February). http://www.datacenterknowledge.com/archives/2015/02/05/department‐of‐defense‐works‐with‐commercial‐cloud‐providers.
  76. ViON/Hitachi Data Systems Federal. (2015). DoD and cloud computing: where are we now? ViON/Hitachi Data Systems Federal. https://www.vion.com/assets/site_18/files/vion%20collateral/vion%20whitepaper‐dod%20cloud%20trends%20(draft)%208_25_2025.pdf.
  77. Wattanajantra, Asavin. (2012). The new Cold War. SC Media UK (November–December), 18–21.
  78. Weaver, Matthew, Booth, Robert, and Jacobs, Ben. (2017). Theresa May condemns Trump's retweets of UK far‐right leader's anti‐Muslim videos. The Guardian. https://www.theguardian.com/us‐news/2017/nov/29/trump‐account‐retweets‐anti‐muslim‐videos‐of‐british‐far‐right‐leader.
  79. Weimann, G. (2011). Cyber‐fatwas and terrorism. Studies in Conflict & Terrorism 34 (10): 765–781. https://doi.org/10.1080/1057610X.2011.604831.
  80. Weinberger, Matt. (2015). Why “cloud computing” is called “cloud computing.” Business Insider (12 March). http://www.businessinsider.com/why‐do‐we‐call‐it‐the‐cloud‐2015‐3.
  81. Weir, G.R.S. and Aßmuth, A. (2017). Strategies for intrusion monitoring in cloud services. Presented at Cloud Computing 2017: The Eighth International Conference on Cloud Computing, GRIDs, and Virtualization, IARIA, Athens, Greece.
  82. Weir, George, Aßmuth, A., Whittington, M. et al. (2017). Cloud accounting systems, the audit trail, forensics and the EU GDPR: how hard can it be? Presented at the British Accountancy and Finance Association Conference, Aberdeen, Scotland.
  83. Yan, Q. and Yu, F.R. (2015). Distributed denial of service attacks in software‐defined networking with cloud computing. IEEE Communications Magazine 53 (4): 52–59.