Elias Groll Archives | CyberScoop https://cyberscoop.com/author/elias-groll/ Thu, 29 Jun 2023 14:33:07 +0000 en-US hourly 1 https://wordpress.org/?v=6.2.2 https://cyberscoop.com/wp-content/uploads/sites/3/2023/01/cropped-cs_favicon-2.png?w=32 Elias Groll Archives | CyberScoop https://cyberscoop.com/author/elias-groll/ 32 32 Does the world need an arms control treaty for AI? https://cyberscoop.com/ai-danger-arm-control-nuclear-proliferation/ Thu, 29 Jun 2023 14:33:06 +0000 https://cyberscoop.com/?p=75041 Organizations like the IAEA offer an imperfect but instructive model for designing systems to control AI proliferation.

The post Does the world need an arms control treaty for AI? appeared first on CyberScoop.

]]>
At the dawn of the atomic age, the nuclear scientists who invented the atomic bomb realized that the weapons of mass destruction they had created desperately needed to be controlled. Physicists such as Niels Bohr and J. Robert Oppenheimer believed that as knowledge of nuclear science spread so, too, would bombs. That realization marked the beginning of the post-war arms control era.

Today, there’s a similar awakening among the scientists and researchers behind advancements in artificial intelligence. If AI really poses an extinction threat to humankind — as many in the field claim — many experts in the field are examining how efforts to limit the spread of nuclear warheads might control the rampant spread of AI.

Already, OpenAI, the world’s leading AI lab, has called for the formation of “something like” an International Atomic Energy Agency — the global nuclear watchdog —  but for AI. United Nations Secretary General Antonio Guterres has since backed the idea, and rarely a day goes by in Washington without one elected official or another expressing a need for stricter AI regulation

Early efforts to control AI — such as via export controls targeting the chips that power bleeding-edge models — show how tools designed to control the spread of nuclear weapons might be applied to AI. But at this point in the development of AI, it’s far from certain that the arms control lessons of the nuclear era translate elegantly to the era of machine intelligence.

Arms control frameworks for AI 

Most concepts of controlling the spread of AI models turn on a quirk of the technology. Building an advanced AI system today requires three key ingredients: data, algorithms and computing power — what the researcher Ben Buchanan popularized as the “AI Triad.” Data and algorithms are essentially impossible to control, but only a handful of companies build the type of computing power — powerful graphics processing units — needed to build cutting-edge language models. And a single company — Nvidia — dominates the upper end of this market. 

Because leading AI models are reliant on high-end GPUs — at least for now — controlling the hardware for building large language model offers a way to use arms control concepts to limit proliferation of the most powerful models. “It’s not the best governance we could imagine, but it’s the best one we have available,” said Lennart Heim, a researcher at the Centre for the Governance of AI, a British nonprofit, who studies computing resources. 

U.S. officials have in recent months embarked on an experiment that offers a preview of what an international regime to control AI might look like. In October, the U.S. banned the export of high-end GPUs to China and the chip making equipment necessary to make the most advanced chips, attempting to prevent proliferation of advanced AI models to China. “If you look at how AI is currently being governed,” Heim said, “it’s being governed right now by the U.S. government. They’re making sure certain chips don’t go to China.” 

Biden administration officials are now considering expanding these controls to lagging-edge chips and limiting Chinese access to cloud computing resources, moves that would further cut Beijing off from the hardware it needs to build competitive AI models.

While Washington is the driving force behind these export controls, which are aimed at ensuring U.S. supremacy in microelectronics, quantum computing and AI, it also relies on allies. In restricting the flow of chips and chipmaking equipment to China, the U.S. has signed up support from other key manufacturers of such goods: the Netherlands, Japan, South Korea and Taiwan.

By virtue of their chokehold on the chips used to train high-end language models, these countries are showing how the spread of AI models might be checked via what for now are ad hoc measures that might one day be integrated into an international body.

But that’s only one half of the puzzle of international arms control. 

Carrots and sticks 

In the popular imagination, the IAEA is an organization primarily charged with sending inspectors around the world to ensure that peaceful nuclear energy programs aren’t being subverted to build nuclear bombs. The less well-known work of the agency facilitates the transfer of nuclear science. Its basic bargain is something like this: sign up to the Nuclear Non-Proliferation Treaty, pledge not to build a bomb and the IAEA will help you reap the benefits of peaceful nuclear energy. 

“That’s the big reason that most states are enthusiastic about the IAEA: They’re in it for the carrots,” said Carl Robichaud, who helps lead the existential risk and nuclear weapons program at Longview Philanthropy, a nonprofit based in London. “They show up in Vienna in order to get assistance with everything from radiotherapy to building nuclear power plants.”

Building an international control regime of this sort for AI requires considering how to first govern the spread of the technology and then how to make its benefits available, argues Paul Scharre, the executive vice president and director of studies at the Center for a New American Security in Washington. By controlling where advanced AI chips go and who amasses them, licensing the data centers used to train models and monitoring who is training very capable models, such a regime could control the proliferation of these models, Scharre argued.

Countries that buy into this arrangement would then gain easier access to very capable models for peaceful use. “If you want to access the model to do scientific discovery, that’s available — just not to make biological weapons,” Scharre said.

These types of access controls have grown more feasible as leading AI labs have abandoned the open source approach that has been a hallmark of the industry in recent years. Today, the most advanced models are only available via online apps or APIs, which allows for monitoring how they are used. Controlling access in this way — both to monitor use and to provide beneficial access — is essential for any regime to control the spread of advanced AI systems, Scharre argued. 

But it’s not clear that the economic incentives of participating in such a regime translate from the world of nuclear arms control to AI governance. Institutions like the IAEA help to facilitate the creation of capital and knowledge intensive nuclear energy industries, and it’s unclear whether similar hurdles exist for AI to incentivize participating in an arms control regime.

“I like the idea of an international agency that helps humanity benefit more equitably from AI and helps this technology reach and help everyone. It’s not clear right now that there is market failure as to why that wouldn’t happen,” Robichaud said.

It’s also not clear that access controls can be maintained in the long run. Unlike nuclear weapons, which are fairly large physical devices that are difficult to move around, AI models are just software that can be easily copied and spread online. “All it takes is one person to leak the model and then the cats out of the bag,” Scharre said.

That places an intense burden on AI labs to keep their products from escaping the lab — as has already occurred — and is an issue U.S. policymakers are trying to address.

In an interview with CyberScoop, Anne Neuberger, a top White House adviser on cybersecurity and emerging technology, said that as leading AI firms increasingly move away from open source models and seek to control access, the U.S. government has carried out defensive cybersecurity briefings to leading AI firms to help ensure that their models aren’t stolen or leaked.

What are we trying to prevent? 

When AI safety researchers speak of the potentially existential threat posed by AI — whether that be a flood disinformation or the development of novel biological weapons — they are speculating. Looking at the exponential progress of machine learning systems in the past decade, many AI safety researchers believe that if current trends hold, machine intelligence may very well surpass human intelligence. And, if it does, there’s reason to think machines won’t be kind to humans

But that isn’t a sure thing, and it’s not clear exactly what catastrophic AI harms the future holds that need to be prevented today. That’s a major problem for trying to build an international regime to govern the spread of AI. “We don’t know exactly what we’re going to need because we don’t know exactly what the technology is going to do,” said Robert Trager, a political scientist at the University of California, Los Angeles, studying how to govern emerging technology. 

In trying to prevent the spread of nuclear weapons, the international community was inspired by the immense violence visited upon Hiroshima and Nagasaki. The destruction of these cities provided an illustration of the dangers posed by nuclear weapons technology and an impetus to govern their spread — which only gained momentum with the advent of more destructive thermonuclear bombs. 

By contrast, the catastrophic risks posed by AI are theoretical and draw from the realm of science fiction, which makes it difficult to build the consensus necessary for an international non-proliferation regime. “I think these discussions are suffering a little bit from being maybe ahead of their time,” said Helen Toner, an AI policy and safety expert at the Center for Security and Emerging Technology at Georgetown University and who sits on OpenAI’s board of directors.

If 10 or 20 years from now, companies are building AI systems that are clearly reaching a point where they threaten human civilization, “you can imagine there being more political will and more political consensus around the need to have something quite, quite strong,” Toner said. But if major treaties and conventions are the product of tragedy and catastrophe, those arguing for AI controls now have a simple request, Toner observes: “Do we have to wait? Can we not skip that step?”

But that idea hasn’t broken through with policymakers, who appear more focused on immediate risks, such as biased AI systems and the spread of misinformation. Neuberger, the White House adviser, said that while international efforts to govern AI are important, the Biden administration is more focused on how the technology is being used and abused today and what steps to take via executive order and congressional action before moving to long-term initiatives.

“There’s a time sequence here,” Neuberger said. “We can talk about longer term efforts, but we want to make sure we’re focusing on the threats today.”

In Europe, where EU lawmakers are at work on a landmark AI Act, which would limit its use in high-risk contexts, regulators have taken a similarly skeptical approach toward the existential risks of AI and are instead focusing on how to address the risks posed by AI as it is used today.

The risk of extinction might exist, “but I think the likelihood is quite small,” the EU’s competition chief Margrethe Vestager recently told the BBC. “I think the AI risks are more that people will be discriminated [against], they will not be seen as who they are.”

Long-term control 

Today’s leading AI models are built on a foundation of funneling ever more data into ever more powerful data centers to produce ever more powerful models. But as the algorithms that process that data become more efficient it’s not clear that ever more powerful data centers — and the chips that power them — will be necessary. As algorithms become more efficient, model developers “get better capability” for “less compute,” Heim from the Centre for the Governance of AI explains. In the future, this may mean that developers can train far more advanced models with less advanced hardware.

Today, efforts to control the spread of AI rest on controlling hardware, but if having access to the most advanced hardware is no longer essential for building the most advanced models, the current regime to control AI crumbles.

These shifts in training models are already taking place. Last year, researchers at Together, an open source AI firm, trained a model known as GPT-JT using a variety of GPUs strung together using slow internet speeds — suggesting that high-performing models could be trained in a decentralized manner by linking large numbers of lagging-edge chips. And as publicly available, ever more capable open source models proliferate, the moat separating AI labs from independent developers continues to narrow — or may disappear altogether.  

What’s more, arguments about the role of algorithmic efficiency making compute less relevant don’t account for entirely new approaches to training models. Today’s leading models rely on a compute-intensive transformer architecture, but future models may use some entirely different approach that would undermine efforts today to control AI models, Toner observes. 

Moreover, arms control experts observe that past efforts to control the spread of dangerous weapons should force a measure of humility on any policymaker trying to control the spread of AI. In the aftermath of World War II, President Truman and many of his key aides, ignoring their scientific advisers, convinced themselves that it would take the Soviet Union decades to build an atomic bomb — when it only took the Kremlin five years. And in spite of export controls, China succeeded in building “2 bombs and 1 satellite” — an atomic bomb, a thermonuclear bomb and a space program. 

That history makes Trager, the political scientist, skeptical about “grand visions for what export restrictions can do.” 

With private companies currently conducting the most advanced AI research, efforts to control the technology have understandably focused on managing industry, but in the long run, military applications may be far more concerning than commercial applications. And that does not bode well for arms control efforts. According to Trager, there is no example in history of major powers “agreeing to limit the development of a technology that they see as very important for their security, and for which they don’t have military substitutes.”

But even if arms control frameworks are imperfect vessels for regulating AI, arms control regimes have evolved over time and grown more stringent to deal with setbacks. The discovery of Iraq’s nuclear program in the 1990s, for example, spurred the creation of additional protocols to the Non-Proliferation Treaty. 

“We’re 80 years into the nuclear age, and we haven’t had a detonation in wartime since 1945 and we only have nine nuclear-armed states,” Robichaud from Longview Philanthropy argues. “We’ve gotten lucky a few times, but we’ve also built the systems that started off really weak and have gotten better over time.” 

The post Does the world need an arms control treaty for AI? appeared first on CyberScoop.

]]>
The 2024 race promises to be ‘very, very active’ in terms of foreign and domestic meddling, says former CISA chief https://cyberscoop.com/chris-krebs-election-security-2024/ Thu, 01 Jun 2023 20:58:40 +0000 https://cyberscoop.com/?p=74520 Chris Krebs said he expects to see Russia, China and Iran — and even domestic groups — attempt to influence and disrupt the presidential race.

The post The 2024 race promises to be ‘very, very active’ in terms of foreign and domestic meddling, says former CISA chief appeared first on CyberScoop.

]]>
The former head of the U.S. Cybersecurity and Infrastructure Security Agency who President Trump fired over his comments about the 2020 election said he fully expects American adversaries such as Russia and China to meddle in the next election through a range of activities to disrupt or influence the vote.

“If we thought 2020 was active, there are more motivations for foreign actors to muck around from an influence perspective, certainly, but perhaps even from an interference perspective,” Chris Krebs, currently a partner at the consulting firm Krebs Stamos Group, told CyberScoop in an interview on Thursday. Drawing a distinction between what he sees as “influence” (the shaping of public opinion) and “interference” (attacking election infrastructure), Krebs said he’s “fully expecting a very, very active threat landscape.”

Given the state of Russia’s faltering military campaign in Ukraine, he wouldn’t be surprised if Russia didn’t once again try to interfere in the vote and attempt to “muck it up.” He also said that increased geopolitical tensions between Washington and Beijing could be enough reason for China to reengage with influence operations. Furthermore, he said, Iran could take “another whack at it” since it was actively involved in 2020.

Krebs comments come on the heels of a New York Times report that Jack Smith, the special counsel investigating Trump’s effort to overturn the 2020 election, has subpoenaed Trump administration officials involved in Krebs’ firing from his position leading CISA. Following the 2020 election, Krebs’ agency, which was responsible for overseeing election security issues, issued a statement attesting to the integrity of the election results. That statement infuriated Trump, who fired Krebs five days after it was issued.

Prosecutors in Smith’s office are examining efforts by Trump aides to test the loyalty of government officials to the president, and Krebs has testified before the inquiry, according to the Times.

Krebs would not discuss the special counsel’s investigation on Thursday but said that he expects the 2024 election will feature similar narratives that marked the 2020 contest. “We’ve got a very hypercharged political environment, and I would expect to see some of the same sort of misbehavior — to put the term lightly — that was on in 2020 return in ‘24,” Krebs said. 

As the election ramps up, Krebs said that he expects domestic political actors — ranging from political action committees to militia groups — to embrace some of the tactics used by foreign groups to meddle in the election. “What we’re seeing is some of the playbooks of foreign adversaries are being adopted by domestic actors,” Krebs said.

Amid widespread conspiracy theories about the integrity of the 2020 election, poll workers have been subjected to violent threats, and Krebs said many of these workers choosing to leave their jobs as a result represents perhaps the greatest threat to the 2024 election. 

Asked what messaging he expects Trump will adopt regarding the integrity of the 2024 election, Krebs demurred: “Don’t even want to think about it.”

The post The 2024 race promises to be ‘very, very active’ in terms of foreign and domestic meddling, says former CISA chief appeared first on CyberScoop.

]]>
US intelligence research agency examines cyber psychology to outwit criminal hackers https://cyberscoop.com/iarpa-cyber-psychology-hackers/ Tue, 30 May 2023 15:37:46 +0000 https://cyberscoop.com/?p=74367 An Intelligence Advanced Research Projects Activity project looks to study hackers' psychological weaknesses and exploit them.

The post US intelligence research agency examines cyber psychology to outwit criminal hackers appeared first on CyberScoop.

]]>
It’s one of the most well-worn clichés in cybersecurity — humans are the weakest link when it comes to defending computer systems. And it’s also true.

Every day, we click links we shouldn’t, download attachments we should avoid and fall for scams that all too often are obvious in hindsight. Overwhelmed by information, apps and devices — along with our increasingly short attention spans — we are our own worst enemies in cyberspace. 

The natural human weaknesses that make defending the open internet so difficult are well understood and plenty of companies and organizations work to make the average person behind the keyboard better at digital self-defense. But what cybersecurity researchers haven’t focused much attention on until now are the psychological weaknesses of attackers. What are their deficiencies, habits or other patterns of behavior that can be used against them? What mistakes do they typically make? And how can those traits be used to stop them?

A new project at the Intelligence Advanced Research Projects Activity — the U.S. intelligence community’s moonshot research division — is trying to better understand hackers’ psychology, discover their blind spots and build software that exploits these deficiencies to improve computer security. 

“When you look at how attackers gain access, they often take advantage of human limitations and errors, but our defenses don’t do that,” Kimberly Ferguson-Walter, the IARPA program manager overseeing the initiative, told CyberScoop. By finding attackers’ psychological weaknesses, the program is “flipping the table to make the human factor the weakest link in cyberattacks,” she said.

Dubbed Reimagining Security with Cyberpsychology-Informed Network Defenses or “ReSCIND,” the IARPA initiative is an open competition inviting expert teams to submit proposals for how they would study hackers’ psychological weaknesses and then build software exploiting them. By funding the most promising proposals, IARPA hopes to push the envelope on how computers are defended. 

The project asks participants to carry out human-subject research and recruit computer security experts to determine what types of “cognitive vulnerabilities” might be exploited by defenders. By recruiting expert hackers and studying how they behave when attacking computer systems, the project aims to discover — and potentially weaponize — their weaknesses.

Ferguson-Walter describes “cognitive vulnerabilities” as an umbrella term for any sort of human limitation. The vulnerabilities a cyber psychological defense system might exploit include the sunk cost fallacy, which is the tendency of a person to continue investing resources in an effort when the more rational choice would be to abandon it and pursue another. In a network defense context, this might involve tricking an attacker into breaking into a network via a frustrating, time-consuming technique.

Another example Ferguson-Walter cites to explain what weaknesses might be exploited is the Peltzman Effect, which refers to the tendency of people to engage in more risky behavior when they feel safe. The canonical example of the Peltzman Effect is when mandatory seatbelt laws were put into effect and drivers engaged in more risky driving, thinking that they were safe wearing a seat belt. The effect might be used against attackers in cyberspace by creating the perception that a network is poorly defended, inducing a sense of safety and resulting in less well-concealed attack. 

Just as the tools of behavioral science have been used to revolutionize the fields of economics, advertising and political campaigning, ReSCIND and the broader field of cyber psychology aims to take insights about human behavior to improve outcomes. By placing the behavior of human beings at the center of designing a defensive system, cyber psychology aims to create systems that address human frailties. 

“Tactics and techniques used in advertising or political campaigning or e-commerce or online gaming or social media take advantage of human psychological vulnerability,” says Mary Aiken, a cyber psychologist and a strategic adviser to Paladin Capital Group, a cybersecurity-focused venture capital firm. Initiatives such as ReSCIND “apply traditional cognitive behavioral science research — now mediated by cyber psychological findings and learnings — and apply that to cybersecurity to improve defensive capabilities,” Aiken said.

Cybersecurity companies are using some tools of cyber psychology in designing defenses but have not done enough to integrate the study of human behavior, said Ferguson-Walter. Tools such as honeypots or decoy systems on networks might be thought of as turning the psychological weaknesses of attackers against them, but defenders could do more to exploit these weaknesses.

Among the central challenges facing ReSCIND participants is figuring out what weaknesses a given attacker might be susceptible — all while operating in a dynamic environment. To address this, the project proposal asks participants to come up with what it conceives of as “bias sensors” and “bias triggers,” which, together, identify a vulnerability and then induce a situation in which an attacker’s cognitive vulnerabilities are exploited. 

Exactly how that system will function and whether it can be integrated into a software solution is far from clear, but Ferguson-Walter says it’s important for IARPA to pursue these types of high-risk, high-reward projects that in the absence of government funding are unlikely to receive support. 

And amid widespread computer vulnerabilities and only halting progress in securing online life, a new approach might yield unexpected breakthroughs. “We’ve had 50 or 60 years of cybersecurity and look where we are now: Everything is getting worse,” Aiken says. “Cybersecurity protects your data, your systems, and your networks. It does not protect what it is to be human online.” 

The post US intelligence research agency examines cyber psychology to outwit criminal hackers appeared first on CyberScoop.

]]>
Reality check: What will generative AI really do for cybersecurity? https://cyberscoop.com/generative-ai-chatbots-cybersecurity/ Tue, 23 May 2023 14:45:31 +0000 https://cyberscoop.com/?p=74239 Cybersecurity professionals are eyeing generative AI’s defensive potential with a mix of skepticism and excitement.

The post Reality check: What will generative AI really do for cybersecurity? appeared first on CyberScoop.

]]>
Everywhere you look across the cybersecurity industry — on conference stages, trade show floors or in headlines — the biggest companies in the business are claiming that generative AI is about to change everything you’ve ever known about defending networks and outsmarting hackers.

Whether it’s Microsoft’s Security Copilot, Google’s security-focused large language model, Recorded Future’s AI-assistant for threat intelligence analysts, IBM’s new AI-powered security offering or a fresh machine learning tool from Veracode to spot flaws in code, tech companies are tripping over one another to roll out their latest AI offerings for cybersecurity. 

And at last month’s RSA Conference — the who’s-who gathering of cybersecurity pros in San Francisco — you couldn’t walk more than a few feet on the showroom floor without bumping into a salesperson touting their firm’s new AI-enabled product. From sensational advertising, to bombastic pitches to more measured talks from top national security officials, AI was on everyone’s lips.

Recent years’ rapid advances in machine learning have made the potential power of AI blindingly obvious. What’s much less obvious is how that technology is going to be usefully deployed in security contexts and whether it will deliver the major breakthroughs its biggest proponents promise. 

Over the course of a dozen interviews, researchers, investors, government officials and cybersecurity executives overwhelmingly say they are eyeing generative AI’s defensive potential with a mix of skepticism and excitement. Their skepticism is rooted in a suspicion that the marketing hype is misrepresenting what the technology can actually do and a sense that AI may even introduce a new set of poorly understood security vulnerabilities.

But that skepticism is tempered by real excitement. By processing human language as it is actually spoken, rather than in code, natural language processing techniques may enable humans and machines to interact in new ways with unpredictable benefits. “This is one of those moments where we see a fundamental shift in human computer interaction, where the computer is more amenable to the way that we naturally do things,” said Juan Andres Guerrero-Saade, the senior director of SentinelLabs, the research division of the cybersecurity firm SentinelOne. 

For veterans of the cybersecurity industry, the intense hype around AI can feel like deja vu. Recent advances in generative AI — tools that can replicate human speech and interact with the user — have captured public attention, but the machine learning technologies that underpin it have been widely deployed by cybersecurity firms in the past decade. Machine learning tools already power anti-virus, spam-filtering and phishing-detection tools, and the notion of “intelligent” cyberdefense — a defense that uses machine learning to adapt to attack patterns — has become a marketing staple of the cybersecurity industry. 

“These machine learning tools are fantastic at saying here’s a pattern that no human is going to have been able to find in all of this massive data,” says Diana Kelley, the chief information security officer at Protect AI, a cybersecurity company.

In cybersecurity contexts, machine learning tools have sat mostly in the back office, powering essential functions, but the revolution in generative AI may change that. This is largely due to the aggressiveness with which the industry’s leader, OpenAI, has released its generative AI products.

As the technology has advanced in recent years, AI incumbents such as Google, which pioneered many of the technical advances that make possible today’s generative AI tools, have hesitated to release their products into the wild. OpenAI, by contrast, has made its AI tools far more readily available and built slick user interfaces that make working with their language models incredibly easy. Microsoft has poured billions of dollars of investments and cloud computing resources into OpenAI’s work and is now integrating the start-up’s large language models into its product offerings, giving OpenAI access to a massive customer base. 

That’s left competitors playing catch-up. During his recent keynote address at Google’s developer conference, company CEO Sundar Pichai said some version of “AI” so many times that his performance was turned into an instantly viral video that clipped together his dozens of references to the technology

With AI companies one of the few slices of the tech sector still attracting venture capital in a slowing economy, today’s start-ups are quick to claim that they too are incorporating generative AI into their offerings. At last month’s RSA conference, investors in attendance were deluged by pitches from firms claiming to put AI to work in cybersecurity contexts, but all too often, the generative AI tie-ins appeared to be mere hot air.

“What we saw at the show was a lot of people that were slapping a front end on ChatGPT and saying, ‘Hey, look at this cool product,’” said William Kilmer, a cybersecurity-focused investor at the venture capital firm Gallos, to describe the scores of pitches he sat through at RSA with thin claims of using generative AI. 

And as companies rush to attract capital and clients, the reality of generative AI can easily be glossed over in marketing copy. “The biggest problem we have here is one of marketing, feeding marketing, feeding marketing,” Guerrero-Sade from SentinelLabs argues. “At this point, people are ready to pack it up, and say the security problem is solved — let’s go! And none of that is remotely true.”

Separating hype from reality, then, represents a tough challenge for investors, technologists, customers and policymakers.

Anne Neuberger, the top cybersecurity adviser at the White House, sees generative AI as a chance to make major improvements in defending computer systems but argues that the technology hasn’t yet delivered to its full potential.

As Neuberger sees it, generative AI could conceivably be used to clean up old code bases, identify vulnerabilities in open-source repositories that lack dedicated maintainers, and even be used to produce provably secure code in formal languages that are hard for people to write. Companies that run extensive end-point security systems — and have access to the data they generate — are in a good position to train effective security models, she believes. 

“Bottom line, there’s a lot of opportunity to accelerate cybersecurity and cyberdefense,” Neuberger told CyberScoop. “What we want to do is make sure that in the chase between offense and defense that defense is moving far more quickly. This is especially the case since large language models can be used to generate malware more quickly than before.”

But on the flip side, effectively implementing large language models in security-sensitive contexts faces major challenges. During her time as an official at the National Security Agency, Neuberger said she witnessed these hurdles first-hand when the agency began using language models to supplement the work of analysts, to do language translation and to prioritize what intelligence human analysts should be examining.

Cleaning data to get it usable for machine learning required time and resources, and once the agency rolled out the models for analysts to use some were resistant and were concerned that they could be displaced. “It took a while until it was accepted that such models could triage and to give them a more effective role,” Neuberger said.

For cybersecurity practitioners such as Guerrero-Saade and others who spoke with CyberScoop, some of the most exciting applications for generative AI lie in reverse engineering, the process to understand what a piece of software is trying to do. The malware research community has quickly embraced the use of generative AI, and within a month of ChatGPT’s release a plug-in was released integrating the chatbot with IDA Pro, the software disassembler tool. Even after years of reverse engineering experience, Guerrero-Saade is learning from these tools, such as when he attended a recent training, didn’t understand everything and leaned on ChatGPT to get him started. 

ChatGPT really shines when it functions as a kind of “glue logic,” in which it functions as a translator between programs that aren’t associated with one another or a program with a human, says Hammond Pearce, a research assistant professor at New York University’s Center for Cybersecurity. “It’s not that ChatGPT by itself isn’t amazing, because it is, but it’s the combination of ChatGPT with other technologies … that are really going to wow people when new products start coming out.”

For now, defensive cybersecurity applications of generative AI are fairly nascent. Perhaps the most prominent such product — Microsoft’s Security Copilot — remains in private preview with a small number of the company’s clients as it solicits feedback. Using it requires being integrated into the Microsoft security stack and running the company’s other security tools, such as Intune, Defender and Sentinel. 

Copilot offers an input system similar to ChatGPT and lets users query a large language model that uses both OpenAI’s GPT-4 and a Microsoft model about security alerts, incidents and malicious code. The goal is to save analysts time by giving them a tool that quickly explains the code or incidents they’re examining and is capable of quickly spitting out analytical products — including close-to-final slide decks. 

Chang Kawaguchi, a Microsoft VP and the company’s AI security architect, sees the ability of the current generation of machine learning to work with human language — even with highly technical topics like security — as a “step function change.” The most consistent piece of feedback Kawaguchi’s colleagues have received when they demo Copilot is, “Oh, God, thank you for generating the PowerPoints for us. Like, I hate that part of my job.”

“We couldn’t have done that with last generation’s machine learning,” Kawaguchi told CyberScoop. 

Despite their smarts, today’s machine learning tools still have a remarkable penchant for stupidity. Even in Microsoft’s YouTube demo for Copilot, the company’s pitchwoman is at pains to emphasize its limitations and points out that the model refers to Windows 9 — a piece of software that doesn’t exist — as an example of how it can convey false information. 

As they are more widely deployed, security experts worry generative AI tools may introduce new, difficult to understand security vulnerabilities. “No one should be trusting these large language models to be reliable right now,” says Jessica Newman, the director of the AI Security Initiative at the University of California at Berkeley.

Newman likens large language models to “instruction following systems” — and that means they can be given instructions to engage in malicious behavior. This category of attack — known as “prompt injection” — leaves models open to manipulation in ways that are difficult to predict. Moreover, AI systems also have typical security vulnerabilities and are susceptible to data poisoning attacks or attacks on their underlying algorithms, Newman points out. 

Addressing these vulnerabilities is especially difficult because the nature of large language models means that we often don’t know why they output a given answer — the so-called “black box” problem. Just like a black box, we can’t see inside a large language model and that makes their work difficult to understand. While language models are rapidly advancing, tools to improve their explainability are not moving ahead at the same speed. 

“The people who make these systems cannot tell you reliably how they’re making a decision,” Newman said. “That black box nature of these advanced AI systems is kind of unprecedented when dealing with a transformative technology.”

That makes operators of safety critical systems — in, for example, the energy industry — deeply worried about the speed with which large language models are being deployed. “We are concerned by the speed of adoption and the deployment in the field of LLM in the cyber-physical world,” said Leo Simonovich, the vice president and global head for industrial cyber and digital security at Siemens Energy.

AI adoption in the operational technology space — the computers that run critical machinery and infrastructure — has been slow “and rightly so,” Simonovichn said. “In our world, we’ve seen a real hesitancy of AI for security purposes, especially bringing IT security applications that are AI powered into the OT space.”

And as language models deploy more widely, security professionals are also concerned that they lack the right language to describe their work. When LLMs confidently output incorrect information, researchers have taken to describing such statements as “hallucination” — a term that anthropomorphizes a computer system that’s far from human. 

These features of LLMs can result in uncanny interactions between man and machine. Kelley, who is the author of the book Practical Cybersecurity Architecture, has asked various LLMs whether she has written a book. Instead of describing the book she did write, LLMs will instead describe a book about zero trust that she hasn’t — but is likely to — write.

“So was that a huge hallucination? Or was that just good old mathematical probability? It was the second but the cool term is hallucination.” Kelley said. “We have to think about how we talk about it.”

Correction, May 23, 2023: An earlier version of this article misspelled William Kilmer’s last name.

The post Reality check: What will generative AI really do for cybersecurity? appeared first on CyberScoop.

]]>
Zoom executives knew about key elements of plan to censor Chinese activists https://cyberscoop.com/zoom-china-doj-eric-yuan/ Wed, 17 May 2023 15:44:24 +0000 https://cyberscoop.com/?p=74061 Pressured by the Chinese government to comply with censorship guidelines, Zoom drafted guidelines to suppress content critical of Beijing.

The post Zoom executives knew about key elements of plan to censor Chinese activists appeared first on CyberScoop.

]]>
In late October 2019, Zoom CEO Eric Yuan traveled to China on a high-stakes mission. A month earlier, Chinese authorities blocked the videoconferencing platform, saying it hadn’t done enough to suppress anti-government speech. It was a massive crisis for Yuan. With both clients and large portions of his development team there, he desperately needed to get Zoom up and running again in China. 

To make that happen, Zoom rolled over for Chinese officials and promised to comply with Beijing’s demands to suppress speech on the platform, according to court documents. A proposed “rectification plan” pledged to monitor user communications for political views the Chinese Communist Party deemed unacceptable, any talk of the Tiananmen Square massacre, commentary about political unrest in Hong Kong and rumors disparaging Chinese political leaders.

Ten weeks after it was banned, Zoom’s service returned to China. An updated criminal complaint unsealed last month detailed the company’s efforts to comply with Beijing’s censorship demands. That complaint updates charges U.S. prosecutors first brought in 2020 against a Zoom employee named Julian Jin and names several Chinese government officials as his co-conspirators. 

U.S. prosecutors allege Jin cooperated with these government officials to use the platform to harass dissidents based in the United States and suppress their speech. While Jin and his co-conspirators concealed large parts of their plot from Zoom’s senior leadership, the updated complaint reveals that at key points, Jin informed executives about his work to suppress speech. The complaint suggests that company executives were aware of Jin’s work to a degree previously not known.

At a time when government officials in Washington are trying to address the risks posed by Chinese companies operating in the United States — TikTok being the most prominent example — and write more stringent rules governing the relationships between American and Chinese businesses, Zoom’s experience in China illustrates the compromises required of firms doing business there to ensure that they don’t fall afoul of censorship demands. 

“Companies operating in China must understand that they do so only by the grace of the Chinese Communist Party, and the party will exploit and abuse their products, employees and access,” said Rep. Mike Gallagher, R-Wis.,who leads the House of Representatives’ select committee on China.

“Zoom’s failure to appreciate this risk allowed the CCP to exploit the company’s reach to silence Americans on American soil who wanted to commemorate the Party’s victims,” Gallagher said. “American businesses need to take off their golden blindfolds and become clear-eyed about the moral, legal, and financial risks of dealing with the Chinese Communist Party.”

Bringing tech companies to heel 

American tech companies operating in China have made peace with the realities of doing business there. Apple stores Chinese user data on servers located inside the country — as does Tesla. Microsoft’s Bing search engine complies with Chinese censorship laws. Amazon provided cloud technology to local companies in order to maintain its presence in China. And while many large U.S. companies are reassessing their relationship with China amid heightened tensions between Washington and Beijing, they are constrained by a basic reality: At a time of slowing growth, their companies need the Chinese market

A trio of Chinese laws passed in recent years — the Cybersecurity Law, the Personal Information Protection Law and the Data Security Law — have granted the Chinese government expansive power over how data is stored, where it can flow and when it can be accessed. The calculation for U.S. firms is a simple one. “You’re either complying with Chinese government law or you’re not operating in China,” said Dakota Cary, a China-focused consultant at the Krebs Stamos Group, an advisory firm.

While Zoom is an American company, its links to China, where it maintains a sizable development team, has raised concerns about whether it can maintain independence vis-a-vis the Chinese state — concerns that echo those associated with TikTok.

“The price the party-state exacts for market access is compliance with their repressive approach to politics, and the more your business depends on China, the more leverage they have to make you comply,” said Matthew Schrader, the China adviser at the International Republican Institute. “The DOJ charging documents show an excellent example of the kind of pressure foreign companies face every day in China, and democratic governments everywhere need to do a better job ensuring that pressure can’t succeed.”

When Yuan, Zoom’s CEO, arrived in Beijing in October of 2019 with the goal of restoring access to the platform in China, he attended a series of meetings that demonstrate how Chinese authorities exercise power over foreign technology companies. 

One of these meetings, where Yuan was accompanied by Jin, the man at the center of the scheme to suppress speech, was Tian Xinning of the Ministry of Public Security’s Network Security Bureau. The MPS plays a major role in China’s domestic security and has contributed to Chinese law enforcement’s embrace of large scale data collection in its public security mission. Its Network Security Bureau enforces restrictions on online speech, and just last month the bureau announced that it would carry out a 100-day sprint to combat online rumor mongering

Little is known about Tian beyond that he is an officer with the MPS’s Network Security Bureau stationed in Beijing. Last month’s updated indictment charges him with one count of conspiracy to commit interstate harassment, yet the DOJ lists his age as “unknown.” Tian appears to be an expert in facial-recognition technology — he is thanked for his contributions to a Chinese think tank publication describing how to ensure compliance in the processing of facial-recognition data. 

Yuan has described Zoom’s presence in China as a matter of course for a large technology company, but critics say his decision to meet with an official at the heart of the Chinese surveillance state raises red flags. 

“The CEO of Zoom has always distanced himself from the Chinese government and has professed to be completely independent and autonomous of the Chinese government. Here in the indictment we see him allegedly attending a meeting hosted by the Ministry of Public Security,” said Jacob Helberg, a commissioner of the U.S.-China Economic and Security Review Commission.

At their meetings in October, Chinese government officials instructed Yuan and Jin about the steps Zoom needed to take in order to be unblocked in China, according to the complaint. The company drafted a “rectification report” that would be submitted to Chinese authorities detailing the types of content deemed illicit by the CCP and that the company would monitor for. The plan designated Jin as the MPS’s contact person within Zoom to address takedown requests. 

On Oct. 25, 2019, Jin wrote to Yuan and a group of other Zoom employees to update them on his work with the MPS to bring the company into compliance with Chinese law, according to the DOJ complaint. Following a meeting he attended at the MPS office in Hangzhou, Jin wrote that the MPS wants “a list of some details on our routine monitoring; such as Hong Kong protests, illegal religions, fundraising, and multi-level marketing.” Jin added that “they will help with the determination of issues that we find difficult to determine whether they are illegal” and that he planned to visit the unit “often” to “give live demonstrations.” 

After Chinese authorities lifted the block on Zoom on Nov. 17, Jin wrote to Tian to thank him and the MPS Network Security Bureau for their guidance in resolving access to the service in China, according to emails cited in the complaint. The key to maintaining the availability of Zoom in China going forward, Tian told Jin, was to rigorously enforce know-your-customer rules. Jin said the company planned to strictly follow the rectification report.

A spokesperson for Zoom declined to answer a list of detailed questions about the degree to which Zoom executives were aware of Jin’s actions and the content of communications between Yuan and Jin. The spokesperson said in a statement that Zoom has fully cooperated with U.S. investigators and that it supports the “U.S. Government’s commitment to protecting American companies and citizens from foreign coercion or influence.”

When the charges against Jin were first announced in 2020, Zoom said in a statement that aspects of the “rectification plan” cited in the DOJ charging documents were not carried out, “such as working with a local Chinese partner to develop technology that would analyze the content of meetings hosted in China to identify and report illegal activity and shut down meetings that violate Chinese law.”

Shutting down Tiananmen vigils

Had Zoom’s efforts to suppress speech on the platform remained constrained to its Chinese users, it would have remained largely in line with how U.S. tech companies operating in China moderate content. But after access to Zoom was restored in China, Jin and his co-conspirators embarked on an aggressive campaign to snuff out dissent on the platform much of which senior company executives were unaware, according to the Justice Department’s account of their efforts.

That campaign coincided with Zoom’s astronomic growth. When Yuan traveled to China in 2019, the San Jose-based company was a very different organization. It went public in April of that year but had not yet been transformed by the pandemic’s turn to remote work. At the end of 2019, Zoom had 10 million daily meeting participants. By April 2020 — when the world went into lockdown — the platform had 300 million daily meeting participants, according to a profile of the company published by Sequoia Capital, one of its investors. 

The company’s skyrocketing success was a dream come true for Yuan. Born in China, he was inspired to create video conferencing software after being forced to commute 10 hours to see his then-girlfriend as an 18-year-old. “I thought it would be fantastic if in the future there was a device where I could just click a button and see her and talk to her,” he told Forbes in 2017.

The service he built did just that, but with popularity came scrutiny. In the spring of 2020, a series of articles raised questions about the technical infrastructure of Zoom’s products and whether the company was living up to its security promises. In late March — as the full implications of the pandemic were coming into view — investigative journalists at the Intercept revealed that Zoom’s claims to offer end-to-end encryption were not quite true. The following month, the company was forced to admit that some calls on the platform were being inappropriately routed through China. 

This caused a major headache for the Chinese operatives trying to suppress speech on the platform. In response to these articles, Zoom restricted access to U.S. user data by employees based in China, according to the DOJ complaint, and this made it far more difficult for Chinese operatives to ascertain information about meetings involving Chinese dissidents based in the United States. 

In theory, these rules should have prevented Jin and his co-conspirators from harassing U.S.-based Chinese dissidents on the platform, but this depended on Zoom employees effectively enforcing them. On at least three occasions, the DOJ complaint alleges, Zoom employees handed over data stored on U.S. servers to Jin that he did not have access to — simply because he asked for it. 

As the anniversary of Tiananmen approached, Jin and his fellow operatives stepped up their efforts, according to the DOJ. On May 19, 2020, Jin wrote to a group of Zoom executives, including Yuan, the CEO, that the upcoming anniversary required extra vigilance: “June 4th is coming, which is a sensitive date for China Cybersecurity. They’re very strict on this period.”

In the run-up to the anniversary, Chinese tactics to suppress speech on Zoom grew even more aggressive, according to the DOJ complaint. After one U.S.-based dissident hosted several practice meetings on Zoom, police in China showed up at the residence of potential participants in China and took away their electronics or threatened them with jail time. The father of another dissident carrying out meetings on Zoom was harassed so frequently by police that he asked his child whether they wanted him dead. 

On June 3 and 4, Chinese dissidents based in the United States planned to gather on Zoom to commemorate the signal anniversary for the Chinese democracy movement — meetings that Jin and his handlers were monitoring.  

As the attendees of the meetings signed on, the MPS officials charged alongside Jin were monitoring events and using email accounts created for this purpose sent bogus claims to Zoom’s hotline for online abuse. These emails claimed that the meetings devoted to the Tiananmen anniversary featured flags of the Islamic State, images of violence and child pornography — causing them to be shut down. 

Watch what you say

Chinese dissidents challenging the state have long known that they face a threat of surveillance, and the lack of clarity regarding the relationship of firms such as Zoom and TikTok with the Chinese government means that the choice of platform ends up shaping political speech for critics of the regime. “They have to be a bit careful when they say anything on these platforms,” said Maya Wang, the associate director in the Asia division at Human Rights Watch. 

While she generally tries to avoid using Zoom for China-related meetings, Wang said, the platform’s convenience and the lack of viable alternatives means it is once more hosting meetings of potential interest to the Chinese government.

But evaluating the risks posed by platforms is all but impossible due to a lack of transparency requirements for companies, she argues. “The problem in the U.S. is that there are so few rules with regard to transparency that everyone’s rights are really quite at risk,” Wang says. Platforms face no real standards when it comes to freedom of expression, and that creates a space for speech to be silenced. 

The coercion of individuals in China to suppress the speech rights of individuals outside China raises troubling implications for technology companies trying to operate in China, argues Helberg from the U.S.-China Economic and Security Review Commission. At a time when U.S. policymakers are considering how to address the security concerns raised by TikTok, the events described in the updated indictment provide “an extraordinary justification for why the United States should take action against Chinese software in the United States,” he said.

The Chinese government makes no distinction between Chinese people living in China and members of the diaspora, and in using its access to Zoom employees in China, the government was able to enforce its speech laws far beyond its borders. Zoom is a high-profile example of how one platform can be leveraged by the Chinese state to carry out censorship abroad, but there may be other, less prominent platforms that are also at risk. 

“Are there other instances of tech companies operating in China that are facilitating access to users’ data overseas?” Cary, the Krebs Stamos consultant, wonders. That’s a question with no clear answer. 

More than three years after their access to their platform was blocked in China, kicking off the scheme to suppress speech on the platform, Zoom executives concede that the company’s presence in China remains a risk.

In a section of its annual report filed with the Securities and Exchange Commission that describes risks to its business, the company observes that “the Chinese government has at times turned off our service in China without warning and requested that we take certain steps prior to restoring our service, such as designating an in-house contact for law enforcement requests and transferring China-based user data housed in the United States to a data center in China.” 

The post Zoom executives knew about key elements of plan to censor Chinese activists appeared first on CyberScoop.

]]>
Coming to DEF CON 31: Hacking AI models https://cyberscoop.com/def-con-red-teaming-ai/ Thu, 04 May 2023 18:16:47 +0000 https://cyberscoop.com/?p=73818 A group of prominent AI companies committed to opening their models to attack at this year's DEF CON hacking conference in Las Vegas.

The post Coming to DEF CON 31: Hacking AI models appeared first on CyberScoop.

]]>
A group of leading artificial intelligence companies in the U.S. committed on Thursday to open their models to red-teaming at this year’s DEF CON hacking conference as part of a White House initiative to address the security risks posed by the rapidly advancing technology.

Attendees at the premier hacking conference held annually in Las Vegas in August will be able to attack models from Anthropic, Google, Hugging Face, Microsoft, NVIDIA, OpenAI and Stability AI in an attempt to find vulnerabilities. The event hosted at the AI Village is expected to draw thousands of security researchers. 

A senior administration official speaking to reporters on condition of anonymity ahead of the announcement said the red-teaming event is the first public assessment of large language models. “Red-teaming has been really helpful and very successful in cybersecurity for identifying vulnerabilities,” the official said. “That’s what we’re now working to adapt for large language models.”

The announcement Thursday came ahead of a meeting at the White House later in the day between Vice President Kamala Harris, senior administration officials and the CEOs of Anthropic, Google, Microsoft and OpenAI.

This won’t be the first time Washington has looked to the ethical hacking community at DEF CON to help find weaknesses in critical and emerging technologies. The U.S. Air Force has held capture-the-flag contests there for hackers to test the security of satellite systems and the Pentagon’s Defense Advanced Program Research Agency brought a new technology to the conference that could be used for more secure voting.

Rapid advances in machine learning in recent years have resulted in a slew of product launches featuring generative AI tools. But in the rush to launch these models, many AI experts are concerned that companies are moving too quickly to ship new products to market without properly addressing the safety and security concerns. 

Advances in machine learning have historically occurred in academic communities and open research teams, but AI companies are increasingly closing off their models to the public, making it more difficult for independent researchers to examine potential shortcomings. 

“Traditionally, companies have solved this problem with specialized red teams. However this work has largely happened in private,” AI Village founder Sven Cattell said in a statement. “The diverse issues with these models will not be resolved until more people know how to red team and assess them.”

Among the risks posed by these models are using them to create and spread disinformation; to write malware; to create phishing emails; to provide harmful knowledge not widely available to the public, such as instructions on how to create toxins; biases that are difficult to test for; the emergence of unexpected model properties and what industry researchers refer to as “hallucinations” — when an AI model gives a confident response to a query that isn’t grounded in reality.  

The DEF CON event will rely on an evaluation platform developed by Scale AI, a California company that produces training for AI applications. Participants will be given laptops to use to attack the models. Any bugs discovered will be disclosed using industry-standard responsible disclosure practices. 

Thursday’s announcement coincided with a set of White House initiatives aimed at improving the safety and security of AI models, including $140 million in funding for the National Science Foundation to launch seven new national AI institutes. The Biden administration also announced that the Office of Management and Budget will release guidelines for public comment this summer for how federal agencies should deploy AI.

The post Coming to DEF CON 31: Hacking AI models appeared first on CyberScoop.

]]>
US plans to boost tech diplomats deployed to embassies https://cyberscoop.com/fick-cyber-diplomats-embassies/ Wed, 12 Apr 2023 21:32:55 +0000 https://cyberscoop.com/?p=73058 Top cyber diplomat Nate Fick says the State Department is on track to have a diplomat trained in tech issues in every embassy.

The post US plans to boost tech diplomats deployed to embassies appeared first on CyberScoop.

]]>
The U.S. State Department is on track to deploy a diplomat trained in technology issues to the 168 American embassies around the world by the end of next year, Nate Fick, who leads the department’s cyber diplomacy, said during a meeting with reporters on Wednesday. 

As Washington attempts to maintain its technological edge over a rising China and address a range of pressing global challenges, Fick described getting diplomats with technical expertise into the field as an urgent need for the U.S. diplomatic corps. “You can’t practice East Asian diplomacy without tech diplomacy. You can’t do human rights work without tech. You can’t do climate diplomacy around the world without tech,” he said at a meeting organized by the Defense Writers Group.  

To get skilled personnel in place, the department is carrying out training sessions to school U.S. diplomats in technology issues. The sessions have so far been oversubscribed, Fick said, but at the current pace, he expects to hit his target of having a diplomat in every U.S. embassy with at least some training by the end of 2024. 

Fick said he is also making progress in reorienting the bureaucracy of the State Department to better incentivize U.S. diplomats to brush up on their tech skills. The department recently created a so-called “skill code” that diplomats can put on their resume and help them advance in the foreign service. “I can imagine future where every credible candidate to be a chief of mission, every future U.S. ambassador anywhere in the world has to have some demonstrated understanding of technology issues,” Fick said. 

Sworn in last year as the first U.S. ambassador at large for cyberspace and digital policy, Fick oversees an expansive portfolio that ranges from internet freedom issues to U.S. work in international standard-setting bodies to digital freedom. His office is at work writing an international cybersecurity strategy to complement the strategy document released last month by the White House, which places great emphasis on building international partnerships to shape norms of online behavior and secure supply chains. 

To bolster the ability of diplomats in the field to help partner nations and to boost capacity, Fick said he is engaged in active discussions with Congress to create a permanent fund for U.S. cybersecurity assistance.

The White House recently provided Costa Rica with $25 million to help respond to a devastating ransomware attack and another $25 million to Albania to help recover from an Iranian cyberattack. Fick said he hopes to build on these aid packages to make U.S. cybersecurity assistance a regular feature of its tech diplomacy toward countries that need it. 

The post US plans to boost tech diplomats deployed to embassies appeared first on CyberScoop.

]]>
Russian attacks on Ukrainian infrastructure cause internet outages, cutting off a valuable wartime tool https://cyberscoop.com/ukraine-internet-outages-infrastructure-attacks/ Wed, 12 Apr 2023 13:00:00 +0000 https://cyberscoop.com/?p=73008 With its war effort faltering, the Kremlin is stepping up its attacks on Ukrainian power plants, resulting in cascading internet failures.

The post Russian attacks on Ukrainian infrastructure cause internet outages, cutting off a valuable wartime tool appeared first on CyberScoop.

]]>
When Russian forces crossed into Ukraine early last year, one of their first targets was a key piece of internet infrastructure. By hitting the satellite internet provider Viasat on Feb. 24, 2022, with a wiper malware attack that infected its networking hardware, Russian forces appear to have disrupted communications at a key moment. 

But as the war has dragged on, disruptions to Ukraine’s internet have grown increasingly low tech. With the Russian war effort faltering, the Kremlin has stepped up its missile and artillery attacks on Ukraine’s energy infrastructure, and that has resulted in a series of localized internet outages, according to findings released by the security company Cloudflare on Wednesday

Beginning in the fourth quarter of 2022 and into the first quarter of 2023, a series of Russian strikes on local energy infrastructure caused internet outages in cities ranging from Odessa to Kharkiv. On Jan. 27, Russian airstrikes targeted Odessa’s internet infrastructure, resulting in a partial outage that lasted some 18 hours. On March 9, Russian attacks on Ukrainian energy and distribution networks caused disruptions in internet access in Kharkiv that lasted nearly two days. 

The shift in Russian targeting to more aggressively focus on energy infrastructure has had cascading effects on Ukraine’s internet access. “The power goes down in a given area and internet access obviously suffers,” said David Belson, Cloudflare’s head of data insight. When the power gets knocked out, that can cause cell transmission towers to no longer function, knocking out the internet in unpredictable ways.  

“The network engineers there are really doing heroic work just keeping facilities online,” he said.

There is nothing to indicate that Russian forces are striking electrical infrastructure with the goal of disrupting the internet, but Cloudflare’s data shows how the Kremlin’s shift toward more aggressive targeting of civilian infrastructure is impacting ordinary Ukrainian’s access to information. 

Internet access has been a key in Ukraine’s attempt to fend off the Russian invasion. The Russian invasion forced key Ukrainian state services to move online, and the internet has been a primary method for the government to spread information about what is happening in the country, for President Volodymyr Zelensky to broadcast his nightly address and to galvanize domestic and international support.

In areas controlled by Russian forces, occupying powers have re-routed key parts of Ukraine’s internet infrastructure to make it more easily surveilled.

The post Russian attacks on Ukrainian infrastructure cause internet outages, cutting off a valuable wartime tool appeared first on CyberScoop.

]]>
Recorded Future offers peek at the AI future of threat intelligence https://cyberscoop.com/recorded-future-openai-gpt-intelligence/ Tue, 11 Apr 2023 14:00:00 +0000 https://cyberscoop.com/?p=72965 The Massachusetts-based cybersecurity company has fine-tuned an OpenAI model to help analysts synthesize data.

The post Recorded Future offers peek at the AI future of threat intelligence appeared first on CyberScoop.

]]>
The threat intelligence company Recorded Future announced on Tuesday that it is rolling out a generative artificial intelligence tool that relies on a fine-tuned version of Open AI’s GPT model to synthesize data. 

Rapid advances in generative AI in recent months have led to a flurry of initiatives by companies to incorporate the technology into their offerings, and companies such as Recorded Future — with its massive trove of proprietary data — are showing how the technology is likely to be incorporated into products in the short term. 

Over the course of nearly 15 years in business, Recorded Future has collected a huge amount of data on the activity of malicious hackers, their technical infrastructure and criminal campaigns. The company has used that data to tune a version of Open AI’s deep learning models to build a tool that summarizes data and events for analysts and clients. By connecting the AI model to its intelligence graph, which collects data from across the web, the model will include near real-time information about commonly exploited vulnerabilities or recent breaches.

“This is something that for a human analyst can take several hours — reading all this material and then generating a summary,” Staffan Truvé, co-founder and chief technology officer of Recorded Future, told CyberScoop. “As you move through the information, you now have someone summarizing it in real time.” 

Cybersecurity companies have broadly incorporated AI into their products over the past decade, but the next step of incorporating machine learning into corporate applications is figuring how to build useful generative tools.

Companies such as Recorded Future with large internal data holdings have in recent months embraced deep learning technology to build generative AI tools. Late last month, Bloomberg rolled out BloombergGPT, a 50 billion parameter model trained on financial data.

By taking large data holdings and feeding them into AI models, companies like Recorded Future and Bloomberg are attempting to build generative AI systems that are finely tuned to answering the questions that their clients rely on them to answer. Companies with large data holdings will likely look to generative AI to turn that data into a more productive resource.

But Bloomberg and Recorded Future also offer an example of how companies can take different approaches in building generative AI models with major implications for the broader industry. While Bloomberg has built its own bespoke model, Recorded Future relies on OpenAI’s foundational GPT model and pays the company based on how much it queries the model.

While Truvé would not comment on the financial terms of the relationship between Recorded Future and OpenAI, it is likely that these types of business-to-business deals represent a fairly lucrative business model for OpenAI, a company that faces a difficult road to profitability while facing staggering computing costs to train its models. 

It’s difficult to evaluate the quality of Recorded Future’s AI offerings. The company has not tested its model against standard AI benchmarking tools, instead relying on its in-house analysts to test and verify its accuracy. The company relies on OpenAI’s most advanced GPT models, but OpenAI has severely limited the amount of information it makes available about its top-of-the-line products. 

In their eagerness to answer questions, advanced AI models are prone to hallucination — confidently stating a piece of information as fact that has no basis in reality. But Truvé said the company’s model manages to mostly avoid hallucinating in large part because its primary application is in summarizing a body of information returned as part of a query. 

Indeed, the performance of Recorded Future’s AI is aided by the fact that its purpose is fairly straight forward. The company’s AI feature functions mainly as a summarizing tool, and Truvé sees the AI tool as something that will augment cybersecurity analysts. 

“The challenge facing people in cybersecurity is that there is too much information and too few people to process it.” Truve said “This tries to solve the lack of time available to analysts and the rather acute lack of analysts.” 

The post Recorded Future offers peek at the AI future of threat intelligence appeared first on CyberScoop.

]]>
The Discord servers at the center of a massive US intelligence leak https://cyberscoop.com/discord-intelligence-leak-ukraine/ Mon, 10 Apr 2023 19:49:22 +0000 https://cyberscoop.com/?p=72955 The intelligence files related to the Ukraine war that appeared online aren't the first sensitive military documents shared on video game forums.

The post The Discord servers at the center of a massive US intelligence leak appeared first on CyberScoop.

]]>
Over the past few days, U.S. investigators and digital security researchers alike have probed what would seem to be the most unlikely of places to determine the origin of a major leak of classified intelligence documents: video game-focused chat servers.

A series of video game servers have emerged as key distribution points for a cache of perhaps as many as 100 intelligence documents containing secret and top secret information about the war in Ukraine. The documents first appears on a server known as “Thug Shaker Central” and were then reposted on servers known as “WowMao” and “Minecraft Earth Map,” according to an investigation by Bellingcat, an investigative journalism outfit based in the Netherlands, that provides the most thorough account to date of how the documents made it into the public domain. 

The documents mostly date from February and March, but may include material from as far back as January. At least one set of the files circulating online includes a photograph of a handwritten character sheet of a roleplaying game — Doctor Izmer Trotzky. 

Last week’s leaks represent a stark departure from how classified information has reached the public in recent years. “When you think of these big leaks, you think of whistleblowers like Snowden, hack and dumps from Russia,” Aric Toler, who has investigated the Discord leaks for Bellingcat, wrote in an email to CyberScoop. “This is just a guy in a tiny Discord server sharing hundreds of insanely sensitive [files] with his gaming buddies.” 

After being posted, the files appear to have sat dormant for about a month, until they were shared last week on 4chan and Telegram, where they received greater attention. “Since Discord isn’t really publicly archived, indexed, or searchable (as 4chan and, to a lesser degree, Telegram are), then it’s not like you can easily scrape and analyze these sources,” Toler said. “So it’s a bit of a perfect storm.”

The release of classified material on online gaming forums is not as novel as it might seem. In the last two years, fans of the free-to-play combat game War Thunder have repeatedly posted classified material in the game’s online forum — on one occasion to settle an obscure argument about the design details of a tank depicted in the game. 

Highly sensitive classified information repeatedly appearing on online gaming forums has intelligence experts exasperated. “The idea of paying a source to dead-drop this stuff when it’s popping up unsolicited on Minecraft and world of tanks seems quaint,” says Gavin Wilde, a senior fellow at the Carnegie Endowment for International Peace and a 10-year veteran of the National Security Agency. 

A furious effort inside the Department of Defense is now attempting to verify this most recent cache documents circulating online, assess the damage and prevent further fallout. The Department of Justice has opened an investigation into the leak that aims to determine its source, a probe that will likely scrutinize the online communities where the material appears to have originated. 

The leaked documents are photographs of briefing slides that appear to have been folded up. They are photographed mostly against what appears to be a low table. In the background of some of the photographs can be seen a bottle of Gorilla Glue and what appears to be a strap with the Bushnell brand, a popular maker of outdoor optics and rifle scopes. 

The documents amount to one of the most serious leaks in the history of the U.S. intelligence community, on par with the WikiLeaks disclosures and material made public by the group known as the ShadowBrokers, according to intelligence experts. The material spans the U.S. intelligence community, including information obtained by the CIA, the NSA and the National Reconnaissance Office, which operates America’s fleets of highly secretive spy satellites.

The material includes timetables for the delivery of munitions to Ukraine by South Korea, references to sensitive American satellite surveillance capabilities, and indications that the United States has managed to penetrate the Russian military to such an extent that it has been able to warn Ukraine about the site of upcoming artillery and missile strikes. 

The cache also includes reference to communication between a cybercriminal group and an officer of Russia’s powerful domestic intelligence agency, the FSB, claiming that the group had gained access to the computer systems of a Canadian pipeline and that it could use that access to disrupt a pipeline. That claim has not been confirmed, and it is fully possible the communications intercepted by U.S. intelligence services amount to not more than bluster by the hacking group. 

Ukrainian officials have cautioned that the leaked document may include falsified information or may be entirely fabricated, but so far, the documents appear to be mostly authentic with only minor alterations that appear to have occurred after the documents began circulating more widely last week on 4chan and Russian telegram channels. 

“We are very fortunate that this leak has received such a skeptical reception,” said John Hultquist, the head of threat intelligence at Mandiant.

The post The Discord servers at the center of a massive US intelligence leak appeared first on CyberScoop.

]]>