Technology - CyberScoop https://cyberscoop.com/news/technology/ Wed, 31 May 2023 21:07:29 +0000 en-US hourly 1 https://wordpress.org/?v=6.2.2 https://cyberscoop.com/wp-content/uploads/sites/3/2023/01/cropped-cs_favicon-2.png?w=32 Technology - CyberScoop https://cyberscoop.com/news/technology/ 32 32 FTC settles with Amazon Ring over hacking, security incidents https://cyberscoop.com/ftc-ring-amazon-settlement-camera-hacking/ Wed, 31 May 2023 19:17:14 +0000 https://cyberscoop.com/?p=74441 Thousands of Ring customers have been victims of cyberattacks that the commission alleged were in part due to poor data security practices.

The post FTC settles with Amazon Ring over hacking, security incidents appeared first on CyberScoop.

]]>
Amazon-owned Ring reached a $5.8 million settlement with the Federal Trade Commission on Wednesday over the company’s alleged failures to protect user data against cyberattacks.

According to a complaint filed on behalf of the FTC in a federal court, approximately 55,000 U.S. customers suffered serious account compromises over a period during which Ring failed to take necessary measures to prevent credential stuffing and brute force attacks. The attacks allowed hackers to try and access consumers’ accounts through a previously breached password or automated, repeated attempts at guessing credentials.

For 910 of the U.S. accounts (or 1,250 devices), attackers were able to not just take over accounts but take additional steps such as accessing a live stream. In at least 20 cases, hackers maintained this access for more than a month.

“Ring’s disregard for privacy and security exposed consumers to spying and harassment,” Samuel Levine, Director of the FTC’s Bureau of Consumer Protection, wrote in a statement. “The FTC’s order makes clear that putting profit over privacy doesn’t pay.”

A separate complaint and proposed settlement filed by the Justice Department on behalf of the FTC Wednesday accused Amazon’s Alexa voice assistant of violating the FTC Act and Children’s Online Privacy Protection Act by retaining children’s information without parental permission. Amazon settled that complaint for $25 million.

The FTC’s Ring settlement follows a series of incidents in 2019 in which hackers accessed Ring cameras to harass and stalk owners, including families and children. The complaint notes several examples of these cases, including one when an 87-year-old woman in an assisted living facility was threatened and sexually propositioned.

The FTC complaint alleges that Ring’s security promises to customers would have reasonably led them to believe that the company was taking steps to prevent such attacks. The complaint also notes that Ring failed to limit customers’ video data to employees who needed access, instead of allowing every employee and well as hundreds of contractors to access feeds whether they needed to or not.

“This approach to access meant that Ring’s employees and third-party contractors
had dangerous — and unnecessary — access to highly sensitive data,” the complaint said.

The proposed settlement orders Ring to pay $5.8 million which will go to customer refunds. It also requires Ring to delete any customer videos, face embedding and face data collected prior to 2018 as well as any work products derived from the data. Ring would agree under the order to notify the FTC about future incidents of unauthorized access.

The FTC’s settlement with Alexa would prohibit Amazon from using geolocation, voice information, and children’s voice information for the creation or improvement of any data product.

Both settlements are pending court approval.

The FTC also noted several “unreasonable data security and privacy practices” the company had between 2016 and 2020, including: failing to encrypt customer video at rest, failing to obtain customer consent for reviewing video data for research and failing to provide employees with data security training.

Amazon told lawmakers in a letter in 2020 that it had updated its security practices to encrypt video feeds and “proactively monitory” for credential stuffing.

Amazon said in a statement it disagreed with both the Ring and Alexa complaints and denied violating the law in both cases.

“Our focus has been and remains on delivering products and features our customers love, while upholding our commitment to protect their privacy and security,” an Amazon spokesperson said about the Ring settlement. “Ring promptly addressed these issues on its own years ago, well before the FTC began its inquiry.”

About the Alexa settlement, the company said in an online blog it has applied “rigorous standards” to protect children’s data.

This is the second major swing by the FTC at children’s privacy in recent weeks. Earlier this month the FTC accused Facebook of violating federal children’s privacy law, proposing a settlement that would prohibit the company from profiting off of children’s data. Meta is challenging the complaint.

Updated May 31, 2023: To include statements from the FTC and Amazon.

The post FTC settles with Amazon Ring over hacking, security incidents appeared first on CyberScoop.

]]>
US intelligence research agency examines cyber psychology to outwit criminal hackers https://cyberscoop.com/iarpa-cyber-psychology-hackers/ Tue, 30 May 2023 15:37:46 +0000 https://cyberscoop.com/?p=74367 An Intelligence Advanced Research Projects Activity project looks to study hackers' psychological weaknesses and exploit them.

The post US intelligence research agency examines cyber psychology to outwit criminal hackers appeared first on CyberScoop.

]]>
It’s one of the most well-worn clichés in cybersecurity — humans are the weakest link when it comes to defending computer systems. And it’s also true.

Every day, we click links we shouldn’t, download attachments we should avoid and fall for scams that all too often are obvious in hindsight. Overwhelmed by information, apps and devices — along with our increasingly short attention spans — we are our own worst enemies in cyberspace. 

The natural human weaknesses that make defending the open internet so difficult are well understood and plenty of companies and organizations work to make the average person behind the keyboard better at digital self-defense. But what cybersecurity researchers haven’t focused much attention on until now are the psychological weaknesses of attackers. What are their deficiencies, habits or other patterns of behavior that can be used against them? What mistakes do they typically make? And how can those traits be used to stop them?

A new project at the Intelligence Advanced Research Projects Activity — the U.S. intelligence community’s moonshot research division — is trying to better understand hackers’ psychology, discover their blind spots and build software that exploits these deficiencies to improve computer security. 

“When you look at how attackers gain access, they often take advantage of human limitations and errors, but our defenses don’t do that,” Kimberly Ferguson-Walter, the IARPA program manager overseeing the initiative, told CyberScoop. By finding attackers’ psychological weaknesses, the program is “flipping the table to make the human factor the weakest link in cyberattacks,” she said.

Dubbed Reimagining Security with Cyberpsychology-Informed Network Defenses or “ReSCIND,” the IARPA initiative is an open competition inviting expert teams to submit proposals for how they would study hackers’ psychological weaknesses and then build software exploiting them. By funding the most promising proposals, IARPA hopes to push the envelope on how computers are defended. 

The project asks participants to carry out human-subject research and recruit computer security experts to determine what types of “cognitive vulnerabilities” might be exploited by defenders. By recruiting expert hackers and studying how they behave when attacking computer systems, the project aims to discover — and potentially weaponize — their weaknesses.

Ferguson-Walter describes “cognitive vulnerabilities” as an umbrella term for any sort of human limitation. The vulnerabilities a cyber psychological defense system might exploit include the sunk cost fallacy, which is the tendency of a person to continue investing resources in an effort when the more rational choice would be to abandon it and pursue another. In a network defense context, this might involve tricking an attacker into breaking into a network via a frustrating, time-consuming technique.

Another example Ferguson-Walter cites to explain what weaknesses might be exploited is the Peltzman Effect, which refers to the tendency of people to engage in more risky behavior when they feel safe. The canonical example of the Peltzman Effect is when mandatory seatbelt laws were put into effect and drivers engaged in more risky driving, thinking that they were safe wearing a seat belt. The effect might be used against attackers in cyberspace by creating the perception that a network is poorly defended, inducing a sense of safety and resulting in less well-concealed attack. 

Just as the tools of behavioral science have been used to revolutionize the fields of economics, advertising and political campaigning, ReSCIND and the broader field of cyber psychology aims to take insights about human behavior to improve outcomes. By placing the behavior of human beings at the center of designing a defensive system, cyber psychology aims to create systems that address human frailties. 

“Tactics and techniques used in advertising or political campaigning or e-commerce or online gaming or social media take advantage of human psychological vulnerability,” says Mary Aiken, a cyber psychologist and a strategic adviser to Paladin Capital Group, a cybersecurity-focused venture capital firm. Initiatives such as ReSCIND “apply traditional cognitive behavioral science research — now mediated by cyber psychological findings and learnings — and apply that to cybersecurity to improve defensive capabilities,” Aiken said.

Cybersecurity companies are using some tools of cyber psychology in designing defenses but have not done enough to integrate the study of human behavior, said Ferguson-Walter. Tools such as honeypots or decoy systems on networks might be thought of as turning the psychological weaknesses of attackers against them, but defenders could do more to exploit these weaknesses.

Among the central challenges facing ReSCIND participants is figuring out what weaknesses a given attacker might be susceptible — all while operating in a dynamic environment. To address this, the project proposal asks participants to come up with what it conceives of as “bias sensors” and “bias triggers,” which, together, identify a vulnerability and then induce a situation in which an attacker’s cognitive vulnerabilities are exploited. 

Exactly how that system will function and whether it can be integrated into a software solution is far from clear, but Ferguson-Walter says it’s important for IARPA to pursue these types of high-risk, high-reward projects that in the absence of government funding are unlikely to receive support. 

And amid widespread computer vulnerabilities and only halting progress in securing online life, a new approach might yield unexpected breakthroughs. “We’ve had 50 or 60 years of cybersecurity and look where we are now: Everything is getting worse,” Aiken says. “Cybersecurity protects your data, your systems, and your networks. It does not protect what it is to be human online.” 

The post US intelligence research agency examines cyber psychology to outwit criminal hackers appeared first on CyberScoop.

]]>
Reality check: What will generative AI really do for cybersecurity? https://cyberscoop.com/generative-ai-chatbots-cybersecurity/ Tue, 23 May 2023 14:45:31 +0000 https://cyberscoop.com/?p=74239 Cybersecurity professionals are eyeing generative AI’s defensive potential with a mix of skepticism and excitement.

The post Reality check: What will generative AI really do for cybersecurity? appeared first on CyberScoop.

]]>
Everywhere you look across the cybersecurity industry — on conference stages, trade show floors or in headlines — the biggest companies in the business are claiming that generative AI is about to change everything you’ve ever known about defending networks and outsmarting hackers.

Whether it’s Microsoft’s Security Copilot, Google’s security-focused large language model, Recorded Future’s AI-assistant for threat intelligence analysts, IBM’s new AI-powered security offering or a fresh machine learning tool from Veracode to spot flaws in code, tech companies are tripping over one another to roll out their latest AI offerings for cybersecurity. 

And at last month’s RSA Conference — the who’s-who gathering of cybersecurity pros in San Francisco — you couldn’t walk more than a few feet on the showroom floor without bumping into a salesperson touting their firm’s new AI-enabled product. From sensational advertising, to bombastic pitches to more measured talks from top national security officials, AI was on everyone’s lips.

Recent years’ rapid advances in machine learning have made the potential power of AI blindingly obvious. What’s much less obvious is how that technology is going to be usefully deployed in security contexts and whether it will deliver the major breakthroughs its biggest proponents promise. 

Over the course of a dozen interviews, researchers, investors, government officials and cybersecurity executives overwhelmingly say they are eyeing generative AI’s defensive potential with a mix of skepticism and excitement. Their skepticism is rooted in a suspicion that the marketing hype is misrepresenting what the technology can actually do and a sense that AI may even introduce a new set of poorly understood security vulnerabilities.

But that skepticism is tempered by real excitement. By processing human language as it is actually spoken, rather than in code, natural language processing techniques may enable humans and machines to interact in new ways with unpredictable benefits. “This is one of those moments where we see a fundamental shift in human computer interaction, where the computer is more amenable to the way that we naturally do things,” said Juan Andres Guerrero-Saade, the senior director of SentinelLabs, the research division of the cybersecurity firm SentinelOne. 

For veterans of the cybersecurity industry, the intense hype around AI can feel like deja vu. Recent advances in generative AI — tools that can replicate human speech and interact with the user — have captured public attention, but the machine learning technologies that underpin it have been widely deployed by cybersecurity firms in the past decade. Machine learning tools already power anti-virus, spam-filtering and phishing-detection tools, and the notion of “intelligent” cyberdefense — a defense that uses machine learning to adapt to attack patterns — has become a marketing staple of the cybersecurity industry. 

“These machine learning tools are fantastic at saying here’s a pattern that no human is going to have been able to find in all of this massive data,” says Diana Kelley, the chief information security officer at Protect AI, a cybersecurity company.

In cybersecurity contexts, machine learning tools have sat mostly in the back office, powering essential functions, but the revolution in generative AI may change that. This is largely due to the aggressiveness with which the industry’s leader, OpenAI, has released its generative AI products.

As the technology has advanced in recent years, AI incumbents such as Google, which pioneered many of the technical advances that make possible today’s generative AI tools, have hesitated to release their products into the wild. OpenAI, by contrast, has made its AI tools far more readily available and built slick user interfaces that make working with their language models incredibly easy. Microsoft has poured billions of dollars of investments and cloud computing resources into OpenAI’s work and is now integrating the start-up’s large language models into its product offerings, giving OpenAI access to a massive customer base. 

That’s left competitors playing catch-up. During his recent keynote address at Google’s developer conference, company CEO Sundar Pichai said some version of “AI” so many times that his performance was turned into an instantly viral video that clipped together his dozens of references to the technology

With AI companies one of the few slices of the tech sector still attracting venture capital in a slowing economy, today’s start-ups are quick to claim that they too are incorporating generative AI into their offerings. At last month’s RSA conference, investors in attendance were deluged by pitches from firms claiming to put AI to work in cybersecurity contexts, but all too often, the generative AI tie-ins appeared to be mere hot air.

“What we saw at the show was a lot of people that were slapping a front end on ChatGPT and saying, ‘Hey, look at this cool product,’” said William Kilmer, a cybersecurity-focused investor at the venture capital firm Gallos, to describe the scores of pitches he sat through at RSA with thin claims of using generative AI. 

And as companies rush to attract capital and clients, the reality of generative AI can easily be glossed over in marketing copy. “The biggest problem we have here is one of marketing, feeding marketing, feeding marketing,” Guerrero-Sade from SentinelLabs argues. “At this point, people are ready to pack it up, and say the security problem is solved — let’s go! And none of that is remotely true.”

Separating hype from reality, then, represents a tough challenge for investors, technologists, customers and policymakers.

Anne Neuberger, the top cybersecurity adviser at the White House, sees generative AI as a chance to make major improvements in defending computer systems but argues that the technology hasn’t yet delivered to its full potential.

As Neuberger sees it, generative AI could conceivably be used to clean up old code bases, identify vulnerabilities in open-source repositories that lack dedicated maintainers, and even be used to produce provably secure code in formal languages that are hard for people to write. Companies that run extensive end-point security systems — and have access to the data they generate — are in a good position to train effective security models, she believes. 

“Bottom line, there’s a lot of opportunity to accelerate cybersecurity and cyberdefense,” Neuberger told CyberScoop. “What we want to do is make sure that in the chase between offense and defense that defense is moving far more quickly. This is especially the case since large language models can be used to generate malware more quickly than before.”

But on the flip side, effectively implementing large language models in security-sensitive contexts faces major challenges. During her time as an official at the National Security Agency, Neuberger said she witnessed these hurdles first-hand when the agency began using language models to supplement the work of analysts, to do language translation and to prioritize what intelligence human analysts should be examining.

Cleaning data to get it usable for machine learning required time and resources, and once the agency rolled out the models for analysts to use some were resistant and were concerned that they could be displaced. “It took a while until it was accepted that such models could triage and to give them a more effective role,” Neuberger said.

For cybersecurity practitioners such as Guerrero-Saade and others who spoke with CyberScoop, some of the most exciting applications for generative AI lie in reverse engineering, the process to understand what a piece of software is trying to do. The malware research community has quickly embraced the use of generative AI, and within a month of ChatGPT’s release a plug-in was released integrating the chatbot with IDA Pro, the software disassembler tool. Even after years of reverse engineering experience, Guerrero-Saade is learning from these tools, such as when he attended a recent training, didn’t understand everything and leaned on ChatGPT to get him started. 

ChatGPT really shines when it functions as a kind of “glue logic,” in which it functions as a translator between programs that aren’t associated with one another or a program with a human, says Hammond Pearce, a research assistant professor at New York University’s Center for Cybersecurity. “It’s not that ChatGPT by itself isn’t amazing, because it is, but it’s the combination of ChatGPT with other technologies … that are really going to wow people when new products start coming out.”

For now, defensive cybersecurity applications of generative AI are fairly nascent. Perhaps the most prominent such product — Microsoft’s Security Copilot — remains in private preview with a small number of the company’s clients as it solicits feedback. Using it requires being integrated into the Microsoft security stack and running the company’s other security tools, such as Intune, Defender and Sentinel. 

Copilot offers an input system similar to ChatGPT and lets users query a large language model that uses both OpenAI’s GPT-4 and a Microsoft model about security alerts, incidents and malicious code. The goal is to save analysts time by giving them a tool that quickly explains the code or incidents they’re examining and is capable of quickly spitting out analytical products — including close-to-final slide decks. 

Chang Kawaguchi, a Microsoft VP and the company’s AI security architect, sees the ability of the current generation of machine learning to work with human language — even with highly technical topics like security — as a “step function change.” The most consistent piece of feedback Kawaguchi’s colleagues have received when they demo Copilot is, “Oh, God, thank you for generating the PowerPoints for us. Like, I hate that part of my job.”

“We couldn’t have done that with last generation’s machine learning,” Kawaguchi told CyberScoop. 

Despite their smarts, today’s machine learning tools still have a remarkable penchant for stupidity. Even in Microsoft’s YouTube demo for Copilot, the company’s pitchwoman is at pains to emphasize its limitations and points out that the model refers to Windows 9 — a piece of software that doesn’t exist — as an example of how it can convey false information. 

As they are more widely deployed, security experts worry generative AI tools may introduce new, difficult to understand security vulnerabilities. “No one should be trusting these large language models to be reliable right now,” says Jessica Newman, the director of the AI Security Initiative at the University of California at Berkeley.

Newman likens large language models to “instruction following systems” — and that means they can be given instructions to engage in malicious behavior. This category of attack — known as “prompt injection” — leaves models open to manipulation in ways that are difficult to predict. Moreover, AI systems also have typical security vulnerabilities and are susceptible to data poisoning attacks or attacks on their underlying algorithms, Newman points out. 

Addressing these vulnerabilities is especially difficult because the nature of large language models means that we often don’t know why they output a given answer — the so-called “black box” problem. Just like a black box, we can’t see inside a large language model and that makes their work difficult to understand. While language models are rapidly advancing, tools to improve their explainability are not moving ahead at the same speed. 

“The people who make these systems cannot tell you reliably how they’re making a decision,” Newman said. “That black box nature of these advanced AI systems is kind of unprecedented when dealing with a transformative technology.”

That makes operators of safety critical systems — in, for example, the energy industry — deeply worried about the speed with which large language models are being deployed. “We are concerned by the speed of adoption and the deployment in the field of LLM in the cyber-physical world,” said Leo Simonovich, the vice president and global head for industrial cyber and digital security at Siemens Energy.

AI adoption in the operational technology space — the computers that run critical machinery and infrastructure — has been slow “and rightly so,” Simonovichn said. “In our world, we’ve seen a real hesitancy of AI for security purposes, especially bringing IT security applications that are AI powered into the OT space.”

And as language models deploy more widely, security professionals are also concerned that they lack the right language to describe their work. When LLMs confidently output incorrect information, researchers have taken to describing such statements as “hallucination” — a term that anthropomorphizes a computer system that’s far from human. 

These features of LLMs can result in uncanny interactions between man and machine. Kelley, who is the author of the book Practical Cybersecurity Architecture, has asked various LLMs whether she has written a book. Instead of describing the book she did write, LLMs will instead describe a book about zero trust that she hasn’t — but is likely to — write.

“So was that a huge hallucination? Or was that just good old mathematical probability? It was the second but the cool term is hallucination.” Kelley said. “We have to think about how we talk about it.”

Correction, May 23, 2023: An earlier version of this article misspelled William Kilmer’s last name.

The post Reality check: What will generative AI really do for cybersecurity? appeared first on CyberScoop.

]]>
When it comes to online scams, ‘ChatGPT is the new crypto’ https://cyberscoop.com/chatgpt-scam-facebook-meta-hackers-malware/ Wed, 03 May 2023 12:00:00 +0000 https://cyberscoop.com/?p=73714 Researchers at Meta have seen a rise in ChatGPT-themed attacks, the company said in an overview of cybersecurity issues on its platforms.

The post When it comes to online scams, ‘ChatGPT is the new crypto’ appeared first on CyberScoop.

]]>
Digital fraudsters are as enamored with ChatGPT as everyone else on the internet and have taken advantage its allure to spread a new strain of malware across Facebook, Instagram and WhatsApp in recent months.

Since March, Meta has blocked more than 1,000 unique ChatGPT-themed web addresses designed to deliver malicious software to users’ devices, the company revealed Wednesday in a report on security issues across the company three major platforms.

“As an industry we’ve seen this across other topics that are popular in their time, such as crypto scams fueled by the immense interest in digital currency,” Guy Rosen, Meta’s chief information security officer, told reporters ahead of the report’s release. “So from a bad actor’s perspective, ChatGPT is the new crypto.”

Indeed, hackers are using the skyrocketing interest in artificial intelligence chatbots such as ChatGPT to convince people to click on phishing emails, to register malicious domains that contain ChatGPT information and develop bogus apps that resemble the generative AI software. At Meta, the company’s security team has observed around 10 malware families using ChatGPT and other generative AI-related themes to lure victims into installing malware on their systems, Meta researchers Duc H. Nguyen and Ryan Victory said in a blog posted to the company’s site.

The malware used in these cases are part of attackers’ efforts to take control of business account pages and accounts in order to run unauthorized ads, the company said, which can then lead to further malicious activity. Along with identifying a new strain of malware dubbed “NodeStealer,” the company also said it is launching a support tool that guides users through a step-by-step process to identify and remove malware.

Hackers have created malicious browser extensions hosted on the official browser web stores claiming to offer ChatGPT-related tools, Rosen said, noting that some of the malicious tools did include working ChatGPT functionality alongside the malicious code.

Meta’s security researchers have not yet seen generative AI used to craft the attacks, or as the interaction point with victims, rather than as a general lure to attract victims, said Nathaniel Gleicher, the company’s head of security policy. Meta is thinking through how AI could be abused in that way, he added, “but it’s very early in the development of these tools.”

The malware was also observed across a range of platforms, including the major file-sharing companies such as Dropbox, Google Drive, Mega and others, with the ultimate goal of compromising businesses with access to ad accounts across the internet, the researchers said. Rosen said Meta shared information about the malicious tools with the platforms involved.

Meta researchers working this issue also identified a new malware strain they dubbed “NodeStealer” in late January 2023 that targeted internet browsers on Windows systems with the goal of stealing cookies and saved usernames and passwords to ultimately compromise Facebook, Gmail and Outlook accounts, the researchers said. Analysis of the malware determined that it likely originated in Vietnam, they added.

The malware was discovered within two weeks of it being deployed, and counter actions against the operation — including takedown requests to third-party registrars, hosting providers and others — “led to a successful disruption of the malware,” the researchers added, noting that they have not seen new samples from the NodeStealer family since February 27 of this year.

In October, Meta shared information on more than 400 malicious Android and iOS apps that targeted users for their Facebook login information. Those apps indiscriminately targeted the general public but Wednesday’s revelations are different, Gleicher said, calling it “aggressive and persistent malware campaigns that target businesses.”

Business-account attacks typically start with attackers going after the personal accounts of people who manage or are connected to the business pages or advertising accounts, the company said. In response, the company is updating its approach to the problem with the malware removal support, and also changes to how business pages are managed, such as by giving administrators the power to better limit who has access to key functions and expanded authorization requirements for sensitive actions such as accessing credit lines.

Later this year, the company is planning later to roll out Meta Work accounts to allow business users to log in and operate pages without requiring a personal account as part of an effort to keep business accounts secure even if a personal account has been compromised.

Also on Wednesday, Meta released its quarterly adversarial threat report, which details the company’s fight against cyberespionage and coordinated inauthentic behavior networks. In the first quarter of 2023, the company removed three cyberespionage networks: one from a state-aligned threat group in Pakistan, a mercenary cyber operation known as Bahamut, and a pro-Indian hacking group known as Patchwork.

Meta also removed a flurry of inauthentic behavior networks linked to China, Iran, Toto and Burkina Faso, Georgia, and another based in Venezuela and the U.S.

The cross Venezuelan/U.S. operation included 24 Facebook accounts, 54 pages and four Instagram accounts, the company said, but also included activity on Twitter, Medium and websites posing as news organizations. The operation targeted Guatemala and Honduras, according to Meta, but had little to no engagement from authentic communities on the company’s services.

The post When it comes to online scams, ‘ChatGPT is the new crypto’ appeared first on CyberScoop.

]]>
Biden administration wants to avoid 5G mistakes in race to beat China on 6G https://cyberscoop.com/biden-5g-china-6g/ Fri, 21 Apr 2023 09:00:00 +0000 https://cyberscoop.com/?p=73367 The White House seeks to shape next-generation telecommunications standards and technology before falling behind to Beijing.

The post Biden administration wants to avoid 5G mistakes in race to beat China on 6G appeared first on CyberScoop.

]]>
The United States is aiming to shape the development of 6G telecom technology at an early stage of research and development and to avoid letting China build up an early lead in next-generation telecommunications, a senior Biden administration official told reporters ahead of a Friday summit on 6G.

“We want to take the list of lessons we’ve learned from 5G, about the importance of early involvement and resilience, and to drive an approach to 6G that optimizes performance, accessibility, and security,” Anne Neuberger, the deputy national security advisor for cyber and emerging technology, said on a call with reporters Thursday.

Because 6G remains at the research and development stage, “we’re at the stage where we can shape and develop that,” Neuberger said.

Hosted by the National Science Foundation, Friday’s summit will bring together members of the private sector, civil society, and allied nations to discuss topics around 6G, including workforce, federal funding, and standardization. The summit is the latest example of Washington’s attempt to win the race against China in developing a range of emerging technologies seen as key to national security.

Getting caught flat-footed on 6G would give dominance to “adversaries who have shown a willingness to provide the market by offering distorted incentives, so they can achieve their goals of compromising our security,” a senior administration official said during the call.

The United States learned that lesson the hard way after China dominated the global roll-out of 5G infrastructure by offering subsidies to Huawei that let the Chinese telecommunications giant undercut its Western competitors in selling 5G telecommunications gear around the world for competitive prices.

The United States banned the firm’s equipment domestically over concerns of Chinese spying, but getting other countries to follow suit has been an uphill struggle for the U.S. government.

The senior official noted that dominating the telecom market gives China an advantage in spying and potentially disrupting another nation’s communications during a conflict or crisis, which experts warn could happen if China decided to attack Taiwan.

Friday’s summit aims to bring together private and public sector players in the telecom industry to help shape standards and technology for 6G with a view toward preventing a repeat of China’s dominance of 5G. Among those in attendance will be Federal Communications Commission Chairwoman Jessica Rosenworcel and the National Telecommunications and Information Administration’s Alan Davidson.

The official didn’t rule out collaborating with China on 6G standards in the future. “If China is willing to work with us on that we’re very much willing,” the senior administration official said.

“Based on what we’ve observed that will be a challenge,” the official added. “We know what we need to have, which is secure open and interoperable networks.”

The post Biden administration wants to avoid 5G mistakes in race to beat China on 6G appeared first on CyberScoop.

]]>
CISA and partners issue secure-by-design principles for software manufacturers  https://fedscoop.com/cisa-and-partners-issue-secure-by-design-principles-for-software-manufacturers/ Thu, 13 Apr 2023 15:08:23 +0000 https://cyberscoop.com/?p=73093 The post CISA and partners issue secure-by-design principles for software manufacturers  appeared first on CyberScoop.

]]>
The post CISA and partners issue secure-by-design principles for software manufacturers  appeared first on CyberScoop.

]]>
Recorded Future offers peek at the AI future of threat intelligence https://cyberscoop.com/recorded-future-openai-gpt-intelligence/ Tue, 11 Apr 2023 14:00:00 +0000 https://cyberscoop.com/?p=72965 The Massachusetts-based cybersecurity company has fine-tuned an OpenAI model to help analysts synthesize data.

The post Recorded Future offers peek at the AI future of threat intelligence appeared first on CyberScoop.

]]>
The threat intelligence company Recorded Future announced on Tuesday that it is rolling out a generative artificial intelligence tool that relies on a fine-tuned version of Open AI’s GPT model to synthesize data. 

Rapid advances in generative AI in recent months have led to a flurry of initiatives by companies to incorporate the technology into their offerings, and companies such as Recorded Future — with its massive trove of proprietary data — are showing how the technology is likely to be incorporated into products in the short term. 

Over the course of nearly 15 years in business, Recorded Future has collected a huge amount of data on the activity of malicious hackers, their technical infrastructure and criminal campaigns. The company has used that data to tune a version of Open AI’s deep learning models to build a tool that summarizes data and events for analysts and clients. By connecting the AI model to its intelligence graph, which collects data from across the web, the model will include near real-time information about commonly exploited vulnerabilities or recent breaches.

“This is something that for a human analyst can take several hours — reading all this material and then generating a summary,” Staffan Truvé, co-founder and chief technology officer of Recorded Future, told CyberScoop. “As you move through the information, you now have someone summarizing it in real time.” 

Cybersecurity companies have broadly incorporated AI into their products over the past decade, but the next step of incorporating machine learning into corporate applications is figuring how to build useful generative tools.

Companies such as Recorded Future with large internal data holdings have in recent months embraced deep learning technology to build generative AI tools. Late last month, Bloomberg rolled out BloombergGPT, a 50 billion parameter model trained on financial data.

By taking large data holdings and feeding them into AI models, companies like Recorded Future and Bloomberg are attempting to build generative AI systems that are finely tuned to answering the questions that their clients rely on them to answer. Companies with large data holdings will likely look to generative AI to turn that data into a more productive resource.

But Bloomberg and Recorded Future also offer an example of how companies can take different approaches in building generative AI models with major implications for the broader industry. While Bloomberg has built its own bespoke model, Recorded Future relies on OpenAI’s foundational GPT model and pays the company based on how much it queries the model.

While Truvé would not comment on the financial terms of the relationship between Recorded Future and OpenAI, it is likely that these types of business-to-business deals represent a fairly lucrative business model for OpenAI, a company that faces a difficult road to profitability while facing staggering computing costs to train its models. 

It’s difficult to evaluate the quality of Recorded Future’s AI offerings. The company has not tested its model against standard AI benchmarking tools, instead relying on its in-house analysts to test and verify its accuracy. The company relies on OpenAI’s most advanced GPT models, but OpenAI has severely limited the amount of information it makes available about its top-of-the-line products. 

In their eagerness to answer questions, advanced AI models are prone to hallucination — confidently stating a piece of information as fact that has no basis in reality. But Truvé said the company’s model manages to mostly avoid hallucinating in large part because its primary application is in summarizing a body of information returned as part of a query. 

Indeed, the performance of Recorded Future’s AI is aided by the fact that its purpose is fairly straight forward. The company’s AI feature functions mainly as a summarizing tool, and Truvé sees the AI tool as something that will augment cybersecurity analysts. 

“The challenge facing people in cybersecurity is that there is too much information and too few people to process it.” Truve said “This tries to solve the lack of time available to analysts and the rather acute lack of analysts.” 

The post Recorded Future offers peek at the AI future of threat intelligence appeared first on CyberScoop.

]]>
The Discord servers at the center of a massive US intelligence leak https://cyberscoop.com/discord-intelligence-leak-ukraine/ Mon, 10 Apr 2023 19:49:22 +0000 https://cyberscoop.com/?p=72955 The intelligence files related to the Ukraine war that appeared online aren't the first sensitive military documents shared on video game forums.

The post The Discord servers at the center of a massive US intelligence leak appeared first on CyberScoop.

]]>
Over the past few days, U.S. investigators and digital security researchers alike have probed what would seem to be the most unlikely of places to determine the origin of a major leak of classified intelligence documents: video game-focused chat servers.

A series of video game servers have emerged as key distribution points for a cache of perhaps as many as 100 intelligence documents containing secret and top secret information about the war in Ukraine. The documents first appears on a server known as “Thug Shaker Central” and were then reposted on servers known as “WowMao” and “Minecraft Earth Map,” according to an investigation by Bellingcat, an investigative journalism outfit based in the Netherlands, that provides the most thorough account to date of how the documents made it into the public domain. 

The documents mostly date from February and March, but may include material from as far back as January. At least one set of the files circulating online includes a photograph of a handwritten character sheet of a roleplaying game — Doctor Izmer Trotzky. 

Last week’s leaks represent a stark departure from how classified information has reached the public in recent years. “When you think of these big leaks, you think of whistleblowers like Snowden, hack and dumps from Russia,” Aric Toler, who has investigated the Discord leaks for Bellingcat, wrote in an email to CyberScoop. “This is just a guy in a tiny Discord server sharing hundreds of insanely sensitive [files] with his gaming buddies.” 

After being posted, the files appear to have sat dormant for about a month, until they were shared last week on 4chan and Telegram, where they received greater attention. “Since Discord isn’t really publicly archived, indexed, or searchable (as 4chan and, to a lesser degree, Telegram are), then it’s not like you can easily scrape and analyze these sources,” Toler said. “So it’s a bit of a perfect storm.”

The release of classified material on online gaming forums is not as novel as it might seem. In the last two years, fans of the free-to-play combat game War Thunder have repeatedly posted classified material in the game’s online forum — on one occasion to settle an obscure argument about the design details of a tank depicted in the game. 

Highly sensitive classified information repeatedly appearing on online gaming forums has intelligence experts exasperated. “The idea of paying a source to dead-drop this stuff when it’s popping up unsolicited on Minecraft and world of tanks seems quaint,” says Gavin Wilde, a senior fellow at the Carnegie Endowment for International Peace and a 10-year veteran of the National Security Agency. 

A furious effort inside the Department of Defense is now attempting to verify this most recent cache documents circulating online, assess the damage and prevent further fallout. The Department of Justice has opened an investigation into the leak that aims to determine its source, a probe that will likely scrutinize the online communities where the material appears to have originated. 

The leaked documents are photographs of briefing slides that appear to have been folded up. They are photographed mostly against what appears to be a low table. In the background of some of the photographs can be seen a bottle of Gorilla Glue and what appears to be a strap with the Bushnell brand, a popular maker of outdoor optics and rifle scopes. 

The documents amount to one of the most serious leaks in the history of the U.S. intelligence community, on par with the WikiLeaks disclosures and material made public by the group known as the ShadowBrokers, according to intelligence experts. The material spans the U.S. intelligence community, including information obtained by the CIA, the NSA and the National Reconnaissance Office, which operates America’s fleets of highly secretive spy satellites.

The material includes timetables for the delivery of munitions to Ukraine by South Korea, references to sensitive American satellite surveillance capabilities, and indications that the United States has managed to penetrate the Russian military to such an extent that it has been able to warn Ukraine about the site of upcoming artillery and missile strikes. 

The cache also includes reference to communication between a cybercriminal group and an officer of Russia’s powerful domestic intelligence agency, the FSB, claiming that the group had gained access to the computer systems of a Canadian pipeline and that it could use that access to disrupt a pipeline. That claim has not been confirmed, and it is fully possible the communications intercepted by U.S. intelligence services amount to not more than bluster by the hacking group. 

Ukrainian officials have cautioned that the leaked document may include falsified information or may be entirely fabricated, but so far, the documents appear to be mostly authentic with only minor alterations that appear to have occurred after the documents began circulating more widely last week on 4chan and Russian telegram channels. 

“We are very fortunate that this leak has received such a skeptical reception,” said John Hultquist, the head of threat intelligence at Mandiant.

The post The Discord servers at the center of a massive US intelligence leak appeared first on CyberScoop.

]]>
Microsoft leads effort to disrupt illicit use of Cobalt Strike, a dangerous hacking tool in the wrong hands https://cyberscoop.com/microsoft-cobalt-strike-hacking-tool/ Thu, 06 Apr 2023 16:00:00 +0000 https://cyberscoop.com/?p=72887 The action against illicit versions of legitimate Cobalt Strike applications represents the culmination of a year-long investigation.

The post Microsoft leads effort to disrupt illicit use of Cobalt Strike, a dangerous hacking tool in the wrong hands appeared first on CyberScoop.

]]>
Microsoft’s Digital Crimes Unit, cybersecurity firm Fortra and the Health Information Sharing & Analysis Center announced legal action Thursday to seize domains related to criminal activity involving cracked copies of the security testing application Cobalt Strike, which has become a favorite tool for cybercriminals to carry out attacks around the world.

Cobalt Strike, an adversary emulation tool that information security professionals use to evaluate network and system defenses to enable better security, like other legitimate hacking tools, is regularly abused by cybercriminals as part of attacks ranging from financially motived cybercrime to high-end state-aligned attacks.

Fortra, the maker of Cobalt Strike, works to prevent Cobalt Strike getting into the hands malicious hackers, but manipulated versions of the software have inevitably proliferated online. Thursday’s action attempts to disrupt the use of these cracked, older versions of Cobalt Strike that cybercriminals widely use to carry out attacks, especially to deploy ransomware.

“If you identify their preferred method of attack and make it no longer usable that’s a good thing,” said Amy Hogan-Burney, Microsoft’s general manager for cybersecurity policy and protection.

The court order names a range of entities and groups the companies allege misuse their technologies, including the LockBit and Conti ransomware groups and a series of cybercrime operations tracked by Microsoft under various designations. In a 223-page complaint filed in the U.S. District Court in the Eastern District of New York, the companies detail known IP addresses associated with the criminal activity, along with the range of domain names utilized by the criminal groups.

The court order instructs data centers and hosting providers to block traffic to the known IPs and domains and “completely disable the computers, servers, electronic data storage devices” and other infrastructure associated with the defendants’ activities, as well as transfer control of the IPs and domains to Microsoft.

Microsoft has in recent years pioneered the use of domain seizure as a way to disrupt the technical infrastructure malicious hackers rely on, and Thursday’s action targeting Cobalt Strike builds on that earlier work to carry out the novel targeting of a hacking tool. Thursday’s legal order targets 16 anonymous “John Doe” actors engaged in a range of criminal behavior, from ransomware activity to malware distribution and development.

The action against illicit Cobalt Strike applications represents the culmination of what Hogan-Bruney said was a year-long investigation, and Thursday’s attempt to disrupt use of Cobalt Strike is likely only a first step to challenge illicit use of the hacking tool. Malicious actors will likely be able to retool their infrastructure, and Cobalt Strike relies on dynamic hosting, creating a challenge in disrupting it use.

Hogan-Burney said that investigators in her office have coined a joke about the operation that’s by now well-worn: “We call this an advanced persistent disruption.”

“It’s insufficient to think of it as a single action like we used to,” she said.

Legitimate cybersecurity researchers use Cobalt Strike to emulate the work of an attacker and to probe weaknesses in computer systems and maintain a long-term, covert presence on a network. But in the wrong hands, Cobalt Strike provides an attacker with sophisticated hacking tools, one that offers highly sophisticated capabilities off the shelf — while having to write less custom code that would make it easier to trace an attack.

That’s made Cobalt Strike a favorite of malicious hackers in recent years. The ransomware gang Conti used it in attacking the Irish healthcare system in 2021 and in a crippling attack on the Costa Rican government last year. Indeed, ransomware families associated with or deployed by cracked copies of Cobalt Strike have been linked to more than 68 ransomware attacks impacting healthcare organizations in more than 19 countries around the world, Hogan-Burney said in a blog announcing Thursday’s action. A June 2021 analysis from cybersecurity firm Proofpoint reported a 161% increase of threat actors using Cobalt Strike between 2019 and 2020, and said it was a “high-volume threat in 2021.”

Furthermore, internal chat logs from the Conti ransomware group revealed in the weeks after the Russian invasion of Ukraine showed that the group invested tens of thousands of dollars in acquiring legitimate licenses for Cobalt Strike via a third-party company, cybersecurity journalist Brian Krebs reported at the time.

Fortra executives told CyberScoop they recognize the power of the tool and its prevalence in the cybercrime ecosystem and were happy to participate.

“As you can imagine, an effort such as this takes time to research, document, and coordinate before legal action can start,” said Matthew Schoenfeld, president of Fortra. “It’s taken months of targeted hard work and joint investigations and we’re happy to be working with Microsoft and H-ISAC to reduce risk and help keep bad actors at bay.”

Bob Erdman, the company’s associate vice president of research and development, said that “Cobalt Strike is the go-to security tool used legitimately by reputable entities to help strengthen their security posture and prevent bad actors from compromising their infrastructure. This action is an example of industry members combining resources and expertise to block the criminal abuse of legitimate security tools, making it harder for malicious actors to operate.”

The post Microsoft leads effort to disrupt illicit use of Cobalt Strike, a dangerous hacking tool in the wrong hands appeared first on CyberScoop.

]]>
Twitter’s recommendation algorithm opens platform to manipulation, bot attacks, researcher finds https://cyberscoop.com/twitter-algorithm-cve-bots-elon-musk/ Tue, 04 Apr 2023 15:53:36 +0000 https://cyberscoop.com/?p=72814 Twitter's source code apparently revealed how it's possible to game the company's software to reduce access to specific accounts.

The post Twitter’s recommendation algorithm opens platform to manipulation, bot attacks, researcher finds appeared first on CyberScoop.

]]>
Just three days after Twitter released a portion of its source code online that included the app’s recommendation algorithm, a security researcher found that attackers could manipulate the software to effectively silence specific accounts on the social media platform.

An Argentine developer flagged the issue on the software hosting service GitHub on April 1 after Twitter made the code public in a pair of repositories on the site. “The current implementation allows for coordinated hurting of account reputation without recourse,” the developer wrote.

As a result, the nonprofit Mitre Corporation assigned portions of Twitter’s code a common vulnerabilities and exposure, or CVE, designation based on the way attackers could target specific accounts to diminish their exposure on the platform. It’s not clear who submitted the CVE to the Mitre database, and the company would not identify whoever submitted it for review, per company policy.

The CVE, which is a designation that the information security community uses to identify and track publicly disclosed software flaws, notes that Twitter’s current recommendation algorithm “allows attackers to cause a denial of service (reduction of reputation score) by arranging for multiple Twitter accounts to coordinate negative signals regarding a target account, such as unfollowing, muting, blocking, and reporting, as exploited in the wild in March and April 2023.”

On the same day the Argentine developer flagged the issue, a Twitter user by the name of “el gato malo” pointed out essentially the same issue, noting that “this is how the botnet/activist armies are crushing accounts.” The user tagged Twitter owner Elon Musk with a suggestion that only “blue check mutes/blocks/reports” should count. Musk replied, asking who was behind the botnets. “Million dollar bounty if convicted,” Musk wrote, although it’s not clear what “convicted” means in this context.

Twitter has disbanded its press team and immediately responded to a request for comment about the CVE and algorithm issue with: “💩”.

In an unsigned blog posted to the company’s website last week, Twitter said releasing the code was “the first step in a new era of transparency,” and that as “the town square of the internet, we’re ultimately doing this to foster transparency and build trust with our users, customers, and the general public.”

Musk later said in a Twitter Spaces session that the release “is going to be quite embarrassing, and people are going to fine a lot of mistakes, but we’re going to fix them very quickly,” according to TechCrunch. He added that the company is “aspiring to the great example of Linux as an open source operating system,” where exploits could be found but “the community identifies and fixes those exploits.”

There are nearly 650 CVEs that mention some kind of algorithm in the National Institute of Standards and Technology database that mirrors the CVE database operated and maintained by the Mitre Corporation. Social media-related CVEs have been assigned in the past, a Mitre representative told CyberScoop Tuesday, including CVE-2022-46405 and CVE-2022-48364, which both have to do with the Mastodon service.

Updated, April 4, 2023: This story has been updated with information from Mitre Corporation regarding previous social media-related CVEs and its privacy policy.

The post Twitter’s recommendation algorithm opens platform to manipulation, bot attacks, researcher finds appeared first on CyberScoop.

]]>