Russian telecom confirms hack after group backing Wagner boasted about an attack
https://cyberscoop.com/russia-satellite-hack-wagner/ | Fri, 30 Jun 2023
A Dozor-Teleport CJSC executive told ComNews that the company has been the victim of a cyberattack affecting its cloud infrastructure.

The Russian satellite telecom company that hackers targeted this week in a claimed effort to support the Wagner paramilitary group confirmed the cyberattack on Friday, according to a Russian technology publication. The satellite company provides internet and other communication services that support state agencies such as Moscow’s main intelligence agency.

Alexander Anosov, the general director of the satellite company Dozor-Teleport CJSC and the first deputy general director of its parent company, Amtel-Svyaz, told a Russian information technology news outlet that the company was indeed infiltrated, and that preliminary information suggested that “infrastructure on the side of the cloud provider was compromised,” according to a Google translation.

ComNews, the publication that carried Anosov’s confirmation, reported that it “may take up to two weeks to restore the network to full operation.” The story did not offer additional detail about the severity or scale of the attack but said more information would be published on Monday.

News emerged late Wednesday into Thursday that the company had been targeted by a group claiming affiliation with PMC Wagner, the private military company run by Yevgeny Prigozhin. Along with targeting the company and leaking nearly 700 files, the hackers defaced several websites and put up Wagner-related messages and a video.

Oleg Shakirov, a cyber policy expert and consultant at the Moscow-based PIR Center think tank, tweeted Thursday that “Wagner’s involvement is very unlikely,” and that it looked “like Ukrainian false flag trolling.”

The Wagner group did not respond to a request for comment and has not posted about the alleged connection to the hack in its widely followed Telegram channel. In the days since Prigozhin led his private military in an uprising and threatened to kill the head of the Russian military, his company, which includes the notorious Internet Research Agency troll factory, has faced major setbacks. Prigozhin announced the “liquidation” of Patriot Media, his company that had “dozens” of “news” sites, Meduza reported Friday.

The article also implied that the company was targeted because it uses a Latin-alphabet “Z” in its name, rather than the Cyrillic “З”. Anosov said that the company’s use of the “Z” could lead some to think that it works with the Russian Ministry of Defense. The letter “Z” has become a symbol of the Russian invasion of Ukraine.

Sean Townsend, a spokesperson for the loose collective of hackers and various hacking groups in Ukraine known as the Ukrainian Cyber Alliance, tweeted a screenshot of text from one of the files dumped by the hackers that shows multiple references to the company’s work with the Ministry of Defense.

The file, which is a spreadsheet titled “stations,” also shows that the Moscow-based company has infrastructure in the occupied areas of Ukraine, including near the Zaporizhzhia Nuclear Power Station, Townsend told CyberScoop Friday.

OpenAI lawsuit reignites privacy debate over data scraping
https://cyberscoop.com/openai-lawsuit-privacy-data-scraping/ | Fri, 30 Jun 2023
The lawsuit against the generative AI company raises questions about the legal grey area of web scraping in the United States.

The lawsuit filed this week in California against OpenAI, the artificial intelligence company behind the wildly popular ChatGPT app, is rekindling a decade-old debate about the legal and ethical concerns raised by tech companies scraping as much information as possible about everything — and everyone — on the web.

The suit, filed on behalf of 16 clients, alleges an array of harms from copyright violations to wiretapping due to OpenAI’s data collection practices, adding to a growing list of legal challenges against companies repurposing or reusing images, personal information, code and other data for their own purposes.

Last November, coders sued GitHub along with its parent company Microsoft and partner OpenAI over a tool known as Copilot that uses AI to generate code. The coders argued the companies violated the licensing agreements for the code. In February, Getty Images sued Stability AI for allegedly infringing the copyright of more than 12 million images.

As the lawsuit notes, AI companies deploy data scraping technology at a massive scale. The race between every major tech company and a growing pack of startups to develop new AI technologies, experts say, has also accelerated not just the scale of web scraping but the potential harms that come with it. Experts note that while web scraping can have benefits to society, such as business transparency and academic research, it can also come with harms, such as cybersecurity risks and scammers harvesting sensitive information for fraud.

“The volume with which they’re going out across the web and scraping code and scraping data and using that to train their algorithms raises an array of legal issues,” said Lee Tiedrich, distinguished faculty fellow in ethical technology at Duke University. “Certainly, to the extent that privacy and other personally identifiable information are involved, it raises a whole host of privacy issues.”

Those privacy concerns are the centerpiece of the recent California lawsuit, which accuses OpenAI of scraping the web to steal “private information, including personally identifiable information, from hundreds of millions of internet users, including children of all ages, without their informed consent or knowledge.”

“They’re taking personal data that has been shared for one purpose and using it for a completely different purpose without the consent of those who shared the data,” said Timothy Edgar, professor of practice of computer science at Brown University. “It is, by definition, a privacy violation, or at least an ethical violation, and it might be a legal violation.”

The ways that AI companies may use that data to train their models could lead to unforeseen consequences for those whose privacy has been violated, such as having that information surface in a generated response, said Edgar. And it will be very hard for those people to claw back that data.

“It’s going to become a whack-a-mole situation where people are trying to go after each company collecting our information to try to do something about it,” said Megan Iorio, senior counsel at the Electronic Privacy Information Center. “It will be very similar to the situation we have with data brokers where it’s just impossible to control your information.”

Data scraping cases have a long history in the U.S. and go all the way up to the Supreme Court. In November 2022, the court heard a six-year-long case from LinkedIn accusing data company HiQ Labs of violating the Computer Fraud and Abuse Act by scraping profiles from the networking website to build its product. The high court denied the claim that the scraping amounted to hacking and sent the case back to a lower court, where it was eventually resolved. Clearview AI, a facial recognition company, has been sued for violating privacy laws in Europe and the state of Illinois over its practice of trawling the web to build its database of more than 20 billion images. It settled the ACLU’s lawsuit in Illinois in May 2022 by promising to stop selling the database to private companies.

Now, LinkedIn’s parent company Microsoft is on the other side of the courtroom, named as a defendant in three different related lawsuits against OpenAI. “The whole issue of data scraping and code scraping was like a crescendo that kept getting louder. It kept growing and growing,” said Tiedrich, who called the AI lawsuit “inevitable.”

The California suit against OpenAI combines the arguments of many of these lawsuits in a whopping 157-page document. Tiedrich says that while there have been recent court cases weighing in on fair use of materials, something relevant to the copyright aspects of the OpenAI lawsuit, the legality of data scraping is full of grey areas for courts and lawmakers to resolve.

“A lot of the big AI companies are doing data scraping, but data scraping has been around. There are cases, going back 20 years ago, to scraping airline information,” said Tiedrich. “So I think it’s fair to say that the decision could have broader implications than just if it gets to a judicial decision than just AI.”

The OpenAI lawsuit’s privacy arguments might be even more difficult to uphold. Iorio, who with EPIC filed a friend-of-the-court brief in the LinkedIn case, said the plaintiffs suing OpenAI are in a better position to show those harms since they are individuals, not a company. However, the limitations of federal privacy laws make it hard to bring a data scraping case on those grounds, she said. Of the three privacy statutes cited by the lawsuit, only the Illinois privacy law covers publicly available information of all users. (The lawsuit also cites the Children’s Online Privacy Protection Rule, which protects users under 13.)

That leaves scrapers, whether they are tech giants or cyber criminals, with a lot of leeway. “Without a comprehensive privacy law that does not have a blanket exemption for publicly available data, we have the danger here of this country becoming a safe haven for malicious web scrapers,” said Edgar.

CISA election security lead Kim Wyman to leave agency
https://statescoop.com/kim-wyman-leave-agency-election-security/ | Fri, 30 Jun 2023
Wyman, who previously served as Washington state's top election official, will step down as CISA's top election security adviser.

Hackers attack Russian satellite telecom provider, claim affiliation with Wagner Group
https://cyberscoop.com/russian-satellite-hack-wagner-group/ | Thu, 29 Jun 2023
The attackers released nearly 700 files associated with the attack.

Unidentified hackers claimed to have targeted Dozor, a satellite telecommunications provider that services power lines, oil fields, Russian military units and the Federal Security Service (FSB), among others, according to a message posted to Telegram late Wednesday night.

“The DoZor satellite provider (Amtel group of companies), which serves power lines, oil fields, military units of the Russian Defense Ministry, the Federal Security Service, the pension fund and many other projects, including the northern merchant fleet and the Bilibino nuclear power plant, went to rest,” the group’s first message read, according to a translation. “Part of the satellite terminals failed, the switches rebooted, the information on the servers was destroyed.”

The hackers also claimed to have defaced four seemingly unconnected Russian websites with messaging supportive of the Wagner private military company, the Russian mercenary group that made international headlines last weekend as it marched toward Moscow in an astonishing uprising that challenged the power of Russian President Vladimir Putin, before the group stopped short.

The group’s leadership was relocated to Belarus, a staunch Russian ally. Yevgeny Prigozhin, the head of Wagner, also created and funded the Internet Research Agency, a troll farm that the U.S. government sanctioned for its role in the sweeping Russian election interference operations targeting the 2016 U.S. presidential elections and then the 2018 elections.

Belarusian President Aleksandr Lukashenko said he argued against Putin’s contemplation of killing Prigozhin for leading the uprising, and instead brokered the deal to send Prigozhin to Belarus.

The message posted to the defaced websites showed the Wagner insignia, along with a message about the uprising and its results. “We agreed to a peaceful solution because we achieved the main thing — we showed our capabilities and full social approval of our actions,” the message read, according to a Google translation. “But what do we see instead? The current military leadership has not been removed from office, criminal cases have not been closed … You kicked us out of the NWO zone, out of Russia, but you can’t kick us out of the network.”

“We take responsibility for hacking,” the message continued. “This is just the beginning, more to come.”

[Screenshot from one of the defaced websites, captured June 29, 2023 (CyberScoop).]

The group posted a link to a zip file containing 674 files, including PDFs, images and documents. On Thursday morning, the group also posted three files that appear to show connections between the FSB and Dozor, and the passwords Dozor employees were to use to verify that they were dealing with actual FSB representatives, with a different password valid for each two-month period of 2023, according to a Google translation.

Doug Madory, the director of internet analysis for Kentik, told CyberScoop Thursday that Dozor’s connection to the internet went down at about 10 p.m. ET Wednesday and remains unreachable. One of the routes the company uses was switched to Amtel-Svyaz, Dozor’s Moscow-based parent company.

Neither Amtel-Svyaz nor the Wagner Group could be reached for comment.

Oleg Shakirov, a cyber policy expert and consultant at the Moscow-based PIR Center think tank, tweeted Thursday that “Wagner’s involvement is very unlikely,” and that it looked “like Ukrainian false flag trolling.”

Shakirov told CyberScoop in an online message that “the whole hack and leak looks very real, but it’s not something Wagner does. They don’t have a motive now & no history of such attacks.”

Does the world need an arms control treaty for AI?
https://cyberscoop.com/ai-danger-arm-control-nuclear-proliferation/ | Thu, 29 Jun 2023
Organizations like the IAEA offer an imperfect but instructive model for designing systems to control AI proliferation.

At the dawn of the atomic age, the nuclear scientists who invented the atomic bomb realized that the weapons of mass destruction they had created desperately needed to be controlled. Physicists such as Niels Bohr and J. Robert Oppenheimer believed that as knowledge of nuclear science spread so, too, would bombs. That realization marked the beginning of the post-war arms control era.

Today, there’s a similar awakening among the scientists and researchers behind advancements in artificial intelligence. If AI really poses an extinction threat to humankind — as many in the field claim — experts are examining how the efforts that limited the spread of nuclear warheads might control the rampant spread of AI.

Already, OpenAI, the world’s leading AI lab, has called for the formation of “something like” an International Atomic Energy Agency — the global nuclear watchdog — but for AI. United Nations Secretary General Antonio Guterres has since backed the idea, and rarely a day goes by in Washington without one elected official or another expressing a need for stricter AI regulation.

Early efforts to control AI — such as via export controls targeting the chips that power bleeding-edge models — show how tools designed to control the spread of nuclear weapons might be applied to AI. But at this point in the development of AI, it’s far from certain that the arms control lessons of the nuclear era translate elegantly to the era of machine intelligence.

Arms control frameworks for AI 

Most concepts of controlling the spread of AI models turn on a quirk of the technology. Building an advanced AI system today requires three key ingredients: data, algorithms and computing power — what the researcher Ben Buchanan popularized as the “AI Triad.” Data and algorithms are essentially impossible to control, but only a handful of companies build the type of computing power — powerful graphics processing units — needed to build cutting-edge language models. And a single company — Nvidia — dominates the upper end of this market. 

Because leading AI models are reliant on high-end GPUs — at least for now — controlling the hardware for building large language models offers a way to use arms control concepts to limit the proliferation of the most powerful models. “It’s not the best governance we could imagine, but it’s the best one we have available,” said Lennart Heim, a researcher at the Centre for the Governance of AI, a British nonprofit, who studies computing resources.

U.S. officials have in recent months embarked on an experiment that offers a preview of what an international regime to control AI might look like. In October, the U.S. banned the export to China of high-end GPUs and the chipmaking equipment necessary to make the most advanced chips, attempting to prevent the proliferation of advanced AI models there. “If you look at how AI is currently being governed,” Heim said, “it’s being governed right now by the U.S. government. They’re making sure certain chips don’t go to China.”

Biden administration officials are now considering expanding these controls to lagging-edge chips and limiting Chinese access to cloud computing resources, moves that would further cut Beijing off from the hardware it needs to build competitive AI models.

While Washington is the driving force behind these export controls, which are aimed at ensuring U.S. supremacy in microelectronics, quantum computing and AI, it also relies on allies. In restricting the flow of chips and chipmaking equipment to China, the U.S. has signed up support from other key manufacturers of such goods: the Netherlands, Japan, South Korea and Taiwan.

By virtue of their chokehold on the chips used to train high-end language models, these countries are showing how the spread of AI models might be checked via what for now are ad hoc measures that might one day be integrated into an international body.

But that’s only one half of the puzzle of international arms control. 

Carrots and sticks 

In the popular imagination, the IAEA is an organization primarily charged with sending inspectors around the world to ensure that peaceful nuclear energy programs aren’t being subverted to build nuclear bombs. The agency’s less well-known work is facilitating the transfer of nuclear science. Its basic bargain is something like this: sign up to the Nuclear Non-Proliferation Treaty and pledge not to build a bomb, and the IAEA will help you reap the benefits of peaceful nuclear energy.

“That’s the big reason that most states are enthusiastic about the IAEA: They’re in it for the carrots,” said Carl Robichaud, who helps lead the existential risk and nuclear weapons program at Longview Philanthropy, a nonprofit based in London. “They show up in Vienna in order to get assistance with everything from radiotherapy to building nuclear power plants.”

Building an international control regime of this sort for AI requires considering how to first govern the spread of the technology and then how to make its benefits available, argues Paul Scharre, the executive vice president and director of studies at the Center for a New American Security in Washington. By controlling where advanced AI chips go and who amasses them, licensing the data centers used to train models and monitoring who is training very capable models, such a regime could control the proliferation of these models, Scharre argued.

Countries that buy into this arrangement would then gain easier access to very capable models for peaceful use. “If you want to access the model to do scientific discovery, that’s available — just not to make biological weapons,” Scharre said.

These types of access controls have grown more feasible as leading AI labs have abandoned the open source approach that has been a hallmark of the industry in recent years. Today, the most advanced models are only available via online apps or APIs, which allows for monitoring how they are used. Controlling access in this way — both to monitor use and to provide beneficial access — is essential for any regime to control the spread of advanced AI systems, Scharre argued. 

But it’s not clear that the economic incentives of participating in such a regime translate from the world of nuclear arms control to AI governance. Institutions like the IAEA help facilitate the creation of capital- and knowledge-intensive nuclear energy industries, and it’s unclear whether similar hurdles exist for AI that would incentivize participating in an arms control regime.

“I like the idea of an international agency that helps humanity benefit more equitably from AI and helps this technology reach and help everyone. It’s not clear right now that there is market failure as to why that wouldn’t happen,” Robichaud said.

It’s also not clear that access controls can be maintained in the long run. Unlike nuclear weapons, which are fairly large physical devices that are difficult to move around, AI models are just software that can be easily copied and spread online. “All it takes is one person to leak the model and then the cat’s out of the bag,” Scharre said.

That places an intense burden on AI labs to keep their products from escaping the lab — as has already occurred — and is an issue U.S. policymakers are trying to address.

In an interview with CyberScoop, Anne Neuberger, a top White House adviser on cybersecurity and emerging technology, said that as leading AI firms increasingly move away from open source models and seek to control access, the U.S. government has carried out defensive cybersecurity briefings to leading AI firms to help ensure that their models aren’t stolen or leaked.

What are we trying to prevent? 

When AI safety researchers speak of the potentially existential threat posed by AI — whether that be a flood of disinformation or the development of novel biological weapons — they are speculating. Looking at the exponential progress of machine learning systems in the past decade, many AI safety researchers believe that if current trends hold, machine intelligence may very well surpass human intelligence. And, if it does, there’s reason to think machines won’t be kind to humans.

But that isn’t a sure thing, and it’s not clear exactly what catastrophic AI harms the future holds that need to be prevented today. That’s a major problem for trying to build an international regime to govern the spread of AI. “We don’t know exactly what we’re going to need because we don’t know exactly what the technology is going to do,” said Robert Trager, a political scientist at the University of California, Los Angeles, studying how to govern emerging technology. 

In trying to prevent the spread of nuclear weapons, the international community was inspired by the immense violence visited upon Hiroshima and Nagasaki. The destruction of these cities provided an illustration of the dangers posed by nuclear weapons technology and an impetus to govern their spread — which only gained momentum with the advent of more destructive thermonuclear bombs. 

By contrast, the catastrophic risks posed by AI are theoretical and draw from the realm of science fiction, which makes it difficult to build the consensus necessary for an international non-proliferation regime. “I think these discussions are suffering a little bit from being maybe ahead of their time,” said Helen Toner, an AI policy and safety expert at the Center for Security and Emerging Technology at Georgetown University who sits on OpenAI’s board of directors.

If 10 or 20 years from now, companies are building AI systems that are clearly reaching a point where they threaten human civilization, “you can imagine there being more political will and more political consensus around the need to have something quite, quite strong,” Toner said. But if major treaties and conventions are the product of tragedy and catastrophe, those arguing for AI controls now have a simple request, Toner observes: “Do we have to wait? Can we not skip that step?”

But that idea hasn’t broken through with policymakers, who appear more focused on immediate risks, such as biased AI systems and the spread of misinformation. Neuberger, the White House adviser, said that while international efforts to govern AI are important, the Biden administration is more focused on how the technology is being used and abused today and what steps to take via executive order and congressional action before moving to long-term initiatives.

“There’s a time sequence here,” Neuberger said. “We can talk about longer term efforts, but we want to make sure we’re focusing on the threats today.”

In Europe, where EU lawmakers are at work on a landmark AI Act, which would limit its use in high-risk contexts, regulators have taken a similarly skeptical approach toward the existential risks of AI and are instead focusing on how to address the risks posed by AI as it is used today.

The risk of extinction might exist, “but I think the likelihood is quite small,” the EU’s competition chief Margrethe Vestager recently told the BBC. “I think the AI risks are more that people will be discriminated [against], they will not be seen as who they are.”

Long-term control 

Today’s leading AI models are built on a foundation of funneling ever more data into ever more powerful data centers to produce ever more powerful models. But as the algorithms that process that data become more efficient, it’s not clear that ever more powerful data centers — and the chips that power them — will be necessary. As algorithms become more efficient, model developers “get better capability” for “less compute,” Heim from the Centre for the Governance of AI explains. In the future, this may mean that developers can train far more advanced models with less advanced hardware.

Today, efforts to control the spread of AI rest on controlling hardware, but if having access to the most advanced hardware is no longer essential for building the most advanced models, the current regime to control AI crumbles.

These shifts in training models are already taking place. Last year, researchers at Together, an open source AI firm, trained a model known as GPT-JT by stringing together a variety of GPUs over slow internet connections — suggesting that high-performing models could be trained in a decentralized manner by linking large numbers of lagging-edge chips. And as publicly available, ever more capable open source models proliferate, the moat separating AI labs from independent developers continues to narrow — or may disappear altogether.

What’s more, arguments about the role of algorithmic efficiency making compute less relevant don’t account for entirely new approaches to training models. Today’s leading models rely on a compute-intensive transformer architecture, but future models may use some entirely different approach that would undermine efforts today to control AI models, Toner observes. 

Moreover, arms control experts observe that past efforts to control the spread of dangerous weapons should force a measure of humility on any policymaker trying to control the spread of AI. In the aftermath of World War II, President Truman and many of his key aides, ignoring their scientific advisers, convinced themselves that it would take the Soviet Union decades to build an atomic bomb — when it only took the Kremlin five years. And in spite of export controls, China succeeded in building “2 bombs and 1 satellite” — an atomic bomb, a thermonuclear bomb and a space program. 

That history makes Trager, the political scientist, skeptical about “grand visions for what export restrictions can do.” 

With private companies currently conducting the most advanced AI research, efforts to control the technology have understandably focused on managing industry, but in the long run, military applications may be far more concerning than commercial applications. And that does not bode well for arms control efforts. According to Trager, there is no example in history of major powers “agreeing to limit the development of a technology that they see as very important for their security, and for which they don’t have military substitutes.”

But even if arms control frameworks are imperfect vessels for regulating AI, arms control regimes have evolved over time and grown more stringent to deal with setbacks. The discovery of Iraq’s nuclear program in the 1990s, for example, spurred the creation of additional protocols to the Non-Proliferation Treaty. 

“We’re 80 years into the nuclear age, and we haven’t had a detonation in wartime since 1945 and we only have nine nuclear-armed states,” Robichaud from Longview Philanthropy argues. “We’ve gotten lucky a few times, but we’ve also built the systems that started off really weak and have gotten better over time.” 

White House releases cybersecurity budget priorities for FY 2025
https://cyberscoop.com/white-house-cybersecurity-budget-2025/ | Wed, 28 Jun 2023
The Biden administration noted that departments and agencies are expected to follow the recently released National Cybersecurity Strategy.

The Office of Management and Budget and the Office of the National Cyber Director released a memorandum on Tuesday outlining five cybersecurity budget priorities for federal departments and agencies for fiscal year 2025 consistent with the U.S. National Cybersecurity Strategy.

The memo said the OMB and ONCD will review agencies’ upcoming budget submissions to “identify potential gaps” and “potential solutions to those gaps.”

“OMB, in coordination with ONCD, will provide feedback to agencies on whether their submissions are adequately addressed and are consistent with overall cybersecurity strategy and policy, aiding agencies’ multiyear planning through the regular budget process,” the memo said.

The five priorities in the memo are the same as the pillars of the National Cybersecurity Strategy: defend critical infrastructure, disrupt and dismantle threat actors, shape market forces to drive security and resilience, invest in a resilient future and forge international partnerships to pursue shared goals.

The memo comes as the White House is preparing multiple strategies such as the implementation plan for the National Cybersecurity Strategy expected this summer as well as a national cyber workforce strategy. ONCD and OMB also said that a separate memo will be released with additional guidance focused on cybersecurity research and development priorities.

The memo said federal agencies need to defend critical infrastructure by modernizing federal defenses: implementing the federal zero-trust strategy, improving baseline cybersecurity requirements and scaling public-private collaboration.

Additionally, the memo pointed out that ransomware continues to be a national security threat and that some agencies should focus on dismantling threat actors by investigating and disrupting criminal infrastructure, prioritizing staff “to combat the abuse of virtual currency” and participating in interagency task forces.

Beyond that, the administration directed agencies to use their buying power to influence the cybersecurity market, to use skills-based hiring methods to strengthen the cyber workforce, follow national security memorandums surrounding a post-quantum future, strengthen international partnerships and secure global supply chains for information, communication and operational technologies.

Two major energy corporations added to growing MOVEit victim list
https://cyberscoop.com/schnieder-electric-siemens-energy-moveit-cl0p/ | Tue, 27 Jun 2023
Leading global energy companies Schneider Electric and Siemens Energy are the latest victims of the MOVEit vulnerability.

Two major energy corporations have fallen victim to the MOVEit breach, the latest targets in an ongoing hacking campaign that has struck a growing number of organizations including government agencies, states and universities.

CL0P, the ransomware gang executing the attacks, added both Schneider Electric and Siemens Energy to its leak site on Tuesday. Siemens confirmed that it was targeted; Schneider said it is investigating the group’s claims.

Since early June, the hacking campaign has claimed more than 100 victims as CL0P has taken advantage of a vulnerability in MOVEit, a widely used file transfer tool from Progress Software. Multiple federal agencies, including two Department of Energy entities, have been affected by the vulnerability, federal authorities have said. Additional reporting has indicated that the Department of Agriculture may have had a “possible breach” and that the Office of Personnel Management is also affected.

Both Siemens Energy and Schneider Electric are among the largest vendors in industrial control systems, though there is little indication of what information the hackers may have pilfered. Cybersecurity and Infrastructure Security Agency Director Jen Easterly has previously said that the MOVEit campaign appears to be largely opportunistic and that the stolen files may be limited to what was in the software at the time the bug was exploited.

“As far as we know, the actors are only stealing information that is specifically being stored on the file transfer application at the precise time that the intrusion occurred,” Easterly said on June 15.

“Regarding the global data security incident, Siemens Energy is among the targets. Based on the current analysis, no critical data has been compromised and our operations have not been affected. We took immediate action when we learned about the incident,” a Siemens spokesperson said in an email.

A Schneider spokesperson said that the company became aware of the vulnerability on May 30 and “promptly deployed available mitigations to secure data and infrastructure and have continued to monitor the situation closely.”

“Subsequently, on June 26th, 2023, Schneider Electric was made aware of a claim mentioning that we have been the victim of a cyber-attack relative to MOVEit vulnerabilities. Our cybersecurity team is currently investigating this claim as well,” the spokesperson said in an email.

Since the Russian-speaking CL0P began publicizing its victims, state and local governments appear to have been heavily affected by the campaign: at least seven have been hit, including the nation’s largest public-employee pension fund, the California Public Employees’ Retirement System. Over the weekend, around 45,000 New York City public school students had their personal data stolen, including information like Social Security numbers, StateScoop reported.

The State Department has offered a $10 million reward for information leading to the actors linked to the CL0P ransomware gang.

The potent cyber adversary threatening to further inflame Iranian politics
https://cyberscoop.com/iran-government-hack-leak-documents-hacktivist/ | Mon, 26 Jun 2023
A group calling itself GhyamSarnegouni has entered the Iranian cyber fray with a damaging hack-and-leak operation against the government.

Just before 2 a.m. Eastern Standard Time on May 29, someone posted a simple message to a Farsi-language Telegram channel called “GhyamSarnegouni,” which roughly translates to Uprising until Overthrow. “The entire highly protected internal network of the executioner’s presidential institution in Tehran was captured and out of reach,” it read, according to a Google translation.

Within minutes, images of top Mujahedeen-e-Khalq leaders appeared on the channel, along with the message “Death to Khamenei Raisi,” referring to Iran’s supreme leader and its president. The Iranian exile group commonly known as MEK has long opposed the Iranian government and advocated for its overthrow. Within a half hour of the original message, a screenshot of an internal presidential document was also posted on Telegram, the first of what has grown to more than 100 related to the office of the president of Iran and other major government agencies.

The documents include diplomatic correspondence, floor plans of the Iranian president’s office and other officials’ offices, and detailed network topology diagrams of various government networks along with associated IP addresses. The leak also included documents that appeared to be related to the country’s nuclear program and, reportedly, details of officials routing money through Chinese banks and other apparent sanctions-evasion activities. In addition to defacing multiple government websites, the hackers claimed to have gained control over 120 servers and databases, the government’s server management networks and access to more than 1,300 computers connected to the presidency’s internal network, according to a post on the MEK website in the hours after the attack went public.

The group claimed to have stolen “tens of thousands of classified, top secret and secret documents,” according to the post from the MEK, which has not officially claimed any connection to GhyamSarnegouni. Likewise, the hackers have not claimed to have ties to MEK or any other political group or organization.

The Iranian government called the hack “fake,” and said website updates and maintenance — carried out as the defaced sites were returned to their previous content — were the reason for any site outages. But outside experts agreed the documents, and the hack, were likely legitimate.

The scale of intrusion and leak would present a major national security dilemma for any country and send officials and politicians scrambling to find the culprits, identify the vulnerabilities and prosecute the hackers. But, so far, the Iranian government’s reaction — other than saying the leaked documents are fake — isn’t public.

Over the past several years in Iran, a patchwork of hacking groups have sprung up with various aims, political motives and ambitions — and it’s nearly impossible to know for certain who is behind each one of them. Some operations appear to be designed to expose Iranian government secrets or support opposition groups, while others target Israel and the U.S. While Iran has long been an active participant in the cyber domain, its internal and external attacks have gained new potency and become more publicly visible since 2020, such as when hackers with suspected links to the Iranian government targeted water treatment systems in Israel.

Looking to stir up trouble inside Iran, a growing number of groups have taken aim at the current government. These include groups such as Black Reward, Tapandegan and Lab Dookhtegan. Another group known as Predatory Sparrow, which has possible ties to Israel, targeted steel mills with alleged ties to the Islamic Revolutionary Guard Corps (IRGC), posting a video after an apparent breach that showed what appeared to be the inside of an industrial facility.

The U.S. government and American tech companies have long accused the Iranian government of hiding behind hacktivist personas to carry out hack-and-leak operations and destructive attacks on targets around the world. A May 2023 report from Microsoft details more than a dozen hacktivist personas with links to either the IRGC or the Iranian Ministry of Intelligence, many thought to be operated by Emennet Pasargad, a U.S. government-sanctioned Iranian cyber group. That same organization is thought to have been involved with a sprawling plan to interfere with the 2020 U.S. election, according to the U.S. Department of Justice.

Homeland Justice, an Iranian front group according to researchers with Mandiant and also multiple western governments, hacked multiple Albanian government systems in July 2022, stealing data and wiping systems with faux ransomware, in response to Albania’s hosting of the MEK. Albania, a NATO member, cut diplomatic ties with Iran over the attack. The U.S. government sanctioned Iran’s Ministry of Intelligence over the attacks, and the U.S. Cyber National Mission Force deployed what it said was its first-ever defensive cyber operation in response to the Iranian-linked attacks.

“We’ve observed multiple cyber groups in action,” said Nariman Gharib, a U.K.-based Iranian opposition activist and independent cyber espionage investigator. “One focuses on human rights, unmasking the darker side of the regime, while another specializes in cyber operations, exposing the regime’s cyber tactics. There’s also a group dedicated to sabotage. They execute their task with efficiency in executing disruptive attacks and [GhyamSarnegouni] is that group.”

Indeed, the latest hack claimed by GhyamSarnegouni takes the role that hackers and hacktivists are playing in Iran’s internal politics to a new level, experts say, given the depth of information accessed, which touches not only on the office of Iranian President Ebrahim Raisi but also on correspondence related to multiple sensitive agencies.

The hack is “one of the worst cases that has been publicly discussed and people are aware of about the compromise of classified documents and information from a government network,” said Hamid Kashfi, an independent security consultant originally from Iran, formerly a consultant for Trail of Bits and Immunity, who has uncovered multiple malicious Iranian government cyber activities over the years.

“What’s scary, if I was an Iranian government entity, or someone in charge of [assessing the situation] is what they’re not releasing and what they’re not exposing,” he said. “Because that’s a huge pile of A-plus grade intel and very interesting and very useful information for any government to be able to access.”

The attack is the fourth major hack and leak operation claimed by GhyamSarnegouni, a group that seemed to come out of nowhere in January 2022 when it claimed to have been behind the hacking and disruption of Iran’s national broadcast service. The attack included the broadcast of the faces of the long-missing Massoud Rajavi, and his wife Maryam Rajavi — the leaders of the MEK, which has been variously characterized by detractors as a cult and was, until 2012, deemed a terrorist organization by the U.S. government — and calls for the murder of Iran’s supreme leader, as well as destructive malware to damage equipment.

The MEK sharply disputes that it’s anything other than an opposition political movement, and has said the Iranian government is taking active steps to discredit the group, including by, in some cases, fabricating stories about members’ treatment.

Subsequent attacks tied to the group include the June 2022 hack of more than 5,000 municipal CCTV cameras in Tehran, and the early May 2023 hack of the Iranian Ministry of Foreign Affairs, which included more than 200 defaced websites and the publication of a trove of sensitive internal government files.

GhyamSarnegouni did not respond to a message sent via Instagram, where it also posts images of documents and other messages.

The recently leaked government documents are appearing against the backdrop of the U.S. and Iran getting closer to an agreement that the New York Times reported would ease sanctions on the country, release some imprisoned Americans, cease attacks on American contractors in Syria and Iraq and cap uranium refinement at 60% purity. After the presidential office hack first became public, an expert in Iranian cybersecurity told CyberScoop that embarrassing breaches of this nature seem to mirror major geopolitical developments, including progress on the nuclear deal.

“Any time we are at the middle of the conversation that this nuclear negotiation might lead somewhere, might end somewhere, you will see somehow, either by Israeli or by some hacking group or something like that, some kind of information being publicized regarding Iran nuclear program,” said Amir Rashidi, the director of internet security and digital rights at the Miaan Group, an Iranian digital and human rights organization.

Kashfi said whoever is behind the hack has “demonstrated access to communications [letters] between different government agencies and the presidential office.” The purpose of the system that the posted materials are coming from, he said, is to have secure, encrypted communications between disparate agencies and offices for a particular purpose, not mundane communications.

“If they have access and dumped one classified letter from that system, it means that they have had access to dump all of it,” he said.

He doesn’t expect whoever is behind the attack to post everything they have, given the immense intelligence and operational value at stake. Although the attackers are so far displaying technical abilities beyond the reach of any “random activist group,” it’s not clear whether a state intelligence service, a hired mercenary group or unaffiliated individuals are behind the attack.

Kashfi noted that it’s far too early to tell who is behind the group. But one data point, he said, supports the idea that it is not MEK. Some of the file names, and even some of the way certain words are used in the messaging “is not in a way that a native [Farsi] speaker would use.”

“Non-native speakers would easily overlook this,” he said. “But if you look at the context of it, you would notice that if it’s actually someone from MEK that’s supposed to be Iranian or a native speaker, they wouldn’t name files like this. It more looks like someone is receiving and processing this information and then doing the PR for the group through this Telegram channel.”

Simin Kargar, a doctoral researcher at Johns Hopkins University who tracks human rights and cybersecurity matters related to Iran, views the group’s activity in the context of the larger cyber tit-for-tat involving Iran and its adversaries, whether Israel, the U.S. or others in the region. The group has aggressively promoted MEK symbols and messaging from its inception, she said, and over time, the MEK “has come to own this, whether or not there is an actual relation between the MEK as an organization and this hacktivist group.”

MEK has a history of exposing highly sensitive Iranian secrets, she added, most notably revealing Iran’s nuclear program in a press conference in 2002. While not directly cyber related, the revelations foreshadowed a scenario whereby MEK gained supporters among hawkish American policy makers looking to find ways to undermine the Iranian government, most notably during the Trump years when several officials interacted directly with MEK.

During that period Kargar’s research showed a “surge of MEK activities” on social media promoting some of the Trump administration’s most hawkish anti-Iran messaging. Fast forward to the current era with a plethora of hacktivist groups sharing Iranian data, some of whom also promote MEK messaging, and it’s clear that something is going on, she said.

“Speculations in the background about who these groups might be, and who they might be connected to, has always involved some sort of connection with the MEK,” she said. “Because they definitely have the motivation and interest to either pull something like this off independently, or being fed with intelligence in this domain, and then kind of using that, packaging that in a way that serves their purposes.”

In a statement provided to CyberScoop, the MEK said there’s no proof any hack occurred from its camp in Albania, “let alone that it is naive to hack from a known center.” 

Additionally, the materials seem to be the work of insiders in Iran, the statement said, with access to them “possible only with direct access to the regime’s devices inside the country. Many documents revealed are way outside the Internet domain.”

Whether the group is connected to the MEK or not, its activities are having consequences for the exiled group. Albanian police raided the MEK camp Ashraf-3 on June 20 in an action that left dozens injured and one man dead. The police seized 150 “computer devices allegedly linked to prohibited political activities,” the Associated Press reported.

Authorities raided the camp as part of an Albanian government investigation into alleged provocation of war, illegal interception of computer data, interference in data and computer systems, equipment misuse, and for the MEK being a “structured criminal group,” the Albanian news outlet Politiko reported the next day. The investigation began May 18 based on news articles reporting on the early May hack of the Iranian Ministry of Foreign Affairs, according to the story. Albanian authorities also cited the June 2022 hack on the Tehran municipal CCTV system in the search warrant.

“In July 2022, Albania was subjected to the most serious cyber-attack sponsored by the Islamic Republic of Iran, which caused massive damage to Albania’s digital infrastructure and interrupted the provision of public services and documents — 95% of which are offered only online — for months,” the Albanian embassy wrote in an email to CyberScoop. “In response, the Albanian Government severed diplomatic relations with the Islamic Republic of Iran and since then, we have received numerous threats, always related to the MEK presence in Albania.”

Albania “cannot tolerate that our territory be used to engage in illegal, subversive and political activity against other countries, as has allegedly been the case with the MEK,” the email read. “Humanitarian protection does not provide the MEK with special immunity before the law. MEK members are just as liable to be investigated and prosecuted for crimes committed in the territory of the Republic of Albania as any other individual, be they citizens, residents, refugees, or — as is the case with the MEK — individuals enjoying humanitarian protection from the Government of Albania.”

According to the MEK’s statement, roughly 1,200 Albanian police arrived at the camp the morning of June 20, and the majority of the people at the camp were unaware of the court order related to the hack investigation. Aggressive police actions caused “residents to protest,” the statement read, resulting in Albanian police injuring more than 100 people and leading to the death of one man after he was pepper sprayed.

Albanian authorities seized 200 computers, the statement added. “There is nothing illegal in them; we are apprehensive that the information contained in these computers fall into the hands of the Iranian regime, with families and relatives of the residents in Iran put in danger.”

Updated June 27, 2023: This story has been updated to include comment provided to CyberScoop by the MEK after publication, and to reflect that the MEK disputes any characterization implying it is a “cult.”

How to AI-proof the cybersecurity workforce
https://cyberscoop.com/ai-proof-cybersecurity-workforce/ | Fri, 23 Jun 2023
Generative AI can enhance digital security, but it can’t — and shouldn’t — replace humans that are essential to fight malicious hackers.

Automation is hardly new. Ever since the Industrial Revolution, jobs have been transformed, created and eliminated because of it. Now, automation in the form of artificial intelligence is coming for the tech sector — and specifically cybersecurity.

The excitement over AI in cybersecurity was on full display at the annual gathering of infosec professionals in San Francisco known as the RSA Conference. At this year’s event, multiple keynotes focused on the potential for AI to efficiently hunt for digital risks and automate threat response protocols. AI also promises to alleviate the stresses associated with many cybersecurity jobs, such as first responders. But just as there’s potential, there are downsides. As AI tools inevitably begin to scale and tackle more complex cybersecurity problems, the impact on the workforce is troublesome — and dangerous. 

We cannot let the potential of AI overshadow the value of human cybersecurity professionals. While AI excels at pattern recognition tasks such as detecting malware attacks, machines cannot take into account the context for why an attack may be happening. AI can be amazing at automating some aspects of reasoning, but algorithms cannot replace people when it comes to the creativity required to find unique solutions. Chatbots can’t replicate all the human competencies that are crucial within cybersecurity. So, without a measured — and cautious — approach to AI, our sector risks moving toward insecurity.

While it’s reassuring to see a growing conversation about the potential dangers of AI and efforts to put in place some common sense guardrails to regulate its deployment, such as President Biden’s meeting this week with Big Tech critics in San Francisco, there’s still not enough focus on the potentially devastating impact that AI tools could have on the American workforce.

Goldman Sachs estimates that in the U.S. and Europe, about one-fourth of current work can be substituted by generative AI. It’s unlikely that entire job functions will be eliminated, but fewer people will be needed to maintain a baseline level of work. Moreover, there is research that posits that high-skilled jobs may be impacted more because AI’s predictive capabilities mimic the analytical and optimization skills core to many higher-skilled jobs. Within cybersecurity, these can include individuals across a number of functions, such as SOC analysts who efficiently aggregate suspicious activity data, or red teamers who code and test for vulnerabilities.

What needs more attention beyond the job numbers are the economic impacts on the cybersecurity workforce. Empirical evidence examining wage changes and automation between 1980 and 2016 suggests that about 50% of wage changes are due to task displacement and actually exacerbate wage inequality. The study is not sector-specific, but if leading cybersecurity firms are touting AI’s potential to efficiently conduct tasks such as automated threat detection protocols, then cybersecurity will not be insulated from these changes. 

We also need to consider the impacts on diversity. There have been commendable efforts the past several years to lower barriers to entry into cybersecurity, including scholarship programs that cut the cost of entering the field, and professional associations such as Black Girls Hack or Women in Cybersecurity that help foster belonging and retention in the sector. The National Cybersecurity Strategy further underscores how central a diverse workforce is to long-term cybersecurity. But we are at a critical crossroads as layoffs across sectors, especially in tech, are cutting diversity, equity and inclusion efforts. If history suggests that job displacement by automation is on the horizon, AI could further slow our hard-earned progress.

It’s imperative that investors and advocates of the cyber workforce consider the potential ramifications of AI, including on its least represented members. Luckily, the U.S. has a growing ecosystem of cyber workforce development programs designed to usher individuals into the cybersecurity sector, so we can reframe workforce priorities rather than reinvent the wheel.

But more needs to be done to make cybersecurity workers AI-proof. For starters, many of the new cyber educational efforts can focus on soft skills that cannot be automated. Generative AI can automate many tasks, but skills such as creativity, emotional intelligence and intuition are hard to replace. Whether in designing training curricula or hiring practices, emphasize these skills to ensure your cybersecurity staff can solve tough problems but also have the capabilities to complement the challenges and potential of AI.

Several large tech companies have professional development tracks that upskill their staff, and other associations provide additional training and certifications at a premium, but there are opportunities for other nonprofits to expand their programming to include AI. Nonprofit organizations with a stellar track record for technical training have an opportunity to step in and build equitable pathways for cybersecurity workers to continue their technical careers, and there is space for philanthropies and corporations to invest in developing these programs.

We also need to rethink what it means to have a “cybersecurity career.” Cybersecurity extends beyond patching vulnerabilities and detecting threats. Policy analysts now contextualize strings of cyberattacks within a wider geopolitical conflict. Developers contribute their lived experiences to designing tech solutions to society’s pressing challenges. While extending our definition of a cybersecurity expert, we need to ensure these professionals are communicating. Programs such as the #ShareTheMicInCyber Fellowship or TechCongress focus on bridging the gap between technical experts in cybersecurity and technology to inform better policymaking.

There is no doubt that generative AI will have a transformative impact. We have the opportunity to prepare the cyber workforce for a future just as promising, and we need to start now.

Bridget Chan is the program manager at New America for the #ShareTheMicInCyber Fellowship, a program advancing DEI in cybersecurity.

Treasury sanctions two Russian intelligence officers for election influence operations
https://cyberscoop.com/treasury-sanctions-russian-election-influence/ | Fri, 23 Jun 2023
The charges follow grand jury indictments alleging that the officers engaged in years-long international election influence campaigns.

The Treasury Department issued sanctions on Friday against two Russian intelligence officers for their alleged role in global election influence operations that included recruiting political groups within the U.S. to distribute pro-Moscow propaganda.

“The Kremlin continues to target a key pillar of democracy around the world — free and fair elections,” Brian Nelson, under secretary at the Office of Terrorism and Financial Intelligence at the Treasury Department, said in a statement. “The United States will not tolerate threats to our democracy, and today’s action builds on the whole of government approach to protect our system of representative government, including our democratic institutions and elections processes.”

Aleksey Borisovich Sukhodolov and Yegor Sergeyevich Popov, both Moscow-based officers of the Russian Federal Security Service, or FSB, were directly engaged in a years-long effort to recruit local “co-optees” to influence elections in ways that benefit the Kremlin, the Treasury said. “In support of its influence operations, Russia has recruited and forged ties with people and groups around the world who are positioned to amplify and reinforce Russia’s disinformation efforts to further its goals of destabilizing democratic societies.”

The sanctions announcement Friday follows a criminal indictment against Sukhodolov and Popov that the Department of Justice unsealed in April alleging the two were involved in a years-long campaign to influence elections. The U.S. government has also said the two are suspected of attempting to sway elections in Ukraine, Spain, the United Kingdom and Ireland.

According to the Treasury Department, Popov was the main handler for “co-optees” Aleksandr Viktorovich Ionov and Natalya Valeryevna Burlinova who were previously sanctioned by the Treasury Department and have also been indicted for their alleged activities. “From as early as 2015 through at least 2022, Popov worked with Burlinova and oversaw her activities on behalf of the FSB,” Treasury said.

Ionov and Burlinova influenced multiple U.S. individuals and political groups all in an effort to “to create or heighten divisions within the country,” according to a sanctions announcement in July 2022.

While it’s unlikely any of the four Russians sanctioned by the U.S. government and facing charges related to election interference will see the inside of an American court, the actions are part of a broader government effort to more aggressively push back against foreign influence on elections, which many experts expect to increase ahead of the 2024 presidential campaign.

Former Cybersecurity and Infrastructure Security Agency Director Chris Krebs said earlier this month to expect a “very, very active threat landscape” concerning election influence and interference.
