The Chinese foreign ministry has lashed out at Lithuania after the small Baltic Sea nation approved the opening of the Taiwan Representative Office in Vilnius. Beijing says the move undermines its One China principle.

Beijing was disappointed that Lithuania had proceeded to grant Taiwan permission to open its ‘representative office’ in Vilnius despite “China’s strong opposition and repeated persuasion,” Chinese foreign ministry spokesman Zhao Lijian said at a press briefing on Friday. Taiwan had opened its mission in Vilnius the previous day. 

Zhao called the move a violation of the One China principle, saying it undermines China’s sovereignty and territorial integrity and grossly interferes in China’s internal affairs. The spokesman reminded Lithuania that Taiwan is an inalienable part of China’s territory and that the government in Beijing has sole legal authority to represent it.

As to what necessary measures China will take, you may wait and see. The Lithuanian side shall reap what it sows.

In a “stern warning” to the Taiwanese authorities, Zhao then added that “seeking ‘Taiwan independence’ by soliciting foreign support is a totally misguided attempt that is doomed to fail.”

In August, Lithuania announced that the diplomatic outpost would be named the “Taiwan Representative Office,” angering China. Taiwan’s diplomatic branches – in countries that have de facto relations with the island’s authorities – are normally called “Taipei Economic and Cultural Offices.”

China demanded that Lithuania recall its ambassador from Beijing, which it did. Beijing then withdrew its own envoy from the Baltic state.

Chinese officials have repeatedly called on Western nations, notably the UK and US, to stop interfering in Beijing’s internal affairs, stressing that they consider Taiwan to be part of China.

The European Union’s top court has ruled that Hungary’s 2018 law criminalizing aid to illegal immigrants claiming asylum violates the “rights safeguarded” by the bloc’s legislature.

The Hungarian legislation, passed in 2018, sought to punish anyone “facilitating illegal immigration” with a year in prison, under a bill dubbed the “Stop Soros” law. Hungary’s government justified it at the time by arguing that migrants illegally entering the country threatened its national security. 

In the ruling, handed down on Tuesday, the European Court of Justice declared that “criminalizing such activities impinges on the exercise of the rights safeguarded by the EU legislature in respect of the assistance of applicants for international protection.”

The EU’s advocate general, Athanasios Rantos, had urged the court to make such a judgement back in February, claiming the introduction of the legislation meant that “Hungary has failed to fulfil its obligations under the [bloc’s] Procedures Directive.”

It became known as the Stop Soros law after billionaire philanthropist George Soros emerged as a vocal critic of the Hungarian government’s anti-migration stance. The administration, in turn, accused Soros of orchestrating migration to Europe, and the philanthropist’s Open Society Foundations closed its operations in the country in response.

Hungary, under the leadership of right-wing Prime Minister Viktor Orban, has repeatedly clashed with the EU in recent years over its strong stance on immigration and concerns from the bloc about threats to the rule of law in the country.

At the end of 2020, a dispute between Hungary and Poland and the EU risked derailing the bloc’s budget, as both member states were threatening to veto it over their view that the EU was attempting to interfere in their domestic affairs. Ultimately, the EU backed down, agreeing to a compromise with Budapest and Warsaw to ensure the budget secured the support of all 27 member states. 

While acknowledging the EU court’s ruling, Hungary’s government defended its right to challenge any foreign-funded non-governmental organizations attempting to “promote migration.”

“Hungary’s position on migration remains unchanged: Help should be taken where the problem is, instead of bringing the problem here,” Hungarian government spokesperson Zoltan Kovacs said, adding that the country will challenge outside entities “seeking to gain political influence and interference.”

Two climate activists in Australia have brought the world’s busiest coal port to a halt, strapping themselves to a massive piece of machinery and refusing to come down. The stunt follows more than a week of similar actions.

Blockade Australia, a climate group focused on “strategic direct action,” declared that two of its members had clambered atop equipment at the Port of Newcastle and stopped work there late Tuesday night, sharing footage captured by the activists as they suspended themselves from a large loading machine.

“Zianna and Hannah have shut down Newcastle coal port, abseiling from coal handling machinery. The port cannot resume operations until the pair are removed by police,” the group said, identifying the activists by their first names only and adding, “This is the tenth consecutive day of disruption to Newcastle coal port and its supply rail network.”

The protest stunt follows at least 16 similar actions over the last week or so, the group said, some targeting the rail line near the port, the world’s largest for coal exports.

Another demonstration carried out on Tuesday saw a second pair of activists breach the port and “hit emergency stop buttons” on machines before strapping themselves to a different piece of equipment. They were brought down and arrested after “several hours” and are expected to appear in court in the coming days.

The disruptions have drawn the ire of state officials, with New South Wales Environment Minister Matt Kean calling them “completely out of line” and urging police to “throw the book” at the protesters, at least 19 of whom have been arrested so far this month, according to ABC.

“Pull your heads in, get out of the way and stop hurting other people going about their lives, running their businesses,” the minister said during a radio interview on Wednesday. “There are hundreds of ways to make your views known and advocate for change, but risking the lives of rail workers is definitely not one of them.”

The activists could face charges that carry maximum sentences of 25 years in prison, NSW police commissioner Mick Fuller said, noting that local law enforcement has created a “strike force” to deal with future disturbances at the port.

President Joe Biden commented on reports that US officials are planning to boycott the upcoming Olympics in Beijing over alleged human rights violations – but his answer left journalists perplexed.

When asked on Tuesday if an official US delegation will be traveling to the Winter Games in the Chinese capital in February, Biden responded: “I am the delegation.”

The president, however, did not elaborate, leaving the White House correspondents in a state of confusion, as his response could mean that Biden will attend the Winter Olympics alone or, as some reporters suggested, that he simply did not understand the question.

A recent report by a Washington Post columnist claimed the US won’t be sending an official delegation to Beijing in 2022 over allegations of human rights violations by the Chinese government. According to the sources cited in the article, a formal recommendation for a diplomatic boycott of the Olympics has already been presented to Biden, with the move expected to be approved by the president by the end of November.

The piece was published on the day that Biden held a lengthy virtual meeting with Chinese leader Xi Jinping, in which they discussed a range of issues regarding the strained relations between the two nations – but not the Olympics.

The White House said that during the talks, President Biden challenged his Chinese counterpart over what Washington sees as persecution against the Uyghur population in the Xinjiang region, as well as human rights violations in Tibet and Hong Kong. China has strongly denied the claims, accusing the US of interfering in its internal affairs.

Calls for the Biden administration to boycott the Olympics and refrain from sending a political delegation to Beijing have recently been made by top Democratic and Republican lawmakers. 

If implemented, the diplomatic boycott won’t affect American athletes, who will still take part in the Winter Olympics.

An American nonprofit behind US-funded bat virus research in China has denied ever sending virus samples from Laos – where SARS-CoV-2’s closest known natural relative was found – to Wuhan, in response to fresh allegations.

“No work was ever conducted in Laos as a part of this collaborative research project,” EcoHealth Alliance – a group that conducted experiments on coronaviruses while receiving funding from the National Institutes of Health (NIH) – said in a series of tweets on Sunday, responding to media reports alleging that the group might have transported a potentially dangerous virus from Laos to the laboratory in Wuhan.

The group’s name surfaced in October when the NIH principal deputy director, Lawrence Tabak, revealed EcoHealth Alliance did experiment on the viruses with the agency’s financial help. At that time, White House Medical Advisor Dr. Anthony Fauci stated that the viruses studied as part of the project “were distant enough molecularly that no matter what you did to them, they could never, ever become SARS-CoV-2.”

EcoHealth has come under renewed scrutiny after its emails, obtained through a Freedom of Information request, appeared to suggest that the group was discussing the prospect of collecting viral samples from bats in Laos and sending them to the Wuhan Institute of Virology. The emails were initially obtained by the White Coat Waste Project and sparked a flurry of reports over the weekend, including in the Spectator by British science writer Matt Ridley.

The emails shared between EcoHealth Alliance and its US government funders reportedly reveal that the scientists discussed collecting viral samples from bats in eight countries, including Laos, between 2016 and 2019, and toyed with the idea of transporting them to Wuhan, ostensibly to avoid red tape. One email from 2016 cited by the Spectator reportedly reads: “All samples collected would be tested at the Wuhan Institute of Virology.”

Laos is home to at least one virus that appears to be very close to SARS-CoV-2: a bat viral strain called BANAL-52, discovered there in September, shares 96.8 percent of its genome with the virus behind the Covid-19 pandemic.

On Sunday, EcoHealth Alliance claimed that the emails cited by Ridley “do not show…that we were sampling bats in Laos and sending the results to Wuhan.”

The group acknowledged, however, that it requested NIH permission to work in Southeast Asian countries, including in Laos, and that this permission was granted. 

The nonprofit said it ended up focusing on China instead.

The response failed to satisfy Ridley, co-author of a book on Covid’s origins, who demanded “evidence” proving that his report was not “fully accurate.”

When the PRC decides to move on Taiwan, it is unlikely to move in a manner that makes a US decision on intervention clear-cut. Should China decide, initially at least, against a full-scale invasion of that island nation, it could instead opt to try to “win without fighting.” Beijing might do so by using its large, state-controlled fishing fleet to cut smaller Taipei-controlled islands off from Taiwan itself, much as the PRC is now massing fishing boats to expand the seas under its control, pressing claims on the Japanese-administered Senkakus and on Whitsun Reef in Philippine waters. Chinese state-owned fisheries companies – part of the so-called ‘Maritime Militia’ – serve as fronts for PLA intelligence. Operating these fleets in the gray zone of contested control around Taiwan, somewhere between peace and conflict, would allow Beijing to test whether the US and its allies are willing to help defend the island’s independence without being seen to initiate open conflict.

The post Why the China – Russia Relationship Should Worry You – Part Two appeared first on The Cipher Brief.

As India mulls new rules for digital money, Prime Minister Narendra Modi has called for regulations to ensure cryptocurrencies like bitcoin do not “end up in the wrong hands,” warning that this could “spoil” young people.

While he did not expand on those concerns, Modi spoke on Thursday about the need for “democratic nations” to band together and deal with challenges posed by emerging technologies. He was delivering a virtual address at the Sydney Dialogue, an annual cyber-tech summit.

Noting that technology and data could either become “new weapons” for conflict or “instruments of cooperation,” Modi brought up digital currencies as an example of how it was important that like-minded nations “work together on this” to “ensure it does not end up in the wrong hands, which can spoil our youth.”

We are at a historic moment of choice. Whether all the wonderful powers of technology of our age will be instruments of cooperation or conflict, coercion or choice, domination or development, oppression or opportunity.

He also urged the development of technical and governance standards and norms, singling out the use of data, and called for renewed efforts to prevent manipulation of public opinion. In recent weeks, Indian authorities have raised concerns over claims of huge returns from cryptocurrency investment as well as its potential connections to money laundering, organized crime and terror financing.

On Saturday, Modi chaired a meeting to formulate the country’s approach to digital currencies and examine their impact on the economy. According to The Economic Times, Indian officials are drafting regulations to propose a ban on all transactions and payments in cryptocurrencies, while allowing investors to hold them as assets, similar to gold, bonds and stock shares.

Citing unnamed sources familiar with the government’s discussions, the newspaper said there was a belief in policy circles that crypto markets needed to be regulated in order to tackle the problem of opaque advertising that exaggerates investment returns in order to attract young investors.

The sources informed the newspaper that draft legislation on the matter was expected to be forwarded to Modi’s cabinet for consideration in the next two to three weeks.

In September, China banned all cryptocurrency transactions and crypto-mining.

A Roadmap for AI in the Intelligence Community

(Editor’s Note: This article was first published by our friends at Just Security and is the fourth in a series that is diving into the foundational barriers to the broad integration of AI in the IC – culture, budget, acquisition, risk, and oversight.  This article considers a new IC approach to risk management.)

OPINION — I have written previously that the Intelligence Community (IC) must rapidly advance its artificial intelligence (AI) capabilities to keep pace with our nation’s adversaries and continue to provide policymakers with accurate, timely, and exquisite insights. The good news is that there is strong bipartisan support for doing so. The not-so-good news is that the IC is not well-postured to move quickly and take the risks required to continue to outpace China and other strategic competitors over the next decade.

In addition to the practical budget and acquisition hurdles facing the IC, there is a strong cultural resistance to taking risks when not absolutely necessary. This is understandable given the life-and-death nature of intelligence work and the U.S. government’s imperative to wisely execute national security funds and activities. However, some risks related to innovative and cutting-edge technologies like AI are in fact necessary, and the risk of inaction – the costs of not pursuing AI capabilities – is greater than the risk of action.

The Need for a Risk Framework

For each incredible new invention, there are hundreds of brilliant ideas that have failed. To entrepreneurs and innovators, “failure” is not a bad word. Rather, failed ideas are often critical steps in the learning process that ultimately lead to a successful product; without those prior failed attempts, that final product might never be created. As former President of India A.P.J. Abdul Kalam once said, “FAIL” should really stand for “First Attempt In Learning.”

The U.S. government, however, is not Silicon Valley; it does not consider failure a useful part of any process, especially when it comes to national security activities and taxpayer dollars. Indeed, no one in the U.S. government wants to incur additional costs or delay or lose taxpayer dollars. But there is rarely a distinction made within the government between big failures, which may have a lasting, devastating, and even life-threatening impact, and small failures, which may be mere stumbling blocks with acceptable levels of impact that result in helpful course corrections.

As a subcommittee report of the House Permanent Select Committee on Intelligence (HPSCI) notes, “[p]rogram failures are often met with harsh penalties and very public rebukes from Congress which often fails to appreciate that not all failures are the same. Especially with cutting-edge research in technologies … early failures are a near certainty …. In fact, failing fast and adapting quickly is a critical part of innovation.” There is a vital difference between an innovative project that fails and a failure to innovate. The former teaches us something we did not know before, while the latter is a national security risk.

Faced with congressional hearings, inspector general reports, performance evaluation downgrades, negative reputational effects, and even personal liability, IC officers are understandably risk-averse and prefer not to introduce any new risk. That is, of course, neither realistic nor the standard the IC meets today. The IC is constantly managing a multitude of operational risks – that its officers, sources, or methods will be exposed, that it will miss (or misinterpret) indications of an attack, or that it will otherwise fail to produce the intelligence policymakers need at the right time and place. Yet in the face of such serious risks, the IC proactively and aggressively pursues its mission. It recognizes that it must find effective ways to understand, mitigate, and make decisions around risk, and therefore it takes action to make sure potential ramifications are clear, appropriate, and accepted before any failure occurs. In short, the IC has long known that its operations cannot be paralyzed by a zero-risk tolerance that is neither desirable nor attainable. This recognition must also be applied to the ways in which the IC acquires, develops, and uses new technology.

This is particularly important in the context of AI. While AI has made amazing progress in recent years, the underlying technology, the algorithms and their application, are still evolving and the resulting capabilities, by design, will continue to learn and adapt. AI holds enormous promise to transform a variety of IC missions and tasks, but how and when these changes may occur is difficult to forecast and AI’s constant innovation will introduce uncertainty and mistakes. There will be unexpected breakthroughs, as well as failures in areas that initially seemed promising.

The IC must rethink its willingness to take risks in a field where change and failure are embraced as part of the key to future success. The IC must experiment and iterate its progress over time and shift from a culture that punishes even reasonable risk to one that embraces, mitigates, and owns it. This can only be done with a systematic, repeatable, and consistent approach to making risk-conscious decisions.

Today there is no cross-IC mechanism for thinking about risk, let alone for taking it. When considering new activities or approaches, each IC element manages risk through its own lens and mechanisms, if at all. Several individual IC elements have created internal risk assessment frameworks to help officers understand the risks of both action and inaction, and to navigate the decisions they are empowered to make depending upon the circumstances. These frameworks increase confidence that if an activity goes wrong, supervisors all the way up the chain will provide backing as long as the risk was reasonable, well-considered and understood, and the right leaders approved it. And while risk assessments are often not precise instruments of measurement – they reflect the quality of the data, the varied expertise of those conducting the assessments, and the subjective interpretation of the results – regularized and systematic risk assessments are nevertheless a key part of effective risk management and facilitate decision-making at all levels.

Creating these individual frameworks is commendable and leading-edge for government agencies, but more must be done holistically across the IC. Irregular and inconsistent risk assessments among IC elements will not provide the comfort and certainty needed to drive an IC-wide cultural shift to taking risk. At the same time, the unique nature of the IC, comprised of 18 different elements, each with similar and overlapping, but not identical, missions, roles, authorities, threats and vulnerabilities, does not lend itself to a one-size-fits-all approach.

For this reason, the IC needs a flexible but common strategic framework for considering risk that can apply across the community, with each element having the ability to tailor that framework to its own mission space. Such an approach is not unlike how the community is managed in many areas today – with overarching IC-wide policy that is locally interpreted and implemented to fit the specific needs of each IC element. When it comes to risk, creating an umbrella IC-wide framework will significantly improve the workforce’s ability to understand acceptable risks and tradeoffs, produce comprehensible and comparable risk determinations across the IC, and provide policymakers the ability to anticipate and mitigate failure and unintended escalation.

Critical Elements of a Risk Framework

A common IC AI risk framework should inform and help prioritize decisions from acquisition or development, to deployment, to performance in a consistent way across the IC. To start, the IC should create common AI risk management principles, like its existing principles of transparency and AI ethics, that include clear and consistent definitions, thresholds, and standards. These principles should drive a repeatable risk assessment process that each IC element can tailor to its individual needs, and should promote policy, governance, and technological approaches that are aligned to risk management.

The successful implementation of this risk framework requires a multi-disciplinary approach involving leaders from across the organization, experts from all relevant functional areas, and managers who can ensure vigilance in implementation. A whole-of-activity methodology that includes technologists, collectors, analysts, innovators, security officers, acquisition officers, lawyers and more, is critical to ensuring a full 360-degree understanding of the opportunities, issues, risks, and potential consequences associated with a particular action, and to enabling the best-informed decision.

Given the many players involved, each IC element must strengthen internal processes to manage the potential disconnects that can lead to unintended risks and to create a culture that instills in every officer a responsibility to proactively consider risk at each stage of the activity. Internal governance should include an interdisciplinary Risk Management Council (RMC) made up of senior leaders from across the organization. The RMC should establish clear and consistent thresholds for when a risk assessment is required, recommended, or not needed given that resource constraints likely will not allow all of the broad and diverse AI activities within organizations to be assessed. These thresholds should be consistent with the IC risk management principles so that as IC elements work together on projects across the community, officers have similar understandings and expectations.

The risk framework itself should provide a common taxonomy and process to:

  • Understand and identify potential failures, including the source, timeline, and range of effects.
  • Analyze failures and risks by identifying internal vulnerabilities or predisposing conditions that could increase the likelihood of adverse impact.
  • Evaluate the likelihood of failure, taking into consideration risks and vulnerabilities.
  • Assess the severity of the potential impact, to include potential harm to organizational operations, assets, individuals, other organizations, or the nation.
  • Consider whether the ultimate risk may be sufficiently mitigated or whether it should be transferred, avoided, or accepted.

AI-related risks may include, among other things, technology failure, biased data, adversarial attacks, supply chain compromises, human error, cost overruns, legal compliance challenges, or oversight issues.
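
Taken together, the five steps and the risk categories above amount to a small record structure. As a minimal sketch, assuming nothing beyond what the article lists, one failure mode might be captured in Python like this (all class and field names are illustrative, not an actual IC schema):

```python
from dataclasses import dataclass, field
from enum import Enum

class RiskCategory(Enum):
    """AI-related risk sources named in the article."""
    TECHNOLOGY_FAILURE = "technology failure"
    BIASED_DATA = "biased data"
    ADVERSARIAL_ATTACK = "adversarial attack"
    SUPPLY_CHAIN = "supply chain compromise"
    HUMAN_ERROR = "human error"
    COST_OVERRUN = "cost overrun"
    LEGAL_COMPLIANCE = "legal compliance challenge"
    OVERSIGHT = "oversight issue"

class Disposition(Enum):
    """The four risk responses from the final step of the taxonomy."""
    MITIGATE = "mitigate"
    TRANSFER = "transfer"
    AVOID = "avoid"
    ACCEPT = "accept"

@dataclass
class FailureAssessment:
    """One potential failure, walked through the framework's five steps."""
    description: str                 # step 1: identify the failure and its effects
    category: RiskCategory
    vulnerabilities: list = field(default_factory=list)  # step 2: predisposing conditions
    likelihood: str = "moderate"     # step 3: low / moderate / high
    severity: str = "moderate"       # step 4: severity of potential impact
    disposition: Disposition = Disposition.MITIGATE      # step 5: chosen response

# Hypothetical example: the supply-chain scenario discussed below
fa = FailureAssessment(
    description="Compromised component in a vendor-supplied model pipeline",
    category=RiskCategory.SUPPLY_CHAIN,
    vulnerabilities=["single shared enclave", "no component attestation"],
    likelihood="high",
    severity="low",
)
```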

An initial risk level is determined by considering the likelihood of a failure against the severity of the potential impact. For example, is there a low, moderate, or high likelihood of supply chain compromise? Would such a compromise affect only one discrete system or are there system-wide implications? These calculations will result in an initial risk level. Then potential mitigation measures, such as additional policies, training, or security measures, are applied to lower the initial risk level to an adjusted risk level. For example, physically or logically segmenting an organization’s systems so that a compromise only touches one system would significantly decrease the risk level associated with that particular technology. The higher the likelihood of supply chain compromise, the lower the severity of its impact must be to offset the risk, and vice versa. Organizations should apply the Swiss Cheese Model of more than one preventative or mitigative action for a more effective layered defense. Organizations then must consider the adjusted risk level in relation to their tolerance for risk; how much risk (and potential consequence) is acceptable in pursuit of value? This requires defining the IC’s risk tolerance levels, within which IC elements may again define their own levels based upon their unique missions.
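
That calculation can be sketched in code. Below is a minimal, purely illustrative Python sketch (the Level scale, the scoring thresholds, and the one-step-per-layer adjustment rule are all assumptions, not an IC standard): it scores likelihood against severity to produce an initial risk level, steps the level down once per independent mitigation layer in the spirit of the Swiss Cheese Model, and compares the adjusted level with a tolerance threshold.

```python
from enum import IntEnum

class Level(IntEnum):
    LOW = 1
    MODERATE = 2
    HIGH = 3

def initial_risk(likelihood: Level, severity: Level) -> Level:
    """Likelihood x severity matrix: the inverse trade-off the article
    describes (higher likelihood must be offset by lower severity)
    falls out of taking a combined score."""
    score = likelihood * severity  # ranges 1..9
    if score <= 2:
        return Level.LOW
    if score <= 4:
        return Level.MODERATE
    return Level.HIGH

def adjusted_risk(initial: Level, mitigation_layers: int) -> Level:
    """Swiss Cheese Model, crudely: each independent layer (policy,
    training, segmentation, ...) steps the risk down one level."""
    return Level(max(int(Level.LOW), int(initial) - mitigation_layers))

def decide(adjusted: Level, tolerance: Level) -> str:
    """Accept if the residual risk sits within the element's tolerance."""
    return "accept" if adjusted <= tolerance else "mitigate further or escalate"

# The article's example: high likelihood of supply chain compromise, but a
# blast radius limited to one segmented system (low severity), with two
# mitigation layers applied (segmentation plus monitoring).
risk = initial_risk(Level.HIGH, Level.LOW)           # -> MODERATE
residual = adjusted_risk(risk, mitigation_layers=2)  # -> LOW
print(decide(residual, tolerance=Level.MODERATE))    # -> accept
```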

Understanding and considering the risk of action is an important step forward for the IC, but it is not the last step. Sometimes overlooked in risk assessment practices is the consideration of the risk of inaction. To fully evaluate potential options, decision-makers must consider whether the overall risk of doing something is outweighed by the risks of not doing it. If the IC does not pursue particular AI capabilities, what is the opportunity cost of that inaction? Any final determination about whether to take action must consider whether declining to act would cause greater risk of significant harm. While the answer will not always be yes, in the case of AI and emerging technology, it is a very realistic possibility.

And, finally, a risk framework only works if people know about it. Broad communication – about the existence of the framework, how to apply it, and expectations for doing so – is vital. We cannot hold people accountable for appropriately managing risk if we do not clearly and consistently communicate and help people use the structure and mechanisms for doing so.

Buy-in To Enhance Confidence

An IC-wide AI risk framework will help IC officers understand risks and determine when and how to take advantage of innovative emerging technologies like AI, increasing comfort with uncertainty and risk-taking in the pursuit of new capabilities. Such a risk framework will have even greater impact if it is accepted – explicitly or implicitly – by the IC’s congressional overseers. The final article in this series will delve more deeply into needed changes to further improve the crucial relationship between the IC and its congressional overseers. It will also provide a link to a full report that provides more detail on each aspect of the series, including a draft IC AI Risk Framework.

Although Congress is not formally bound by such a framework, given the significant accountability measures that often flow from these overseers, a meeting of the minds between the IC and its congressional overseers is critical. Indeed, these overseers should have awareness of and an informal ability to provide feedback into the framework as it is being developed. This level of transparency and partnership would lead to at least two important benefits: first, increased confidence in the framework by all; and second, better insight into IC decision-making for IC overseers.

Ultimately, such a mutual understanding would encourage exactly what the IC needs to truly take advantage of next-generation technology like AI: a culture of experimentation, innovation, and creativity that sees reasonable risk and failure as necessary steps to game-changing outcomes.

Read also AI and the IC: The Tangled Web of Budget and Acquisition

Read also Artificial Intelligence in the IC: Culture is Critical

Read also AI and the IC: The Challenges Ahead

Vaccination of all Hungarian citizens against Covid-19 is inevitable, PM Viktor Orban has said, claiming that even the most hardline anti-vaxxers will ultimately face a choice between getting a jab and dying of the virus.

Speaking to Kossuth radio on Friday, the Hungarian leader lashed out at those reluctant to get vaccinated against coronavirus, branding them a threat “not only to themselves but to all others.”

In the end, everyone will have to be vaccinated; even the anti-vaxxers will realize that they will either get vaccinated or die. So, I urge everyone to take this opportunity.

The EU member state is currently experiencing its fourth wave of coronavirus, Orban stated, blaming the situation on those who had not got vaccinated. “If everybody were inoculated, there would be no fourth wave or it would be just a small one,” the PM claimed.

Apart from urging the unvaccinated to go and finally get their jabs, Orban also promoted booster shots, revealing that he had already taken three doses of a coronavirus vaccine.

“The only thing that protects us from the virus is vaccination. And we are now also seeing, at least the experts are unanimous in saying, that four to six months after the second vaccination, the protective power of the vaccine weakens. Therefore, a third vaccination is justified,” he said.

Hungary has already announced new anti-Covid measures, though they fall somewhat short of those proposed by the nation’s Medical Chamber on Wednesday. The medical body called for a blanket ban on mass events and suggested making entry to restaurants, theaters, and other indoor venues conditional on holding a Covid-19 vaccination certificate. Instead, Budapest rolled out compulsory mask-wearing for most indoor settings and made booster shots mandatory for all medical workers, starting from Saturday.

In Hungary, a nation of 10 million, the total tally of logged Covid cases is hovering just below the one million mark. On Friday, the country registered a new daily record of nearly 11,300 infections. More than 32,700 people in Hungary have succumbed to the disease over the course of the pandemic.
