The planned introduction of chemical castration for serial rapists in Pakistan has been dropped due to objections from experts in Islamic law, who said such punishment would be counter to Sharia.

The controversial clause in a bill amending Pakistan's criminal law was dropped before the National Assembly voted on the legislation on Wednesday, a parliament official said on Friday. Had it been passed, the clause would have been unconstitutional, Parliamentary Secretary for Law and Justice Maleeka Bokhari explained, as the country's basic law requires all legislation to be in line with Sharia and the Koran.

Bokhari said the decision to drop the clause was taken due to objections from the Council of Islamic Ideology, a constitutional body that advises the government of Pakistan on the intricacies of Islamic law.

The bill amends Pakistan's Penal Code and Criminal Procedure Code to streamline investigations and prosecutions of sexual crimes as part of wider anti-rape reform. Some conservative lawmakers vocally opposed the castration clause as the legislation moved towards approval. Senator Mushtaq Ahmed of the Islamist Jamaat-i-Islami party argued that rapists should be hanged publicly, and that castration is nowhere prescribed in Sharia.

A separate bill also approved by the parliament on Wednesday introduces a system of special regional investigators for rape allegations to be appointed by the prime minister, as well as new protections for victims, and punishments for officials who fail to investigate their complaints properly. Among other things, it makes evidence that a victim is “generally of immoral character” inadmissible in court.

The reform is necessary because currently deterrence of sexual crimes in Pakistan is undermined by “poor investigation, archaic procedures and rules of evidence and delay in the trial,” the bill said.

Denmark’s air force showed off its brand-new electric-powered planes on Thursday, saying its test flights have so far proven that the cheaper-to-run, more eco-friendly technology has potential.

The air force obtained the two Velis Electro light aircraft from Slovenian manufacturer Pipistrel, becoming the first military in the world to operate this type of hardware.

“The aircraft are 100% emission-free, very quiet, and otherwise cheap to operate,” Lieutenant Colonel Casper Børge Nielsen of the Defense Ministry’s material and procurement agency said. Initial tests indicate “there may be perspectives in using electric aircraft when the technology becomes mature,” he added.

Denmark has leased the planes for two years, rather than buying them, to avoid the “risk of ending up with equipment that we can’t really use,” Børge Nielsen said. During the lease period, the Danish Air Force hopes to gain insight into the benefits and disadvantages of the aircraft's technology to decide how it can be applied in the future.

The pilots described flying the one-man light electric planes, which are powered by two lithium batteries, as “exciting,” saying they were “built well and fly well.” 

Last year, the United States military said it had been keeping an eye on the development of electric-powered planes, describing their ability to approach targets silently as “tremendous.” However, their battery capacity isn’t currently sufficient to meet the US Air Force’s needs.

Work on electric aircraft has been underway since the 1970s, but the battery issue has been a stumbling block in the way of wider adoption of the technology. Global military interest could change all that, stimulating research and investment.

The switch to electric power could be a win across the board, drastically reducing CO2 emissions while making flying much cheaper for commercial carriers and their passengers.


Corin Stone, Washington College of Law

Corin Stone is a Scholar-in-Residence and Adjunct Professor at the Washington College of Law. Stone is on leave from the Office of the Director of National Intelligence (ODNI) where, until August 2020, she served as the Deputy Director of National Intelligence for Strategy & Engagement, leading Intelligence Community (IC) initiatives on artificial intelligence, among other key responsibilities. From 2014 to 2017, Stone served as the Executive Director of the National Security Agency (NSA).

(Editor’s Note: This article was first published by our friends at Just Security and is the third in a series that is diving into the foundational barriers to the broad integration of AI in the IC – culture, budget, acquisition, risk, and oversight.)

OPINION — As I have written earlier, there is widespread bipartisan support for radically improving the nation’s ability to take advantage of artificial intelligence (AI). For the Intelligence Community (IC), that means using AI to more quickly, easily, and accurately analyze increasing volumes of data to produce critical foreign intelligence that can warn of and help defuse national security threats, among other things. To do that, the IC will have to partner closely with the private sector, where significant AI development occurs. But despite the billions of dollars that may ultimately flow toward this goal, there are basic hurdles the IC still must overcome to successfully transition and integrate AI into the community at speed and scale.

Among the top hurdles are the U.S. government’s slow, inflexible, and complex budget and acquisition processes. The IC’s rigid budget process follows the government’s standard three-year cycle, which means it takes years to incorporate a new program and requires confident forecasting of the future. Once a program overcomes the necessary hurdles to be included in a budget, it must follow a complex sequence of regulations to issue and manage a contract for the actual goods or services needed. Budget and acquisition are distinct processes and often considered separately, but I treat them together because they are inextricably intertwined in the government’s purchasing of technology.

Importantly, these processes were not intended to obstruct progress; they were designed to ensure cautious and responsible spending, and for good reason. Congress, with its power of the purse, and the Office of Management and Budget (OMB), as the executive branch’s chief budget authority, have the solemn duty to ensure wise and careful use of taxpayer dollars. And their roles in this regard are vital to the U.S. government’s ability to function.

Unfortunately, despite the best of intentions, as noted by some in Congress itself, the budget process has become so “cumbersome, frustrating, and ineffective” that it has weakened the power of the purse and Congress’ capacity to govern. And when complicated acquisition processes are layered on top of the budget process, the result is a spider web of confusion and difficulty for anyone trying to navigate them.

The Need for Speed … and Flexibility and Simplicity

As currently constructed, government budget and acquisition processes cause numerous inefficiencies for the purchase of AI capabilities, negatively impacting three critical areas in particular: speed, flexibility, and simplicity. When it comes to speed and flexibility, the following difficulties jump out:

  • The executive branch has a methodical and deliberate three-year budget cycle that calls for defined and steady requirements at the beginning of the cycle. Changing the requirements at any point along the way is difficult and time-consuming.
  • The IC’s budgeting processes require that IC spending fit into a series of discrete sequential steps, represented by budget categories like research, development, procurement, or sustainment. Funds are not quickly or easily spent across these categories.
  • Most appropriations expire at the end of each fiscal year, which means programs must develop early on, and precisely execute, detailed spending plans or lose the unspent funds at the end of one year.
  • Government agencies expend significant time creating detailed Statements of Work (SOWs) that describe contract requirements. Standard contract vehicles do not support evolving requirements, and companies are evaluated over the life of the contract based on strict compliance with the original SOW created years earlier.

These rules make sense in the abstract and result from well-intentioned attempts to buy down the risk of loss or failure and promote accountability and transparency. They require the customer to know with clarity and certainty the solution it seeks in advance of investment and they narrowly limit the customer’s ability to change the plan or hastily implement it. These rules are not unreasonably problematic for the purchase of items like satellites or airplanes, the requirements for which probably should not and will not significantly change over the course of many years.

However, because AI technology is still maturing and the capabilities themselves are always adapting, developing, and adding new functionality, the rules above have become major obstacles to the quick integration of AI across the IC. First, AI requirements defined with specificity years in advance of acquisition – whether in the budget or in a statement of work – are obsolete by the time the technology is delivered. Second, as AI evolves there is often not a clear delineation between research, development, procurement, and sustainment of the technology – it continuously flows back and forth across these categories in very compressed timelines. Third, it is difficult to predict the timing of AI breakthroughs, related new requirements, and funding impacts, so money might not be spent as quickly as expected and could be lost at the end of the fiscal year. Taken together, these processes are inefficient and disruptive, cause confusion and delay, and discourage engagement from small businesses, which have neither the time nor the resources to wait years to complete a contract or to navigate laborious, uncertain processes.


Simply put, modern practices for fielding AI have outpaced the IC’s decades-old approach to budgeting and acquisition. That AI solutions are constantly evolving, learning, and improving both undermines the IC’s ability to prescribe a specific solution and, in fact, incentivizes the IC to allow the solution to evolve with the technology. The lack of flexibility and speed in how the IC manages and spends money and acquires goods and services is a core problem when it comes to fully incorporating AI into the IC’s toolkit.

Even while we introduce more speed and agility into these processes, however, the government must continue to ensure careful, intentional, and appropriate spending of taxpayer dollars. The adoption of an IC risk framework and modest changes to congressional oversight engagements, which I address in upcoming articles, will help regulate these AI activities in the spirit of the original intent of the budget and acquisition rules.

As for the lack of simplicity, the individually complex budget and acquisition rules are together a labyrinth of requirements, regulations, and processes that even long-time professionals have trouble navigating. In addition:

  • There is no quick or simple way for practitioners to keep current with frequent changes in acquisition rules.
  • The IC has a distributed approach that allows each element to use its various acquisition authorities independently rather than cohesively, increasing confusion across agency lines.
  • Despite the many federal acquisition courses aimed at demystifying the process, there is little connection among educational programs, no clear path for IC officers to participate, and no reward for doing so.

The complexity of the budget and acquisition rules compounds the problems with speed and flexibility, and as more flexibility is introduced to support AI integration, it is even more critical that acquisition professionals be knowledgeable and comfortable with the tools and levers they must use to appropriately manage and oversee contracts.

Impactful Solutions: A Target Rich Environment

Many of these problems are not new; indeed, they have been highlighted and studied often over the past few years in an effort to enable the Department of Defense (DOD) and the IC to more quickly and easily take advantage of emerging technology. But to date, DOD has made only modest gains and the IC is even further behind. While there are hundreds of reforms that could ease these difficulties, narrowing and prioritizing proposed solutions will have a more immediate impact. Moreover, significant change is more likely to be broadly embraced if the IC first proves its ability to successfully implement needed reforms on a smaller scale. The following actions by the executive and legislative branches – some tactical and some strategic – would be powerful steps to ease and speed the transition of AI capabilities into the IC.

Statements of Objectives

A small but important first step to deal with the slow and rigid acquisition process is to encourage the use of Statements of Objectives (SOO) instead of SOWs, when appropriate. As mentioned, SOWs set forth defined project activities, deliverables, requirements, and timelines, which are used to measure contractor progress and success. SOWs make sense when the government understands with precision exactly what is needed from the contractor and how it should be achieved.

SOOs, on the other hand, are more appropriate when the strategic outcome and objectives are clear, but the steps to achieve them are less so. They describe “what” without dictating “how,” thereby encouraging and empowering industry to propose innovative solutions. SOOs also create clarity about what is important to the government, leading companies to focus less on aggressively low pricing of specific requirements and more on meeting the ultimate outcomes in creative ways that align with a company’s strengths. This approach requires knowledgeable acquisition officers as part of the government team, as described below, to ensure the contract includes reasonable milestones and decision points to keep the budget within acceptable levels.


New Authorities for the IC

Two new authorities would help the IC speed and scale its use of AI capabilities: Other Transaction Authority (OTA) and Commercial Solutions Openings (CSOs). OTA allows specific types of transactions to be completed outside the traditional federal laws and regulations that govern standard government procurement contracts, providing significantly more speed, flexibility, and accessibility. While OTA is limited in scope and not a silver bullet for all acquisition problems, it has been used to good effect since 1990 by, among others, the Defense Advanced Research Projects Agency (DARPA), DOD’s over-the-horizon research and development organization.

CSOs are a simplified and relatively quick solicitation method for awarding firm fixed-price contracts of up to $100 million. They can be used to acquire innovative commercial items, technologies, or services that close capability gaps or provide technological advances: the government issues an open call for proposals in a broadly defined area of interest, and offerors respond with technical solutions of their own choosing. CSOs are considered competitively awarded regardless of how many offerors respond.

Both OTA and CSO authority should be granted to the IC immediately to improve the speed and flexibility with which it can acquire and field AI capabilities.

Unclassified Sandbox

The predictive nature of the IC’s work and the need to forecast outcomes means the IC must be able to acquire AI at the point of need, aligned to the threat. Waiting several years to acquire AI undermines the IC’s ability to fulfill its purpose. But with speed comes added risk that new capabilities might fail. Therefore, the IC should create an isolated unclassified sandbox, not connected to operational systems, in which potential IC customers could test and evaluate new capabilities alongside developers in weeks-to-months, rather than years. Congress should provide the IC with the ability to purchase software quickly for test and evaluation purposes only to buy down the risk that a rapid acquisition would result in total failure. The sandbox process would allow the IC to test products, consider adjustments, and engage with developers early on, increasing the likelihood of success.

Single Appropriation for Software

DOD has a pilot program that funds software as a single budget item – allowing the same money to be used for research, production, operations, and sustainment – to improve and speed software’s unique development cycle. AI, being largely software, is an important beneficiary of this pilot. Despite much of the IC also being part of DOD, IC-specific activities do not fall within this pilot. Extending DOD’s pilot to the IC would not only speed the IC’s acquisition of AI, but it would also increase interoperability and compatibility of IC and DOD projects.

No-Year Funds

Congress should reconsider the annual expiration of funds as a control lever for AI. Congress already routinely provides no-year funding when it makes sense to do so. In the case of AI, no-year funds would allow the evolution of capabilities without arbitrary deadlines, drive more thoughtful spending throughout the lifecycle of the project, and eliminate the additional overhead required to manage the expiration of funds annually. Recognizing the longer-term nature of this proposal, however, the executive branch also must seek shorter-term solutions in the interim.

A less-preferable alternative is to seek two-year funding for AI. Congress has a long history of proposing biennial budgeting for all government activities. Even without a biennial budget, Congress has already provided nearly a quarter of the federal budget with two-year funding. While two-year funding is not a perfect answer in the context of AI, it would at a minimum discourage parties from rushing to outcomes or artificially burning through money at the end of the first fiscal year, and would provide additional time to fulfill the contract. This is presumably why DOD recently created a new budget activity for “software and digital technology pilot programs” under its Research, Development, Test and Evaluation (RDT&E) category, which is typically available for two years.

AI Technology Fund

Congress should establish an IC AI Technology Fund (AITF) to provide kick-starter funds for priority community AI efforts and enable more flexibility to get those projects off the ground. To be successful, the AITF must have no-year funds, appropriated as a single appropriation, without limits on usage throughout the acquisition lifecycle. The AITF’s flexibility and simplicity would incentivize increased engagement by small businesses, better allowing the IC to tap into the diversity of the marketplace, and would support and speed the delivery of priority AI capabilities to IC mission users.


ICWERX  

To quickly take advantage of private sector AI efforts at scale, the IC must better understand the market and more easily engage directly with the private sector. To do so, the IC should create an ICWERX, modeled after AFWERX, an Air Force innovation organization that drives agile public-private sector collaboration to quickly leverage and develop cutting-edge technology for the Air Force. AFWERX aggressively uses innovative, flexible, and speedy procurement mechanisms like OTA and the Small Business Innovation Research and Small Business Technology Transfer programs (SBIR/STTR) to improve the acquisition process and encourage engagement from small businesses. AFWERX is staffed by acquisition and market research experts who are comfortable using those authorities and understand the market. While the IC’s needs are not identical, an ICWERX could serve as an accessible “front door” for prospective partners and vendors, and enable the IC to more quickly leverage and scale cutting-edge AI.

De-mystify Current Authorities

While there is much complaining about a lack of flexible authorities in the IC (and a real need for legal reform), there is flexibility in existing rules that has not been fully utilized. The IC has not prioritized the development or hiring of people with the necessary government acquisition and contracts expertise, so there are insufficient officers who know how to use the existing authorities and those who do are overworked and undervalued. The IC must redouble its efforts to increase its expertise in, and support the use of, these flexibilities in several ways.

First, the IC should create formal partnerships and increase engagement with existing U.S. government experts. The General Services Administration’s Technology Transformation Services (TTS) and FEDSIM, for example, work across the federal government to build innovative acquisition solutions and help agencies more quickly adopt AI. In addition, DOD’s Joint AI Center has built significant acquisition expertise that the IC must better leverage. The IC also should increase joint duty rotations in this area to better integrate and impart acquisition expertise across the IC.

Second, the IC must prioritize training and education of acquisition professionals. And while deep acquisition expertise is not necessary for everyone, it is important for lawyers, operators, technologists, and innovators to have a reasonable understanding of the acquisition rules, and the role they each play in getting to successful outcomes throughout the process. Collaboration and understanding across these professions and up and down the chain of command will result in more cohesive, speedy, and effective outcomes.

To that end, the Office of the Director of National Intelligence (ODNI) should work with the many existing government acquisition education programs, as well as the National Intelligence University, to develop paths for IC officers to grow their understanding of and ability to navigate and successfully use acquisition rules. The ODNI also should strengthen continuing education requirements and create incentive pay for acquisition professionals.

Third, the IC should prioritize and use direct hire authority to recruit experts in government acquisition, to include a mix of senior term-limited hires and junior permanent employees with room to grow and the opportunity for a long career in the IC. Such a strategy would allow the IC to quickly tackle the current AI acquisition challenges and build a bench of in-house expertise.

Finally, practitioners should have an easily accessible reference book to more quickly discover relevant authorities, understand how to use them, and find community experts. A few years ago, the ODNI led the creation of an IC Acquisition Playbook, which describes common IC acquisition authorities, practices, and usages. The ODNI should further develop and disseminate this Playbook as a quick win for the IC.

Incentivize Behavior

To encourage creative and innovative acquisition practices, as well as interdisciplinary collaboration, the IC must align incentives with desired outcomes and create in acquisition professionals a vested interest in the success of the contract. Acquisition officers today are often brought into projects only in transactional ways, when contracts must be completed or money must be obligated, for example. They are rarely engaged early as part of a project team, so they are not part of developing the solutions and have minimal investment in the project’s success. Reinforcing this, acquisition professionals are evaluated primarily on the amount of money they obligate by the end of the fiscal year, rather than on the success of a project.

Therefore, to start, project teams should be required to engage acquisition officers early and often, both to seek their advice and to ensure they have a good understanding of the project’s goals. In addition, evaluation standards for acquisition officers should incorporate effective engagement and collaboration with stakeholders, consideration of creative alternatives and options, and delivery of mission outcomes. If an officer uses innovative practices that fail, that officer also should be evaluated on what they learned from the experience that may inform future success.

Lastly, the ODNI should reinvigorate and highlight the IC acquisition awards to publicly reward desired behavior, and acquisition professionals should be included in IC mission team awards as a recognition of their impact on the ultimate success of the mission.

Conclusion

Between the government’s rigid budget and acquisition processes and confusion about how to apply them, there is very little ability for the IC to take advantage of a fast-moving field that produces new and updated technology daily. Tackling these issues through the handful of priority actions set forth above will begin to drive the critical shift away from the IC’s traditional, linear processes to the more dynamic approaches the IC needs to speed and transform the way it purchases, integrates, and manages the use of AI.

The post AI and the IC: The Tangled Web of Budget and Acquisition appeared first on The Cipher Brief.


The French government has kicked off a €14 million national campaign to tackle underage prostitution and pimping. It comes months after a report found as many as 10,000 youngsters, mostly teen girls, are involved in the sex trade.

The campaign, launched by the Ministry for Solidarity and Health on Monday, is expected to be fully rolled out in 2022. The ministry described the problem as a “growing phenomenon that society can no longer ignore” and about which “too little is known.”

The government programme is expected to “increase awareness” while helping to “inform and provide a better understanding of the phenomenon.” It also aims to help “identify the young people involved” and “prosecute clients and pimps more effectively.”

According to RFI, the prevalence of underage prostitution has increased by as much as 70% over the past five years, with social media believed to be compounding the problem. The public broadcaster noted that the situation had worsened during the Covid-19 pandemic when young people spent more time online.

In July, a working group produced a damning report that found between 7,000 and 10,000 young people were involved in prostitution across the country. The majority are girls aged between 15 and 17, but a ministry statement noted that the age of entry into prostitution is falling, now standing at around 14-15 years.

“There’s really a normalisation of prostitution of young people because girls say that selling sex is a way of making lots of money easily and that it can help them reach their dream life,” deputy public prosecutor Raphaelle Wach told the news outlet France 24.

In its statement, the ministry noted that many minors did not consider themselves victims and valued the “financial autonomy” and feelings of “belonging to a group” and “regaining control” over their lives.

“These minors are however in danger, both physically and psychologically,” the ministry warned.

“Covid played a considerable role because social networking provided new ways of being able to hook in underage girls very easily,” Geneviève Collas, who runs an NGO fighting human trafficking, told RFI. She added that recruiting minors has been made “easier” with short-term apartment rental apps like Airbnb helping mask the scale of the problem on the streets.


Scientists have used artificial intelligence to “predict” formulas for new designer drugs, with the stated goal of helping to improve their regulation. The AI generated formulas for nearly nine million potential new drugs.

Researchers at the University of British Columbia (UBC) used a deep neural net for the job, teaching it to generate the chemical structures of potential new drugs. According to their study, released this week, the computer intelligence fared better at the task than the scientists had expected.

The research team trained the AI on the structures in a database of known designer drugs (synthetic psychoactive substances). The market for designer drugs is ever-changing: manufacturers constantly tweak their formulas to circumvent restrictions and produce new “legal” substances, while working out the structure of each new substance can take law enforcement agencies months, the researchers said.
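
The core idea, learning the statistical patterns of known structures and then sampling new candidate structures from those patterns, can be sketched with a toy character-level Markov chain over SMILES strings (a text notation for molecules). This is a deliberately minimal stand-in for the study's deep neural net, and the training strings below are illustrative placeholders, not the researchers' database:

```python
import random
from collections import defaultdict

# Illustrative placeholder structures in SMILES notation (NOT the real
# training database; the study trained on thousands of known designer drugs).
KNOWN_STRUCTURES = [
    "CC(C)NCC(O)c1ccccc1",
    "CCN(CC)C(=O)c1ccccc1",
    "CC(N)Cc1ccccc1",
    "CCOC(=O)c1ccccc1N",
]

START, END = "^", "$"

def train(smiles_list):
    """Record which character follows which across the training set."""
    transitions = defaultdict(list)
    for s in smiles_list:
        padded = START + s + END
        for a, b in zip(padded, padded[1:]):
            transitions[a].append(b)
    return transitions

def generate(transitions, rng, max_len=40):
    """Sample one candidate string character by character."""
    out, ch = [], START
    for _ in range(max_len):
        ch = rng.choice(transitions[ch])
        if ch == END:
            break
        out.append(ch)
    return "".join(out)

transitions = train(KNOWN_STRUCTURES)
rng = random.Random(0)
candidates = {generate(transitions, rng) for _ in range(200)}
novel = candidates - set(KNOWN_STRUCTURES)
print(len(novel), "candidate strings not seen in training")
```

A real system would use a far more expressive model and would filter out chemically invalid strings; the sketch only shows why a generative model trained on known formulas naturally proposes unseen ones.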

Read more

FILE PHOTO: A man living on the streets displays what he says is the synthetic drug fentanyl, in the Tenderloin section of San Francisco, California, February 27, 2020 © Reuters / Shannon Stapleton
Drug overdose deaths in US hit all-time record

“The vast majority of these designer drugs have never been tested in humans and are completely unregulated. They are a major public-health concern to emergency departments across the world,” one of the researchers, UBC medical student Dr. Michael Skinnider, has said.

After its training, the AI was able to generate some 8.9 million potential designer drugs. The researchers then checked the model against a set of some 196 new drugs that had emerged in real life after it was trained, and found that more than 90% of them had already been predicted by the computer.

“The fact that we can predict what designer drugs are likely to emerge on the market before they actually appear is a bit like the 2002 sci-fi movie, Minority Report, where foreknowledge about criminal activities about to take place helped significantly reduce crime in a future world,” senior author Dr. David Wishart, a professor of computing science at the University of Alberta, has said.

Identifying completely unknown substances remains a challenge for the AI, the research team noted, but they hope the model may help with that task as well, since it was also able to predict which designer-drug formulas were more likely to be created and hit the market. The model “ranked the correct chemical structure of an unidentified designer drug among the top 10 candidates 72 percent of the time,” while adding mass spectrometry data, an easily obtained measurement, bumped the accuracy to some 86%.
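
The 72% and 86% figures quoted above are a standard top-k accuracy: for each unknown substance, check whether the true structure appears among the model's k highest-ranked candidates. A minimal sketch of the metric, using made-up labels rather than the study's actual structures:

```python
def top_k_accuracy(cases, k=10):
    """Fraction of cases whose true answer appears in the top-k ranked candidates."""
    hits = sum(1 for truth, ranked in cases if truth in ranked[:k])
    return hits / len(cases)

# Hypothetical evaluation data: (true structure, model's ranked candidate list).
cases = [
    ("drug_A", ["drug_A", "x1", "x2"]),
    ("drug_B", ["x3", "drug_B", "x4"]),
    ("drug_C", ["x5", "x6", "x7"]),      # miss: truth never ranked
    ("drug_D", ["x8", "drug_D", "x9"]),
]

print(top_k_accuracy(cases, k=2))  # 3 of 4 hits -> 0.75
```

Extra evidence such as a measured mass lets a model re-rank or prune its candidate list, which is how the same metric can rise, consistent with the jump from 72% to about 86% reported above.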

“It was shocking to us that the model performed this well, because elucidating entire chemical structures from just an accurate mass measurement is generally thought to be an unsolvable problem,” Skinnider stated.


A Roadmap for AI in the Intelligence Community

(Editor’s Note: This article was first published by our friends at Just Security and is the fourth in a series that is diving into the foundational barriers to the broad integration of AI in the IC – culture, budget, acquisition, risk, and oversight.  This article considers a new IC approach to risk management.)

OPINION — I have written previously that the Intelligence Community (IC) must rapidly advance its artificial intelligence (AI) capabilities to keep pace with our nation’s adversaries and continue to provide policymakers with accurate, timely, and exquisite insights. The good news is that there is strong bipartisan support for doing so. The not-so-good news is that the IC is not well-postured to move quickly and take the risks required to continue to outpace China and other strategic competitors over the next decade.

In addition to the practical budget and acquisition hurdles facing the IC, there is a strong cultural resistance to taking risks when not absolutely necessary. This is understandable given the life-and-death nature of intelligence work and the U.S. government’s imperative to wisely execute national security funds and activities. However, some risks related to innovative and cutting-edge technologies like AI are in fact necessary, and the risk of inaction – the costs of not pursuing AI capabilities – is greater than the risk of action.

The Need for a Risk Framework

For each incredible new invention, there are hundreds of brilliant ideas that have failed. To entrepreneurs and innovators, “failure” is not a bad word. Rather, failed ideas are often critical steps in the learning process that ultimately lead to a successful product; without those prior failed attempts, that final product might never be created. As former President of India A.P.J. Abdul Kalam once said, “FAIL” should really stand for “First Attempt In Learning.”

The U.S. government, however, is not Silicon Valley; it does not consider failure a useful part of any process, especially when it comes to national security activities and taxpayer dollars. Indeed, no one in the U.S. government wants to incur additional costs or delay or lose taxpayer dollars. But there is rarely a distinction made within the government between big failures, which may have a lasting, devastating, and even life-threatening impact, and small failures, which may be mere stumbling blocks with acceptable levels of impact that result in helpful course corrections.

As a subcommittee report of the House Permanent Select Committee on Intelligence (HPSCI) notes, “[p]rogram failures are often met with harsh penalties and very public rebukes from Congress which often fails to appreciate that not all failures are the same. Especially with cutting-edge research in technologies … early failures are a near certainty …. In fact, failing fast and adapting quickly is a critical part of innovation.” There is a vital difference between an innovative project that fails and a failure to innovate. The former teaches us something we did not know before, while the latter is a national security risk.

Faced with congressional hearings, inspector general reports, performance evaluation downgrades, negative reputational effects, and even personal liability, IC officers are understandably risk-averse and prefer not to introduce any new risk. That is, of course, neither realistic nor the standard the IC meets today. The IC is constantly managing a multitude of operational risks – that its officers, sources, or methods will be exposed, that it will miss (or misinterpret) indications of an attack, or that it will otherwise fail to produce the intelligence policymakers need at the right time and place. Yet in the face of such serious risks, the IC proactively and aggressively pursues its mission. It recognizes that it must find effective ways to understand, mitigate, and make decisions around risk, and therefore it takes action to make sure potential ramifications are clear, appropriate, and accepted before any failure occurs. In short, the IC has long known that its operations cannot be paralyzed by a zero-risk tolerance that is neither desirable nor attainable. This recognition must also be applied to the ways in which the IC acquires, develops, and uses new technology.

This is particularly important in the context of AI. While AI has made amazing progress in recent years, the underlying technology, the algorithms and their application, are still evolving and the resulting capabilities, by design, will continue to learn and adapt. AI holds enormous promise to transform a variety of IC missions and tasks, but how and when these changes may occur is difficult to forecast and AI’s constant innovation will introduce uncertainty and mistakes. There will be unexpected breakthroughs, as well as failures in areas that initially seemed promising.

The IC must rethink its willingness to take risks in a field where change and failure are embraced as part of the key to future success. The IC must experiment and iterate its progress over time and shift from a culture that punishes even reasonable risk to one that embraces, mitigates, and owns it. This can only be done with a systematic, repeatable, and consistent approach to making risk-conscious decisions.

Today there is no cross-IC mechanism for thinking about risk, let alone for taking it. When considering new activities or approaches, each IC element manages risk through its own lens and mechanisms, if at all. Several individual IC elements have created internal risk assessment frameworks to help officers understand the risks of both action and inaction, and to navigate the decisions they are empowered to make depending upon the circumstances. These frameworks increase confidence that if an activity goes wrong, supervisors all the way up the chain will provide backing as long as the risk was reasonable, well-considered and understood, and the right leaders approved it. And while risk assessments are often not precise instruments of measurement – they reflect the quality of the data, the varied expertise of those conducting the assessments, and the subjective interpretation of the results – regularized and systematic risk assessments are nevertheless a key part of effective risk management and facilitate decision-making at all levels.

Creating these individual frameworks is commendable and leading-edge for government agencies, but more must be done holistically across the IC. Irregular and inconsistent risk assessments among IC elements will not provide the comfort and certainty needed to drive an IC-wide cultural shift to taking risk. At the same time, the unique nature of the IC, made up of 18 different elements, each with similar and overlapping, but not identical, missions, roles, authorities, threats, and vulnerabilities, does not lend itself to a one-size-fits-all approach.

For this reason, the IC needs a flexible but common strategic framework for considering risk that can apply across the community, with each element having the ability to tailor that framework to its own mission space. Such an approach is not unlike how the community is managed in many areas today – with overarching IC-wide policy that is locally interpreted and implemented to fit the specific needs of each IC element. When it comes to risk, creating an umbrella IC-wide framework will significantly improve the workforce’s ability to understand acceptable risks and tradeoffs, produce comprehensible and comparable risk determinations across the IC, and provide policymakers the ability to anticipate and mitigate failure and unintended escalation.

Critical Elements of a Risk Framework

A common IC AI risk framework should inform and help prioritize decisions from acquisition or development, to deployment, to performance in a consistent way across the IC. To start, the IC should create common AI risk management principles, like its existing principles of transparency and AI ethics, that include clear and consistent definitions, thresholds, and standards. These principles should drive a repeatable risk assessment process that each IC element can tailor to its individual needs, and should promote policy, governance, and technological approaches that are aligned to risk management.

The successful implementation of this risk framework requires a multi-disciplinary approach involving leaders from across the organization, experts from all relevant functional areas, and managers who can ensure vigilance in implementation. A whole-of-activity methodology that includes technologists, collectors, analysts, innovators, security officers, acquisition officers, lawyers and more, is critical to ensuring a full 360-degree understanding of the opportunities, issues, risks, and potential consequences associated with a particular action, and to enabling the best-informed decision.

Given the many players involved, each IC element must strengthen internal processes to manage the potential disconnects that can lead to unintended risks and to create a culture that instills in every officer a responsibility to proactively consider risk at each stage of the activity. Internal governance should include an interdisciplinary Risk Management Council (RMC) made up of senior leaders from across the organization. The RMC should establish clear and consistent thresholds for when a risk assessment is required, recommended, or not needed given that resource constraints likely will not allow all of the broad and diverse AI activities within organizations to be assessed. These thresholds should be consistent with the IC risk management principles so that as IC elements work together on projects across the community, officers have similar understandings and expectations.

The risk framework itself should provide a common taxonomy and process to:

  • Understand and identify potential failures, including the source, timeline, and range of effects.
  • Analyze failures and risks by identifying internal vulnerabilities or predisposing conditions that could increase the likelihood of adverse impact.
  • Evaluate the likelihood of failure, taking into consideration risks and vulnerabilities.
  • Assess the severity of the potential impact, to include potential harm to organizational operations, assets, individuals, other organizations, or the nation.
  • Consider whether the ultimate risk may be sufficiently mitigated or whether it should be transferred, avoided, or accepted.

AI-related risks may include, among other things, technology failure, biased data, adversarial attacks, supply chain compromises, human error, cost overruns, legal compliance challenges, or oversight issues.

An initial risk level is determined by weighing the likelihood of a failure against the severity of the potential impact. For example, is there a low, moderate, or high likelihood of supply chain compromise? Would such a compromise affect only one discrete system, or are there system-wide implications? These calculations result in an initial risk level. Then potential mitigation measures, such as additional policies, training, or security measures, are applied to lower the initial risk level to an adjusted risk level. For example, physically or logically segmenting an organization’s systems so that a compromise only touches one system would significantly decrease the risk level associated with that particular technology. The higher the likelihood of supply chain compromise, the lower the severity of its impact must be to offset the risk, and vice versa. Organizations should apply the “Swiss Cheese Model,” layering more than one preventative or mitigative action for a more effective defense. Organizations then must consider the adjusted risk level in relation to their tolerance for risk: how much risk (and potential consequence) is acceptable in pursuit of value? This requires defining the IC’s risk tolerance levels, within which IC elements may again define their own levels based upon their unique missions.
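The likelihood-versus-severity calculation described above can be sketched as a small worked example. The numeric scales, thresholds, and the one-notch-per-mitigation adjustment are illustrative assumptions for this sketch, not any official IC methodology:

```python
# Illustrative sketch of combining likelihood and severity into a risk
# level, then applying mitigations to reach an adjusted level. Scales
# and thresholds are assumptions, not an official framework.

LEVELS = ["low", "moderate", "high"]

def initial_risk(likelihood, severity):
    """Combine likelihood and severity (each 0=low, 1=moderate, 2=high)
    into an initial risk level."""
    score = likelihood + severity
    if score <= 1:
        return "low"
    if score <= 2:
        return "moderate"
    return "high"

def adjusted_risk(likelihood, severity, mitigations):
    """Each effective mitigation (e.g. system segmentation, added
    training) steps the risk down one level, floored at 'low' - a
    simple stand-in for the layered 'Swiss Cheese' defense."""
    level = LEVELS.index(initial_risk(likelihood, severity))
    return LEVELS[max(0, level - mitigations)]

# High likelihood of supply-chain compromise with high severity yields
# "high" initial risk; one mitigation lowers it to "moderate".
```

The decision step then compares the adjusted level against the organization's declared risk tolerance: accept, mitigate further, transfer, or avoid.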

Understanding and considering the risk of action is an important step forward for the IC, but it is not the last step. Sometimes overlooked in risk assessment practices is the consideration of the risk of inaction. To fully evaluate potential options, decision-makers must consider whether the overall risk of doing something is outweighed by the risks of not doing it. If the IC does not pursue particular AI capabilities, what is the opportunity cost of that inaction? Any final determination about whether to take action must consider whether declining to act would cause greater risk of significant harm. While the answer will not always be yes, in the case of AI and emerging technology, it is a very realistic possibility.

And, finally, a risk framework only works if people know about it. Broad communication – about the existence of the framework, how to apply it, and expectations for doing so – is vital. We cannot hold people accountable for appropriately managing risk if we do not clearly and consistently communicate and help people use the structure and mechanisms for doing so.

Buy-in To Enhance Confidence

An IC-wide AI risk framework will help IC officers understand risks and determine when and how to take advantage of innovative emerging technologies like AI, increasing comfort with uncertainty and risk-taking in the pursuit of new capabilities. Such a risk framework will have even greater impact if it is accepted – explicitly or implicitly – by the IC’s congressional overseers. The final article in this series will delve more deeply into needed changes to further improve the crucial relationship between the IC and its congressional overseers. It will also provide a link to a full report that provides more detail on each aspect of the series, including a draft IC AI Risk Framework.

Although Congress is not formally bound by such a framework, given the significant accountability measures that often flow from these overseers, a meeting of the minds between the IC and its congressional overseers is critical. Indeed, these overseers should have awareness of and an informal ability to provide feedback into the framework as it is being developed. This level of transparency and partnership would lead to at least two important benefits: first, increased confidence in the framework by all; and second, better insight into IC decision-making for IC overseers.

Ultimately, such a mutual understanding would encourage exactly what the IC needs to truly take advantage of next-generation technology like AI: a culture of experimentation, innovation, and creativity that sees reasonable risk and failure as necessary steps to game-changing outcomes.

Read also AI and the IC: The Tangled Web of Budget and Acquisition

Read also Artificial Intelligence in the IC: Culture is Critical

Read also AI and the IC: The Challenges Ahead

Read more expert-driven national security insights, perspective and analysis in The Cipher Brief

The post A Roadmap for AI in the IC appeared first on The Cipher Brief.

A cleaner for Israel’s defense minister has been accused of espionage after allegedly offering to place malware on his boss’ household computer for an Iran-linked hacking group.

In a statement on Thursday, the Shin Bet security service said that Omri Goren, a housekeeper for Defense Minister Benny Gantz, and a former bank robber according to Israeli media, corresponded with an unnamed person over social media shortly before his arrest. 

Goren reached out earlier this month to “a figure affiliated with Iran and offered to help him in different ways, in light of his access to the minister’s home,” the statement read, according to the Times of Israel.

It is understood that Goren offered to spy and place malware on Gantz’s computer on behalf of a hacking group, reportedly called ‘Black Shadow’ and associated with Iran, Tel Aviv’s perennial enemy. It is also said that he provided photos of Gantz’s residence to prove he had access. 

A Central District prosecutor filed espionage charges against Goren on Thursday. If convicted, the accused could face a sentence of between 10 and 15 years, according to the Times of Israel.

The 37-year-old Lod resident has previously served four prison sentences, the most recent of which was for four years. He was convicted of five crimes between 2002 and 2013; two of the convictions were for bank robbery.

The Shin Bet said they would review their processes for staff background checks “with the goal of limiting the possibility of cases like this repeating themselves in the future.”

Speaking on Kan public radio, Gal Wolf, the attorney representing Goren, suggested his client had intended to extract money from the Iranians without carrying out any spying.

The EU Commission has released draft legislation aimed at tackling the destruction of woodland by introducing import restrictions on products not certified as ‘deforestation-free’.

The draft proposal, which the commission hopes will become binding rules for all member states, seeks to limit the import of beef, cocoa, coffee, palm, soy, and wood if it is not proven “deforestation-free.”

Outlining the legislation, the EU commissioner for climate action policy, Virginijus Sinkevicius, called it a “ground-breaking” proposal that will help fight “illegal deforestation” and “deforestation driven by agricultural expansion.”

The bill comes after nations at the COP26 summit agreed to work to end deforestation by 2030. It would impose two criteria on imports, requiring items to have been produced in accordance with the origin country’s laws, and not on land that has been deforested or degraded since the start of 2021.

It is not clear when the rules would come into effect; legislative proposals by the commission have to be debated and considered by both the EU Parliament and the Council of the EU before they are passed. The implementation of measures could potentially impact the EU’s trade relations with countries like Brazil, where clearing of the Amazon rainforest hit a new record in October.

During the recent COP26 climate summit, 110 world leaders – whose countries contain around 85% of the world’s woodland – committed to ending and reversing deforestation by 2030, pledging around £14 billion ($18.84 billion) of public and private funds towards the goal.

Reuters has apologized for its poor choice of photo to illustrate a story about a monkey brain study that was deemed offensive and racist in China.

On Thursday, Reuters published a story titled “Monkey-brain study with link to China’s military roils top European university.” The report was about a Chinese professor studying how a monkey’s brain functions at extreme altitude.

The study was done with the help of Beijing’s People’s Liberation Army (PLA) with the aim of developing new drugs to prevent brain damage, Reuters said.

The news agency promoted the story on Twitter with a photo of smiling Chinese soldiers in an oxygen chamber.

The tweet prompted outrage in China, with people calling it racist on social media. Reuters responded on Friday night by deleting the original tweet because the photo of Chinese soldiers was unrelated to the story and “could have been read as offensive.”

“As soon as we became aware of our mistake, the tweet was deleted and corrected, and we apologize for the offense it caused,” Reuters said in a statement to the Global Times, China’s state-run newspaper.

It was not the first time the leading Western news agency had run into trouble in China. In July, the Chinese Embassy in Sri Lanka criticized Reuters for using a photo of Chinese weightlifter and Tokyo 2020 Olympics gold medalist Hou Zhihui that the country’s state media described as “ugly” and “disrespectful to the athlete.”

Clinics in the Austrian region of Salzburg have set up a special assessment team tasked with identifying Covid patients who have a higher chance of survival; the rest may soon have to take a back seat.

Amid a dramatic spike in Covid cases, medical personnel warn they may soon have to make the heart-wrenching choice of which patients get life-saving treatment and which will have to wait, Austrian media report. Intensive care units in the Salzburg region are packed, with the number of patients treated there setting a grim new record of 33 on Tuesday. The region ranks among Austria’s hardest-hit, logging more than 1,500 new infections per 100,000 residents in a week.

In an emotional plea for help to the local government, the head of Salzburg’s hospitals warned that clinics would soon likely be unable to guarantee the existing standards of medical treatment. A representative for the city clinics likened the situation to “running into a wall.”

The region’s governor, Wilfried Haslauer, announced on Tuesday that some of the Covid patients whose condition was no longer life-threatening would be transferred from hospitals to rehabilitation centers to make room for more serious cases.

In neighboring Upper Austria, the situation is no better, with the number of deaths in intensive care units surpassing figures seen in all the previous Covid waves. Speaking to Austria’s Der Standard paper on condition of anonymity, healthcare workers there said they had free beds “because the infected are dying.”

For the time being, the creation of a so-called ‘triage team’ in Salzburg hospitals is being described as a “precautionary measure.” The panel is made up of six people: one legal expert and five providers from various medical disciplines. If push comes to shove, they will be deciding which patients stand a chance and which treatments have little prospect of success.
