People around the world will need to get a jab against Covid-19 once a year, at least when it comes to the Pfizer vaccine, BioNTech’s CEO Ugur Sahin said in an interview on Sunday, as he praised the quality of its booster shot.
In an interview with Germany’s Bild newspaper on Sunday, Sahin said he considers the vaccine, co-developed by his company, to be “very effective.”
When asked whether people should be worried about the “breakthrough infections” – in which those vaccinated with the Pfizer-BioNTech vaccine still developed Covid-19 symptoms – he dismissed such concerns, saying that the jab offers a “90 percent protection” against cases that require intensive care in those aged over 60.
A “very high” level of protection against severe illness lasts for up to nine months, the BioNTech CEO maintained. He said this level starts decreasing “from the fourth month,” however. To maintain the protection, Sahin strongly pushed for booster shots, arguing that they would not just restore levels of antibodies but would potentially help “to break … chains of infection.”
He also encouraged doctors to be “as pragmatic as possible” when it comes to greenlighting vaccination and “not to send people home unvaccinated even though they could be vaccinated without any problems.”
In the future, people might need to get booster shots once a year, the BioNTech CEO believes. He said that he expects protection from a booster shot to “last longer” than the initial immunity one acquires after getting two doses of the vaccine.
“Subsequent … vaccinations may only be needed every year – just like [with] influenza,” he said. Currently, the German Federal Center for Health Education – an agency subordinated to the Health Ministry – recommends a booster shot six months after one gets the second dose of a vaccine. It also says that “booster vaccination makes sense after a minimum interval of about four months.”
Sahin’s interview comes days after it was revealed that Pfizer, BioNTech and Moderna are making a combined profit of $65,000 every minute – all thanks to their Covid-19 jabs. That is according to estimates made by the People’s Vaccine Alliance (PVA) – a coalition demanding wider access to vaccines.
The PVA estimated that the three companies are to earn a total of $34 billion in combined pre-tax profits this year alone, which roughly translates into more than $1,000 a second and $93.5 million a day.
PVA has slammed the three companies over their refusal to allow vaccine technology transfer despite receiving a combined $8 billion in public funding. Such a move could increase global supply and save millions of lives as well as drive down prices, the coalition said.
“Pfizer, BioNTech and Moderna have used their monopolies to prioritize the most profitable contracts with the richest governments, leaving low-income countries out in the cold,” said Maaza Seyoum of the African Alliance and People’s Vaccine Alliance Africa.
Scientists have used artificial intelligence to “predict” formulas for new designer drugs, with the stated goal of helping to improve their regulation. The AI generated formulas for nearly nine million potential new drugs.
Researchers with the University of British Columbia (UBC) used a deep neural net for the job, teaching it to make up chemical structures of potential new drugs. According to their study, released this week, the computer intelligence fared better at the task than the scientists had expected.
The research team used a database of known designer drugs – synthetic psychoactive substances – to train the AI on their structures. The market for designer drugs is ever-changing: manufacturers constantly tweak their formulas to circumvent restrictions and produce new “legal” substances, while cracking the structure of each new substance can take law enforcement agencies months, the researchers said.
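The study’s actual architecture is not detailed here, so the following is only a rough sketch of the general idea – learning from known structures in order to emit plausible new ones – with a tiny character-level Markov chain over SMILES strings standing in for the deep neural net. The training molecules and all names are illustrative assumptions, not the researchers’ code or data.

```python
# Illustrative stand-in for the study's approach: a character-level Markov
# chain trained on SMILES strings, then sampled to propose novel candidates.
# The real work used a deep neural net; the molecules below are ordinary
# compounds, not designer drugs.
import random
from collections import defaultdict

TRAINING_SMILES = ["CCO", "CCN(CC)CC", "CC(=O)OC1=CC=CC=C1C(=O)O", "CN1CCCC1"]

def train(smiles_list, order=2):
    model = defaultdict(list)
    for s in smiles_list:
        padded = "^" * order + s + "$"          # start/end markers
        for i in range(len(padded) - order):
            model[padded[i:i + order]].append(padded[i + order])
    return model

def sample(model, order=2, max_len=60):
    out = "^" * order
    while len(out) < max_len + order:
        nxt = random.choice(model[out[-order:]])
        if nxt == "$":                          # end-of-molecule marker
            break
        out += nxt
    return out[order:]

model = train(TRAINING_SMILES)
candidates = {sample(model) for _ in range(1000)}
novel = candidates - set(TRAINING_SMILES)       # many strings will be invalid
                                                # SMILES; a real pipeline would
                                                # filter them with a chemistry toolkit
```

A generative neural net plays the same role at far higher fidelity, which is what allowed the UBC model to enumerate millions of chemically plausible structures rather than mostly invalid strings.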
“The vast majority of these designer drugs have never been tested in humans and are completely unregulated. They are a major public-health concern to emergency departments across the world,” said one of the researchers, UBC medical student Dr. Michael Skinnider.
After its training, the AI was able to generate some 8.9 million potential designer drugs. The researchers then checked this output against a data set of some 196 new designer drugs that had emerged in the real world after the model was trained, and found that more than 90% of them had already been predicted by the model.
“The fact that we can predict what designer drugs are likely to emerge on the market before they actually appear is a bit like the 2002 sci-fi movie Minority Report, where foreknowledge about criminal activities about to take place helped significantly reduce crime in a future world,” said senior author Dr. David Wishart, a professor of computing science at the University of Alberta.
Identifying completely unknown substances remains a challenge, the research team noted, but they hope the model might help with that task too, since it was also able to predict which designer-drug formulas were more likely to be created and hit the market. The model “ranked the correct chemical structure of an unidentified designer drug among the top 10 candidates 72 percent of the time,” while factoring in a measurement of the drug’s mass, which is easy to obtain, bumped the accuracy to some 86%.
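To make that ranking step concrete, here is a minimal hedged sketch of how such an evaluation might be scored. The data layout, mass tolerance, and function names are assumptions for illustration, not the study’s pipeline.

```python
# Hypothetical scoring of the ranking step described above: candidates
# carry a model probability and an exact mass; an observed mass
# measurement narrows the pool before ranking.

def rank_candidates(candidates, observed_mass=None, tol_da=0.01):
    """Sort candidates by model probability, optionally dropping those
    whose exact mass is inconsistent with the observed measurement."""
    pool = candidates
    if observed_mass is not None:
        pool = [c for c in candidates if abs(c["mass"] - observed_mass) <= tol_da]
    return sorted(pool, key=lambda c: c["prob"], reverse=True)

def top_k_rate(test_cases, k=10, use_mass=False):
    """Fraction of test drugs whose true structure lands in the top-k
    candidates (72% without the mass, ~86% with it, per the study)."""
    hits = 0
    for case in test_cases:
        ranked = rank_candidates(
            case["candidates"],
            observed_mass=case["observed_mass"] if use_mass else None,
        )
        if case["true_smiles"] in [c["smiles"] for c in ranked[:k]]:
            hits += 1
    return hits / len(test_cases)
```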
“It was shocking to us that the model performed this well, because elucidating entire chemical structures from just an accurate mass measurement is generally thought to be an unsolvable problem,” Skinnider stated.
Criminals convicted of multiple rapes could face chemical castration in Pakistan after the country’s parliament approved new legislation aimed at tackling the rise in sexual offenses there.
The amendments to existing legislation, which provide for speedier convictions and harsher punishments for rapists, were voted in by MPs on Wednesday.
They introduce the death penalty or a life sentence for gang rape as well as chemical castration for repeat sex offenders, with the consent of the convict.
Chemical castration was described in the bill as a process through which “a person is rendered incapable of performing sexual intercourse for any period of his life, as may be determined by the court through administration of drugs.”
Special courts are to be established across the country to ensure that verdicts in sexual assault cases are delivered “expeditiously, preferably within four months.” If chemical castration is imposed as a punishment, it “shall be conducted through a notified medical board,” according to the new legislation.
Mushtaq Ahmed, a senator for the religious Jamaat-i-Islami party, had earlier denounced the bill as un-Islamic, arguing that Sharia law makes no mention of chemical castration and instead prescribes public hanging for rapists.
By resorting to drugs to reduce the libido of repeat sex offenders, Pakistan joins South Korea, Poland, the Czech Republic and some US states, where chemical castration has been introduced.
The measure was put forward a year ago by Pakistani President Arif Alvi in response to vast public outcry over a spike in rape cases across the country involving both women and children.
At the time, Amnesty International decried chemical castration as a “cruel, inhumane” treatment, advising Islamabad to focus instead on reforming its “flawed” justice system and ensuring justice for victims.
Local NGO War Against Rape told Reuters last year that less than 3% of sexual assault or rape prosecutions in Pakistan result in a conviction.
Australia’s government could be forced to spend tens of millions of dollars in payouts after receiving more than 10,000 compensation claims from people who suffered side effects and loss of income due to Covid-19 vaccines.
Under its no-fault indemnity scheme, eligible claimants can apply for compensation of between AU$5,000 (US$3,646) and AU$20,000 (US$14,585) to cover medical costs and lost wages resulting from hospitalization after getting the shot. The scheme’s online portal is scheduled to be launched next month.
Official figures suggest, however, that over 10,000 people have already indicated their intention to make a claim since registration opened on the health department’s website in September. If each claim was approved, the government could face a bill of at least AU$50 million (US$36.46 million).
Around 78,880 adverse events related to Covid-19 vaccination had been reported in Australia as of November 7, according to the Therapeutic Goods Administration, which regulates national health products. The majority of side effects were minor, including headaches, nausea, and arm soreness.
Only people who experienced a moderate to significant adverse reaction that resulted in a hospital stay of at least one night are eligible for coverage under the government’s scheme. Those seeking $20,000 or less have to provide proof their claims are vaccine-related – although there has been no information as yet on exactly what evidence would be acceptable.
“Adverse events, even though they happen to a tiny proportion of people, for the people it does impact it’s really quite devastating,” Clare Eves, the head of medical negligence at injury compensation firm Shine Lawyers, told the Sydney Morning Herald.
Among the adverse reactions covered are the blood clotting disorder “thrombosis with thrombocytopenia syndrome (TTS)” linked to the AstraZeneca vaccine and the “myocarditis and pericarditis” heart conditions associated with the Pfizer vaccine. Other reportedly accepted side effects are Guillain-Barré syndrome, a rare neurological condition, and immune thrombocytopenia (excessive bleeding due to low platelet levels).
Claims for over $20,000, including those for vaccine-related deaths, will be assessed by an independent panel of legal experts, with compensation paid on its recommendations. Nine people have reportedly died after an adverse reaction to one of the three vaccines in the country.
Eves told the Morning Herald that her firm was representing a number of litigants over the vaccine side effects, including several who are not eligible for the scheme.
Clinics in the Austrian region of Salzburg have set up a special assessment team tasked with identifying Covid patients who have a higher chance of survival; the rest may soon have to take a back seat.
Amid a dramatic spike in Covid cases, medical personnel warn they may soon have to make the heart-wrenching choice of which patients get life-saving treatment and which will have to wait, Austrian media report. Intensive care units in the Salzburg region are packed, with the number of patients treated there hitting a grim new record of 33 on Tuesday. The region ranks among Austria’s hardest-hit, logging more than 1,500 new infections per 100,000 residents in a week.

In an emotional plea for help to the local government, the head of Salzburg’s hospitals warned that clinics would soon likely be unable to guarantee existing standards of medical treatment. A representative for the city clinics likened the situation to “running into a wall.”
The region’s governor, Wilfried Haslauer, announced on Tuesday that some of the Covid patients whose condition was no longer life-threatening would be transferred from hospitals to rehabilitation centers to make room for more serious cases.
In neighboring Upper Austria, the situation is no better, with the number of deaths in intensive care units surpassing figures seen in all the previous Covid waves. Speaking to Austria’s Der Standard paper on condition of anonymity, healthcare workers there said they had free beds “because the infected are dying.”
For the time being, the creation of a so-called ‘triage team’ in Salzburg hospitals is being described as a “precautionary measure.” The panel is made up of six people: one legal expert and five providers from various medical disciplines. If push comes to shove, they will be deciding which patients stand a chance and which treatments have little prospect of success.
(Editor’s Note: This article was first published by our friends at Just Security and is the fourth in a series diving into the foundational barriers to the broad integration of AI in the IC – culture, budget, acquisition, risk, and oversight. This article considers a new IC approach to risk management.)
OPINION — I have written previously that the Intelligence Community (IC) must rapidly advance its artificial intelligence (AI) capabilities to keep pace with our nation’s adversaries and continue to provide policymakers with accurate, timely, and exquisite insights. The good news is that there is strong bipartisan support for doing so. The not-so-good news is that the IC is not well-postured to move quickly and take the risks required to continue to outpace China and other strategic competitors over the next decade.
In addition to the practical budget and acquisition hurdles facing the IC, there is a strong cultural resistance to taking risks when not absolutely necessary. This is understandable given the life-and-death nature of intelligence work and the U.S. government’s imperative to wisely execute national security funds and activities. However, some risks related to innovative and cutting-edge technologies like AI are in fact necessary, and the risk of inaction – the costs of not pursuing AI capabilities – is greater than the risk of action.
The Need for a Risk Framework
For each incredible new invention, there are hundreds of brilliant ideas that have failed. To entrepreneurs and innovators, “failure” is not a bad word. Rather, failed ideas are often critical steps in the learning process that ultimately lead to a successful product; without those prior failed attempts, that final product might never be created. As former President of India A.P.J. Abdul Kalam once said, “FAIL” should really stand for “First Attempt In Learning.”
The U.S. government, however, is not Silicon Valley; it does not consider failure a useful part of any process, especially when it comes to national security activities and taxpayer dollars. Indeed, no one in the U.S. government wants to incur additional costs, delays, or wasted taxpayer dollars. But there is rarely a distinction made within the government between big failures, which may have a lasting, devastating, and even life-threatening impact, and small failures, which may be mere stumbling blocks with acceptable levels of impact that result in helpful course corrections.
As a subcommittee report of the House Permanent Select Committee on Intelligence (HPSCI) notes, “[p]rogram failures are often met with harsh penalties and very public rebukes from Congress which often fails to appreciate that not all failures are the same. Especially with cutting-edge research in technologies … early failures are a near certainty …. In fact, failing fast and adapting quickly is a critical part of innovation.” There is a vital difference between an innovative project that fails and a failure to innovate. The former teaches us something we did not know before, while the latter is a national security risk.
Faced with congressional hearings, inspector general reports, performance evaluation downgrades, negative reputational effects, and even personal liability, IC officers are understandably risk-averse and prefer not to introduce any new risk. That is, of course, neither realistic nor the standard the IC meets today. The IC is constantly managing a multitude of operational risks – that its officers, sources, or methods will be exposed, that it will miss (or misinterpret) indications of an attack, or that it will otherwise fail to produce the intelligence policymakers need at the right time and place. Yet in the face of such serious risks, the IC proactively and aggressively pursues its mission. It recognizes that it must find effective ways to understand, mitigate, and make decisions around risk, and therefore it takes action to make sure potential ramifications are clear, appropriate, and accepted before any failure occurs. In short, the IC has long known that its operations cannot be paralyzed by a zero-risk tolerance that is neither desirable nor attainable. This recognition must also be applied to the ways in which the IC acquires, develops, and uses new technology.
This is particularly important in the context of AI. While AI has made amazing progress in recent years, the underlying technology, the algorithms and their application, are still evolving and the resulting capabilities, by design, will continue to learn and adapt. AI holds enormous promise to transform a variety of IC missions and tasks, but how and when these changes may occur is difficult to forecast and AI’s constant innovation will introduce uncertainty and mistakes. There will be unexpected breakthroughs, as well as failures in areas that initially seemed promising.
The IC must rethink its willingness to take risks in a field where change and failure are embraced as part of the key to future success. The IC must experiment and iterate over time, and shift from a culture that punishes even reasonable risk to one that embraces, mitigates, and owns it. This can only be done with a systematic, repeatable, and consistent approach to making risk-conscious decisions.
Today there is no cross-IC mechanism for thinking about risk, let alone for taking it. When considering new activities or approaches, each IC element manages risk through its own lens and mechanisms, if at all. Several individual IC elements have created internal risk assessment frameworks to help officers understand the risks of both action and inaction, and to navigate the decisions they are empowered to make depending upon the circumstances. These frameworks increase confidence that if an activity goes wrong, supervisors all the way up the chain will provide backing as long as the risk was reasonable, well-considered and understood, and the right leaders approved it. And while risk assessments are often not precise instruments of measurement – they reflect the quality of the data, the varied expertise of those conducting the assessments, and the subjective interpretation of the results – regularized and systematic risk assessments are nevertheless a key part of effective risk management and facilitate decision-making at all levels.
Creating these individual frameworks is commendable and leading-edge for government agencies, but more must be done holistically across the IC. Irregular and inconsistent risk assessments among IC elements will not provide the comfort and certainty needed to drive an IC-wide cultural shift toward taking risk. At the same time, the unique nature of the IC, composed of 18 different elements, each with similar and overlapping, but not identical, missions, roles, authorities, threats and vulnerabilities, does not lend itself to a one-size-fits-all approach.
For this reason, the IC needs a flexible but common strategic framework for considering risk that can apply across the community, with each element having the ability to tailor that framework to its own mission space. Such an approach is not unlike how the community is managed in many areas today – with overarching IC-wide policy that is locally interpreted and implemented to fit the specific needs of each IC element. When it comes to risk, creating an umbrella IC-wide framework will significantly improve the workforce’s ability to understand acceptable risks and tradeoffs, produce comprehensible and comparable risk determinations across the IC, and provide policymakers the ability to anticipate and mitigate failure and unintended escalation.
Critical Elements of a Risk Framework
A common IC AI risk framework should inform and help prioritize decisions from acquisition or development, to deployment, to performance in a consistent way across the IC. To start, the IC should create common AI risk management principles, like its existing principles of transparency and AI ethics, that include clear and consistent definitions, thresholds, and standards. These principles should drive a repeatable risk assessment process that each IC element can tailor to its individual needs, and should promote policy, governance, and technological approaches that are aligned to risk management.
The successful implementation of this risk framework requires a multi-disciplinary approach involving leaders from across the organization, experts from all relevant functional areas, and managers who can ensure vigilance in implementation. A whole-of-activity methodology that includes technologists, collectors, analysts, innovators, security officers, acquisition officers, lawyers, and more is critical to ensuring a full 360-degree understanding of the opportunities, issues, risks, and potential consequences associated with a particular action, and to enabling the best-informed decision.
Given the many players involved, each IC element must strengthen internal processes to manage the potential disconnects that can lead to unintended risks and to create a culture that instills in every officer a responsibility to proactively consider risk at each stage of the activity. Internal governance should include an interdisciplinary Risk Management Council (RMC) made up of senior leaders from across the organization. The RMC should establish clear and consistent thresholds for when a risk assessment is required, recommended, or not needed given that resource constraints likely will not allow all of the broad and diverse AI activities within organizations to be assessed. These thresholds should be consistent with the IC risk management principles so that as IC elements work together on projects across the community, officers have similar understandings and expectations.
The risk framework itself should provide a common taxonomy and process to:
Understand and identify potential failures, including the source, timeline, and range of effects.
Analyze failures and risks by identifying internal vulnerabilities or predisposing conditions that could increase the likelihood of adverse impact.
Evaluate the likelihood of failure, taking into consideration risks and vulnerabilities.
Assess the severity of the potential impact, to include potential harm to organizational operations, assets, individuals, other organizations, or the nation.
Consider whether the ultimate risk may be sufficiently mitigated or whether it should be transferred, avoided, or accepted.
AI-related risks may include, among other things, technology failure, biased data, adversarial attacks, supply chain compromises, human error, cost overruns, legal compliance challenges, or oversight issues.
An initial risk level is determined by weighing the likelihood of a failure against the severity of its potential impact. For example, is there a low, moderate, or high likelihood of supply chain compromise? Would such a compromise affect only one discrete system, or are there system-wide implications? Potential mitigation measures, such as additional policies, training, or security measures, are then applied to lower the initial risk level to an adjusted risk level. For example, physically or logically segmenting an organization’s systems so that a compromise touches only one system would significantly decrease the risk level associated with that particular technology. The higher the likelihood of supply chain compromise, the lower the severity of its impact must be to offset the risk, and vice versa. Organizations should apply the Swiss Cheese Model, layering more than one preventative or mitigative action for a more effective defense.

Organizations must then weigh the adjusted risk level against their tolerance for risk: how much risk (and potential consequence) is acceptable in pursuit of value? This requires defining the IC’s risk tolerance levels, within which IC elements may again define their own levels based upon their unique missions.
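As a hedged illustration of that calculus, the sketch below scores a single supply-chain scenario. The numeric scales, mitigation values, and tolerance threshold are invented for illustration and are not an official IC methodology.

```python
# Toy likelihood-times-severity risk scoring with layered mitigations.
# All scales and thresholds here are illustrative assumptions.
from enum import IntEnum

class Likelihood(IntEnum):
    LOW = 1
    MODERATE = 2
    HIGH = 3

class Severity(IntEnum):
    LOW = 1
    MODERATE = 2
    HIGH = 3

def initial_risk(likelihood, severity):
    # Initial risk level: likelihood of failure weighed against impact.
    return int(likelihood) * int(severity)

def adjusted_risk(initial, mitigations):
    # "Swiss Cheese" layering: each control trims the residual risk.
    residual = float(initial)
    for effectiveness in mitigations:       # e.g. 0.5 = halves the risk
        residual *= 1.0 - effectiveness
    return residual

# High likelihood of supply-chain compromise, but systems are segmented,
# so severity stays low and two layered controls apply.
risk = initial_risk(Likelihood.HIGH, Severity.LOW)   # 3
risk = adjusted_risk(risk, [0.5, 0.3])               # 1.05
TOLERANCE = 2.0                                      # element-specific threshold
decision = "accept" if risk <= TOLERANCE else "mitigate further, transfer, or avoid"
```

Under this toy scoring, a high-likelihood but well-segmented compromise lands within tolerance, the same trade-off the paragraph above describes.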
Understanding and considering the risk of action is an important step forward for the IC, but it is not the last step. Sometimes overlooked in risk assessment practices is the consideration of the risk of inaction. To fully evaluate potential options, decision-makers must consider whether the overall risk of doing something is outweighed by the risks of not doing it. If the IC does not pursue particular AI capabilities, what is the opportunity cost of that inaction? Any final determination about whether to take action must consider whether declining to act would cause greater risk of significant harm. While the answer will not always be yes, in the case of AI and emerging technology, it is a very realistic possibility.
And, finally, a risk framework only works if people know about it. Broad communication – about the existence of the framework, how to apply it, and expectations for doing so – is vital. We cannot hold people accountable for appropriately managing risk if we do not clearly and consistently communicate and help people use the structure and mechanisms for doing so.
Buy-in To Enhance Confidence
An IC-wide AI risk framework will help IC officers understand risks and determine when and how to take advantage of innovative emerging technologies like AI, increasing comfort with uncertainty and risk-taking in the pursuit of new capabilities. Such a risk framework will have even greater impact if it is accepted – explicitly or implicitly – by the IC’s congressional overseers. The final article in this series will delve more deeply into needed changes to further improve the crucial relationship between the IC and its congressional overseers. It will also provide a link to a full report that provides more detail on each aspect of the series, including a draft IC AI Risk Framework.
Although Congress is not formally bound by such a framework, given the significant accountability measures that often flow from these overseers, a meeting of the minds between the IC and its congressional overseers is critical. Indeed, these overseers should have awareness of and an informal ability to provide feedback into the framework as it is being developed. This level of transparency and partnership would lead to at least two important benefits: first, increased confidence in the framework by all; and second, better insight into IC decision-making for IC overseers.
Ultimately, such a mutual understanding would encourage exactly what the IC needs to truly take advantage of next-generation technology like AI: a culture of experimentation, innovation, and creativity that sees reasonable risk and failure as necessary steps to game-changing outcomes.