By Jacob Parker, ACSS Reporter
February 4, 2023
Artificial intelligence (AI) is progressing rapidly and delivering tremendous benefits. In healthcare, it draws insights from complex medical datasets in timeframes once unimaginable. In finance, insurance, and economics, its algorithms extrapolate from data and make predictions by automating tasks once the preserve of humans, at a pace no human analyst can match.
Applications of AI also have a dark side. In export-control terms, many AI components are dual-use: the same technology that serves commercial ends can be turned to nefarious ones, such as warfare.
AI’s fast microprocessors enable sophisticated machine-learning algorithms to process terabytes of sensitive intelligence data in mere seconds. Such algorithms can recognize an object by specific features such as size, shape, and color. In war, the same technology can help predict tactics and deliver greater precision.
Russian President Vladimir Putin latched onto AI’s use early. In 2017, the state-funded Russia Today network reported a speech made by Putin to students covered in the Western press: “Artificial intelligence is the future, not only for Russia, but for all humankind.… Whoever becomes the leader in this sphere will become the ruler of the world.”
AI Will Alter Capital and Labor Relationships
AI-powered technologies and applications will alter the relationship between capital and labor. As AI technology evolves, so will its ubiquity in the workplace. Workers who leverage AI applications to spur innovation, creativity, and efficiency will be in high demand; those whose jobs are highly repetitive and low in complexity will need to find alternative work.
Superiority in the information domain will derive primarily from AI’s ability to sift through terabytes of data. Stream-processing platforms such as IBM Streams and DataTorrent help firms unearth anomalies in terabytes of data, giving greater insight into consumer buying patterns and behaviors. The same computing capability could automate personalized disinformation posts across social media platforms, polarizing opinion and jeopardizing a country’s political stability.
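The anomaly-hunting described above can be illustrated with a minimal sketch. This toy z-score filter stands in for the far more sophisticated detection that streaming platforms perform at scale; the function name, threshold, and data are illustrative only and not drawn from any vendor’s API.

```python
import statistics

def flag_anomalies(values, threshold=2.5):
    """Flag values more than `threshold` standard deviations from the mean.

    A toy stand-in for streaming anomaly detection: real platforms apply
    far richer models, but the core idea is the same -- surface the data
    points that deviate sharply from the observed pattern.
    """
    mean = statistics.fmean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return []
    return [v for v in values if abs(v - mean) / stdev > threshold]

# Eight ordinary purchase amounts and one outlier.
purchases = [21, 19, 22, 20, 18, 23, 20, 19, 250]
print(flag_anomalies(purchases))  # [250]
```

In a consumer-analytics setting the flagged value might be a fraud signal; repurposed for influence operations, the same statistical machinery can identify which audiences or posts deviate from the norm and are worth targeting.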
AI fosters duality. While consumers marvel at long-range package delivery by drones flying above congested urban traffic, the same technology, in part or in whole, in the hands of state or non-state actors can direct a lethal explosive to the outskirts of Kyiv in Ukraine.
Who Regulates AI?
US export controls on AI technologies are enforced through the International Traffic in Arms Regulations (ITAR) and the Export Administration Regulations (EAR). The EAR, administered by the Commerce Department, governs the import and export of dual-use materials and technologies pertinent to US national security; ITAR is administered by the US Department of State alone. ITAR’s US Munitions List extends its jurisdiction to defense articles, defense services, and technical data.
Alongside ITAR and the EAR, the Committee on Foreign Investment in the United States (CFIUS) administers the Foreign Investment Risk Review Modernization Act of 2018 (FIRRMA). FIRRMA’s task is to oversee foreign investment in US-based companies exporting sensitive materials, technologies, and data.
For nations such as China, US-led AI export controls might not have the intended effect. William Alan Reinsch, senior advisor and Scholl chair in international business for the Center for Strategic and International Studies (CSIS), said the controls might even expedite China’s development of the technology.
For Reinsch, “assembling sets of chips individually less capable, but collectively can work together to produce the desired capabilities” undermines the efficacy of AI export controls. Although China is restricted from importing AI chips built on process nodes of 14 nanometers or smaller, scaling offers a workaround: China can approximate the capacity of a 14-nanometer chip by pairing domestically manufactured 28-nanometer chips.
Government and Business Response
If narrowly targeted with a modest objective, AI export controls can be highly effective in preserving Western technological preeminence. They can also restrict access by actors pursuing AI capabilities toward nefarious ends.
Such was the case when the US Commerce Department’s Bureau of Industry and Security (BIS) published an interim final rule regulating the export of specific AI software concerning geospatial imagery analysis (GIS). In the rule, BIS officials targeted specific GIS software applications for training deep convolutional neural networks (DCNN) automating GIS imagery and point cloud analysis.
A DCNN is a machine-learning method loosely modeled on the human brain. Point clouds are sets of data points plotted in 3-D space. Some of the targeted functionality includes:
- identifying objects with graphical user interfaces that enable the extraction of positive match samples of an object of interest;
- performing scale, color, and rotational calibration on positive image matches, thus reducing variation in pixels;
- training a deep convolutional neural network to detect an object of interest from samples; and
- identifying objects using the deep convolutional neural network through matching rotational patterns.
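The two data structures the rule names can be sketched briefly. The snippet below is an illustrative toy, not the controlled software: it builds a point cloud as an N×3 array and hand-rolls the convolution operation that a DCNN stacks in depth to learn object features; the random cloud, edge-detecting kernel, and toy image are all assumptions for demonstration.

```python
import numpy as np

# A point cloud: N points in 3-D space (x, y, z), here sampled at random
# rather than captured by a sensor.
rng = np.random.default_rng(0)
cloud = rng.uniform(-1.0, 1.0, size=(100, 3))
print(cloud.shape)  # (100, 3)

# The convolution at the heart of a DCNN: slide a small kernel across an
# image and record how strongly each patch matches the kernel's pattern.
def convolve2d(image, kernel):
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A toy 5x5 "image" with one bright column, probed by a vertical-edge kernel.
image = np.zeros((5, 5))
image[:, 2] = 1.0
kernel = np.array([[-1.0, 0.0, 1.0]] * 3)
response = convolve2d(image, kernel)
print(response)  # strong responses flank the bright column
```

A trained DCNN learns thousands of such kernels automatically from positive match samples, rather than using a hand-designed one as here; that learned extraction of object signatures is precisely what the BIS rule controls.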
Firms interested in designing GIS AI software for commercial ends, or receiving funding through investments transacted by foreign individuals or entities, will encounter Committee on Foreign Investment in the United States (CFIUS) regulations. Under the CFIUS framework, a filing may be required for foreign investments, controlling or non-controlling, into a US business that designs, develops, tests, produces, or fabricates a critical technology and then uses that technology in certain industries.
Digital Export Controls
The Export Control Reform Act (ECRA) of 2018 spurred BIS officials to identify mechanisms for regulating the export of emerging technologies critical to US national security. Microsoft and OpenAI were keen to design and develop a framework that would assist BIS.
The solution was a three-pronged approach. Software features such as identity verification and tagging enable real-time controls against prohibited end users and end-user activities. Hardware roots of trust synchronized with software-based solutions mandate authorization based on secure codes or data generated by the hardware. Tamper-resistant tools installed on hardware and software harden the system against unauthorized access and subversion.
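The hardware-anchored authorization prong can be illustrated with a minimal sketch. This is not the Microsoft/OpenAI design, whose details are not public in the article; it is a generic pattern in which software refuses a controlled operation unless a code derived from a hardware-held secret checks out. The class and function names are hypothetical.

```python
import hashlib
import hmac

class HardwareToken:
    """Toy stand-in for a hardware security element.

    It holds a secret that (in real hardware) never leaves the chip and
    answers challenges with an HMAC authorization code.
    """

    def __init__(self, secret: bytes):
        self._secret = secret

    def authorize(self, challenge: bytes) -> bytes:
        return hmac.new(self._secret, challenge, hashlib.sha256).digest()

def run_controlled_operation(token_response: bytes, expected: bytes) -> str:
    # Software-side gate: constant-time comparison of the hardware-generated
    # code against the code the compliance server expects.
    if hmac.compare_digest(token_response, expected):
        return "operation permitted"
    return "operation blocked"

secret = b"device-unique-secret"
token = HardwareToken(secret)
challenge = b"export-job-42"
expected = hmac.new(secret, challenge, hashlib.sha256).digest()

print(run_controlled_operation(token.authorize(challenge), expected))
print(run_controlled_operation(b"forged-code", expected))
```

The design choice matters: because the authorization code depends on a secret sealed in hardware, copying the software alone to an unauthorized machine is not enough to unlock the controlled capability.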
Such solutions strengthen export control systems by enabling the identification and restriction of suspicious users and activity that breach BIS end-use criteria.
The Artificial Intelligence Act (AIA) is a regulation proposed by the European Commission to introduce a common AI framework. The act aims to mitigate the risks associated with adopting AI technologies by establishing safe, trusted, and ethical AI outcomes that respect fundamental human rights laws and EU values.
To enable the EU’s ethical AI mission, the AIA aims to:
- Prohibit AI systems that present an unacceptable risk, such as governmental social scoring. Under such schemes, every action, interaction, and movement by an individual is calculated and graded, and individuals with higher scores receive greater opportunities.
- Apply strict obligations to AI systems that present a high risk. AI systems subjected to such regulations include critical infrastructures such as transport, which could put the life and health of citizens at risk; educational or vocational training, which may determine the access to education and professional course of someone’s life, such as the marking of exams; safety components of products, among them the AI application in robot-assisted surgery; and law enforcement that may interfere with people’s fundamental rights, such as evaluating the reliability of the evidence.
Technology drives relentlessly forward. Governments are poised to curtail access to critical dual-use AI software applications and hardware equipment. Firms can avoid violating export controls and sanctions by establishing comprehensive internal compliance systems that incorporate risk assessments, auditing, testing, and keeping abreast of compliance regulations and policies.
In an emerging industry with plenty of scope, firms exploiting AI need to stay on top of their compliance obligations. The rewards look very likely to be worth the effort.