Key Takeaways from Beijing’s AI Conference

I attended the Beijing Academy of Artificial Intelligence conference last week and took away an understanding of the need for cross-cultural coordination: there is a pressing imperative for humans to take back responsibility in the global governance of AI.

Disclaimer: This summary was produced for the Berggruen Institute's China Center. Find out more about their work.

Beijing set the stage for the coming together of AI experts from around the world at this year’s Beijing Academy of Artificial Intelligence (BAAI) conference at the beginning of November. The cross-cultural communication channel opened by conferences such as this one was particularly pertinent in the panel on AI ethics. The guest speakers, drawn from different cultures, shared different ideas, but one commonality emerged: the requirement for humanity to take responsibility for the careful design of AI in order to attain the collective goals of harmony, transparency and diversity, themes repeated in national AI principles produced around the world. The Collingridge dilemma famously introduces a “pacing problem” whereby the rate of technological innovation increasingly outstrips the rate of required regulation. In light of this, the key takeaway from the panel is the onus on humans, whether as the academic community, governments, companies or the public, to take responsibility now, with the foresight to design a future where artificial intelligence is for the benefit of all, not the few.

Wendell Wallach’s opening address was characterised by his emphasis on cooperation and coordination, seeing our current climate as an inflection point of uncertainty in technological development. Navigating this uncertainty ethically requires the flexibility to overcome the technological determinism inherent in Collingridge’s dilemma. Such determinism seems more likely when we consider Liu Zhe’s point on the differential treatment of autonomy in an engineer’s definition versus a philosopher’s. In Wallach’s words, engineers, ethicists and legislators must all speak the same language to find a robust set of parameters for decision making and tools for guidance at each level of responsibility. Yet adaptive and agile governance remains an ideal, not a practical implementation. Greater precision is required in laying concrete foundations, what Liu Zhe calls functional morality, for the mechanics of global AI governance, in order to close the gap between principles and implementation.

Professor van den Hoven shared his concern about how we can design an ethical system that applies to what he calls the 21st century condition:

21st century condition: How can we collectively assist people unlike us, far away, and in a distant future, by refraining from using certain complex technologies now to benefit them later in accordance with standards unfamiliar to us?

Meeting this condition requires a reinvention of ‘old ethics’, a new manual with updated guidelines, which van den Hoven analogises to the different safety rules written for different types of ship: an oil tanker is unfit for tourists and a cruise ship unfit for oil. In constructing this new approach, responsibility, as the cornerstone of all legal and ethical frameworks, must remain central.

Trust and confidence are often conflated, but speakers on this panel called for their philosophical distinction. Van den Hoven insists that trust is a human system, and that ignorance of human responsibility allows for ‘agency laundering’: labelling an artificially intelligent agent as trustworthy abstracts away from who made or designed it. This moral abdication of human responsibility onto the shoulders of AI distributes risk inadequately. In black-box algorithms responsibility is blurred, and without responsibility there is no imperative for the designers of AI to consider ethical design at all. Instead, to mitigate plausible deniability of blame, designers need to ground machines in human choice and human responsibility, so as to maintain our knowledge, control and freedom.

Three examples illustrate this transition from epistemic enslavement to epistemic empowerment, where humans avoid becoming slaves to autonomous algorithmic decisions by retaining responsibility over moral risk. The first example is provided by van den Hoven, who criticises the automatic deployment of safety systems. When undeserved full trust is placed in a less-than-perfect algorithmic system, a human operator cannot disengage the system even when they consider its judgement to be erroneous. By shifting responsibility to the machine, the human must comply or bear an unacceptably high weight of moral risk. Instead, while algorithms can recommend a decision, the human operator should maintain autonomy, and therefore responsibility, over the final outcome. Zeng Yi provides two further examples. Deep neural nets are efficient image classifiers, yet as Zeng’s study shows, changing crucial pixels can confuse an algorithm into mistaking a turtle for a rifle (a minimal sketch of this kind of pixel-level attack follows below). To avoid such misclassification having real-world consequences, once again a human must take responsibility for moderating the machine’s judgement. Finally, the case against moral abdication in favour of responsibility retention is perhaps best exemplified by the development of brain-computer interfaces. If we do not carefully ascribe risk to different actors, would killing at the hand of a robotic arm be the responsibility of the human attached to the arm, the roboticist who designed the technology, or the robot itself? To avoid such situations, ethical and responsible governance is required for a human-AI ‘optimising symbiosis’.
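
For readers curious about the mechanics behind Zeng’s turtle-for-rifle example, the sketch below shows one well-known way such adversarial perturbations can be produced: the fast gradient sign method. This is a generic illustration under stated assumptions, not the method from Zeng’s study; the untrained model and random “image” are placeholders so the snippet runs self-contained, but the same few lines applied to a pretrained classifier and a real photo can flip genuine labels with changes invisible to a human.

```python
# Minimal sketch of an adversarial perturbation via the fast gradient
# sign method (FGSM). Illustrative only: the network is untrained and
# the "image" is random noise, chosen so the example needs no downloads.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=None)  # placeholder classifier
model.eval()

image = torch.rand(1, 3, 224, 224, requires_grad=True)  # stand-in photo
label = torch.tensor([0])  # hypothetical "true" class index

# Compute the gradient of the loss with respect to the input pixels
loss = F.cross_entropy(model(image), label)
loss.backward()

# Nudge every pixel slightly in the direction that increases the loss
epsilon = 0.01
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1)

with torch.no_grad():
    before = model(image).argmax(dim=1).item()
    after = model(adversarial).argmax(dim=1).item()
print(before, after)  # predictions often differ despite a tiny change
```

The unsettling point for governance is how small epsilon can be: the perturbed image looks identical to a human observer, which is precisely why a human moderating the machine’s judgement remains necessary.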

Beyond the specific recommendation of retaining responsibility in the redesign of ethical systems, the panel urged an understanding of cross-cultural attitudes to AI. Yolanda Lannquist sees considerable common ground between the 84 sets of AI ethics principles published around the world and the principles developed here in Beijing with the help of Zeng Yi. The national strategies share goals such as accessibility, accountability, safety, fairness and transparency. Such alignment of goals provides an a priori positive case for the scope of global coordination in AI governance. Yet a more nuanced understanding emerged from the panel. As Danit Gal summarised, theoretical similarities of shared AI ethics exist globally, but their application and cultural understanding happen locally. Allowing for and understanding different interpretations of these AI principles, and keeping a degree of vagueness in international mandates, retains flexibility for culturally specific technological development. An internationally cooperative AI strategy does not necessarily enforce identical national AI strategies. Three comments on cross-cultural differences were central to the panel: priorities, interpretation and partnership.

Japanese professor Arisa Ema highlights that priorities are likely to differ: a country will focus on the principles that align with its most pressing problems. In the Japanese case, the national strategy focuses on human-machine labour as complements or substitutes, against the country-specific backdrop of a super-aging society and an industry comprising many automatable jobs.

Gal warns of the differential interpretation of the same ethical code in different countries. In China, accessibility and privacy are placed in governmental hands, with data assimilated in government repositories where misuse or violations of privacy can be monitored. Data in the West is often stored in data silos owned by tech companies, so the concepts of accessibility and privacy are borne out in an entirely different context. Further, our notion of controllability depends inextricably on the cultural treatment of hierarchy: in South Korea a clear human-machine hierarchy moderates the design of controllable AI, but in Japan the human-machine hierarchy appears flatter, exemplified by the world’s first robot hotel in Tokyo. By inspecting the hotel’s workings, Professor Arisa Ema reveals that beneath the surface it is unclear whether humans or robots are treated as more valuable. The manager admitted that while robots could clean corridors, humans cleaned the rooms. Yet at reception, robots took full charge of pleasant, well-mannered guests, and humans were left to deal with the emotional labour of angry or difficult cases. In this shared structure of responsibility, it is unclear who is in control. Finally, what we consider a fair or prejudiced decision depends on societal structure and the experience of diverse populations within that society: a product developed in China may be fair for citizens of Han ethnicity but could display considerable bias in another country. These examples only begin to illustrate the complexity of cross-border coordination in AI strategy.

Finally, Zeng Yi considers how priorities and interpretations direct different levels of partnership between humans and AI, classifying these relationships into tools, competitors and partners. In the round-table discussion, in response to an audience question, the speakers considered the root of these different human-machine relationships, specifically whether the media and creative industries play a role in mediating or manipulating public opinion, a relevant consideration given the timely release of the most recent Terminator film. The contrast between the mutually destructive human-machine interaction portrayed in Western films such as this one and the mutually beneficial robot friendships of Japanese cinema introduces a discrepancy in public expectations of whether future technologies will help or harm. Crucially, Zeng’s trifurcation allows for these dynamic considerations: as artificial general intelligence or artificial superintelligence appears on the horizon, a tool-only relationship becomes less realistic. Understanding the current climate of cross-cultural attitudes to AI can inform our judgement on whether future humans will see AI as antagonising competitors or harmonising partners. This judgement of the future should remain central to designing our present-day governance principles because, as Zeng warns, by building different pathways into AI, we build in different risks.

The general consensus arising from this panel is the need for globally inclusive governance of AI. Taken together, the speakers’ key takeaways share the recommendation of retaining human responsibility in a reinvention of ethical systems, whilst maintaining the flexibility to apply these ethical principles of AI across borders and cultures. Yet the question remains of how such recommendations are implemented in practice. As Brian Tse observes, the international relations landscape will be substantially altered by unbalanced AI development paths, especially between global superpowers. How can we replace this international competition with a cooperative rivalry? Yolanda Lannquist proposes a multi-stakeholder approach, where collaboration between those in academia, nonprofits, government and the public is required to synthesise common norms. Arisa Ema hopes to build greater Chinese involvement at the global discussion table by drawing on the existing Japanese-German-French initiative, which fosters a dialogue between experts in the East and the West. Wendell Wallach proposes delegating obligations to International Governance Coordination Committees, whose purpose would be to devise ethical principles at an institutional level for the global community and then advise policy locally for national application. Wallach’s 1st International Congress for the Governance of AI (ICGAI), happening in Prague next year, holds promise in producing an agenda for successfully implementing ‘Agile and Comprehensive International Governance of AI’. Implementation challenges remain and uncertainties about the future cloud precise policy prescriptions, but in the meantime, as this panel demonstrates, conferences like this one are a step in the right direction to foster the diversity of discussion required for inclusive and mutually beneficial AI.