Key Takeaways from Beijing’s AI Conference

I attended the Beijing Academy of Artificial Intelligence (BAAI) conference last week and came away with an appreciation of the need for cross-cultural coordination. There is a pressing imperative for humans to take back responsibility in the global governance of AI.

Disclaimer: This summary was produced for the Berggruen Institute China Institute. Find out more about their work.

Beijing set the stage for AI experts from around the world to come together at this year’s Beijing Academy of Artificial Intelligence (BAAI) conference at the beginning of November. The cross-cultural communication channel opened by conferences such as this one was particularly pertinent in the panel discussing AI ethics. The guest speakers came from different cultures and shared different ideas, but one commonality emerged: the requirement that humanity take responsibility for the careful design of AI to attain the collective goals of harmony, transparency and diversity, themes repeated in national AI principles produced around the world. The Collingridge dilemma famously introduces a “pacing problem” whereby the rate of technological innovation increasingly outstrips the rate of required regulation. In light of this, the key takeaway from the panel is the onus on humans, in their capacity as the academic community, governments, companies or the public, to take responsibility now, with the foresight to design a future where artificial intelligence is for the benefit of all, not the few.

Wallach’s opening address was characterised by his emphasis on cooperation and coordination; he sees our current climate as an inflection point of uncertainty in technological development. Navigating this uncertainty ethically requires flexibility to overcome the technological determinism inherent in Collingridge’s dilemma. Technological determinism seems more likely when we consider Liu Zhe’s point on the differential treatment of autonomy in an engineer’s definition versus a philosopher’s. In Wallach’s words, engineers, ethicists and legislators must all speak the same language to find a robust set of parameters for decision making and tools for guidance at each level of responsibility. Yet adaptive and agile governance remains an ideal, not a practical implementation. Greater precision is required in laying concrete foundations, what Liu Zhe calls functional morality, for the mechanics of global AI governance, in order to close the gap between principles and implementation.

Professor van den Hoven shared the concern on how we design an ethical system which can be applied to the 21st century condition:

21st century condition: How can we collectively assist people unlike us, far away, and in a distant future, by refraining from using certain complex technologies now to benefit them later in accordance with standards unfamiliar to us?

To meet this condition requires a reinvention of ‘old ethics’, a reconceptualised manual with updated guidelines, which van den Hoven analogises with the different safety rules written for different types of ship: an oil tanker is unfit for tourists and a cruise ship unfit for oil. In constructing our new approach, responsibility, as the cornerstone of all legal and ethical frameworks, must remain central.

Source: van den Hoven

Ethics 1.0 | Ethics 2.0
Actions | Omissions
Basic and non-mediated | Technologically mediated
Between natural individual persons | Joint/collective
Space-time contiguity | Future and remote
Causally standard context | Causally wayward chains
Negative duty not to harm | Positive duty to assist

Trust and confidence are often conflated, but speakers on this panel called for their philosophical distinction. Van den Hoven insists that trust is a human system, and that ignorance of human responsibility allows for ‘agency laundering’: demarcating an artificially intelligent agent as trustworthy abstracts away from who made or designed it. The moral abdication of human responsibility onto the shoulders of AI inadequately distributes risk. In black-box algorithms, responsibility is blurred, and without responsibility there is no imperative for the designers of AI to consider ethical design at all. Instead, to mitigate plausible deniability of blame, designers need to ground machines in human choice and human responsibility, maintaining our knowledge, control and freedom.

Three examples illustrate this transition from epistemic enslavement to epistemic empowerment, where humans avoid becoming slaves to algorithmic autonomous decisions by retaining responsibility over moral risk. The first example is provided by van den Hoven, who criticises the automatic deployment of safety systems. When undeserved full trust is placed in a less-than-perfect algorithmic system, a human operator cannot disengage the system even when they consider its judgement erroneous. By shifting responsibility to the machine, the human must comply or bear an unacceptably high weight of moral risk. Instead, while algorithms can recommend a decision, the human operator should maintain autonomy, and therefore responsibility, over the final outcome. Zeng Yi provides two further examples. Deep neural nets are efficient image classifiers, yet as Zeng’s study shows, changing crucial pixels can confuse an algorithm into mistaking a turtle for a rifle. To avoid this misclassification having real-world consequences, once again a human must take responsibility for moderating the machine’s judgement. Finally, the case against moral abdication and in favour of responsibility retention is perhaps best exemplified by the development of brain-computer interfaces. If we do not carefully ascribe risk to different actors, would killing at the hand of a robotic arm be the responsibility of the human attached to the arm, the roboticist who designed the technology, or the robot itself? To avoid such situations, ethical and responsible governance is required for an ‘optimising symbiosis’ between humans and AI.

Beyond the specific recommendations of retaining responsibility in the redesign of ethical systems, the panel urged an understanding of cross-cultural attitudes to AI. Yolanda Lannquist sees considerable common ground across the 84 sets of AI ethics principles produced around the world, including the one developed with the help of Zeng Yi here in Beijing. The national strategies share goals such as accessibility, accountability, safety, fairness and transparency. Such alignment of goals provides an a priori positive case for the scope of global coordination in AI governance. Yet a more nuanced understanding emerged from the panel. As Danit Gal summarised, theoretical similarities of shared AI ethics exist globally, but their application and cultural understanding happen locally. Allowing for and understanding different interpretations of these AI principles, and keeping a degree of vagueness in international mandates, retains flexibility for culturally-specific technological development. An internationally cooperative AI strategy does not necessarily enforce identical national AI strategies. Three comments on cross-cultural differences were central to the panel: priorities, interpretation and partnership.

Japanese professor Arisa Ema highlights that priorities are likely to differ: a country will focus on the principles which align with its most pressing problems. In the Japanese case, the national strategy focuses on human-machine labour as complements or substitutes, against the country-specific backdrop of a super-aging society and an industry comprised of many automatable jobs.

Gal warns of the differential interpretation of the same ethical code in different countries. In China, accessibility and privacy are placed in governmental hands, with data assimilated in government repositories where misuse or violations of privacy can be monitored. Data in the West is often stored in data silos owned by tech companies, so the concepts of accessibility and privacy are borne out in an entirely different context. Further, our notion of controllability depends inextricably on the cultural treatment of hierarchy: in South Korea a clear human-machine hierarchy moderates the design of controllable AI, but in Japan the human-machine hierarchy appears flatter in structure, exemplified by the world’s first robot hotel in Japan. By inspecting the hotel’s workings, Professor Arisa Ema reveals that beneath the surface it is unclear whether humans or robots are treated as more valuable. The manager admitted that while robots could clean corridors, humans cleaned the rooms. Yet at reception, robots took full charge of pleasant, well-mannered guests, and humans were left to deal with the emotional labour of angry or difficult cases. In this shared structure of responsibility, it is unclear who is in control. Finally, what we consider a fair or prejudiced decision depends on societal structure and the experience of diverse populations within that society. A product developed in China may be fair for citizens of Han ethnicity but could display considerable bias in another country. These examples only begin to illustrate the complexity of cross-border coordination in AI strategy.

Finally, Zeng Yi considers how priorities and interpretations direct different levels of partnership between humans and AI. He triangulates these relationships into tools, competitors and partners. In the round-table discussion, in response to an audience question, the speakers considered the root of these different human-machine relationships, specifically whether the media and creative industries play a role in mediating or manipulating public opinion, a relevant consideration given the timely release of the most recent Terminator film. The contrasting portrayal of mutually-destructive human-machine interaction in Western films such as this one, versus the mutually-beneficial friendship robots of Japanese cinema, introduces a discrepancy in public expectations as to whether future technologies will help or harm. Crucially, Zeng’s trifurcation allows for these dynamic considerations: as artificial general intelligence or artificial superintelligence appears on the horizon, a tool-only relationship becomes less realistic. Understanding the current climate of cross-cultural attitudes to AI can inform our judgement on whether future humans will see AI as antagonising competitors or harmonising partners. This judgement of the future should remain central to designing our present-day governance principles because, as Zeng warns, by building different pathways into AI, we build in different risks.

Source: Zeng Yi

The general consensus arising from this panel is the need for globally inclusive governance of AI. The key takeaways from each speaker share the recommendation of retaining human responsibility in a reinvention of ethical systems, whilst maintaining the flexibility to apply these ethical principles of AI across borders and cultures. Yet the question remains of how such recommendations are implemented in practice. As Brian Tse observes, the international relations landscape will be substantially altered by unbalanced AI development paths, especially between global superpowers. How can we replace this international competition with a cooperative rivalry? Yolanda Lannquist proposes a multi-stakeholder approach, where collaboration between those in academia, nonprofits, government and the public is required to synthesise common norms. Arisa Ema hopes to build greater Chinese involvement at the global discussion table by drawing on the existing Japan-Germany-France initiative which fosters a dialogue between experts in the East and the West. Wendell Wallach proposes the delegation of obligation to International Governance Coordination Committees, whose purpose would be to devise ethical principles at an institutional level for the global community and then advise policy locally for national application. Wallach’s 1st International Congress for the Governance of AI (ICGAI), happening in Prague next year, holds promise in producing an agenda for successfully implementing ‘Agile and Comprehensive International Governance of AI’. Implementation challenges remain and uncertainties about the future cloud precise policy prescriptions, but in the meantime, as this panel demonstrates, conferences like these are a step in the right direction to foster the diversity of discussion required for inclusive and mutually-beneficial AI.

 

What Exactly is High-Level Machine Intelligence?

High-Level Machine Intelligence (HLMI) has been defined as “achieved when unaided machines can accomplish every task better and more cheaply than human workers” (Grace et al., 2018), but is this definitional approach appropriate?

I don’t think so. The definition requires a threefold change: specifying human-level versus human-like intelligence, specifying a set of non-infinite tasks and specifying our judgement of ‘better’.

‘Human-like’ versus ‘Human-level’

At present, Professor Boden (2015) describes artificial intelligence as a “bag of highly specialised tricks”, seeing the advent of true ‘human-level’ intelligence as a distant threat. Present AI is sometimes labelled an ‘idiot savant’: drastically outperforming humans in specific tasks, but with ‘performance’ that cannot be extended to the many other tasks humans complete on a day-to-day basis with so little cognitive effort that a child, or even an infant, could master them. Diversity is thus key: human-level intelligence requires that an agent can learn models and skills and apply them to arbitrary new tasks and goals. For a machine to beat a human at Go requires a neural network to be trained on hundreds of millions of games. While it can be successful in this domain-specific scenario, consider now changing the objective function: to purposeful losing, to being the last player to pick up a stone, or even to beating an opponent by only enough not to embarrass them. While a human player could adapt to these new situations quickly, an AI model would require substantial retraining and reconfiguring. A crucial aspect of human intelligence is thus transfer learning. One change to the question must state that a single AI agent trained on one set of data, however large that may be, can adapt to different objective functions and different constraints. Otherwise AI remains “a bag of highly specialised tricks”, where each separately trained model can excel at just one task but not across the board; yet diversity is a key component of human-level intelligence. Humans also have the ability to learn their weaknesses and improve in the future. It may be required, then, that a machine can not only perform the task better than a human worker, but continually improve its own performance, for example by rewriting its own Python scripts, analogous to the human process of self-development.
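To make the retraining point concrete, here is a minimal sketch (the payoffs, noise and hyperparameters are all hypothetical): a simple value-learning agent trained to win a toy one-state game has no mechanism for ‘purposeful losing’; flipping the objective makes its learned values worthless and forces retraining from scratch, whereas a human adapts to the new rule instantly.

```python
# A toy illustration, not any specific system: epsilon-greedy value learning
# on a one-state 'game' with four moves and hypothetical payoffs.
import numpy as np

rng = np.random.default_rng(0)
true_payoff = np.array([0.1, 0.9, 0.3, 0.5])  # hypothetical payoff per move

def train(objective, steps=5000, eps=0.1, lr=0.1):
    """Learn action values under a given objective (a reward transformation)."""
    q = np.zeros(len(true_payoff))
    for _ in range(steps):
        a = rng.integers(len(q)) if rng.random() < eps else int(np.argmax(q))
        reward = objective(true_payoff[a] + rng.normal(0, 0.05))
        q[a] += lr * (reward - q[a])
    return q

print(np.argmax(train(lambda r: r)))   # 1: the agent learns to 'win'
# Flip the objective to purposeful losing: the old value table is useless,
# and the agent must be fully retrained on the new objective.
print(np.argmax(train(lambda r: -r)))  # 0: only after retraining from scratch
```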

Yudkowsky (2008) considers the nexus of artificial intelligence and existential risk arising from convergent instrumental goals, avoiding the trap of anthropomorphising AI. Human-level ≠ human-like. Lake et al. (2015) are advocates of ‘building machines that learn and think like people’, considering the incorporation of intuitive physics and psychology. This richer starting point would allow technologies such as neural networks to converge to human performance with fewer training examples, making not only human-level but human-like decisions. Grace et al. (2018) subscribe somewhat to this view by asking experts when a machine will beat the best human Go players with a human-like quantity of training examples, in the tens of thousands rather than hundreds of millions of games. While using the human brain as our best example of developed intelligence can provide fruitful research ventures, requiring human-level intelligence to be human-like is overly restrictive. If we abide by truly human-like learning then, as Crick (1989) famously criticised, the commonly used technique of backpropagation requires that information be transmitted backwards along the axon, an impossible process in the reality of neuronal function. A less puritanical criticism asks why machines must think like humans if they achieve the same outcome. Whilst the paper requires an artificial Go player to have the same experience as a human player, it does not specify that an artificial musician must have listened to the same number of songs as Taylor Swift to imitate her writing style. Nor do we require a neural network to classify images using a lens, a cornea and an optic nerve. Admittedly, the black box of machine learning algorithms is an area requiring study, but our definition of human-level intelligence must be clear on whether this is required to be human-like intelligence and whether we want to enforce this stricture. Despite its sophistication, human intelligence relies on behavioural biases and heuristics which can give rise to irrational, or racially and sexually discriminatory, actions, raising the philosophical question of what human-level intelligence really means and whether mimicking its imperfections is a desirable development path to take.
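To make Crick’s objection concrete, here is a minimal numpy sketch of backpropagation in a two-layer network (the data and architecture are toy assumptions): the update step sends error signals backwards through the very weights used in the forward pass, a flow of information real axons cannot reverse.

```python
# Minimal two-layer network trained by backpropagation on toy data.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 3))                   # toy inputs
y = (X.sum(axis=1, keepdims=True) > 0) * 1.0   # toy binary labels

W1 = rng.normal(size=(3, 8)) * 0.1
W2 = rng.normal(size=(8, 1)) * 0.1
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(500):
    # Forward pass: information flows input -> hidden -> output.
    h = np.tanh(X @ W1)
    p = sigmoid(h @ W2)
    # Backward pass: the error travels output -> hidden -> input through
    # the same weights, the step Crick called biologically implausible.
    d_out = p - y                        # gradient of cross-entropy wrt logits
    d_hid = (d_out @ W2.T) * (1 - h**2)
    W2 -= 0.1 * (h.T @ d_out) / len(X)
    W1 -= 0.1 * (X.T @ d_hid) / len(X)

print(f"training accuracy: {((p > 0.5) == y).mean():.2f}")
```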

Non-infinite Task Set

One can easily come up with a set of tasks in which we do not require AI to perform better. To list a few: do we require an AI to dance better, to go to the toilet better, or to offer companionship for the elderly better? As Kai-Fu Lee, a leading Chinese AI specialist and AI optimist, notes, some tasks, especially those requiring empathy and compassion, are profoundly human and can stay that way. Reaching human-level intelligence need not be limited by developing human-level emotional capacity if such capabilities are not required in the tasks AI must perform. In fact, in the literature on the future of capitalism, advocates of AI hope for a digital socialism where humans maintain their comparative advantage over machines in a subset of non-automated tasks requiring exactly the aspects of human nature which cannot easily be coded or trained into a machine’s learning process. We thus require a subset of tasks, perhaps 95%, leaving the remainder for human workers.

Towards a Better Definition of ‘Better’

Being ‘better’ at a task is measurable in a number of different ways. AI may reach human-level or even superhuman performance at certain tasks but retain subpar performance in other components. The cost component has been specified here, but vagueness in detail creates vagueness in prediction. If an AI can do what a human can do for 1,000 times the hourly wage, this is clearly suboptimal. However, stating an AI must be ‘cheaper’ than one human worker is also naive if a machine has a higher than 1:1 replacement ratio; this can be overcome by referring to human workers in the plural. Yet vagueness remains in the term ‘better’, introducing scope for different interpretations of this survey question. Does better mean quicker, more accurate, or making more efficient use of resources? To illustrate, consider the following personal example. After being in a road accident last week and suffering a few broken bones, I have lost the use of my arm, and my capability to type this blog post is severely limited. Instead I have used voice dictation software for speech-to-text recognition. On one hand, this technology is faster, cheaper and less demanding of external resources than dictating to a fellow human. On the other, it cannot offer me grammatical, topical or semantic advice, nor does it recognise less frequently used words such as ‘Bayesian’, ‘a priori’ or ‘Nick Bostrom’. Equally, unlike a human, it does not understand whether I am making a true statement, so cannot warn me to validate claims or delete certain sentences. If weighing up whether this technology is ‘better’ than human help, on which metrics should we put more weight? Critically, our parameterisation of the definition depends on our primary concern, so should be treated as domain specific.
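As a sketch of how domain-specific the judgement is (every metric name, score and weight below is a hypothetical illustration), the same dictation software comes out ‘better’ or ‘worse’ than human help depending purely on the weighting:

```python
# Hypothetical scores in [0, 1] on three metrics; the weights encode the domain.
def better(ai, human, weights):
    """True if the AI's weighted score beats the human's."""
    score = lambda s: sum(weights[m] * s[m] for m in weights)
    return score(ai) > score(human)

dictation_ai = {"speed": 0.9, "cost": 0.95, "semantic_advice": 0.1}
human_helper = {"speed": 0.5, "cost": 0.30, "semantic_advice": 0.9}

# Weighting speed and cost, the software wins...
print(better(dictation_ai, human_helper,
             {"speed": 0.5, "cost": 0.5, "semantic_advice": 0.0}))  # True
# ...weighting semantic help, it loses: 'better' is domain specific.
print(better(dictation_ai, human_helper,
             {"speed": 0.1, "cost": 0.1, "semantic_advice": 0.8}))  # False
```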

Considering all of these points, I would amend the definition as follows. To better confine interpretations of the requirements, I offer one example of domain bifurcation:

 

  • Labour market domain: High-level machine intelligence (HLMI) is a machine which can perform all the composite tasks comprising 95 percent of human occupations currently listed in the Census Occupation Tier 2 Classification, at a speed and accuracy equivalent to the median worker in each occupation, and at the same cost as the human workers it replaces.

 

  • Security domain: High-level machine intelligence (HLMI) is a machine which can perform all the composite tasks comprising 5 percent of human occupations in AI research, cybersecurity, government intelligence units and military strategy, at a speed and accuracy equivalent to the median worker in each occupation, and at the same cost as the human workers it replaces.

 

While these alternative definitions do mitigate some problems of vagueness and variability of interpretation, they do not remove them entirely. The unknown nature of undeveloped technologies advancing on an uncertain timeline inevitably renders the question of when AI will reach high-level intelligence definitionally ambiguous to some degree.

Diversity and Dating: Does Online Dating Help or Hinder Diversity of Matches?

On one hand, online dating websites connect users simultaneously to hundreds or thousands of profiles, promising enormous expansion in partner diversity. On the other, filtering and algorithmic matchmaking introduces risk for the pool of partners to be less diverse by ethnicity, by personality, or by any other (potentially irrelevant) input to the black-box models. So which is it? More diversity or more similarity?

The Case for MORE diversity

Matchmaking has existed for millennia, but in the 21st century the search for love has gone online and, for some individuals, is now mediated by sophisticated mathematical algorithms. Under traditional forms of matchmaking, third parties – religious leaders, parents and other connections within a closed-form social network – recommended romantic partners from a narrow pool of individuals (Finkel et al., 2012). Selection from this restricted ‘field of eligibles’ (Kerckhoff, 1964) endorsed endogamy, where partners from the same social group (ethnicity, religion, or culture) come together, making exogamy, the act of marrying a diverse partner, a rarity. The field-of-eligibles hypothesis is upheld as an explanation for spousal correlations, in contrast to the possibility of a similarity preference in partner attributes (Berscheid & Reis, 1998). Assortative matching, where potential matches share educational or economic circles (Becker, 1973), occurs even in populations with no individual preference for homogeneity (Burley, 1983).

The advent of online dating has changed the fundamentals of searching for prospective partners, altering both the romantic acquaintance process and the compatibility matching process. Online dating is not only penetrating more of society, with over 41% of the world active (Paisley 2018), but is also more socially accepted (Whitty and Carr 2006). The legacy of such sites is beginning to emerge, with 20% of current committed romantic relationships beginning online (Xie et al., 2014). In particular, the pervasiveness of online dating has expanded users’ access to potential romantic partners whom they would otherwise be unlikely to encounter. The Internet permits not only a death of distance geographically, with the ability to communicate without face-to-face encounters, but also a death of distance in social network interactions: the pool is no longer defined by community or culture. Applying Feld’s focus theory, online dating is a hyper-focus platform, compared to traditional foci characterised by socially homogenous groups such as religious congregations, workplaces or nightclubs (Feld 1982; Schmitz 2014). Online interactions can go beyond an existing network, introducing the potential for social discovery amid high socio-structural heterogeneity. A priori, the case for greater diversity in online dating is clear.

The Case for MORE Similarity

With great choice comes great choice fatigue: self-selection and intuition are unfeasible with such a large array of potential partners. In China, out of 200 million single people, over a quarter (54 million) used online dating services in 2016. The ability to browse and potentially match with over 50 million people is a daunting prospect. Instead, dating websites offer new choice infrastructures in filtering and recommending, but both restrict the diversity of viewed profiles.

Filter theory, proposed by Kerckhoff and Davis (1962), describes homogeny in partner selection, where people interact with partners filtered by the similarity of socio-demographic factors. Online search introduces greater scope for individuals to filter over selective and defined criteria, enforcing an unprecedented degree of parametrisation of potential partners. Precise attribute selection tools allow dating site users to eliminate huge groups of the population who don’t meet specific desires. However, by creating extensive checklists they may be closing their minds to possibilities, especially given the compression of compatibility criteria to modular attributes such as education or income, excluding important offline forms of self-presentation such as facial expressions or humour (Schmitz 2014). Some studies exist on how this filtering mechanism operationally affects successful matches, with Rudder (2014) considering how the probabilities of messages or likes are derived from different partner attributes.

Technology has expanded the inputs to matchmaking, where not only hundreds of profile traits but also second-by-second user interaction behaviours can be used in mathematical algorithms to recommend potential partners. These digital traces introduce new complexity into the design of recommender systems. Content-based and collaborative filtering algorithms have wide commercial applications, working under the assumption ‘if you like person x, you will like person y’. However, unlike purchase or movie recommendations, successful matches critically require a ‘double coincidence of wants’ (Hitsch et al., 2010). Reciprocal recommenders such as RECON (Pizzato et al., 2011) offer higher rates of conversion between recommendations, initiations and matches. Distance scoring systems attempt to minimise the Euclidean distance between partners across attributes (Hu et al., 2019). What these systems have in common is a structural design based on likeness. As Finkel et al. (2012) summarise, “[s]imilarity is a potent principle in online dating.” The recommended set of partners may indeed be even more homogenous than the traditional field of eligibles. Consider a dating website which demands users fill out a personality test measuring attributes such as morality, extraversion or self-confidence. A matching algorithm could purposefully not recommend partners who misalign on these measures, despite their potential compatibility in the real world. In fact, self-expansion theory argues people gain confidence from acceptance by dissimilar partners (Aron & Aron, 1997; Aron et al., 2006). As such, although users believe they can access a wider field, the irony is that they are actually accessing users much more similar to themselves. The potential for recommender systems to diminish diversity has considerable consequences. Across the social science literature, evidence suggests contact with dissimilar others and the expansion of information flows beyond a close social network can broaden perspective and deepen empathy for other racial, religious or socioeconomic groups (Wright et al., 1997). Resnick et al. (2013) consider recommender systems as creators of content bubbles, responsible for the entrenchment of inaccurate or polarised beliefs. If recommender systems are trained on implicit feedback data such as clicks or messages, the training data is pre-biased by the filters already applied by the user, reducing diversity and impeding social discovery yet further.
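A minimal sketch of the distance-scoring design described above (the attribute names and profile values are hypothetical; production systems use far richer features) shows the similarity bias directly: the nearest profile always ranks first.

```python
# Rank candidate partners by Euclidean distance over attribute vectors.
import numpy as np

# Columns: extraversion, morality, self-confidence (hypothetical, scaled 0-1).
profiles = {
    "user":   np.array([0.8, 0.6, 0.7]),
    "cand_a": np.array([0.8, 0.6, 0.6]),  # nearly identical to the user
    "cand_b": np.array([0.2, 0.9, 0.3]),  # dissimilar, perhaps compatible offline
}

def recommend(user, candidates):
    """Smallest distance first: likeness drives the ranking."""
    return sorted(candidates,
                  key=lambda c: np.linalg.norm(profiles[user] - profiles[c]))

print(recommend("user", ["cand_b", "cand_a"]))  # ['cand_a', 'cand_b']
```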

Which have you experienced?

 

How Human is Too Human? The Domain Specificity of Machine Decision Making

How could our understanding of AI be bettered by better understanding ourselves?

The development of artificial intelligence has started us on a path to potentially creating a new version of the human mind in digital form. Bostrom (2014) introduces the danger of what happens when our computers acquire our species’ long-held privilege of superior intelligence. At present, Professor Boden (2015) describes artificial intelligence as a “bag of highly specialised tricks”, seeing the advent of true ‘human-level’ intelligence as a distant threat. However, it is precisely generalised AI, able to make broad, domain-spanning decisions alone, which comprises anthropogenic risk (Shanahan 2015), and anthropogenic risk itself poses a more pressing problem than natural extinction risk (Ord 2015). With the possibility of human-level artificial intelligence, the imperative to understand the way in which humans make decisions is stronger than ever. The tenets of decision-making bound the artificial minds which we create, imposing restrictions or freedoms on the way these machines will make decisions of their own. Machines optimise for the best solution against a set of constraints, and artificial minds can be faster and more reliable at solving problems than even the most skilled human optimiser. In many situations, the speed of finding solutions to complex problems has driven today’s rates of technology adoption. Yet the fastest, highest-value, rational route to a solution may not be consistent with societal preference; the ruthlessly efficient optimiser does not align with harmonious co-existence. In some domains, ‘a mind like ours’ (Shanahan 2015), capable of nuanced moral judgement, is preferable. Yudkowsky (2008) considers the nexus of artificial intelligence and existential risk arising from convergent instrumental goals, avoiding the trap of anthropomorphising AI. Human-level ≠ human-like. Such inconsistencies between the way humans and machines behave across different domains create a tension between the decisions made by increasingly autonomous machines and the greater interest of human society at large. The need to identify the grey area of decision-making thus becomes apparent.

There is a need for research asking: when is a moral judgement preferable and when is a value judgement required? In which domains do we prefer predictability and perfect rationality, and in which domains should we fall back on our humanness and subjectivity? I propose a method building on Mérő’s (1990) contention that “[i]f a man’s thinking mechanisms were understood better, then somehow it would also become possible to model and simulate them by artificial means” (p.3). How could our understanding of AI be bettered by better understanding ourselves? Perhaps one starting point lies in conducting experiments testing the transitivity of human preferences across varied moral and value domains, to expose where caution must be exercised in the abdication of such decisions to artificial minds.

The Heuristical Human Mind

Starting from the seminal Kahneman and Tversky (1979) paper establishing that “losses loom larger than gains”, a wealth of evidence has been experimentally produced and empirically tested highlighting the inconsistencies of our decision-making processes. In a boundedly rational domain, the human mind is only able to optimise over a restricted set of information, using heuristics and biases to ease the decision burden. Rarely do humans make perfectly rational value or time-consistent judgements. We do not consider the price of every coffee mug in every shop worldwide, simply buying one that is “good enough”. Incorporating such deviant preferences into algorithms has important applications for marketing (Evans et al., 2016). In some decision domains, we are able to reach a decision in the fastest or most appropriate way exactly by falling back on these instincts. Such ‘gut instinct’ becomes particularly advantageous when time, informational or financial constraints bind more tightly.
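The asymmetry can be written down directly. Here is a minimal sketch of the prospect-theory value function, using the parameter estimates reported in Tversky and Kahneman’s later work (α = β = 0.88, λ = 2.25); the exact numbers matter less than the shape:

```python
# Prospect-theory value of a gain or loss x relative to a reference point of 0.
def prospect_value(x, alpha=0.88, beta=0.88, lam=2.25):
    return x**alpha if x >= 0 else -lam * ((-x) ** beta)

print(round(prospect_value(100), 1))   # ~57.5: felt value of a 100-unit gain
print(round(prospect_value(-100), 1))  # ~-129.5: the equal loss looms far larger
```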

Perfect Rationality, Perfect Track Records and Algorithm Aversion

It is clear that in some situations, perfectly functioning algorithms are the only acceptable default. The scope for artificial intelligence in certain domains requires precisely the removal of erroneous human decisions. An aircraft’s autopilot is designed to function better than a human would, taking in enormous quantities of available information, addressing the feasibility of solutions, and not getting stressed or distracted along the way. Similarly, we rely on a satellite navigation system when we require the shortest route possible, not one that occasionally takes a wrong turn or stops to enjoy the scenery. In fact, in the study of Dietvorst et al. (2015), experimental subjects display strong algorithm aversion when a machine is anything less than perfect. If the decision-making process coded in the algorithm appears at all faulty, we lose trust in delegating our decisions to it; indeed, the research shows we lose far more trust than is rational when confronted with imperfect algorithms. To quote the authors: “It seems that errors that we tolerate in humans become less tolerable when machines make them”.

The Grey Area: Moral Judgements

The situations where we trust algorithmic rather than human judgement are highly domain specific (Logg 2017). The difficulty comes not in domains where perfect rationality is clearly optimal for the machine and for mankind, but in domains which require value-based judgements. When a ‘mistake’ is less clearly defined, choice and values introduce the requirement of outcome preference. But how do we specify the means to the end of these outcome preferences? How should they be determined? Under what conditions? In what domains? In their consideration of “Moral Machines”, Wallach and Allen (2008) ask, “Does humanity really want computers to make moral decisions?” (p.6), citing many philosophers who advise against this abdication of responsibility. However, I make the call for research addressing precisely this gap: we cannot yet clearly delineate the domains where we agree to abdication from the situations where we do not.

To address this trade-off between rational value and instinctual moral judgements, consider how machines make decisions over loss of human life: a focus central to existential risk. A self-driving car is a tangible example of an autonomous machine which must make decisions of outcome preference in high-cost, crisis situations. Even though a car is not inherently a moral machine, it is exposed to moral dilemmas when choosing between two objects to hit. Most would agree a random decision to swerve left or right is too clinical: coding this basic command ignores too much of the last-minute, crisis-management ‘gut instinct’ a human driver might exercise. The next most ‘rational’ way to code the decision would perhaps be a value judgement based on the ‘greater good’ of a particular decision to society. Indeed, if the correct decision is the one that optimises the collective benefit of us all, perhaps we have arrived at a decision-making process which prolongs the existence of man in harmonious existence with machine. Yet the optimisation problem is not so simple, and solutions relying on a value judgement and those relying on a moral judgement often do not match.

The Tension Between Value and Moral Judgements: An Example

Return to the useful (albeit somewhat clichéd) self-driving car example. Start with the simplest choice: do you hit a cone or a person? Easy, by any metric the cone is the obvious choice. Even between a hedgehog and a person the ‘correct’ outcome preference is clear. Now consider the creeping grey area of more complex choices. Consider a cancer researcher and a criminal? A cancer researcher and 17 criminals? A cancer researcher versus a CEO of a FTSE-100 company? A criminal versus a CEO of a FTSE-100 company? A baby versus a CEO of a FTSE-100 company? A baby versus a retired CEO of a FTSE-100 company with one month left to live? One can go on.

One ‘rational’ optimisation solution to this problem might be to consider the net value to society the individual or object bequeaths, or, for a further iteration, the net present value of the individual or object. But consider a final choice: a Bugatti Veyron or a person? Legislative procedures in states across the globe require a price tag to be placed on the cost of human life, with the UK government placing it between $7 and $9 million; some other estimates are as low as $2 million. The Bugatti Veyron, the Malibu mansion, even the motorway bridge come out on top every time.
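A deliberately naive sketch makes the failure mode explicit (the ‘prices’ below are illustrative stand-ins for valuations an algorithm might scrape):

```python
# Naive 'net monetary value' collision rule: hit whatever costs least.
OBJECT_VALUES = {               # hypothetical scraped prices, USD
    "traffic_cone":   25,
    "person":         8_000_000,   # a mid-range statistical value of life
    "bugatti_veyron": 18_000_000,
}

def choose_target(options):
    """Destroy the least monetary value; no moral term in the objective."""
    return min(options, key=OBJECT_VALUES.get)

print(choose_target(["traffic_cone", "person"]))    # traffic_cone: acceptable
print(choose_target(["bugatti_veyron", "person"]))  # person: the machine kills
```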

Under this value system, the machine kills and it does so ‘rationally’. Assume such information is acquired from an internet search – the algorithm finds the ‘price’ documented by all of human information online and makes decisions accordingly. Crucially, our way of valuing human life is inadvertently laying the foundations for human extinction. Thus, the specificity of where certain decision making processes are appropriate and where they are not arises as a key issue for existential risk.

More complex still, specifying a machine’s outcome preferences, the rules of right and wrong, creates a reflexive problem, suggesting humans could be prosecuted for making the ‘wrong’ decision. “It is hard enough for humans to develop their own virtues, let alone developing appropriate virtues for computers” (Wallach and Allen 2008, p.8).

Towards an Example Experimental Methodology

Appreciating how these decisions should be approached requires the identification of the troublesome ‘grey area’. Experimental research could help us learn more about how human subjects behave in different domains, so that lessons can be applied to the demarcation of machine outcome preferences. Identifying systematic deviations from ‘ruthless optimisation’ allows machine outcome preferences to be updated in a way consistent with complex, domain-specific human processes. Consider Husserl’s “notion of a lifeworld”, whereby intelligent decision making relies on a “background of things and relations against which lived experience is meaningful” (from Shanahan 2015). Identifying target areas in which human subjective experience is required helps us make AI more human where it needs to be.

Decision-making theory assumes homo economicus has perfectly rational preferences. One requirement of these preferences is transitivity. To have transitive preferences, a person, group, or society that prefers choice option x to y and y to z must prefer x to z:

x ≻ y and y ≻ z ⇒ x ≻ z

Preference transitivity can be tested by presenting subjects with a large set of pairwise choices. Wherever preferences violate transitivity, irrationality is salient in the decision-making process. If similar violations repeatedly arise across the sample, they represent target areas where something ‘human’ is overriding the pure value judgement. To better understand whether nominal value helps or hinders the decision process, one treatment group could face ‘price tags’ on every choice whilst the other group makes subjective value choices.
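A minimal sketch of the analysis stage follows (the items and recorded choices are synthetic): collect pairwise choices, then scan every triple of options for a preference cycle, the signature of a transitivity violation.

```python
# Detect transitivity violations (preference cycles) in pairwise choice data.
from itertools import combinations

items = ["cancer_researcher", "criminal", "ceo", "baby"]
# prefers[(x, y)] is True if the subject chose x over y (synthetic data).
prefers = {
    ("cancer_researcher", "criminal"): True,
    ("criminal", "ceo"): True,
    ("cancer_researcher", "ceo"): False,  # the 'human' inconsistency
    ("cancer_researcher", "baby"): False,
    ("criminal", "baby"): False,
    ("ceo", "baby"): False,
}

def beats(x, y):
    return prefers[(x, y)] if (x, y) in prefers else not prefers[(y, x)]

violations = [
    (x, y, z) for x, y, z in combinations(items, 3)
    if (beats(x, y) and beats(y, z) and beats(z, x))   # cycle x > y > z > x
    or (beats(y, x) and beats(z, y) and beats(x, z))   # the reverse cycle
]
print(violations)  # flags the grey area where a pure value judgement is overridden
```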

Limitations and Conclusion

As Mérő (1990) notes, “those wishing to build artificial intelligence, all pry into the essential nature of reason” (p.1). Yet this top-down approach of deriving human morals from decision data and imposing them on machines is not a panacea. Bottom-up approaches have also been proposed, whereby moral capacity evolves from general aspects of increasingly sophisticated intelligence (Wallach et al., 2008). Additionally, the simplicity of a laboratory experiment makes it an imperfect investigation into specifying domain-based machine decisions. Finally, Shanahan (2010) encourages us to consider Metzinger’s proscription: introducing scope for moral decisions into a machine’s optimisation frame concomitantly introduces the ability to suffer and the requirement of rights or freedoms.

Despite these limitations, approaching AI research from a behavioural and experimental economics perspective allows us to better understand how we make decisions ourselves before trying to understand how an AI does the same. This approach represents a first take at how we can begin to code the outcome preferences of artificial intelligence to ensure, at worst, we are not inadvertently causing our own extinction by wrongly publishing the value of our own lives online, and at best, we align the goals of a human-level artificial mind with human-like decisions.

References

Bostrom, Nick. Superintelligence: Paths, Dangers, Strategies. 2014.

Boden, Margaret. “Human-level AI: Is it Looming or Illusory?” In The Centre for the Study of Existential Risk’s Lecture, June 2015.

Bostrom, Nick. “The superintelligent will: Motivation and instrumental rationality in advanced artificial agents.” Minds and Machines 22, no. 2 (2012): 71-85.

Dietvorst, Berkeley J., Joseph P. Simmons, and Cade Massey. “Algorithm aversion: People erroneously avoid algorithms after seeing them err.” Journal of Experimental Psychology: General 144, no. 1 (2015): 114.

Evans, Owain, Andreas Stuhlmüller, and Noah Goodman. “Learning the preferences of ignorant, inconsistent agents.” In Thirtieth AAAI Conference on Artificial Intelligence. 2016.

Fløistad, Guttorm. “The Concept of World in Existentialism.” Revue Internationale de Philosophie (1981): 28-40.

Kahneman, Daniel, and Amos Tversky. “Prospect theory: An analysis of decision under risk.” In Handbook of the fundamentals of financial decision making: Part I, pp. 99-127. 2013. First published 1979.

Logg, Jennifer M. “Theory of Machine: When Do People Rely on Algorithms?” Harvard Business School Working Paper, No. 17-086, March 2017.

Mérő, László. Ways of Thinking: The Limits of Rational Thought and Artificial Intelligence. World Scientific, 1990.

Ord, Toby. “Will We Cause Our Own Extinction?” In The Centre for the Study of Existential Risk’s Lecture, April 2015.

Rossi, Francesca, Kristen Brent Venable, and Toby Walsh. “A short introduction to preferences: between artificial intelligence and social choice.” Synthesis Lectures on Artificial Intelligence and Machine Learning 5.4 (2011): 1-102.

Shanahan, Murray. “Minds Like Ours: An Approach to AI Risk” In The Centre for the Study of Existential Risk’s Lecture, February 2015.

Shanahan, Murray. Embodiment and the Inner Life: Cognition and Consciousness in the Space of Possible Minds. Oxford University Press, USA, 2010.

Wallach, Wendell, and Colin Allen. Moral machines: Teaching robots right from wrong. Oxford University Press, 2008.

Wallach, Wendell, Colin Allen, and Iva Smit. “Machine morality: bottom-up and top-down approaches for modelling human moral faculties.” AI & Society 22, no. 4 (2008): 565-582.

‘All Under a Warming Atmosphere’: What Ancient Chinese Philosophy Can Teach Us About Collective Climate Change Policy

A climate crisis is indiscriminate to citizenship, cultural affiliation or race, instead affecting ‘all under a warming atmosphere’.

Our world and its climate are common to all global citizens. Governance of our shared environment requires collective commitment, but Westphalian philosophy, compartmentalised into individual nation-state units, entrenches national self-interest. Turning to ancient Chinese philosophy could provide an alternative frame for global governance. The 3,000-year-old concept of Tianxia (天下) translates literally as “all under heaven” but more generally describes an over-arching governance system prioritising the collective, not the individual: an all-inclusive harmonious existence. Employing this system, China’s longest-lasting dynasty, the Zhou (1046-256 BC), prevailed over the powerful, state-centred yet fragmented Shang. In modern times, a fractious battlefield of individual states has raised nationalist parties across Europe and the United States. Within borders, identity politics is best described by the theorist Carl Schmitt’s “us versus them”; the same bifurcation damages dyadic international relations. Through philosophers’ eyes, climate tensions make a dismal Hobbesian chaos of failed states not an unlikely eventuality, and the certainty of Kant’s ‘perpetual peace’ an impossibility. If indeed history repeats itself, our current climate, in both environmental and political terms, calls for a new concept to transform hostility into harmony, 3,000 years after the Zhou dynasty flourished.

International cooperation and conflict are complex, yet the underlying behaviours approximate strategic decision-making in multi-agent games. A ‘defecting’ strategy can win out in a single, one-shot interaction, but introduce the ability to learn across time and an opponent’s undesirable behaviour becomes common knowledge. If each player copies their opponent, players can be ‘equally stupid’, by all defecting, or ‘equally smart’, by playing harmoniously and thus increasing payoffs for everyone. Reaching the mutually-beneficial long-term equilibrium requires what game theorists call an ‘evolutionarily stable strategy’, and what Chinese philosophers called Tianxia. In today’s global playing field, a new strategy of governance is welcome.
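The ‘equally stupid’ versus ‘equally smart’ outcomes can be sketched with a toy repeated prisoner’s dilemma in which both players simply copy each other’s last move (the payoffs are the standard textbook values; the pure-imitation rule is an assumption for illustration):

```python
# Iterated prisoner's dilemma where each player imitates the other.
PAYOFF = {  # (my move, their move) -> my payoff; C = cooperate, D = defect
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def repeated_game(first_move, rounds=100):
    """Both players open with first_move, then copy the opponent's last move."""
    a = b = first_move
    total_a = total_b = 0
    for _ in range(rounds):
        total_a += PAYOFF[(a, b)]
        total_b += PAYOFF[(b, a)]
        a, b = b, a  # pure imitation dynamics
    return total_a, total_b

print(repeated_game("D"))  # (100, 100): 'equally stupid' mutual defection
print(repeated_game("C"))  # (300, 300): 'equally smart' mutual cooperation
```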

One of China’s most influential modern philosophers, Zhao Tingyang, upholds Tianxia as a unique notion of governance derived from unity across three realms. First, respect for all environments ‘under heaven’: the natural world, irrespective of human territorial claim. Second, a like-minded agreement between all citizens, relying on shared factors contributing to general wellbeing. Finally, a universal governance system responsible for maintaining the first two realms, the people and their environment, by shouldering responsibility for world order. Harmonious existence, a state of Tianxia, requires these three distinct realms, physical, psychological and political, to meld into one.

The astute reader might question the tenability of such idealistic dreams of world harmony. The Zhou ‘world’ was much smaller, culturally homogenous and collectivistic in structure, reflected even in the evolution of language: the character for public (gong, 公) appeared in much earlier inscriptions than that for private (si, 私). Interestingly, each still carries an ethical meaning, the former synonymous with fairness and the latter with selfishness. A modern plurality of identities, circumstances and cultures requires a more robust strategy for universal inclusion and commonality. ‘Common pasture’ became a human problem during the transition from nomadic to pastoral life. Enter the Tragedy of the Commons, a well-known phenomenon in economics describing the environmental degradation of shared but unowned land. The world, its atmosphere and its oceans, is one very large, very complex ‘common’. The nature of environmental damage, with geographical and intergenerational spillovers, makes the ‘common’ even more abstract. The scale of the problem is uncontained: further extensive damage to our shared ‘common’ risks its existence for all.
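A toy model captures the incentive structure (the pasture capacity and herd sizes are illustrative): each herder’s marginal animal still pays privately even as the commons approaches collapse.

```python
# Tragedy of the commons: private marginal gain, collective ruin.
def yield_per_animal(total_animals, capacity=100):
    """Pasture productivity falls linearly with total grazing load."""
    return max(0.0, 1.0 - total_animals / capacity)

def private_payoff(my_animals, others_animals):
    return my_animals * yield_per_animal(my_animals + others_animals)

others = 90  # the commons is already near capacity
print(round(private_payoff(1, others), 2))  # 0.09 > 0: one more animal still pays me
print(100 * yield_per_animal(100))          # 0.0: if everyone reasons this way, ruin
```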

Advocates see Tianxia as a reachable reality only if individual rationality is relinquished for relational rationality, with the mindset that ‘existence requires coexistence’. Confucius, the well-known Chinese philosopher, framed the fragile individual against their broader environment: his concept of ren (仁) defines an individual only in relation to others, not as a separate entity. Far from exotic, knowledge of global interconnectedness, the mutuality of life and destruction, explains Cold War brinkmanship and nuclear strategy since. International organisations, of course, already exist. In political governance, the UN has held panels across the globe, from Copenhagen to Cancun, Kyoto to Paris, albeit with limited success. In economic terms, the international carbon credit market already applies Tianxia unknowingly, viewing emissions as unconstrained by any one country, selling floating allowances across borders while holding total emissions fixed. However, ‘too big to succeed’, supra-national organisations are blunt tools for encouraging each government to meet arbitrary emission targets. Further complexities arise from imbalances of allowances and strategies between developed and developing countries. Perversely, despite birthing Tianxia, China itself has demonstrated reluctance to coordinate with these institutions to protect the shared environment.

Clearly, redesigning modern global governance as an all-inclusive Tianxia system is not a panacea. Yet its past success can teach us how to begin making changes that promote international coordination. A plausible modification is twofold. First, coordinated governance becomes simpler by making the ‘world’ smaller: greater homogeneity aligns the compatibility of universal will with the desirability of shared outcomes. Professor David Victor of UC San Diego advises that countries work in smaller groups to avoid gridlock between an ambitious number of bargaining bodies. With fewer than 10 countries responsible for 70% of emissions, sub-agreements need not involve all 193 UN members. Second, collapsing the complexity of the ‘common’ focuses efforts on one task. To mitigate commitment reluctance, an international governing agency requires a sole environmental mandate divorced from other UN matters. A World Environment Organisation has been proposed as a sister to the WTO. Institutionalising a commitment to fairness, as achieved for trade, encourages mutual reciprocity in emission reduction, exactly the relational rationality Tianxia is built upon.

What constitutes ‘all under heaven’? Some areas of policy remain nation-bound, where existence within borders is not reliant on the behaviours of those outside. Environmental policy is not one of these isolated domains. A climate crisis is indiscriminate to citizenship, cultural affiliation or race, instead affecting ‘all under a warming atmosphere’. Tianxia advises governance by compatible universalism on matters transcending political, racial and geographical boundaries. Global citizens require an ‘equally smart’ cooperative thinking to protect the common world, avoiding the ‘equally stupid’ mutual climate destruction at detriment to all current and future humans.

Does Technology Break with Tradition? The Endogeneity of Culture and Digital Infrastructure in Post-Reform China

Dynamic, dyadic coevolution deepens as newer technological innovations manifest, posing a key question: what does it mean to be Chinese in a digital age?

China’s socioeconomic transition from 1978 onwards has expedited technological development, but the resulting changes in the prevailing digital infrastructure must contend with existing cultural traditions. Emerging online behaviours are constantly shaped and reshaped by increasing digitalisation in a self-reinforcing cycle. The bi-directionality of development between technological adoption and cultural tradition is evidenced in the dyadic interactions between individuals on the rising social media networks. Online communication has revolutionised bilateral interactions, yet imbued cultural notions of guanxi (关系) remain key to social capital accumulation in China. In short, technological interactions reveal societal structure, but the underlying infrastructure is endogenous to pre-existing values. As such, I argue that, by exposing the relevant dualities, technology does not break with tradition but rather fosters a new expression of an oxymoronic ‘modern tradition’, in which there exists congruence between culturally-imbued behaviours and their technological manifestation.

The Digitalisation of Dyadic Communication: Guanxi in Social Media Networks

McLuhan (1994) first theorised the fundamental role of media in changing and conditioning social interactions. While his affirmation that “the medium is the message” holds true, the directionality of media’s decisive role is less clear. The traditional ‘Uses and Gratification’ perspective views users as active, discerning, and motivated in selecting media to satisfy individual needs, but is limited by its ignorance of distinct cultural traits in defining such needs. More recent convincing arguments propose that changes in social context drive technological development (Race 2015), dispelling previous convictions of technological determinism. In support, examination of website aesthetic forms and design philosophies reveals lessons about the larger social order: that technologies proven to operate efficiently within one culture fail in the context of others if they cannot provide a sufficient apparatus for social interaction (Miller 2000).

Compliance, group norms, and social identification provisions influence user acceptance and diffusion (Lisha et al., 2017). The Technology Acceptance Model (TAM), designed by Davis (1986), considers technology usage as a function of causal relationships running from external factors to beliefs, attitudes, behavioural intention and, finally, actual behaviour. In the social media space, two dominant beliefs preside: perceived usefulness and perceived ease of use. Stronger beliefs that a certain social media platform will fulfil the user’s needs increase the probability of uptake and sustained engagement (Pinho and Maria 2011). For example, Gan and Wang (2015) find Chinese users prefer WeChat to other platforms due to the restriction of interactions and shared content to pre-approved contacts only. Such privacy reflects the exclusive, closed social network notion of guanxi. Thus, cross-cultural divergence in offline network structure influences the acceptance of different online platforms.

Yet culture itself is a complex notion, lacking a single standardised definition across the existing literature. At a higher level, culture underpins the values imbued in society (Anderson and Lee 2008), governing an individual’s identity, actions, and relations. The overarching concept of guanxi in China is derived from culturally-based elements such as kinship ties, local traditions, and social norms, reinforced by a moral system guided by Confucianism (Fei 1947; Hwang 1987; Yan 1996). The value of guanxi lies in the strength of particularistic ties and the principle of reciprocity in the exchange of gifts, favours, and trust (Lee and Dawes 2006; Qian et al., 2007). Such interpersonal relations beyond blood and kinship are built through the careful balancing of emotional and instrumental components (Chen et al., 2013). Guanxi comprises three further component social mechanisms: (1) guanqing (管情), the guarantees underpinning the strength of network ties (Gold and Guthrie 2002); (2) mianzi (面子), synonymous with social status and self-image portrayal (Wang 2016); and (3) renqing (人情), reflecting relational effort to obtain mutual benefits (Wang 2007). Each of these cultural behaviours is represented in online interaction in China, and the cultural appropriateness of behaviour therein drives the acceptance of technological features in turn.

WeChat is the primary social media platform for the formation of dyadic interactions between Chinese netizens (CNNIC 2015). Features of low anonymity, high privacy, and closed sharing circles are harmonious with expressions of guanxi through a network of personal connections (Yan 1996). The dominant and distinctive features of Chinese social media technologies embody cultural constructs, allowing for the maintenance of traditional social structures. The private networking opportunities on the platform recreate the exclusive circle of in-group favourites entitled to social resources such as business favours, heightened trust, and mutually beneficial cooperation. These networks are further strengthened by instant messaging, which allows WeChat users to build good guanqing by creating and maintaining guanxi with only those they deem important. Online ties align with real-world connections, with only 10% of WeChat users open to stranger interactions (Chun 2014). Digital content can be used to promote positive self-image, with online branding a means to the end goal of accumulating mianzi. Renqing becomes more convenient to accrue, with the new ability to send an electronic hongbao (红包, red envelope) in an instant. The commonalities of successful Chinese technological developments, whether the popularity of closed social networks, the adoption of diffusion-restricted content exposure, or modernised gift exchange, demonstrate how technology evolves to interface with cultural constructs.

The replication of traditional behaviours in modern mediums in the Chinese context is perhaps best exemplified by electronic hongbaos. First introduced to welcome the 2014 Chinese New Year and honour the tradition of sending money as a form of blessing, hongbaos drew over 8 million users, with transactions totalling 400 million yuan within just nine days. This intensity of social interaction during cultural events is an extraordinary phenomenon, even being credited as distinctly unique to China (Holmes et al., 2015). Furthermore, this technological reinvention of an ancient custom is a display of the traditional collectivist values of Chinese society re-emerging in digital form. Cultures are considered collectivist if the society possesses a consistency of world view (Oyserman et al., 2002) in which common adherence is applied to interpersonal relationships, reciprocity, and social norms (Hempel et al., 2009; Somech et al., 2008; Pheng and Leong 2001). Holmes et al. (2015) see social media as a tenable modern method of linking such a large but highly collectivist society as China’s. Crucially, these modern hongbaos demonstrate that digital technologies are not individualistic, meaningless in functionality, or merely technological by nature. Rather, technological development can reinvent traditional forms of cultural behaviour, but it ignores others in the process. While instantaneous and geographically unconstrained hongbaos favour the modern expression of guanxi, digitalisation introduces distance between present family members, disturbing notions of filial piety. Lee (2017), drawing on interviews with older-generation attendees at a wedding, reports that some Chinese elders disagreed with guests congratulating the bride by scanning a QR code, citing the impersonal nature of gift-giving via mobile payments. Lacking face-to-face exchange, the packages do not constitute perfect substitutes for traditional cultural practice. Nonetheless, both a hongbao and a direct money transfer offer the same financial and technical functionality. As China transitions towards a cashless economy, all types of digital payments are on the rise, with the People’s Bank of China reporting a record $17 trillion in 2017. Yet the cultural significance of the hongbao is revealed by breaking down the timing of usage data: there exists an annual regularity of digital payment spikes during festivals, driven specifically by hongbaos, with more than 14.2 billion gifted on WeChat for Lunar New Year’s Eve alone in 2017 (Yuan et al., 2017). Thus, while a direct transfer and a hongbao may share functionality, the latter is still designated the appropriate behaviour at weddings and festivals, while the former occupies a merely transactional function.

Tensions between resilient cultural influences and technological change are discussed by Anderson and Lee (2008), who question whether cultural attitudes towards business remain tradition-bound or face convergence to Western standards of internet-individual interaction. Davison (2008) offers a more precise narrative detailing the diminishing importance of guanxi in the face of effective online intermediaries able to re-contextualise explicit knowledge sharing: by connecting businesses directly, intermediaries like those provided by Alibaba threaten the relevance of traditional guanxi-based social networks. Some commentators argue that guanxi connections arose precisely to deal with China's information-scarce environment, plagued by severe asymmetries (Akerlof 1970; Stigler 1961). In order to survive, they assert, businesses forge new ties to mitigate uncertainty in the process of knowledge exchange (Powell et al. 1996). The need for particularistic ties is further strengthened by competition over resource allocation. During China's transitional Reform and Opening-up period, the greater transparency of information offered by internet intermediaries and the reduction in physical resources required to build connections had a profound impact on network behaviour (Chang 2011). With over 800 million netizens (CNNIC 2018), China's online community presents remarkable opportunities for network expansion. Niedermeier et al. (2016) reveal that WeChat interactions are used to expand sales by offering a more tailored personal connection from seller to buyer, an example of technology aligning with and enhancing elements of traditional guanxi formation. While a more extensive digital infrastructure represents a change in the institutional environment, it does not force actors to abandon previous networking strategies. Instead, the same central tenets of guanxi remain, with different means applied to reach the same end.

Conclusion

The permanence of cultural tendencies in digital networks relies on persistence of purpose and message, despite structural breaks in the method of forming connections. Crucially, the blending of old and new in the rise of certain technologies and the decline of certain cultural behaviours demonstrates the coevolution of technology and culture in China's transition to modernity. Social interactions reflect traditional guanxi ideals of interpersonal influence and connectedness, but the contextual nature of dyadic interactions has transformed to keep pace with a new generation of internet users with new desires, behaviours, and norms.

Technological development inflicts change at a pace incomparable to previous innovations. In a society contending with change that is continuous, not episodic, Chinese culture faces considerable pressure to conform to technological determinism. Yet it is naïve to ignore the extent to which culture itself is deterministic, rendering the process endogenous. Culture drives digital desires as social discourse shapes internet expression, working to mould the digital infrastructure around a pre-existing framework. In documenting the complex coevolution of the internet and society, it becomes apparent that the underlying, culturally specific yet dynamic tenets of tradition manifest in technology during China's post-reform transition to modernity.


References

Akerlof, George A. 1970. “The Market for ‘Lemons’: Quality Uncertainty and the Market Mechanism.” Quarterly Journal of Economics 84 (3): 488–500.

Anderson, Alistair R, and Edward Yiu-chung Lee. 2008. “From Tradition to Modern: Attitudes and Applications of Guanxi in Chinese Entrepreneurship.” Journal of Small Business and Enterprise Development 15 (4): 775–87.

Chang, Kuang-chi. 2011. “A Path to Understanding Guanxi in China’s Transitional Economy: Variations on Network Behavior.” Sociological Theory 29 (4): 315–39.

Chen, Chao C, Xiao-Ping Chen, and Shengsheng Huang. 2013. “Chinese Guanxi: An Integrative Review and New Directions for Future Research.” Management and Organization Review 9 (1): 167–207.

Chen, Si-hua, and Wei He. 2014. “Study on Knowledge Propagation in Complex Networks Based on Preferences, Taking Wechat as Example.” In Abstract and Applied Analysis. Vol. 2014.

Davison, Robert M, and Carol Xiaojuan Ou. 2008. “Guanxi, Knowledge and Online Intermediaries in China.” Chinese Management Studies 2 (4): 281–302.

Fei, Xiaotong. 1947. From the Soil: The Foundations of Chinese Society.

Gold, Thomas, Doug Guthrie, and David Wank, eds. 2002. Social Connections in China: Institutions, Culture, and the Changing Nature of Guanxi. Structural Analysis in the Social Sciences. New York: Cambridge University Press.

He, Wu, Shenghua Zha, and Ling Li. 2013. “Social Media Competitive Analysis and Text Mining: A Case Study in the Pizza Industry.” International Journal of Information Management 33 (3): 464–72.

Hempel, Paul S, Zhi-Xue Zhang, and Dean Tjosvold. 2009. “Conflict Management between and within Teams for Trusting Relationships and Performance in China.” Journal of Organizational Behavior: The International Journal of Industrial, Occupational and Organizational Psychology and Behavior 30 (1): 41–65.

Holmes, Kyle, Mark Balnaves, and Yini Wang. 2015. “Red Bags and WeChat (Wēixìn): Online Collectivism during Massive Chinese Cultural Events.” Global Media Journal: Australian Edition 9 (1): 15–26.

Hwang, Kwang-kuo. 1987. “Face and Favor: The Chinese Power Game.” American Journal of Sociology 92 (4): 944–74.

Lee, Don Y, and Philip L Dawes. 2005. “Guanxi, Trust, and Long-Term Orientation in Chinese Business Markets.” Journal of International Marketing 13 (2): 28–56.

Lisha, Chen, Chin Fei Goh, Sun Yifan, and Amran Rasli. 2017. “Integrating Guanxi into Technology Acceptance: An Empirical Investigation of WeChat.” Telematics and Informatics 34 (7): 1125–42.

Liu, Wei, Xudong He, and Peiyi Zhang. 2015. “Application of Red Envelopes–New Weapon of Wechat Payment.” In 2015 International Conference on Education, Management, Information and Medicine. Atlantis Press.

Lu, Jie, John Aldrich, and Tianjian Shi. 2014. “Revisiting Media Effects in Authoritarian Societies: Democratic Conceptions, Collectivistic Norms, and Media Access in Urban China.” Politics & Society 42 (2): 253–83.

Mao, Chun. 2014. “Friends and Relaxation: Key Factors of Undergraduate Students’ WeChat Using.” Creative Education 5 (8): 636.

McLuhan, Marshall. 1994. Understanding Media: The Extensions of Man. MIT press.

Miller, Daniel. 2000. “The Fame of Trinis: Websites as Traps.” Journal of Material Culture 5 (1): 5–24.

Niedermeier, Keith E, Emily Wang, and Xiaohan Zhang. 2016. “The Use of Social Media among Business-to-Business Sales Professionals in China: How Social Media Helps Create and Solidify Guanxi Relationships between Sales Professionals and Customers.” Journal of Research in Interactive Marketing 10 (1): 33–49.

Oyserman, Daphna, Heather M Coon, and Markus Kemmelmeier. 2002. “Rethinking Individualism and Collectivism: Evaluation of Theoretical Assumptions and Meta-Analyses.” Psychological Bulletin 128 (1): 3.

Powell, Walter W, Kenneth W Koput, and Laurel Smith-Doerr. 1996. “Interorganizational Collaboration and the Locus of Innovation: Networks of Learning in Biotechnology.” Administrative Science Quarterly, 116–45.

Qian, Wang, Mohammed Abdur Razzaque, and Kau Ah Keng. 2007. “Chinese Cultural Values and Gift-Giving Behavior.” Journal of Consumer Marketing 24 (4): 214–28.

Race, Kane. 2015. “Speculative Pragmatism and Intimate Arrangements: Online Hook-up Devices in Gay Life.” Culture, Health & Sexuality 17 (4): 496–511.

Slater, Don, and Daniel Miller. 2000. “The Internet: An Ethnographic Approach.” Berg Publishers.

Somech, Anit. 2008. “Managing Conflict in School Teams: The Impact of Task and Goal Interdependence on Conflict Management and Team Effectiveness.” Educational Administration Quarterly 44 (3): 359–90.

Stigler, George J. 1961. “The Economics of Information.” Journal of Political Economy 69 (3): 213–25.

Sui Pheng, Low, and Christopher H Y Leong. 2001. “Asian Management Style versus Western Management Theories–A Singapore Case Study in Construction Project Management.” Journal of Managerial Psychology 16 (2): 127–41.

Wang, Ping, and Qian Zhang. 2016. “Effect of Mianzi to Family Enterprise Employee Voice Behavior: SSG as Intermediary.” Chinese Journal of Ergonomics 4: 4.

Yan, Yunxiang. 1996. “The Culture of Guanxi in a North China Village.” The China Journal, no. 35: 1–25.

Zou, Hongbo, Hsuanwei Michelle Chen, and Sharmistha Dey. 2015. “Exploring User Engagement Strategies and Their Impacts with Social Media Mining: The Case of Public Libraries.” Journal of Management Analytics 2 (4): 295–313.


Masters of Metallurgy and Music: The Ancient Bells of Xi’an

As a musician with a background in physics and mathematics, and with new-found knowledge of ancient China, writing a reflection on the bells of Xi’an seemed a harmonious choice. The world’s first bells, produced in China during the Shang and Zhou periods, mark the start of a long relationship between political authority, time, and sound, symbolised by the pealing tolls of a bell tower. In reverence of this triangulation, I consider both the Duke of Qin’s bronze bells and the Xi’an Bell Tower, examining their techniques, artistry, and symbolic importance to a modern observer of ancient China.

From the Taigong Temple site in 1978, archaeologists excavated five zhong bells and three bo bells made during the reign of Duke Wu of Qin (697–678 BCE). Before discussing the importance of the bells themselves, the inscription they carry offers important confirmation of the genealogical order of the Dukes as recorded in the Qin Li[1], and the 135-word text hints at the Qin mandate, opening with the Duke’s words “my foremost ancestors have received the heavenly mandate”[2]. Presenting the role of bells in this way invites parallels with the stone steles as “material forms to preserve writings”[3]. The art of music and the functionality of sound underpin the development of human civilisation, with importance pertaining to both everyday tasks and sacred ritual: the bell tolled time for civilians but equally sang of ritual and religion in the imperial context. The imperial court of the Zhou Dynasty disseminated the importance of bells and musical stones, first introducing the association of music with political hierarchy. Just as we can paint calligraphy as a unifier of China, bells too symbolise the importance of harmony through music. Bells were even seen to bring harmony to agricultural endeavour, arising from the similar pronunciation of zhong (bell) and zhong (cultivate)[4]. After the Tang Dynasty, bells embodied a ritualistic role, particularly in Buddhist practice[5]. Ledderose (2001)[6] deems bronzeware the “most impressive and fascinating material remains surviving from Chinese Antiquity”, and while the author speaks of the ‘modular technical system’ by which items were made, he does not fully capture the technical erudition of bell manufacture uncovered by researchers in 1977[7]. They found that the bells ring differently when struck at the side and at the centre, with the difference in pitch always a third, an interval not recognised as harmonic in Europe until the 12th century. Only the application of modern-day physics and the mathematics of harmonic motion reveals the depth of the craftsmen’s precision and ingenuity in the dual-pitch design, melding musicality and metallurgy. When a bell is struck, the relative strength of its partial frequencies constitutes the sound’s tonal quality, just as component wavelengths comprise the colour of light. Unlike strings, bells display complex acoustical dynamics in which the thickness and elasticity of the material determine tone. Striking each bell zone (gu and sui) causes vibrations to converge at certain points, called nodal meridians[8]. As Shen (1987) stipulates, “neither the concave rim design nor the precision in identifying the convergence of nodal lines could have been accidental”[9]. The mei nipples seen on the bells also go beyond ornamental value by balancing the strength of the fundamental partial frequencies. I find fascinating not just the physics of these bells but the clue they give the modern observer that the ancient Chinese must have possessed a theoretical grasp of the physics of music far beyond historians’ initial estimates. In studying such sophisticated bronzeware we garner a better understanding of the importance of metallurgy and music across cultures and in advancing civilisation.
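To make the dual-pitch interval concrete, a minimal worked example in just intonation, taking the side (gu) tone as the higher of the two; the frequency values are illustrative assumptions of mine, not measurements of the Qin bells, and individual bells sound major or minor thirds:

\[
\frac{f_{\text{gu}}}{f_{\text{sui}}} = \frac{5}{4} \approx 1.25 \ \text{(major third)} \qquad \text{or} \qquad \frac{f_{\text{gu}}}{f_{\text{sui}}} = \frac{6}{5} = 1.20 \ \text{(minor third)},
\]

so a centre (sui) strike sounding at 440 Hz would pair with a side (gu) strike near 550 Hz or 528 Hz respectively. That two stable, musically related pitches coexist in a single casting is precisely what makes the bronze geometry so remarkable.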

The triangulation between imperial power, time, and sound is exemplified by the Xi’an Bell Tower. Classifying the imposing tower as a chronographic facilitator aligns with Needham’s[10] emphasis on the political significance of time-keeping to a Chinese Emperor. While now silent and without audible aspect, the architectural monumentality, the “towering image, central location and sophisticated architecture forms”[11], hints at the significance of the bell in ancient Chinese cities. The Tang first used the Chang’an bell (and drum) as a centralised system of public time-telling, with the peals beginning and ending each day for the city’s ancient inhabitants. Interestingly, Wu (2003), using records of Beijing’s ancient bell tower, discerns that sound was sent out hourly only from dusk to dawn, indicating that sight and sound were somewhat “mutually exclusive”. This further confirms the role of the bell in the promotion of political power and peace, to be heard at night when other, more outwardly visual displays of the Emperor’s reign went unseen. The Bell Tower remains an important monument in modern Xi’an and consequently deals with the tribulations of modern life. The role of traffic[12] and the metro[13] in inducing potentially destabilising micro-vibrations has been intensively studied in recent years, a concern underscored during a personal visit to the Xi’an metro depot, where the impact of new lines spanning the city was presented. Such a confluence of old and new engineering once again speaks to the harmonious preservation of ancient culture.

To conclude, I consider both the Duke of Qin bells and the Xi’an Bell Tower as revered examples of the societal sophistication of ancient Chinese civilisation, telling the story of emperor and civilian alike through the role of music and sound. The marriage of metallurgy and musicality represents the technological advancement of an ancient civilisation beyond its contemporaries in the creation of the dual-pitch design, but we can also identify a second duality in the promotion of religion and political authority through the peals of ancient Xi’an’s bells.

References

[1] Chang, K. C., Xu, P., & Lu, L. (2005). The formation of Chinese civilization: An archaeological perspective. Yale University Press.

[2] Portal, J. (2007). The First Emperor: China’s Terracotta Army. Harvard University Press.

[3] Lu Yang Lecture, 13th November 2018, Writing Calligraphy and Cultural Memory in Traditional China: The Story of the Forest of the Steles

[4] Xu Shen, Shuowen Jiezi (Explanation and Study of Principles of Composition of Characters).

[5] The role of bells in Tang Buddhism is evinced by the exposition of Buddhist celebrations by the Daoist priest Fu Yi, presented in the 620s: “strike Chinese bells and gather together”, quoted in Lewis, M. E. (2009). China’s Cosmopolitan Empire. Harvard University Press.

[6] Ledderose, L. (2001). Ten Thousand Things: Module and Mass Production in Chinese Art, Princeton University Press, Chapter 2 on Bronze Ware and Chapter 3 on the Qin Terracotta Warriors

[7] Huang Xiang-peng, Lu Ji, Wang Xiang, Gu Bo-bao and their colleagues in Shaanxi Province.

[8] Shen, S. (1987). Acoustics of ancient Chinese bells. Scientific American, 256(4), 104–111.

[9] Ibid.

[10] Needham, J. (1974). Science and Civilisation in China: Historical Survey (Vol. 1-7). Cambridge University Press.

[11] Wu, H. (2003). Monumentality of time: Giant clocks, the drum tower, the clock tower.

[12] Meng, Z., Chang, Y., Song, L., & Yuan, J. (2009). The Effects of Micro-Vibration Excited by Traffic Vehicles on Xi’an Bell Tower. In International Conference on Transportation Engineering 2009 (pp. 37-42).

[13] Lei, Y. (2010). Research on protective measures of City Wall and Bell Tower due to underneath crossing Xi’an Metro Line No. 2. Rock and Soil Mechanics, 31(1), 223–236.