Is moral clarity a prerequisite or an outcome of Artificial Intelligence (A.I.)? Are we facing the ultimate chicken and egg question?
Machine learning and Artificial Intelligence present some obvious and some not-so-obvious ethical and moral challenges. Many of the questions of social consequence posed by these technologies have only begun to be asked, even as the technologies themselves develop at an incredible pace. Mechanisms to address the social impact of new technologies don’t always co-evolve in step with the development or scaling of the technology itself. It can be argued that several past inventions, like the Internet, and innovations, like social networks, came about in a similar manner, while sociological review of the emerging technology was still half-baked. Mark Zuckerberg’s appearance before the Senate Commerce and Judiciary Committees on 10 April 2018, to answer questions relating to privacy on Facebook in light of the data breach involving Cambridge Analytica, is a case in point. Does this current model of technological innovation, led by high-value technology start-ups, give us the best societal outcomes possible? Have we developed agile and smart enough social skills and learning techniques to assimilate new innovations into our lives? Sometimes it can feel like we are irrevocably drawn into new technologies, and into any good or adverse consequences they may present, without having a say or sufficient insight into them. While many of these innovations add enormously to our lives, could there be a systemic bias towards focusing disproportionately on the positives in the early stages of these new enterprises rather than on accounting for the costs? And if so, where does that stem from?
More than 50 years after Gordon Moore observed that the number of transistors per square inch of an integrated circuit seemed to double every two years, Moore’s law is still popularly used as shorthand for the ever-accelerating pace of technological innovation. Societal adaptation, however, as indicated above, isn’t always up to the same speed. A.I. may finally be the frontier that requires this gap to be narrowed before we are too far along the development cycle. If empathy needs to be programmed into machine learning, then the time to implement that is now, while the learning and the synaptic linkages between the bits of code being programmed are beginning to form. With A.I., the additional challenge is that the degree of moral clarity required of the people designing the systems is far greater than with previous technological innovations. Unlike in the Facebook case, where the challenge is largely posed by network effects, with A.I. the machine learning aspects make the challenge far more complicated. Facilitating decision-making by computers to pursue certain goals with intent involves the machines learning from experience, which suggests that machine learning is by its very definition path dependent. This makes it harder to correct any lapses in the engineering after the fact. An iterative and immersive involvement of stakeholders seems desirable from the get-go.
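Moore’s observation reduces to simple doubling arithmetic. The starting figures in the sketch below (a transistor count in the region of an early-1970s microprocessor) are illustrative assumptions, not precise historical data:

```python
# Illustrative only: Moore's law modelled as strict doubling every two years.
# The starting count and year are rough, assumed figures for illustration.
def transistors(start_count, start_year, year, doubling_period=2):
    """Project a transistor count under a fixed doubling-period assumption."""
    doublings = (year - start_year) / doubling_period
    return start_count * 2 ** doublings

# e.g. ~2,300 transistors around 1971, projected 50 years (25 doublings) ahead:
print(f"{transistors(2300, 1971, 2021):,.0f}")
```

Even from such a modest starting point, 25 doublings yield a figure in the tens of billions, which is the scale of the gap the essay argues societal adaptation has failed to keep pace with.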
Can we assume that all moral and ethical decisions are within reach of the unaided human mind?
The ultimate question is whether we even possess the capability to answer some of the hard questions that need to be addressed. Our approach to human mortality, animal rights, environmental conservation, economic models of inclusive development, building just systems of reward and punishment, and individual freedom versus communal responsibility are among hundreds of questions that have continued to test our individual and collective wisdom since the dawn of civilisation. Some of the reasons we do not have complete and accurate answers to these questions may lie in the limits of the human brain itself. Our capability to process vast amounts of information, build and test long-term scenarios, or access and store the quantity of information required to answer some of these questions unaided may be insufficient. The question is how far along we need to be in having those answers before we develop the next generation of A.I. systems. Or, ironically, is it the capabilities of A.I. systems that can help us compute answers to some of these questions? In that case, development of the technology would need to come before we are able to approach anything near moral clarity on some of these deeper philosophical questions. As the title suggests, this might be the ultimate chicken and egg question we have faced.
Given that survival is perhaps the ultimate intent of intelligent organisms like ourselves, it may be prudent to bring to bear the vast body of human knowledge and understanding on the development of technology that seeks to mirror and exceed our own intelligence. Considered immersion and wider engagement are perhaps the prudent path in this exploration.
A system that helps ideas bloom into companies, and companies in turn transform industries, is indeed an aspirational ecosystem to create. However, early engagement with the social and philosophical questions posed by emerging technologies needs to be advocated more ferociously. The challenge in expanding the use of driverless cars surely goes beyond conceiving the technology that allows an automobile to cruise or self-navigate. It involves ethical choices, such as how one programs a self-driving car to choose between two alternatives that may both be traumatic. We may not even ourselves always consciously know how we would make some of these choices when faced with them. Yet we seem to trust human judgement, which is adaptive and not closed to making adjustments when applied in new contexts. On the other hand, with superior analytical and data-processing capabilities and faster access to wider information, a programmed machine can compute quicker and decide more logically. Be it Alphabet’s DeepMind, OpenAI, or China’s 93-petaflop Sunway TaihuLight supercomputer, these machines have now proven their ability to better humans in some scenarios. So then which is it, the chicken or the egg?
What can we learn for the future of large-scale innovation from past developments such as ubiquitous social networks?
Mark Zuckerberg’s appearance before the Senate Commerce and Judiciary Committees marked a significant moment in the process of assimilating peer-to-peer information-sharing technologies such as Facebook into the social and political system. It came 14 years after Facebook was launched from a dorm room at Harvard in February 2004. The opportunity to build and scale a social media platform, one that had 2.2 billion monthly active users as of Q4 2017 (as per Statista, The Statistics Portal), was afforded by the unique ecosystem of Silicon Valley. Access to funding for innovative technology companies, fast adoption rates by customers, talented tech and management teams, and a regulatory environment that is perhaps more open to risk-taking than in other countries are arguably some of the factors at play. However, some of the troubles that have faced Facebook over the past decade, involving user privacy, fake news stories, and even the psychological impact on young users, aren’t necessarily issues that couldn’t have been foreseen, and perhaps addressed, at the very early stages of Facebook’s founding. Why, then, did we wait 14 years to ask what seem quite pertinent, and some might say obvious, questions? How might one attempt to mitigate some of these issues with future innovations that could potentially have a large impact on society?
Zuckerberg, in his answers to the committees, said that by the end of 2018 Facebook would have “more than 20,000 people working on security and content review”. This response to the privacy issue suggests something quite significant: apart from indicating the scale of the issue, it also points to a significant gap in the skills and resources currently deployed by Facebook.
Like tech start-ups before it, the DNA of Facebook too seems to have remained predominantly homogenised around the Science, Technology, Engineering and Mathematics (STEM) fields for far too long in its early stages.
The question we need to answer, then, is whether technology companies are incentivised to evolve tools for the social integration of their innovations as they scale. Are societal costs and risks factored in at the time the technology is conceived? Regulation can stifle innovation, so it may not be the appropriate response. However, we also cannot rely on a post facto response system that addresses issues only after they emerge, especially when some of them seem predictable at the time the technology is architected. In such a system, where risks are not accounted for early on, the costs get socialised; the impact of fake news on several extremely important general elections is an example. A proactive approach to identifying and quantifying these costs early on is surely preferable. Dedicated teams of social scientists involved at the very early, conceptual stages of these companies, to proactively flag and help address these issues, could go some distance towards answering such questions. With the emergence of Artificial Intelligence and machine learning, we may have even less room to allow lapses during the formative stages. Once encoded, such lapses could prove extraordinarily expensive and likely near impossible to rectify.
Seeking answers to the extremely hard social and ethical questions in A.I. development, in order to gain moral clarity, is crucial at this early stage. The DNA of start-ups needs to mutate to reflect this need. The right path for scaling the transformative A.I. architecture will require the kind of diversified, well-rounded consideration that perhaps emerges best from heterogeneous teams. That may be the place to begin.
Is there a better model serendipitously available to us through the decentralised public Blockchain technology?
Decentralised technologies such as Blockchain present another promising model for developing innovative enterprises and creating new technology. In the case of Bitcoin, which introduced the decentralised public ledger to the world, a paper detailing the proposed architecture for a Blockchain was released to the public. A consensus-based, open-source approach has governed the development of the Blockchain architecture since then. Changes or new features, such as how to scale transaction-handling capability, can be suggested by anyone submitting a Bitcoin Improvement Proposal (BIP). BIPs are open for all to see, and the Bitcoin community, including developers, miners and users, debates and then decides whether a proposal is implemented. An alternate approach can also be implemented simultaneously by forking the original architecture. Either the fork is accepted as the new way of transacting on the blockchain, or it results in a hard fork, which creates a separate blockchain from that point on, leaving two alternates. This means that, where consensus is lacking, competing Blockchains have developed from within the ecosystem; the emergence of Bitcoin Cash, as a different view on scaling from the original Bitcoin Core, is one example. Several approaches are thus being tried by different groups of community members, and many technical issues, as well as issues of ethics and design, can be weeded out in the process. While the alternatives are being debated and competing theories tested, we have a slightly chaotic but still more democratic approach that offers users many alternatives within the space. And no one member, not even the creator of the original genesis block, controls the direction of the technology. Consensus rules.
Bitcoin mining, which is essentially the process of authenticating transactions on the Blockchain for a reward, is also open to anyone who wishes to participate. This means that not only the code development but also the running of the Blockchain is crowdsourced and open to all: anyone who can show proof of work on the Blockchain is rewarded with Bitcoins in return. The ecosystem is therefore completely decentralised, and all the important decisions about the architecture, design and running of the Blockchain are made by seeking consensus from the community. More than 50% of the network’s mining power needs to agree on the state of the Blockchain, and that is taken as the consensus chain.
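The proof-of-work idea behind mining can be sketched in a few lines. This is a toy illustration, assuming a fixed leading-zero difficulty and a single SHA-256 pass; real Bitcoin mining double-hashes an 80-byte block header against a dynamically adjusted target:

```python
# Toy proof-of-work loop, illustrating the principle behind Bitcoin mining.
# Simplifications: one SHA-256 pass (Bitcoin uses double SHA-256 over the
# block header) and a fixed leading-zero difficulty instead of a numeric target.
import hashlib

def mine(block_data: str, difficulty: int = 4) -> tuple[int, str]:
    """Search for a nonce whose SHA-256 hash of (block_data + nonce)
    starts with `difficulty` hex zeros; return (nonce, hash)."""
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce, digest
        nonce += 1

nonce, digest = mine("example transactions", difficulty=4)
print(nonce, digest)
```

The work is hard to produce (each extra hex zero multiplies the expected number of hash attempts by 16) but trivial for any peer to verify with a single hash, which is what lets the rest of the network check a miner’s claim without trusting the miner.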
Despite the ebbs and flows in the process of growing the decentralised-ledger-based Blockchain, it has offered up a new and far more participatory model of technological advancement. Unlike most start-ups to date, there is no CEO or central figure making the ethical and design choices for its future development. There are critical lessons from the Blockchain space that should be considered when designing other technologies as well.
Even as we grapple with the moral questions of A.I., and consider how much its enhanced computing capabilities might help us gain moral clarity, we can put in place mechanisms to debate and evolve different approaches in a democratic, open and transparent manner, be it greater involvement of social scientists in building start-ups or leveraging a decentralised system of sharing ideas and know-how. We can certainly create the right environment and due-diligence frameworks even in the absence of all the right answers.
Founder, Himalayan People Ltd. & Five Benches Promotions & Advisory Services Ltd.
MSc. Media & Communications London School of Economics, 2003
MBA Indian Institute of Management Calcutta, 2001
Post: 54, Essendine Mansions, London W9 2LY, U.K.
Phone: +44 (0) 7776238875