April 28, 2019
Ethical AI Is Possible: A Postcard from the Future
As we move into a world of AI-first solutions, there has been a great deal of debate, fear, and hope about the impact they will have on our lives. A new field of ethically aligned AI has emerged as a result. This article paints a vision of hope for the year 2022, describing the possibilities if the movement achieves its goals, and then explains why I think this future is possible.
Last year my friend suffered a stroke that went undetected for more than 36 hours. He is back home now but paralyzed on his right side, and he faces an uphill battle on the road to recovery.
By the year 2022, catastrophes like this could be avoided entirely with intelligent homes and personal digital assistants like J.A.R.V.I.S. from Marvel’s Iron Man. By then, the cost of such systems will have become affordable and accessible to everyone, enabling each of us to achieve our full potential.
J.A.R.V.I.S. will take care of all the routine tasks and assist us with decision making in both our professional and personal lives. It will also come with a “fake news” detection mode, quietly slipping its suggestions into our ears or onto our virtual reality goggles. Current anti-virus systems will have evolved to include fairness and ethics detection bots that protect us from manipulation and make us aware of false and fake information on the internet.
Advertisements as we know them today will be in decline, as most routine purchases will be made by J.A.R.V.I.S., which in turn will provide full transparency into why it made a particular decision.
Tired of the toxic content of the YouTube, Facebook, and Twitter era, and inspired by movements like the School Strike, new types of impact organizations will emerge, governed by well-being metrics and deeply focused on sustainability and zero harm to the planet. CEOs, company boards, builders, and designers will take a digital Hippocratic oath, similar to that of doctors, to preserve the well-being of humanity in the autonomous and intelligent systems they build and design, and they will be held accountable for the decisions those AI systems make.
There will be a universal national identity on the blockchain, and companies will source data ethically and pay out “data dividends” to customers for the use of their data, with a full audit trail on the blockchain. Many world economies will have adopted a user-ownership-of-data framework, in which access and agency over personal data are a fundamental human right. With the data residing with users, AI models will be trained using federated learning and homomorphic encryption: the model is sent to users’ devices, trained locally, and only the encrypted, trained insights are sent back for aggregation, with all the encryption and transactions handled via blockchain.
Companies will recognize the workforce impacts of their disruptive technologies and take a pledge to reskill and upskill the workers displaced by autonomous systems. To respect the power of competition and nurture the next generation of startups, they will also open source real data for the democratization of AI.
This world may sound like pie-in-the-sky fantasy or science fiction, but if one looks closer one can see that organizations and individuals around the world have already started taking steps in these directions.
IEEE is developing ethically aligned design guidelines through a transparent process that draws on the wisdom of diverse stakeholders, and the European Commission has issued its own Trustworthy AI guidelines.
GDPR is paving the way toward user ownership of data around the world, and in our own backyard, California’s Governor Gavin Newsom proposed “Data Dividends” in his inaugural address earlier this year. Countries like Estonia have established a national identity on the blockchain, which paves the way toward a complete audit trail and transparency for user-owned data.
Policy changes like these may sound very controversial at this stage, but given the almost universal negative sentiment against big tech of late, it is only a matter of time before such initiatives start to emerge. The change could come from any number of actors, but whether it happens through disruptive startups looking to democratize the playing field, through a combined push from regulatory bodies, or through big tech acceding to appease a wider consumer base, the combined pressure of these forces makes a shift toward ethical AI inevitable.
That is not to say that mobilizing around ethical AI will be easy. Google rolled out a formal ethical review process, but its ethics board collapsed under criticism only one week into its existence, and similar initiatives have been accused of being insufficient or even disingenuous.
But the other side of that argument is that we shouldn’t put all of our eggs in one basket. Rather than relying solely on a review process, product designers and engineers need to adopt “Ethical by Design” practices, weighing ethical considerations from the start of the design process. We are at a very early stage in the AI space right now, and while penalties and regulatory bodies have their place, they also take time to design thoughtfully. We will need serious commitment and the right incentive structure for these initiatives to succeed, with leadership right from the top at the CEO or board level. We see the effectiveness of such a strategy in Microsoft’s recent decision to turn down sales of its facial-recognition technology on human rights grounds.
The work has to start with conversations, with educating people both in university and in business settings. This process will look similar to how security was adopted as a pillar of new product rollout—once it was an afterthought, but now it is taken very seriously; most large companies have a Chief Security Officer, and building solutions with “Security By Design” has become the industry best practice.
Ethical design must start with transparency around data usage. This will look different for every product. It could include developing a clear audit trail by incorporating the blockchain, which may become a more common approach as the market adjusts to GDPR. As blockchain-AI pairings become more common, we will start to see more examples of combining blockchain with privacy-preserving federated learning, a type of machine learning in which a model is trained in a decentralized manner: the model is sent to users’ devices, and the trained weights are sent back for aggregation. These pairings could prove a powerful shift. Since the data and insights are never all in one place and are passed in a form that humans can’t interpret, this approach alleviates many of the privacy concerns we face today.
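The aggregation step at the heart of federated learning can be sketched in a few lines. This is a minimal, illustrative simulation (all names are mine, not any library’s API): each “device” fits a single weight to its private data, and a server combines only the trained weights, never the raw data. Production systems such as TensorFlow Federated add secure aggregation, client sampling, and real models on top of this idea.

```python
# Minimal sketch of federated averaging (FedAvg), the aggregation step
# behind federated learning. Names here are illustrative; real systems
# add secure aggregation, client sampling, and full neural models.

def local_update(weight, local_data, lr=0.1):
    """Simulate one round of local training on a user's device.
    The 'model' is a single weight being fit to the mean of local data."""
    grad = sum(weight - x for x in local_data) / len(local_data)
    return weight - lr * grad

def federated_average(client_weights, client_sizes):
    """Server aggregates device updates, weighted by local dataset size.
    Only trained weights ever leave the devices -- never the raw data."""
    total = sum(client_sizes)
    return sum(w * n for w, n in zip(client_weights, client_sizes)) / total

# Three devices, each with private data that stays on the device.
devices = [[1.0, 2.0], [3.0], [2.0, 2.0, 2.0]]
global_weight = 0.0
for _ in range(50):  # communication rounds
    updates = [local_update(global_weight, d) for d in devices]
    global_weight = federated_average(updates, [len(d) for d in devices])
```

After a few dozen rounds the global weight converges toward the mean of all six data points (2.0), even though the server never observed any of them directly.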
Another approach designers can take toward data privacy for consumers is homomorphic encryption. Products designed this way encrypt the data at the source of collection before feeding it into the deep learning algorithm. The algorithm can still compute on the encrypted data, at some cost in performance, yet consumers don’t have to worry about the security of their information.
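The key property can be demonstrated with a toy Paillier cryptosystem, which is additively homomorphic: multiplying two ciphertexts yields an encryption of the sum of the plaintexts, so a server can aggregate encrypted values without ever decrypting them. This sketch uses deliberately tiny, insecure parameters for illustration only; real deployments use 2048-bit keys via libraries such as python-paillier or Microsoft SEAL.

```python
import random
from math import gcd

# Toy Paillier cryptosystem (additively homomorphic). Tiny demo primes,
# completely insecure -- for illustrating the homomorphic property only.
p, q = 293, 433
n = p * q
n2 = n * n
g = n + 1                    # standard simple choice of generator
lam = (p - 1) * (q - 1)      # phi(n), a valid private exponent here
mu = pow(lam, -1, n)         # modular inverse; works because g = n + 1

def encrypt(m):
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    x = pow(c, lam, n2)
    return (((x - 1) // n) * mu) % n

def add_encrypted(c1, c2):
    # Multiplying ciphertexts adds the underlying plaintexts.
    return (c1 * c2) % n2
```

A server holding only `encrypt(5)` and `encrypt(7)` can produce a ciphertext that the key holder decrypts to 12, without the server ever seeing 5 or 7.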
Greater transparency around data could be as simple as attribution. In the same way we attribute images and video to their proper sources through Creative Commons, designers should be transparent about where their data comes from, even if they don’t end up compensating anyone.
After ensuring transparency, developers should focus on making their autonomous systems as robust and resistant to manipulation as possible. Researchers at Keen Security Lab recently found that alterations to a road, even slight ones like adding stickers, could force a Tesla in Autopilot mode to switch lanes, demonstrating that vulnerabilities like these can threaten the lives of the very users AI systems are built to serve. As the researchers pointed out, the system is vulnerable because it relies on a single input, visual information in this case. Designers who want to build more robust AI systems should incorporate multiple inputs into decision making.
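The principle can be illustrated with a simple voting scheme. This is my own sketch, not any vendor’s actual system: when a decision requires agreement among independent sensors, an attack on a single channel, like stickers fooling a camera, is no longer enough to trigger a dangerous maneuver.

```python
from collections import Counter

# Illustrative sensor-fusion sketch: act only when a quorum of
# independent inputs agrees, otherwise fall back to the safe default.
def fused_lane_decision(camera, lidar, hd_map, min_agreement=2):
    """Each input votes 'keep' or 'change'; a lane change requires
    agreement from at least `min_agreement` independent sources."""
    votes = Counter([camera, lidar, hd_map])
    decision, count = votes.most_common(1)[0]
    return decision if count >= min_agreement else "keep"

# Camera fooled by adversarial stickers, but lidar and map disagree:
fused_lane_decision("change", "keep", "keep")   # -> "keep"
```

The attacker now has to compromise a majority of independent sensing modalities simultaneously, which is a far harder problem than perturbing one camera feed.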
Explainable AI techniques will provide the transparency and tools for this. DARPA and other players are investing heavily in this area, and early solutions like IBM’s AI Fairness 360, which includes a bias checker, have started to emerge. Combined with human-centered design, which gives the end users of AI systems full control over the final decision-making process, such tools will help build trust and transparency.
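To make the idea of a bias checker concrete, here is a simplified, from-scratch version of one metric such toolkits compute, disparate impact: the ratio of favorable-outcome rates between an unprivileged and a privileged group. The data below is entirely hypothetical. A common rule of thumb flags ratios below 0.8 as evidence that decisions disadvantage the unprivileged group.

```python
# Simplified disparate-impact check, in the spirit of the bias metrics
# shipped in toolkits like IBM's AI Fairness 360 (hypothetical data).
def disparate_impact(outcomes, groups, unprivileged, privileged):
    """outcomes: 1 = favorable decision; groups: group label per person.
    Returns the ratio of favorable rates (unprivileged / privileged)."""
    def rate(g):
        decisions = [o for o, grp in zip(outcomes, groups) if grp == g]
        return sum(decisions) / len(decisions)
    return rate(unprivileged) / rate(privileged)

# Hypothetical loan decisions for two demographic groups:
outcomes = [1, 0, 0, 1, 1, 1, 1, 0]
groups   = ["A", "A", "A", "A", "B", "B", "B", "B"]
ratio = disparate_impact(outcomes, groups, unprivileged="A", privileged="B")
# Group A's approval rate is 0.5 vs. 0.75 for B: ratio ~0.67, flagged.
```

Surfacing a number like this during development, rather than after deployment, is exactly the kind of check an Ethical by Design process would build in.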
Many of the concerns listed here can be addressed by a broader directive: developers should ensure they are optimizing the system for metrics of well-being rather than metrics of engagement or productivity. When engineers solve for the wrong variables, we open our products to charges of being parasitic or exploitative, and rightly so. A variety of metrics are beginning to gain visibility, from macro-level measures like the OECD’s Better Life Index to cutting-edge approaches like affective computing. IEEE’s guide to ethical design principles includes a full list.
Finally, designers should exercise humility in their work. AI technology is radically different from the technologies human beings have worked with in the past, and it is safest to assume it will produce unintended effects. To account for this possibility, designers should always build user overrides into the rollout of autonomous systems.
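Structurally, a user override is a thin wrapper around the autonomous decision path. This sketch is purely illustrative (all names are mine): the system proposes, a human decision always wins, and every override is logged so designers can study the unintended effects the system produced.

```python
# Illustrative human-override wrapper for an autonomous system:
# the AI proposes, the user can always veto, and vetoes are logged.
overrides_log = []

def decide(ai_recommendation, user_choice=None):
    """Return the action to take; a user choice always overrides the AI,
    and each override is recorded for later review by the designers."""
    if user_choice is not None and user_choice != ai_recommendation:
        overrides_log.append((ai_recommendation, user_choice))
        return user_choice
    return ai_recommendation

decide("reorder groceries")                    # AI acts autonomously
decide("send email now", user_choice="hold")   # human override wins
```

The log doubles as a feedback signal: a cluster of overrides around one kind of recommendation is an early warning that the system is misbehaving.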
Perhaps the most powerful argument of all for a future of ethically aligned AI is that the market rewards it. We can already see success stories emerging at startups like Owkin, Numerai, and OpenMined, which use a combination of federated learning, homomorphic encryption, differential privacy, and blockchain to build better AI models, tools, and marketplaces that will eventually democratize the AI models themselves and pave the way toward better and more ethical AI solutions.
First published: SVSG Blog