If there is a major Artificial Intelligence (AI) disaster in consumer goods, it isn’t going to be like something out of the Terminator franchise.  Most likely it will be a rogue algorithm that distorts the supply chain or destroys decades of brand building.  There have already been reports of algorithmic bias in recruitment at Amazon (the algorithm didn’t like women), and I am sure we will see many more examples as we learn to address the deeper legal and ethical concerns involved in designing these systems. 

In my teens I met the American science fiction author Isaac Asimov.  He didn’t like flying, so appearances outside of the United States (US) were rare, but he travelled on the QE2 and then to my home town of Birmingham for a talk and a book signing.  I remember him as generous with his time (I was even more demanding then), and I also recall him explaining his thoughts on his famous Three Laws of Robotics.  He didn’t see robots as monsters that would destroy humans, because he assumed that the people who create intelligent machines would build in moral and ethical safeguards.  

Asimov saw the need to protect humans but not, perhaps, humanity.  That’s a human task.  Seventy-five years after he formulated these rules, we are much more aware of the broader legal, moral and ethical concerns.  Putting appropriate guardrails in place (e.g. for the selection of training data) has to become a priority for designers and coders, and for the people who appoint them and provide direction and governance over their work.

Much of the guidance available focuses on how to create these guardrails.  The Civil Law Rules on Robotics proposed by the European Parliament have as much to say about liability and privacy as they do about safety and security.  When the IEEE established its Global Initiative on Ethics of Autonomous and Intelligent Systems, the stated mission was “to ensure every stakeholder involved in the design and development of autonomous and intelligent systems is educated, trained, and empowered to prioritise ethical considerations so that these technologies are advanced for the benefit of humanity.”

Of course, we already find it difficult to resolve non-AI ethical and societal dilemmas.  If we are to develop AI and robotic systems cost-effectively, we need to manage the risks as we sprint forward.

Waiting until all the unknown unknowns are resolved just won’t work.  Effective governance for ethical and unbiased design and development is important, as are clear accountabilities for user outcomes in the value chain (intentional or not).

As Ruediger Hagedorn, Director of The Consumer Goods Forum’s (CGF) End-to-End Value Chain (E2EVC) initiative, wrote last year, “AI is about to make supply chains more sustainable.”  Perhaps the greatest opportunity for the industry to demonstrate ethical AI is in addressing the UN Sustainable Development Goals.  How will we work together across the value chain to enshrine values such as fairness, justice and inclusion? There are already great examples, such as Penn State’s PlantVillage in the US, which uses AI to help lift family farmers out of poverty, but we need more.

IBM’s booklet ‘Everyday Ethics’ states, “It’s our collective responsibility to understand and evolve these ethical focus areas as AI capabilities increase over time.”  After all, the only intelligence so far in AI is ours, and we have a collective duty of care for the future of our shared humanity.

To learn more about how artificial intelligence is impacting CGF members and the consumer goods industry at large, and the exciting opportunities it offers, take a look at the content being published as part of the CGF’s E2EVC initiative and search for “artificial intelligence”.


This blog was written and contributed by:

Dr Trevor Davis, FRSA