Controlling the machine

4th October 2019



How can organisations create frameworks for the ethical and efficient use of AI? What advice are governments providing? Catherine Early reports

The many ways in which artificial intelligence (AI) could help solve environmental problems are well-documented. From distributed energy grids and precision agriculture to sustainable supply chains, environmental monitoring and enforcement, and enhanced weather and disaster prediction and response, AI could not only make environmental protection far quicker and more efficient, but also enable solutions not previously possible in an analogue world.

Research by consultants at PwC, commissioned by Microsoft, estimates that using AI for environmental applications in agriculture, transport, energy and water could reduce worldwide greenhouse gas emissions by 4% in 2030 – equivalent to the 2030 annual emissions of Australia, Canada and Japan combined.

More and more businesses are considering using AI. A June survey of chief executives, also carried out by PwC, found that 85% believed AI would significantly change the way they do business during the next five years, while close to 66% considered it to be bigger than the internet revolution.

Getting heads together

When it came to questions about how much AI can be trusted, however, opinions were split. More than 75% believed that it was good for society, but 84% believed that AI-based decisions need to be explainable in order to be trusted by the public.

The survey also found that the understanding and application of responsible and ethical AI practices among respondents varied significantly across organisations, and in most cases was immature. Some 25% had not considered AI as part of their corporate strategy, 38% believed that it aligned with their core values, and only 25% said they considered the ethical implications of an AI solution before investing in it.

In July, PwC launched a responsible AI toolkit. “We want to kick-start the conversation around issues such as data ethics and governance. A lot more needs to be done in this space,” explains Ben Combes, co-founder of PwC UK's innovation and sustainability practice.

In such a rapidly developing technology landscape, the key for businesses is to establish a direction of travel for policy, and then see how things evolve in real time, he says. “We don't know how applications are going to evolve, but we also can't just leave everything and hope that it'll be okay.” Companies looking to use AI need to think carefully about who to involve across the company, in order to ensure that governance and policies are not developed in silos, he says. “Using AI is strategically important, so it needs the people who would be involved in core strategic planning, which is going to be a lot broader than only your technology or AI team.”

“AI can't reduce inefficiencies and solve problems if your algorithms and data are broken up in silos”

The key is to consider all the angles, then work out who is needed to bring their expertise into decision-making. Companies could set up a steering group comprising a broad spectrum of business roles, with the technology team working on the day-to-day development, he suggests. Sustainability professionals absolutely need to be involved.

“We have big non-linear problems and we increasingly need big exponential solutions. AI, blockchain and other technologies are very much part of the toolkit, but we need to make sure that they're not making environmental and social issues worse. Environment and sustainability teams need to put themselves dead centre,” he says.

A broad church

Lawyers are also giving attention to developments in responsible AI. Law firm DLA Piper established a dedicated AI team in May to support companies as they navigate the legal landscape of emerging and disruptive technologies, and help them understand the legal and compliance risks emerging from these technologies' creation and deployment.

Danny Tobey, co-chair of the firm's AI practice, agrees that companies need to include a broad selection of staff in the development of AI. “Everyone from the board and senior leadership team down needs to be involved. The whole point is that AI can reduce waste and inefficiencies and solve problems, but it really can't do that if your algorithms and data are broken up in silos,” he says.

The risks of using AI vary according to industry, he says. AI needs to be trained on good data and applied in the right situations. It is also vital to consider data and algorithmic bias.
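To make the bias point concrete, the sketch below (the author's own illustration, not from the article; the dataset and column names are hypothetical) shows the kind of first-pass check a team might run: comparing a model's positive-outcome rate across groups defined by a sensitive attribute.

    # A minimal, hypothetical sketch of a first-pass bias check: compare a
    # model's approval rate across groups in scored data. The dataset and
    # column names are illustrative, not from the article.
    import pandas as pd

    scored = pd.DataFrame({
        "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
        "approved": [1,   1,   0,   1,   0,   1,   0,   0],
    })

    rates = scored.groupby("group")["approved"].mean()
    print(rates)

    # A large gap is a prompt for scrutiny of the training data and the
    # model, not proof of bias on its own.
    print("disparity ratio:", round(rates.min() / rates.max(), 2))

A check like this does not certify fairness, but it surfaces where the data or model may need review before deployment.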

“One complication with environmental issues is the vastness of the datasets, which is going to push you towards black box solutions where the final model is not understandable to a human,” says Tobey. “In situations like that, it's incredibly important to have people involved from the beginning who know what they're doing in AI and data science fields.”
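As an illustration of working around that opacity, the sketch below (the author's own example under stated assumptions, not a method attributed to PwC or DLA Piper; the synthetic data stands in for a real environmental dataset) applies one model-agnostic explanation technique, permutation importance, using scikit-learn.

    # A minimal sketch of probing an otherwise "black box" model with
    # permutation importance: shuffle each input in turn and measure how
    # far test performance drops. A large drop means the model leans
    # heavily on that feature, giving humans a coarse view of what drives
    # its predictions. The data here is synthetic and purely illustrative.
    import numpy as np
    from sklearn.datasets import make_regression
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in for an environmental dataset (e.g. sensor
    # readings predicting a pollution measure).
    X, y = make_regression(n_samples=500, n_features=6, noise=0.1,
                           random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y,
                                                        random_state=0)

    model = RandomForestRegressor(random_state=0).fit(X_train, y_train)

    result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                    random_state=0)
    for i in np.argsort(result.importances_mean)[::-1]:
        print(f"feature {i}: importance {result.importances_mean[i]:.3f}")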

Many AI tools can be downloaded from the internet and used by businesses that lack the in-house checks and balances to deploy them properly. “That's where companies get in trouble. If you bring in a team from the beginning who have the right people involved, who understand the regulation and what that might look like in the future (because there are so many on the horizon), then you're on a better footing.” DLA Piper is defence counsel in a class action lawsuit alleging that AI produced recommendations for insurance that are contrary to regulations in certain states. “This case is at the forefront of the litigation that we're starting to see emerge, and I think there'll be a lot more of that,” says Tobey.

Rules and regulations

Governments worldwide have been scrambling to draw up regulations on AI. According to Tobey, the US has some 21 proposed pieces of legislation on the technology. The UK government, meanwhile, has created the Office for Artificial Intelligence, the Centre for Data Ethics and Innovation, and the AI Council. These will work together to set up 'data trusts', to ensure that data governance is implemented ethically.

The European Commission is also scrutinising the sector. In June 2019, it published a report entitled Policy and investment recommendations for trustworthy Artificial Intelligence. It includes a recommendation to ban AI's use in certain cases, such as the mass-scoring of individuals, and to implement very strict rules on its use in surveillance for security purposes.

Other organisations have been incorporating ethics into their work on AI. Think tank E3G is calling for the government to create an 'International Centre for AI, Energy and Climate', which should consider responsible application of the technology as one of its core functions. Industry body Tech UK has been considering digital ethics more broadly, and will hold its third annual summit on the subject in December.

Time will tell whether the rise of AI will be “the best or the worst thing ever to happen to humanity”, as the late professor Stephen Hawking warned. Larissa Bifano, who also co-chairs DLA Piper's AI practice, is optimistic. “It's really about mitigating any risk that could come from a malfunction or a poorly programmed piece of AI. As long as there's input from regulators and companies implement AI thoughtfully, it should be a win-win for everyone.”

Further reading

  • Read the European Commission's report Policy and investment recommendations for trustworthy Artificial Intelligence at bit.ly/2J7Qpnu
  • Download Tech UK's paper Digital ethics in 2019: Making digital ethics relevant to the lives people lead at bit.ly/2SsFE1I

Catherine Early is a freelance journalist.


Principles for establishing rigorous AI governance

  • Establish a multidisciplinary governing body – companies should consider establishing an oversight body, with representatives from a range of different areas.
  • Create a common playbook for all circumstances – each application of AI will have its own risks and sensitivities that require different levels of governance. Governance structures should, therefore, cater to many possibilities, acting as a 'how to' guide for approaching new initiatives.
  • Develop appropriate data and model governance processes – data and model governance should be considered in tandem.

For data, there should be a clear picture of where the data has come from, how reliable it is, and any regulatory sensitivities that might apply to its use. It should be possible to show an audit trail of everything that has happened to the data over time.
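One way to picture such an audit trail: the sketch below (a hypothetical illustration, not a prescribed implementation; the dataset and tool names are invented) logs every step applied to a dataset so its history can be reconstructed later.

    # A minimal, hypothetical sketch of a data audit trail: append a
    # timestamped record for every transformation applied to a dataset,
    # so its full history can be replayed on demand.
    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class AuditEvent:
        step: str     # e.g. "ingested", "deduplicated", "anonymised"
        actor: str    # the source system or tool responsible for the step
        timestamp: str = field(
            default_factory=lambda: datetime.now(timezone.utc).isoformat())

    @dataclass
    class Dataset:
        name: str
        history: list = field(default_factory=list)

        def log(self, step, actor):
            self.history.append(AuditEvent(step, actor))

    readings = Dataset("river-quality-readings")     # hypothetical dataset
    readings.log("ingested", "field sensor network")
    readings.log("deduplicated", "etl-pipeline v2")  # hypothetical tool
    for event in readings.history:
        print(event.timestamp, event.step, "-", event.actor)

In practice the same record would also capture regulatory sensitivities attached to the data, so that downstream users inherit them automatically.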

For models, standards and guidelines should be drawn up to ensure AI applications are suitable and sustainable for their intended use, and do not expose the business to undue risk.
