Artificial intelligence (AI) is transforming the way government agencies operate and deliver services, and forward-thinking agencies, ready to seize the opportunities it affords, are ramping up their spending on the technology.
According to a recent Accenture survey of 300 government IT leaders from across Europe, 86 percent of respondents said they intend to increase or significantly increase AI spending in the coming year, while 90 percent expect a “medium to high” return on their AI investments.
AI is unlike any other recent technological development: it is truly transformational. However, it is also complex to deploy and requires solid foundations to ensure proper governance and successful delivery. To overcome these challenges, governments must change their operating models and become more agile organisations that can accommodate highly disruptive technologies like AI.
Leadership is a critical component of AI implementation. Most survey respondents (81 percent) cited a medium to very high risk of AI deployments being duplicated within their organisation, or within lower levels of government, due to a lack of internal collaboration.
Without proper leadership oversight, AI deployment efforts will, at best, be duplicated and, at worst, fail. Success requires a new model of management and an agreed organisation-wide approach to deployments, undertaken in collaboration with an entire ecosystem of stakeholders.
Government leading the way
Despite the support and enthusiasm for AI expressed by survey respondents, many organisations are experiencing systemic challenges in delivering successful projects. More than two-thirds (71 percent) cited difficulties in procuring the right AI building blocks, notably data integrity and processing capabilities, and more than three-quarters (81 percent) said they experienced challenges when integrating AI technologies into their back-office operations.
To achieve success, government agencies must overcome the challenges posed by outdated legacy IT systems and take steps to ensure data integrity and enhanced security standards for new AI applications.
To build the foundations needed to maximise AI’s potential, government agencies must also increase both the scale and number of AI deployments. While most survey respondents (83 percent) reported a strong ability to scale AI deployments and were positive about the return on investment being achieved, many (63 percent) reported completing just five to 10 AI-related projects over the last year, and believe that 100-200 use cases would be required to achieve real impact. The low number of use cases demonstrates a significant gap between current and desired states for end-to-end AI-supported services within government agencies.
Additionally, almost one-third (31 percent) of survey respondents said their organisation lacks the talent and skills needed to scale its AI investments. To overcome these shortages, agencies must invest in workforce training and skills development, create new roles for employees, and provide opportunities to work alongside AI technologies.
Our survey found that customer service and fraud & risk management were the two operational areas favoured most for AI deployments, cited by 25 percent and 23 percent of respondents respectively. This indicates a demand from government agencies for AI technologies and services that enhance customer experience, reduce fraud and improve risk management. Private-sector companies with R&D budgets can seize on this demand by innovating solutions for customer service and risk management.
The private sector has a significant role to play as the leaders and integrators of responsible AI innovation by complementing government efforts to set standards (including security standards) and by leading the way on self-regulation. Just as governments must reassure and demonstrate to ecosystem stakeholders that AI is being scaled responsibly, companies must do the same.
Civil society support
History has shown that public and private sector organisations that collaborate with a wide ecosystem of partners, including universities and non-profit institutions, can together advance public-spirited goals, such as assisting the disadvantaged and delivering enhanced social services.
In that spirit, academics should broaden their efforts to decode and analyse AI-related algorithms, to detect and eliminate harmful biases and to make AI explainable to citizens. Other stakeholders, such as non-profit organisations, can study AI use cases to expose misuses and raise awareness of the technology’s unintended negative effects on citizens and communities. Community organisations can also deepen their knowledge of AI technologies, advocate for enhanced government regulation and highlight any perceived abuses of the technology.
Civil society can also play a role in facilitating interactions among ecosystem stakeholders. Citizens, community groups and NGOs have an opportunity to raise awareness of the many ways that AI-powered technologies can make life better, whether through enhanced healthcare technologies or more personalised delivery of government services.
Toward Responsible AI
As AI spending accelerates, ecosystem partners must address these challenges and build the foundations needed to maximise AI’s potential and ensure it is deployed responsibly and successfully. Doing so requires a new model of management and an agreed approach to AI deployments, undertaken in collaboration with an entire ecosystem of stakeholders. Each stakeholder in the GovTech ecosystem has a part to play in making the technology-enabled future both responsible and prosperous.
This article is sponsored by Accenture