AWS recognises the transformative potential of Artificial Intelligence (AI), and we strive to democratise access to it by making it easy for our customers and partners to build and scale generative AI solutions that are customised for their data, use cases, and customers. For public sector organisations, there is enormous potential in harnessing data and AI to streamline processes, enhance operational efficiency, and provide seamless experiences to the communities they serve. To get started, we recommend organisations apply a mindset of curiosity, growth, and guardrails when using generative AI.

Here’s how AWS customers are using generative AI to streamline tasks and make work more productive and enjoyable.

What is generative AI?

Generative AI refers to AI services that can create new content, such as text, images, videos, and audio. Generative AI is powered by foundation models (FMs): very large models pre-trained on vast amounts of data. FMs trained primarily on language data are known as Large Language Models (LLMs). FMs and LLMs can quickly process and analyse vast amounts of data, a capability that can help public sector organisations gain valuable insights, make more informed decisions, and deliver better outcomes for citizens.

What insights have our customers shared?

Our customers have said they want access to a variety of FMs, and the flexibility to optimise and fine-tune them for cost, performance, or latency. This is because they’ve found that no single FM is the best fit for every use case.

AWS was the first cloud service provider to offer a broad choice of leading FMs from companies including AI21 Labs, Anthropic, Cohere, Meta, Mistral AI, and Stability AI. We empower customers to access a broad range of FMs, alongside enterprise-grade security and privacy features, via Amazon Bedrock – which allows customers to customise their desired FM privately and securely with their own data to improve performance.

AWS also offers customers services like Amazon Q to bring their historical organisational knowledge to life, and help their teams find answers to questions faster than traditional methods, such as searching an intranet.

Customers and partners across all industries and of all sizes are using Amazon Q to transform the way their employees get work done.

Based on our experience working with customers, we recommend the following approach to generative AI:

  1. Ideate by working backwards from the business outcome
  2. Baseline the performance of existing non-AI processes
  3. Select the appropriate model(s)
  4. Experiment, fine-tune, and optimise the models
  5. Ensure any deployment of AI is aligned with responsible AI practices
  6. Validate hypotheses, and measure the accuracy and performance of LLM outputs
  7. Repeat and iterate
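The measurement steps in this approach (baselining an existing process, then validating LLM outputs against it) can be sketched as a small evaluation harness. This is a minimal illustration only, not an AWS API: the evaluation set, the keyword-match scoring rule, and both answer functions are hypothetical stand-ins for your own baseline process and candidate model.

```python
# Minimal sketch of steps 2, 4, and 6: score a baseline process and an
# LLM-backed process against the same reference answers, then compare.
# All names here (eval_set, answer functions) are illustrative placeholders.

def accuracy(answer_fn, eval_set):
    """Fraction of questions whose answer contains the expected keyword."""
    hits = sum(
        1 for question, expected in eval_set
        if expected.lower() in answer_fn(question).lower()
    )
    return hits / len(eval_set)

# Hypothetical evaluation set: (question, expected keyword) pairs.
eval_set = [
    ("Which form renews a licence?", "Form B-12"),
    ("What is the processing time for permits?", "10 business days"),
]

# Stand-in for the existing non-AI process (step 2).
def baseline_answer(question):
    return "Please search the intranet."

# Stand-in for the candidate LLM under evaluation (step 6).
def llm_answer(question):
    canned = {
        "Which form renews a licence?": "Use Form B-12 to renew.",
        "What is the processing time for permits?": "Around 10 business days.",
    }
    return canned.get(question, "")

print(f"baseline accuracy: {accuracy(baseline_answer, eval_set):.0%}")
print(f"LLM accuracy:      {accuracy(llm_answer, eval_set):.0%}")
```

In practice the evaluation set would be larger, and the scoring rule would be replaced by task-appropriate metrics, but the loop stays the same across each iteration of step 7.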

Start by working backwards from business and citizen outcomes

We recommend public sector organisations that want to adopt generative AI start by working backwards from their desired business and citizen outcomes. Working Backwards is how we innovate, vet ideas, and create new products at AWS. The objective is to start by defining the customer (or citizen) experience, then iteratively work backwards from that point until your team forms a clear idea of what to build.

Once customers have an idea of what to build, we guide them to consider the best FM or LLM for their needs. This means thinking about the right evaluation criteria, and considering the range of FMs and LLMs we offer. When selecting an FM or LLM, we recommend customers make a ‘two-way door decision’: choose an FM or LLM they can test, and then change if desired, with limited and reversible consequences.
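One way to keep model choice a two-way door in practice is to isolate the model ID behind a single parameter, so swapping FMs is a one-line change. The sketch below uses the Amazon Bedrock Converse API via boto3; the model IDs shown are illustrative examples, and model availability varies by account and Region.

```python
# Sketch: keep the FM choice reversible by passing the model ID as a
# parameter to one helper, so switching models is a one-line change.
# Model IDs below are illustrative; check what is enabled in your Region.

def build_converse_request(model_id, prompt, max_tokens=512):
    """Build a request body for the Amazon Bedrock Converse API."""
    return {
        "modelId": model_id,
        "messages": [{"role": "user", "content": [{"text": prompt}]}],
        "inferenceConfig": {"maxTokens": max_tokens, "temperature": 0.2},
    }

def ask(model_id, prompt):
    # Requires AWS credentials and Bedrock model access, so boto3 is
    # imported lazily here rather than at module load.
    import boto3
    client = boto3.client("bedrock-runtime")
    response = client.converse(**build_converse_request(model_id, prompt))
    return response["output"]["message"]["content"][0]["text"]

# Usage (requires credentials and model access) - trial one model, then
# swap to another with no other code changes:
#   prompt = "Summarise our permit-renewal policy in two sentences."
#   print(ask("anthropic.claude-3-haiku-20240307-v1:0", prompt))
#   print(ask("mistral.mistral-large-2402-v1:0", prompt))
```

Because every model behind the Converse API accepts the same request shape, the evaluation criteria can be re-run against each candidate model without rewriting the application.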

Be curious about the value generative AI can deliver

One of the key advantages of generative AI is lowering the cost of organisational curiosity - while providing a secure, governed experimentation environment that meets data residency requirements. We recommend public sector organisations take these advantages and be curious, experiment, and identify the value, insights, and efficiency they want to unlock.

We think that establishing a bias for action, and leading by example through experimentation, can unlock productivity gains across the value chain of digital government service delivery, even if these gains may take time to manifest due to the transformative nature of this technology.

For example, generative AI can empower public sector organisations to unlock powerful insights from their data. It can accelerate intelligence gathering by rapidly summarising key findings across teams and preparing concise briefs for leadership. It can also enable real-time sentiment analytics on stakeholder interactions and contact centre communications. These actions can all enhance decision-making speed, improve operational efficiency, and deliver exceptional stakeholder experiences.

Generative AI can also help public sector organisations meet their unique needs by making machine learning capabilities accessible through natural language. For example, when users are empowered to summarise, search, analyse, and draw conclusions in seconds, from information across their organisation or the internet, they can make informed decisions faster than before. This could help them answer customer queries faster, pinpoint administrative changes more easily, and more accurately assess possible fraud or scam risks.

Nasdaq (Nasdaq: NDAQ) announced on May 15, 2024, the integration of a new AI-powered feature into its Market Surveillance technology solution. This enhancement is designed to improve the quality, speed, and efficiency of market abuse investigations conducted by Nasdaq's clients. The solution leverages generative AI to streamline the triage and examination process involved in investigating suspected market manipulation and insider dealing, empowering regulators and marketplace clients to more effectively monitor and detect potential market abuse. During proof-of-concept testing, surveillance analysts estimated a 33% reduction in investigation time, with improved overall outcomes. Nasdaq plans to use this generative AI-enabled functionality for its US equity market surveillance.

Grow through optimisation and fine-tuning

Customers tell us that their key learning from adopting generative AI is that multiple iterations of optimisation and fine-tuning are necessary to build a production generative AI application - with the required enterprise quality, and at the right cost. We’ve found that time spent on optimisation and fine-tuning is just as important as selecting the right FM or LLM. It’s analogous to how successful organisations onboard new employees and then commit to ongoing training and professional development in the spirit of hiring and developing the best workforce.

Generative AI technologies, which hold a vast amount of information, can create incredible value and a strong return on investment. Customers tell us that the conditions for success involve privately training in a secure environment, optimising patiently, and iterating quickly. With generative AI, the quality of your business data matters more than its quantity.

At AWS, we recommend organisations embrace a growth mindset that enables technology experimentation - underpinned by resilience and persistence - as you ideate, experiment, test, measure, learn, and repeat. We think humility is key to learning from early experimentation phases, and then recommend doubling down on optimisation and fine tuning until you get the right level of accuracy for your use case.

Set additional security and privacy guardrails

At AWS we say security is “job zero” - it’s more important than anyone’s number one priority. Our top priority is safeguarding the security and confidentiality of our customers’ data and workloads. We help customers by providing AI infrastructure and services that have security and privacy features built in and give customers control over their data. Customers can rest assured that their data is being handled securely across the AI lifecycle, including data preparation, training, and inferencing. We ensure no customer data is ever used to train our models, and all customer data is encrypted and stays within their Virtual Private Cloud (VPC) in their nominated AWS Region. Even then, we recommend customers still set additional security and privacy guardrails based on their use cases and responsible AI policies to ensure a consistent user experience and standardised safety and privacy controls.

For example, Guardrails for Amazon Bedrock offers industry-leading safety protection on top of the native capabilities of FMs. Customers can create multiple security and privacy guardrails tailored to their use cases and apply them across multiple FMs. This can help customers block as much as 85% more harmful content than the native capabilities of some FMs on Amazon Bedrock alone.
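Once a guardrail has been created in Amazon Bedrock, applying it at inference time is a matter of attaching its identifier to the request, which is how one guardrail can be reused across multiple FMs. A minimal sketch with the Converse API follows; the guardrail ID, version, and model ID are placeholders for values from your own account.

```python
# Sketch: attach a pre-created Amazon Bedrock guardrail to a Converse
# request. The guardrail ID/version and model ID are placeholders.

def guarded_request(model_id, prompt, guardrail_id, guardrail_version="1"):
    """Build a Converse request that applies a Bedrock guardrail."""
    return {
        "modelId": model_id,
        "messages": [{"role": "user", "content": [{"text": prompt}]}],
        "guardrailConfig": {
            "guardrailIdentifier": guardrail_id,
            "guardrailVersion": guardrail_version,
            "trace": "enabled",  # surface why content was blocked, if it was
        },
    }

# Usage (requires AWS credentials, model access, and a created guardrail):
#   import boto3
#   client = boto3.client("bedrock-runtime")
#   response = client.converse(
#       **guarded_request("anthropic.claude-3-haiku-20240307-v1:0",
#                         "Summarise this case file.", "gr-EXAMPLE123")
#   )
```

Because the guardrail is referenced by ID rather than embedded in the prompt, the same policy applies consistently no matter which FM the request targets.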

By embracing generative AI with a mindset of curiosity, responsible adoption, and strategic iteration, the public sector can unlock new avenues for innovation, productivity, and improved service delivery.

Learn more about generative AI at Amazon Web Services.