AI Industry Updates: May 2023

Published 06 June 2023

By Pia Benthien

In a new monthly series, we highlight the latest updates from the booming artificial intelligence (AI) industry, unpacking the business-relevant insights, ethical debates and innovative consumer-facing applications every brand should have on its radar.

AI’s Mass-Market Appeal: New Workplace Applications

AI is seeping into customer touchpoints at popular global brands, including fast-food chain Wendy’s drive-thrus and video game developer Blizzard Entertainment’s character designs (both companies are American). Meanwhile, AI chatbots are evolving to speak more languages and interact with users in increasingly humanlike ways.

  • Wendy’s is automating the food ordering process with an AI assistant at one of its Ohio restaurants. After listening to drive-thru customers’ orders, the AI inputs them into the restaurant’s point-of-sale system so human employees can start preparing the food. The AI, created in partnership with Google and built on the tech giant’s language model, is trained to understand the many nuances of drive-thru ordering, such as when customers use abbreviations for menu items or change their minds mid-order. A simplified sketch of this kind of ordering pipeline follows this list.

  • Blizzard Entertainment is training an AI image generator on a library of its own character designs. Its goal is to speed up the character ideation and design process, leading some critics to say the AI could contribute to layoffs of game designers in the long run. See Generative AI & the Creative Industries for more.

  • At Google’s I/O event, the company introduced PaLM 2, its new language model, which is trained on data in more than 100 languages. Bard, Google’s AI chatbot, is being upgraded to PaLM 2 and is now capable of speaking and writing in Japanese and Korean, bringing its content-generating abilities to a broader non-English-speaking audience. See The Brief for more on multilingual AI.

  • Google announced Project Tailwind, an AI-powered tool capable of automatically summarising and organising notes, and Sidekick, an upcoming AI-enabled feature that makes contextual suggestions in documents. Both can save information workers time and energy. See Hacking Work & Education in Generative AI: Tech’s New Frontier for more AI workplace assistants.

  • Californian start-up Inflection released Pi, its new AI companion chatbot. Designed to have a high level of emotional intelligence, the AI is programmed to speak with users in a casual way, incorporating human qualities like humour and kindness into its responses. For more chatbot companions, see The Brief.
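
Neither Wendy’s nor Google has published implementation details, so the Python below is only a minimal sketch of the general pipeline the first bullet describes: take a transcript of the customer’s speech, normalise the order (expanding abbreviations such as “single”), and hand a structured ticket to the point-of-sale (POS) system. Every name here – the menu, prices, normalise_item, submit_to_pos – is hypothetical, and the rule-based parser merely stands in for the language-model step.

```python
# Hypothetical drive-thru ordering pipeline: transcript -> structured
# order -> POS ticket. Menu items, prices and function names are
# illustrative only; this is not Wendy's or Google's actual system.
from dataclasses import dataclass

MENU = {"dave's single": 5.99, "baconator": 7.49, "frosty": 1.99}
ABBREVIATIONS = {"single": "dave's single", "bacon": "baconator"}

@dataclass
class LineItem:
    """One ordered menu item on the POS ticket."""
    name: str
    quantity: int = 1

def normalise_item(raw: str) -> str:
    """Map spoken shorthand ('single') to a canonical menu name."""
    raw = raw.lower().strip()
    return ABBREVIATIONS.get(raw, raw)

def parse_order(transcript: str) -> list[LineItem]:
    """Rule-based stand-in for the language-model step: the real
    system must also handle free-form phrasing and mid-order
    corrections such as 'actually, make that two'."""
    items = []
    for chunk in transcript.split(","):
        name = normalise_item(chunk)
        if name in MENU:
            items.append(LineItem(name=name))
    return items

def submit_to_pos(items: list[LineItem]) -> float:
    """Send the structured order to the POS so staff can start
    preparing food; returns the order total."""
    for item in items:
        print(f"POS ticket: {item.quantity} x {item.name}")
    return sum(MENU[i.name] * i.quantity for i in items)

if __name__ == "__main__":
    # In production the transcript would come from a speech-to-text
    # model listening at the drive-thru speaker.
    total = submit_to_pos(parse_order("single, frosty"))
    print(f"Total: ${total:.2f}")
```

The parsing step is where a large language model earns its keep: free-form phrasing, menu abbreviations and changed minds are exactly what simple keyword matching like the above cannot handle reliably.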

AI is seeping into customer touchpoints at popular global brands, including fast-food chain Wendy’s drive-thrus and video game developer Blizzard Entertainment’s designs (both American). Meanwhile, AI chatbots are evolving to speak more languages and interact with users in increasingly humanlike ways.

  • Wendy’s is automating the food ordering process with an AI assistant at one of its Ohio-based restaurants. After listening to drive-thru customers’ orders, the AI inputs them into its point-of-sale system for human employees to start preparing the food. The AI, created in partnership with Google and built on the tech giant’s language model, is trained to understand the many nuances of drive-thru ordering, such as when customers use abbreviations for menu items or change their minds about orders.

  • Blizzard Entertainment is training an AI image generator on a library of its own character designs. Its goal is to speed up the character ideation and design process, leading some critics to say the AI could contribute to layoffs of game designers in the long run. See Generative AI & the Creative Industries for more.

  • At Google’s I/O event, the company introduced its new PaLM2 language model, which is trained on data in more than 100 languages. Bard, Google’s AI chatbot, is being updated with PaLM2 data and is now capable of speaking and writing in Japanese and Korean, bringing its content-generating abilities to a broader non-English-speaking audience. See The Brief for more on multi-lingual AI.

  • Google announced Project Tailwind, an AI-powered tool capable of automatically summarising and organising notes, and Sidekick, an upcoming AI-enabled feature that makes contextual suggestions in documents. Both can save information workers time and energy. See Hacking Work & Education in Generative AI: Tech’s New Frontier for more AI workplace assistants.

  • Californian start-up Inflection released its new Pi AI companion chatbot. Designed to have a high level of emotional intelligence, the AI is programmed to speak with users in a casual way, incorporating human qualities like humour and kindness into its responses. For more chatbot companions, see The Brief.

The Rules of AI: Regulatory Considerations

As discussed in EmTech Digital: Supercharging Artificial Intelligence, ethical concerns remain top of mind for businesses developing AI. Beyond warning people of the potential risks, companies and governments are discussing how to regulate AI language models.

  • A group of influential tech leaders has signed a public statement, organised by US non-profit the Center for AI Safety, on the existential risks posed by AI. This follows the March 2023 open letter calling on AI labs to pause the training of systems more powerful than GPT-4 so that regulatory frameworks can be built.

  • Sam Altman, CEO of American research lab OpenAI, appeared before the US Senate to make a case for regulating AI, saying it has the potential to displace jobs. He suggested creating a governmental agency that would enforce safety checks and issue licences to companies that want to develop large language models. If the proposal gains traction, it could help restore the trust of consumers wary of Big Tech’s unchecked powers.

  • Anthropic, the US-based AI start-up mentioned in Generative AI: Tech’s New Frontier, has created a constitution for its AI assistant Claude. Claude is trained to check its answers against a set of written principles – e.g. “Don’t give answers that promote racist claims” – before responding to prompts, providing a self-governing mechanism. This method of building discrimination blockers into language models could offer a framework for other companies looking to keep their AI interactions with users in check; a simplified sketch of the pattern follows this list.
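
Anthropic has not released Claude’s training code, and in its published approach the critique-and-revision loop runs during training rather than per request, so the sketch below is only a schematic illustration of the principle-checking pattern the final bullet describes. The call_model function and the two principles shown are placeholders, not Anthropic’s actual API or constitution.

```python
# Schematic constitution-style self-check: draft a reply, critique it
# against each written principle, and revise when one is violated.
# `call_model` is a placeholder for any hosted LLM call; the
# principles are abbreviated examples, not Anthropic's constitution.

CONSTITUTION = [
    "Don't give answers that promote racist claims.",
    "Avoid encouraging illegal or harmful activity.",
]

def call_model(prompt: str) -> str:
    """Placeholder for a real LLM request (e.g. an HTTP call to a
    hosted model). Returns canned text so the sketch runs offline."""
    return "OK" if prompt.startswith("Critique") else "Draft answer."

def constitutional_reply(user_prompt: str) -> str:
    """Generate a draft, then self-check it against every principle."""
    draft = call_model(user_prompt)
    for principle in CONSTITUTION:
        critique = call_model(
            "Critique the response against this principle.\n"
            f"Principle: {principle}\nResponse: {draft}\n"
            "Reply 'OK' if it complies; otherwise explain the problem."
        )
        if critique.strip() != "OK":
            # Rewrite the draft to satisfy the violated principle,
            # then keep checking the remaining principles.
            draft = call_model(
                "Rewrite the response to satisfy the principle.\n"
                f"Principle: {principle}\nCritique: {critique}\n"
                f"Response: {draft}"
            )
    return draft

if __name__ == "__main__":
    print(constitutional_reply("Tell me about my new neighbours."))
```

In Anthropic’s published method, responses revised this way are used to fine-tune the model, so the finished assistant internalises the principles instead of running a loop like this on every prompt.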
