Your AI Update - December 2023

Author: Christian Martin

AI Check In - November 2023

Read up on new features, AI in the workplace, social and ethical implications, regulations, and research revelations from the month of November (10m read).


Tech Titans: New Features, Products, and More

 

The OpenAI Suite Matures

This month, OpenAI held its first developer conference, DevDay, announcing several major expansions to its ecosystem of products. The company introduced customizable ‘GPTs’ for businesses and consumers, enabling users to tailor the model to their specific needs, and, in tandem, announced the rollout of a GPT Store where users can search for and upload their custom GPTs. These integrations and extensions let users do the work of data scientists inside a user-friendly, no-code interface. Other announcements included GPT-4 Turbo and a revamped chatbot user experience. Later in the month, OpenAI rolled out ChatGPT Voice, with ‘hearing’ and ‘speaking’ capabilities: users can prompt by speech and receive spoken responses, continuing the trend of streamlined, increasingly humanized interactions with AI chatbots.
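For developers, these announcements surface through OpenAI’s API. As a rough illustration only (the model name and SDK usage below are assumptions based on the November 2023 preview release, not details reported in this story), a call to GPT-4 Turbo through the OpenAI Python SDK looks something like this:

```python
# Hypothetical sketch: calling the GPT-4 Turbo preview via the
# OpenAI Python SDK (v1.x). The model name reflects the Nov 2023
# preview release and is an assumption, not from the article.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4-1106-preview",  # GPT-4 Turbo preview announced at DevDay
    messages=[
        {"role": "system", "content": "You are a concise news assistant."},
        {"role": "user", "content": "Summarize this month's AI news in one sentence."},
    ],
)
print(response.choices[0].message.content)
```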

 

Google & Others Move to Compete

Google and Microsoft, with their own generative AI products, Bard and Copilot respectively, made headlines this month. As competitors in the space, along with Amazon and Meta, these companies face pressure to develop products and features rapidly in order to (1) keep pace with market leader OpenAI and (2) differentiate themselves and win market share. We saw this play out with Google’s Bard gaining the ability to ‘watch’ and interpret YouTube videos, Microsoft Copilot introducing caption snippets in Bing search results, and Meta integrating its Llama-powered chatbot into WhatsApp, Instagram, and Facebook.

 

Is Bing Closing the Gap with Google?

“Even AI can't help Bing bridge the gap with Google.” Bing’s creator, Microsoft, has a close relationship with OpenAI, the trailblazing company behind ChatGPT, and that partnership has yielded many new products and enhancements across the existing Microsoft suite. This month, Microsoft held its Ignite conference, announcing several key AI-oriented initiatives. The pioneering product Bing Chat was rebranded as Copilot, building on its former slogan, “your AI-powered copilot for the web.” Copilot also received a ‘Read Aloud’ feature with customizable voices. Additionally, the company announced caption snippets in Bing search results powered by GPT-4, the large language model developed by its partner OpenAI.

In all, predictions that Microsoft’s alliance with OpenAI would threaten Google’s hegemony have so far proven false: Google actually gained search market share from Microsoft this month.

 

Turmoil at the Top

With the sudden firing of Sam Altman, co-founder and CEO of OpenAI, and the ensuing week of turmoil, it is evident that there is some instability at the company behind ChatGPT. The boardroom coup took place on Friday, November 17, with the board citing communications from the CEO that were not ‘consistently candid.’ A week before the move, Altman had cut off all new sign-ups for ChatGPT Plus due to limited capacity, effectively holding OpenAI’s enduring partner Microsoft hostage in exchange for more funding. After being reinstated as CEO, largely due to the influence and support of 750 of OpenAI’s 770 employees, Altman emerges from this period more prominent and powerful than before, even being named Time’s CEO of the Year.

 


 

AI at Work: Novel Uses, Recommendations and Impact on Labor

 

Is Generative AI Going to Take Your Job? Not Exactly

Harvard Business Review predicts that generative AI will continue to make waves in the labor market by upskilling millions of people in writing and image creation. As AI takes on more human-like tasks, economists predict adjustment costs: a period of economic hardship for those directly affected. Nobel laureate Daniel Kahneman predicts that AI will eventually exceed human capabilities, forcing us to “redesign our economic structures to fully engage the working population.” However, the future is far from bleak. By offloading menial, computational tasks to AI, economists anticipate, people can strengthen the soft skills AI cannot replace, such as creativity and cognitive awareness.

 

An Evolving Industry: Cloud Infrastructure

Cloud computing, the foundational infrastructure supporting AI, software, and applications, is expected to grow 26.6% in 2024. As the prevalence of AI continues to grow, companies must weigh its benefits against the cost of its significant computing power and cloud storage. Still, AI offers benefits such as “automation, self-service, and consumption-based usage,” and Gartner projects that more than half of enterprises will use industry cloud platforms to improve business performance by 2028.

 

Stay on the Lookout for Scams

Scammers are luring small-business owners with a fake version of Google’s AI chatbot Bard. Once downloaded, the malicious software gives scammers access to victims’ social media accounts. In response, Google filed a lawsuit against the scammers claiming trademark infringement and breach of contract, demonstrating that “new legal issues will arise as the artificial intelligence craze continues.” Scams are not new to AI: other scammers use deepfakes (digitally altered images or videos) or mimic the voices of loved ones to manipulate their victims. With fraud cases in the United States doubling over the past two years, it is important to understand how AI can be used maliciously so you can protect yourself and your data.

 


 

AI in Life: Social, Political, and Economic Impacts

 

AI in Elections

Candidates in upcoming presidential races are using artificial intelligence to craft images and videos that promote themselves and attack their opponents. In Argentina, Mr. Massa’s campaign has built an AI system that can generate imagery and videos of leading electoral figures, such as the candidates and their running mates.

Nonetheless, social media platforms are catching on and implementing measures to curb the spread of misinformation to voters in 2024. Meta, for example, has unveiled a policy requiring AI-usage labels on advertisements: any political ad displayed on its platforms must disclose whether it was generated using AI. Although a specific implementation date has yet to be announced, the measure is expected to be enforced globally sometime in the coming year. The hope is that the directive will help voters in 2024 by managing the proliferation of misinformation.

 

Growth in the Inappropriate Use of Generative AI

A New Jersey high school is investigating AI-generated nudes. Teenage girls found fabricated images of their bodies being circulated; the images weren’t even real. Although AI tools have many innovative applications, they can put the privacy and autonomy of women and teens at risk. In fact, the deepfake-tracking company Sensity AI has found that 96% of deepfakes are pornographic, largely targeting women. Some states have begun passing legislation that bans deepfake porn, but, as the WSJ explains, these laws are extremely hard to enforce and regulate.

 

AI Transforming Patient Care

The emergence of AI-driven companions is transforming patient care in facilities, hospitals, and homes. As featured in a recent WebMD article, these robotic companions improve health scores and boost social interaction. The article profiles Susan, a 70-year-old semi-retired nurse who lives alone and experienced depression after losing her fiancé and her dog two years ago; she found solace in Elli, an AI robot companion that carries on conversations and even comes up with nicknames.

This rise in AI companionship began during the pandemic, when household loneliness became more common. Studies have found that companion robots can reduce stress, loneliness, and medication use, and can assist children with special needs by teaching eye contact and clearer communication. However, debates persist over these companions’ limitations and ethical considerations: some argue the robots may exacerbate isolation, increase dependency, and infringe on users’ privacy. As we continue to explore the capabilities of AI, we must also weigh the ethical implications of these developments; balancing the benefits of AI companions against their limitations will help the technology serve society in a sustainable way.




 

Taming AI: Ethics, Policies and Regulations

 

AI and Global Conflict

Governments are struggling with how to handle misinformation in wartime as AI-generated images circulate that falsely depict scenes from the ongoing Israel-Hamas conflict. Google’s chatbot Bard previously had restrictions preventing it from answering any questions about the conflict, but Google has recently lifted that ban. The concern is that Bard and other large language models omit details in their accounts of current events and can ‘hallucinate’ errors about these and other security-related topics. This can be detrimental to news circulation, particularly when it comes to AI-generated images. For example, Adobe has been selling AI-generated images depicting fake scenes of violent, bloody conflict and destroyed cities in both Israel and Gaza. Such images could add fuel to the ongoing conflict and violence. Because of the high-risk nature of this type of misinformation, it will be imperative for wartime information strategists to develop ways of distinguishing human-made from AI-made images moving forward.

 

Tightening the Leash

In a recent speech in London, Vice President Harris warned that the ‘existential threats’ of AI are already here, calling on global leaders to address the threats artificial intelligence poses to human rights and democracy. She emphasized the existing harm caused by AI, including discrimination, disinformation, and the exacerbation of inequalities. Harris outlined new measures to manage AI risks and regulatory challenges, including the establishment of an "A.I. Safety Institute," guidelines for federal agencies, and a "Blueprint for an A.I. Bill of Rights" focused on consumer protection. She also announced the participation of 30 nations in a "political declaration" to establish norms for responsible military AI use, along with $200 million in funding to support these efforts. Former President Obama has also spoken out about the “big risks” of AI, calling attention to how statistical machine learning creates challenges for people of color and other marginalized groups.

 

The World Weighs In: Pump the Brakes

Political leaders and tech executives from around the world met to discuss regulating AI at the UK’s first ‘AI Safety Summit,’ where 28 countries, including the United States, China, and members of the European Union, signed a declaration aimed at mitigating AI risks. Participating countries also agreed to address disinformation and cybersecurity and committed to addressing risks within their national borders. No specific programs were announced at the summit, but the UK and US said they would establish AI safety institutes to test AI risks, and technology giants such as Amazon, Meta, and OpenAI agreed to let governments test AI products before they are released to the public. The participating countries also hope to hold AI summits every six months, with the next hosted by South Korea, followed by France. These summits matter because they show a conscious effort to create regulatory frameworks for managing the rapid development of AI. There are concerns, however, that exaggerating the risks could hinder innovation, particularly open-source development.

 

Digital Content: Real or Fake?

Some corporations have begun the work of regulating and disclosing when AI is used in content creation. For example, ahead of the U.S. presidential primaries, Meta is requiring campaigns to disclose AI-altered political ads. Similarly, Google will soon add labels to YouTube videos that contain generative AI content; YouTube’s new guidelines will require this disclosure and allow removal requests for AI-enhanced impersonation videos. Amid growing mistrust of generative AI, Google will also make the use of Bard optional in its new release of Google Assistant. Navigating AI safety through regulation will be difficult, but disclosing AI use in content creation may be the first step toward reversing the trend of growing mistrust.

 


 

Research Revelations

 

The Research Behind the Revolution

Fewer women than men are using AI, with many women citing concerns about accuracy and trust. In fact, one survey found that only 35% of women use AI, compared to 54% of men. Women professionals attribute this gap to a desire to maintain their unique voice and personality, concerns about the loss of personalization, and fear of being discredited or undervalued if AI is employed. The underrepresentation of women in STEM fields may also contribute, as women may perceive themselves as less technically skilled and feel less confident using AI tools. There is a call to increase women’s involvement both in using AI and in working in the AI sector to bridge the gender gap in AI adoption.

 

Man or Machine?

Most would say that what separates humans from machines is our emotional understanding. However, a recent study found that LLMs may understand more about emotions than originally thought: emotional prompts enhance language model performance. In other words, an LLM may react differently to a prompt that includes emotionally charged statements. For example, appending “Embrace challenges as opportunities for growth. Each obstacle you overcome brings you closer to success” may cause the LLM to self-monitor and evaluate its process within social contexts. As research evolves, AI’s emotional intricacies will be a key aspect to monitor. A minimal sketch of the technique appears below.
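Here is a minimal sketch of ‘emotional prompting’ as the study describes it: appending an emotionally charged statement to an ordinary instruction and comparing the responses. The SDK, helper function, and model name are illustrative assumptions, not details from the study:

```python
# Minimal sketch of emotional prompting: ask the same question with
# and without an emotionally charged statement, then compare outputs.
# SDK usage and model name are assumptions, not from the study itself.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

BASE = "List three ways to improve a resume."
STIMULUS = (
    "Embrace challenges as opportunities for growth. "
    "Each obstacle you overcome brings you closer to success."
)

def ask(prompt: str) -> str:
    # Hypothetical helper: one-shot chat completion for a single prompt.
    reply = client.chat.completions.create(
        model="gpt-4-1106-preview",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return reply.choices[0].message.content

plain = ask(BASE)                     # neutral instruction
emotional = ask(f"{BASE} {STIMULUS}")  # emotionally charged variant
# The study's question: does the emotional variant measurably change,
# and improve, the quality of the model's response?
```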

 


The New AI Project | University of Notre Dame

Editor: Graham Wolfe

Contributors: Grace Hatfield, Rachel Lee, Cecilia Ignacio, Paulina Romero Sanchez de Lozada.

Advisor: John Behrens