Your AI Update - May 2024

Author: Andrea Connors


Keep up with recent news in the world of Generative AI, including new features, AI in the workplace, social and ethical implications, regulations, and research revelations from the past month (15m read).

 


Tech Titans: New Features, Products, and More

Apple is Entering the Ring

Cartoon of a Mac and PC in a boxing ring

After months of virtual radio silence on AI, Apple has opened the floodgates of resources to develop and integrate AI into its products. At a recent event, Apple announced its latest iPad chips, which will support many new AI capabilities. Among them is a new AI eye-tracking feature that allows users with physical disabilities to control their devices using only their eyes. In announcing the use of AI to broaden accessibility, Tim Cook said, “We’re continuously pushing the boundaries of technology, and these new features reflect our long-standing commitment to delivering the best possible experience to all of our users.”

Beyond the chip updates, Apple is set to unveil more advanced AI features at its June conference, WWDC24. Under the code name Project Ajax, Apple is rumored to be building its own AI model while also holding partnership talks with AI giant OpenAI. The project is expected to bring upgrades to Safari and Siri, including automatic summarization of web pages and broader assistant capabilities. Ultimately, Apple’s late entrance into the AI horse race isn’t without benefits: Apple can preemptively account for the errors that followed Google’s and OpenAI’s earlier releases, one of the advantages of not being first to a trend. Either way, regardless of any errors or complications, these updates will bring AI into the pockets and lives of the roughly 120 million Americans who own an iPhone.

The Next Evolution of AI: GPT-4o

After being surpassed in capability by other AI models, OpenAI has officially unveiled a successor to GPT-4: GPT-4o. Unlike previous models from OpenAI or its competitors, GPT-4o, also known as ‘Omni’, boasts two notable upgrades: (1) humanlike conversational abilities and (2) real-world interaction. As its live demo showed, Omni can listen to spoken questions and respond with almost no latency, making it the closest an AI has come to natural human conversation; one post even showed the model laughing along with the user. OpenAI’s demo also included real-time conversational translation between Italian and English, as well as the model’s ability to gauge a user’s emotions simply by seeing them through the camera or hearing their breathing. To sweeten the deal, OpenAI is releasing the model without a paywall, a choice that contrasts sharply with its competitors’ moves. The release comes in the wake of a conversation with Sam Altman, the CEO of OpenAI, in which he suggested that “GPT-4 is the dumbest model any of you will ever have to use.” While Omni has not reached true superintelligence, let alone AI consciousness, its many humanlike capabilities and low latency certainly raise questions about the timeline of such advances. Even so, this advancement is dramatic, placing Omni leagues above its current competitors.

Google’s Playing Catch-Up

Since the initial release of ChatGPT in November 2022, Google has been positioned as a strong market challenger to OpenAI, and after stumbling with the release of Gemini, Google has now found its footing. With the announcement of a text-to-video AI model named Veo, Google has entered into direct competition with OpenAI’s Sora. By marketing Veo as a tool to tell stories and lower the barrier to becoming a director, Google is seeking to attract everyday users before OpenAI’s technology is released to the public. Veo is also said to remain consistent across video frames and prompts, creating seamless transitions between ideas.

Alongside Veo, Google has announced its plans to revamp its flagship search engine with AI. The Head of Search at Google, Liz Reid, has said, “What we see with generative AI is that Google can do more of the searching for you.” This updated search experience takes the form of search summaries, contextualized search results, and an AI video search feature that allows users to upload videos and have Google identify items or things within the video. At the heart of these changes is cultivating an easier and more focused process that saves the user time.

Finally, Google announced a new generative AI model named Gemini 1.5 Flash, which provides the speed and capabilities necessary to support real-time conversations. With a greater capacity to hold and store individual user conversations in memory, Gemini 1.5 Flash rivals OpenAI’s new Omni model: it “excels at summarization, chat applications, image and video captioning, data extraction from long documents and tables, and more,” according to the CEO of Google DeepMind. These advancements are finally making a dent in OpenAI’s dominance, yet bias remains an issue within Google’s models; if that thorn is not removed, it will continue to call the usability of Google’s models into question.

 Read more about…


AI at Work: Novel Uses, Recommendations and Impact on Labor

 Minimum Wage Jobs in the Age of AI

When it comes to implementing artificial intelligence in the workplace, the question at the top of everyone’s mind is: Will AI take my job? AI is not human and cannot replicate original human thought processes; therefore, some level of human reasoning will always be needed in the workplace. However, as minimum wages continue to rise across the United States, evidence suggests that some employers are looking to replace low-skilled workers with automation to reduce operating costs.

Teddy bears sitting on a restaurant counter

Some industries have already begun making this transition. For example, California’s minimum wage for fast-food workers recently increased to $20 per hour, which, on average, amounts to an increase of $180,000 in annual labor costs per franchise. This increase in costs, combined with a current labor shortage, has left restaurant owners looking for new ways to decrease the number of employees needed in each of their stores. While the transition so far has been characterized by self-service kiosks that let customers place orders without interacting with a cashier, many owners are now turning their attention to artificial intelligence and its potential to transform the fast-food industry.
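The $180,000 figure is easy to sanity-check with back-of-envelope arithmetic. The sketch below assumes a pre-increase wage of about $16 per hour and a franchise staffing roughly 25 employees at 35 hours per week; those staffing numbers are illustrative assumptions, not figures from the reporting:

```python
# Back-of-envelope check of the ~$180,000 annual labor cost increase.
PRIOR_WAGE = 16.00       # $/hour, assumed pre-increase fast-food wage
NEW_WAGE = 20.00         # $/hour, California's new fast-food minimum
EMPLOYEES = 25           # assumed headcount per franchise
HOURS_PER_WEEK = 35      # assumed average hours per employee

annual_hours = EMPLOYEES * HOURS_PER_WEEK * 52
added_cost = annual_hours * (NEW_WAGE - PRIOR_WAGE)
print(f"Added annual labor cost: ${added_cost:,.0f}")  # → $182,000
```

Under these assumptions the increase lands right around the reported figure, which suggests the $180,000 estimate reflects a typical full franchise crew.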

AI-powered drive-through lanes are one way restaurant owners have leveraged artificial intelligence to make their franchises more efficient. Several chains, including McDonald’s, Carl’s Jr., and Hardee’s, have already started testing AI speech recognition systems in their restaurants to take customer orders. These systems can cut a total of “90 seconds off what typically takes 5½ minutes” for a drive-through purchase, trimming 10 to 15 hours of employee wages per day. While many fast-food owners maintain that they don’t want to “get rid of” employees through automation, they simultaneously acknowledge that these AI systems would reduce the number of workers needed at their franchises. As the minimum wage increases, it is clear that while AI may not entirely replace human workers, it will continue to reshape the employment landscape.
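The staffing arithmetic behind that claim can be sketched directly: 90 seconds saved per order, multiplied by an assumed daily drive-through volume (the order counts below are illustrative assumptions, not reported figures), yields the 10-to-15-hour range:

```python
# How 90 seconds saved per order scales into daily labor-hours saved.
SECONDS_SAVED_PER_ORDER = 90  # per the quoted time savings

for orders_per_day in (400, 600):  # assumed drive-through volume range
    hours_saved = orders_per_day * SECONDS_SAVED_PER_ORDER / 3600
    print(f"{orders_per_day} orders/day -> {hours_saved:.0f} labor-hours saved")
```

A busy location handling roughly 400 to 600 drive-through orders a day would thus recover 10 to 15 labor-hours daily, matching the range cited above.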

 Unlocking Productivity: How AI is Revolutionizing Corporate Efficiency

In the modern corporate world, the ability to rapidly access, grasp, and respond to information has become a fundamental necessity. As businesses become increasingly reliant on data-driven insights, CEOs have been turning to artificial intelligence to help streamline these processes. In one survey, 87% of companies said they expect AI to increase their organizational productivity in the next three years, and several AI tools have already begun helping enterprises meet this demand.

Recently, Atlassian launched a new enterprise AI tool called Rovo, which aims to “make teams more effective” by helping businesses find and compile information dispersed across internal and third-party sources. While many companies have access to huge amounts of data, Rovo helps business leaders find the data that “matters” much faster than any human could; whether that data is stored in Google Drive, Microsoft SharePoint, GitHub, or another file storage tool, Rovo can surface relevant results promptly. Beyond acting as a search tool, however, Rovo also helps “bring generative AI to enterprise decision-making” by incorporating AI-driven features that help users draw actionable insights from the data they are given. Rovo’s generative AI chat feature provides users with knowledge cards containing in-depth analyses of the aggregated data, deepening the collaboration between humans and AI.

The integration of artificial intelligence in the corporate landscape is a pivotal shift towards enhanced efficiency and productivity. Atlassian’s Rovo is only one example of how AI can revolutionize the way businesses handle vast amounts of data, providing rapid access to crucial information and generating actionable insights. As AI continues to evolve, it will continue to play a significant role in streamlining operations and fostering collaborative decision-making, which will help companies thrive in an increasingly data-driven world.

 Read more about…


AI in the World: Shaping Lifestyles and Society

 Generative AI and Gene Editing

Generative AI’s impact continues to expand along the frontiers of science and technology, as researchers have developed an AI system that opens grand doors for the world of gene editing. “Much as ChatGPT generates poetry, a new A.I. system devises blueprints for microscopic mechanisms that can edit your DNA,” the New York Times reported last month. Researchers have trained artificial intelligence on complex proteins and vast amounts of advanced biological data to produce gene editors that, in principle, can outperform those found in nature. Startups like Profluent, the creator of OpenCRISPR, hope that this AI will augment existing CRISPR technology, allowing scientists to fight diseases caused by hereditary genetic conditions.

Painting of DNA strand

Researchers have already put this technology into practice: the first and most promising molecule, named OpenCRISPR-1, has proven as efficient as, and more accurate than, the leading bacterial CRISPR–Cas9 enzyme. Crucially, this project is part of a larger goal of accelerating the production of life-saving, AI-powered gene editors. Leading researchers and startups like Profluent have decided to open-source the underlying software that drives their AI systems, enabling other groups and labs to build on their work. Open-sourcing AI technology remains a point of contention in the broader AI world, with major players like OpenAI declining to open-source most of their groundbreaking models. While promising, the marketability and applicability of this technology to doctors and providers remains to be seen.

 AI on Main St.

Novel and disruptive technologies like AI often threaten small businesses by forcing them to divert resources toward understanding how to wield new tools; yet when properly leveraged, artificial intelligence has immense potential to uplift businesses on Main Street. Feeling the pressures of inflation, business owners are repeatedly choosing AI tools to “work smarter, not harder.” A recent poll from the Small Business and Entrepreneurship Council (SBEC) reports that 48% of small businesses have started using AI tools in the past year, and “29% have been using them for one to two years.” Popular tools like ChatGPT have allowed businesses to cut the time spent on menial clerical tasks like copywriting, monthly reports, and customer service complaints and devote that time to more valuable pursuits. OpenAI’s DALL-E 3 and other image generators have made visual marketing easier, and countless productivity tools like Otter.ai have cut down on pain points in project management.

The biggest business-side challenges driving the uptake of AI in small businesses are a tight labor force, inflationary pressure, and competition from bigger players. On the labor front, finding and retaining a workforce has proven an enduring challenge post-pandemic, and the potential for automation has driven AI products’ popularity among small businesses. In terms of inflation, the same SBEC report states that 36% of AI-using businesses have used time and cost savings to keep “prices stable for customers in this inflationary environment.”

 Read more about…


Taming AI: Ethics, Policies and Regulations

 Training Wheels: AI for Children

Robot riding a bike

Existing generative AI poses many risks for children, ranging from data privacy concerns and misinformation to AI-generated child sexual abuse material and AI-driven online grooming. While AI models must be safe for everyone to use, some leading companies in the AI sphere have committed to making their products especially safe for children. Thorn and All Tech Is Human, two organizations working toward a safer internet for children and more responsible technology, collaborated with Amazon, Anthropic, Civitai, Google, Meta, Metaphysic, Microsoft, Mistral AI, OpenAI, and Stability AI to outline responsible practices, called the Safety by Design Principles, that AI companies can use to protect children. The commitments span every stage of a product’s lifespan: development, deployment, and maintenance. Development recommendations include responsibly sourcing training datasets and safeguarding them against child sexual abuse material. Deployment safeguards ensure that the companies will responsibly host their own models as well as third-party models. Maintenance commitments call on these companies to continually invest in research and technologies that eliminate child abuse material, and to take responsibility when their platforms are found to contain such material.

Anthropic, one of the companies committed to the Safety by Design Principles, has also decided to allow teenagers and preteens to use third-party apps powered by its AI models. The company will require more extensive technological support for products used by a younger population, such as “age verification systems to ensure only intended users can access the product, content moderation and filtering to block inappropriate or harmful content, monitoring and reporting mechanisms to identify and address potential issues, [and] educational resources and guidance for minors on safe and responsible use of the product.” As many young people turn to generative AI for educational purposes as well as social and emotional help, safeguards like these are good examples of how AI companies can help protect the children who use their products.

 OpenAI’s Model Spec

In other news of self-regulation in the world of AI, OpenAI recently published the first draft of their “Model Spec” document, which according to OpenAI’s website, “specifies how we want our models to behave in the OpenAI API and ChatGPT.” The document allows for user insight into how model behavior is shaped. This level of transparency is one way in which OpenAI is attempting to be accountable for its technology. As the draft itself indicates, this document is meant to be taken into account alongside the company’s usage policies. In this way, OpenAI shows that it will work with its users to create a responsible technology.

The document outlines objectives, rules, and defaults. The three enumerated objectives of any OpenAI generative assistant are to (1) assist the developer and end user, (2) benefit humanity, and (3) reflect well on OpenAI (respecting social norms and applicable law). The rules and defaults are meant to support these objectives, and OpenAI makes clear that they may change as its models and their usages evolve. How the Model Spec is ultimately implemented remains to be seen and will be important to follow.

 Fair Competition Concerns in the AI Model Market

High-profile partnerships centered on AI models have garnered a lot of attention recently. With that attention comes the suspicion that large companies may be attempting to limit market competition through their sizable investments. Earlier this year, EU antitrust regulators looked into Microsoft’s $13 billion partnership with OpenAI but determined that it did not warrant an official investigation because the partnership did not change who held control on a lasting basis.

More recently in April, the United Kingdom’s Competition and Markets Authority (CMA) shared concerns of overpowering firms being able to have a controlling influence on the market. In this publication, the CMA warned of the risks of firms restricting access to critical inputs, limiting competition by exploiting powerful incumbent positions, and heightening “existing positions of market power through the value chain” by engaging in large-scale partnerships. Around this time, the CMA also voiced concerns about Microsoft and Amazon specifically, as they had both engaged in recent partnership deals. Microsoft was under observation due to its dealings with Mistral AI and Inflection AI, while Amazon was under the microscope for its investment in Anthropic. In mid-May, the CMA announced that the Microsoft and Mistral AI partnership did not pose a threat under current British merger restrictions. There have been no such updates on the Amazon case.

The United States’ Federal Trade Commission (FTC) has also considered the impact of AI companies’ investments and partnerships on the market’s competitive landscape. All of these regulatory bodies, the EU, the CMA, and the FTC, are looking to ensure true competition in AI markets in order to promote innovation and free trade. As Joel Bamford, the CMA’s Executive Director of Mergers, describes it, “Open, fair, and effective competition in AI model markets is critical to making sure the full benefits of this transformation are realized by people and businesses in the UK, as well as our wider economy where technology has a huge role to play in growth and productivity.”

 Read more about…


Research Revelations

 Flaws in AI

As AI becomes a key player in the world economy, companies and individuals alike are testing the limits of AI models. Everyone from OpenAI to individual developers pushes this developing technology toward a new horizon each and every day. However, no new development is perfect. Just as it took years to work out most flaws in smartphones and computers, it will take companies and developers time to do the same with AI. One of AI’s most prominent flaws is misinformation, whether false facts about individuals or about general topics. This flaw has become enough of an issue that privacy complaints have been filed against OpenAI in Europe through noyb, a privacy rights nonprofit. Most of these complaints rest on the General Data Protection Regulation (GDPR), which governs how the personal data of regional users can be processed. The penalties for noncompliance can reach up to 4% of global annual turnover, and, more importantly, these failures could give regulators GDPR grounds for enforcement that could reshape how generative AI tools operate in the EU. For OpenAI, a company that focuses on and is built around data, this could prove costly.

Robot reading directions in a manual

Moreover, some users have raised concerns about AI’s perceived “limitless” potential. AI’s capacity for automation, planning, and idea generation is unmatched by any tool we have had access to in the past. However, this tool is missing a foundational element: humanity. AI does not weigh empathy; it operates without an inherent expiration date or a value placed on a life. Humans, by contrast, make decisions based on morality, and everything we do is driven by our understanding of life’s finite nature, even making decisions that defy logic, like sacrificing a life for a greater cause. These human traits define the way we do ethical business, how we help each other, and the structure of society. As we use AI more and more in high-stakes settings, such as key government decisions (read more about it in our February issue), we must recognize these shortcomings. For instance, doctors found that OpenAI’s ChatGPT misdiagnosed roughly 80% of the cases presented to it in one study, a shortcoming with the potential to be dangerous.

 AI in the Future

With the release of OpenAI’s GPT-4o and its impressive new capabilities, AI seems poised to become ever more integrated with our lives. Beyond becoming a pivotal aid in our personal lives, more and more companies are relying on AI’s help. Given this prevalence in nearly every aspect of our lives, there has been a push for measurement, control, and education around AI and its applications. For instance, Rebecca Parsons, CTO emerita at ThoughtWorks Inc., argues that companies must be mindful of AI’s limitations and of the importance of educating people on AI and its capabilities. Of course, the foundation of measurement, control, and education is still being laid. As the New York Times reports, we still have difficulty categorizing and measuring the capabilities of current AI models. “Despite the appearance of science, most developers judge models based on vibes or instinct,” said Nathan Benaich, an A.I. investor with Air Street Capital. And while existing tests are useful, they are limited and often suffer from issues like data contamination and inconsistent administration. The future of AI is bright and fruitful, but we must treat measurement, control, and public education as priorities if we want to maximize this technology’s capabilities.

***all imagery created using Image Creator from Designer***


The New AI Project | University of Notre Dame

Editor: Graham Wolfe

Contributors: Alejandro Velasco, Clare Hill, Aiden Gilroy, Mary Claire Anderson

Advisor: John Behrens

 

Recommended Videos

AI Is Dangerous, but Not for the Reasons You Think

How will AI change the world?

 

What is AI?

The National Science Foundation explains: what is AI?

Microsoft AI CEO Mustafa Suleyman predicts the future of AI.

 

Archive

Your AI Update - April

Your AI Update - March

Your AI Update - February

Your AI Update - January

Your AI Update - December

March Madness AI Check-In