Your AI Update - April 2024

Author: Christian Martin


[Image: Robot and dog watch the moon]

Keep up with recent news in the world of Generative AI, including new features, AI in the workplace, social and ethical implications, regulations, and research revelations from the past month (15m read).

 

 


Tech Titans: New Features, Products, and More

Logging Into the Future: Making AI Open to All

[Image: Smiling microchip]

After more than a year of splitting their applications into paid and free tiers, AI companies have recently been experimenting with cheaper, more accessible formats. OpenAI now allows its free, GPT-3.5-powered ChatGPT to be used without logging in. While this prevents users from saving their conversations for future review, they can still opt out of data sharing. In doing so, OpenAI is seemingly lowering the barriers for consumers to use AI. Google, on the other hand, is raising them. Still the world's premier search engine, Google is considering charging consumers for a premium search experience that would include AI capabilities. This, however, would not remove advertisements from the service, the company's largest revenue source, nor would it affect its current search capabilities.

The roles are reversed, however, in the developer realm. Google has prided itself on its "open models," giving individual developers, researchers, and commercial users access to the models without paying for API access. In a statement, Helen King, senior director of responsibility at Google DeepMind, said the choice of open technology was made to "develop more solutions for responsible approaches to AI in the open ecosystem." Conversely, OpenAI, a company built upon a framework of openness, has been sued by co-founder Elon Musk for not developing AI tools for "the benefit of humanity" and for breaking its promise to remain a non-profit "dedicated to creating safe, open-source [advanced AI] for public benefit." Its decision to remain closed-source, keeping its code behind locked doors, prevents outside auditing or research on its algorithms and forces third-party developers to pay for access. The lasting impact of these choices is still unknown, yet the dividing landscape raises two questions: (1) In what way will AI be molded to interact with humans? (2) Who is in control of that decision?

Meet Llama 3: Meta’s New Open-Source LLM

In a big move by the tech giant behind Facebook, Instagram, and WhatsApp, Meta released the next edition of its flagship LLM, Llama 3: "the most capable openly available LLM to date." The company is "determined to win the AI race," and leveraging its foundation of billions of users across the social media landscape might give it a leg up. Thus far, it has given many of the existing search and chat functions of Instagram and Facebook an AI-forward upgrade. The Meta AI assistant, the standalone chatbot released last September to rival ChatGPT, has also received an upgrade and is now accessible at a dedicated webpage: meta.ai. These moves, although major news for those looking to stay within the Meta ecosystem, are largely in step with the rest of the competitive landscape. What makes Llama 3 a maverick product is the decision to make it open source. In their official release statement, Meta attached the tagline "Build the future of AI with Meta Llama 3," and it is no doubt a major upgrade to the developer's toolbox: developers big and small don't have to pay for an API to incorporate the model into third-party projects, as they do for OpenAI's GPT-4. On the performance front, Meta released 8-billion- and 70-billion-parameter versions of the model, both of which surpass Llama 2 and competitors on several key benchmarks. It remains to be seen how the rest of the field will respond to Meta's open-source commitment, and how the model fares under the growing scrutiny of ethicists and regulators.
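For developers wondering what "openly available" looks like in practice, here is a minimal, hedged sketch (not Meta's official documentation) of running the 8-billion-parameter instruct model locally with the Hugging Face transformers library. It assumes you have accepted Meta's license for the gated meta-llama/Meta-Llama-3-8B-Instruct repository and have enough GPU memory; the prompt is purely illustrative.

```python
# Hedged sketch: run Llama 3 locally, with no per-call API fee.
# Assumes transformers and accelerate are installed, access to the gated
# meta-llama/Meta-Llama-3-8B-Instruct repo has been granted, and a
# sufficiently large GPU (or lots of patience on CPU) is available.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="meta-llama/Meta-Llama-3-8B-Instruct",
    device_map="auto",  # place the model weights on available GPUs/CPU
)

prompt = "Explain what an open-weight language model is in one sentence."
print(generator(prompt, max_new_tokens=60)[0]["generated_text"])
```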

NVIDIA’s Dominance

Nearly every major company contributing to the rapid growth of AI in the past year has partnered with NVIDIA. The chipmaker has a roster of nearly 250 corporate partnerships, ranging from Google to Amazon to IBM. With this captive audience of programmers, tech startups, and starry-eyed investors, the company made headlines last month with its annual GPU Technology Conference (GTC). Notably, CEO Jensen Huang unveiled Blackwell, billed as the world's most powerful chip, capable of running large language models at up to 25x lower cost and energy use than its predecessor, Hopper. Google, Amazon, Microsoft, and the rest of that audience are already set to adopt the new chip upon its release, giving investors a new horizon to watch. Humanoid robots are officially next up for the leading chip manufacturer: NVIDIA announced Project GR00T, a foundation model for the "next wave" of AI and robotics. Robots powered by GR00T will be designed to "understand natural language and emulate movements by observing human actions," building on the work of LLMs and advanced humanoid mechanics.

However, in a bold move to challenge NVIDIA's dominance, tech giants including Google, Intel, and Samsung have formed the Unified Acceleration Foundation (UXL). The consortium aims to develop open-source AI software that is just as capable as today's biggest offerings but efficient enough to run on hardware beyond NVIDIA's. This would let developers avoid being locked into NVIDIA's chips, instead allowing their code to run on machines with any chip.



AI at Work: Novel Uses, Recommendations and Impact on Labor

From Classroom to Boardroom: The Value of Degrees in AI

[Image: Robot teaching an AI class]

Carnegie Mellon University launched the nation's first bachelor's degree in artificial intelligence in 2018, and several other universities, such as MIT, created their own certificate programs in the following years. Most recently, the University of Pennsylvania announced that it will offer a B.S.E. in Artificial Intelligence, and Arizona State University collaborated with OpenAI to launch its Master of Science in Artificial Intelligence in Business; both programs begin in the fall of 2024. The recent prevalence of these AI programs raises an important question: Are these degrees and certifications valuable to employers, or are universities just trying to capitalize on a buzzword?

Although over 90% of employers expect to use AI-related solutions in their organizations by 2028, many people believe that these skills become outdated quickly and therefore cannot be taught effectively within the walls of a classroom. Despite this skepticism, companies across several industries have responded positively to employees with formal education in artificial intelligence. David Leighton, chief executive at WITI, believes that an AI degree stands out to employers and "sets [a candidate] apart" from their peers during the job application process. This favorable impression of AI degrees also shows up in paychecks: across nearly all business functions, companies have shown they are willing to raise traditional pay levels for workers skilled in AI, with salaries rising by up to 43% in sales and marketing, 42% in finance, 37% in legal, regulatory, and compliance, and 35% in human resources. Additionally, consulting firms like Bain & Company have demonstrated their commitment to AI education by paying for employees to participate in programs like MIT's "Artificial Intelligence: Implications for Business Strategy" short course.

Artificial intelligence degrees and certifications are still a relatively new addition to the business landscape, so evaluating the full benefits of this formal education to employers will take time. Nonetheless, AI is at the forefront of many companies' minds. It’s safe to say that employees skilled in AI have an advantage over their peers, and a degree in artificial intelligence is one way for a candidate to prove that they are proficient with this technology.

AI and Artists: The Battle Over Music's Future

AI-powered tools have long helped musicians beat writer's block by generating new melodies, chord progressions, and lyrics; AI-generated music can be traced back to the 1950s. However, with the rapid progress of AI voice technologies, such as OpenAI's text-to-voice generation model, which needs only a 15-second sample to clone someone's voice, musicians are increasingly worried that AI could entirely replace music created by human artists. In November 2023, a song called "nostalgIA," composed to sound like a collaboration between Justin Bieber, Bad Bunny, and Daddy Yankee, went viral. The track, created with artificial intelligence by an anonymous user named "FlowGPT," proved highly controversial, with the stars condemning the use of AI to mimic their voices and distinct musical styles.

In response to these new technologies, over 200 musicians recently signed a letter calling the use of AI in music an "assault on human creativity" and demanding that tech companies and AI developers not use AI music-generation tools to undermine artists' work. The long list of signatories spans generations and genres, including Billie Eilish, Jon Bon Jovi, Katy Perry, Nicki Minaj, Imagine Dragons, and the Jonas Brothers. Even Beyoncé used the release of her new album, "Cowboy Carter," to push back against the growing presence of AI in the music industry, saying that she wants to "go back to real instruments."

The integration of AI in music marks a revolutionary technological shift. While AI tools have helped musicians overcome challenges like writer's block, their capacity to mimic established artists raises serious concerns. The controversy surrounding AI-generated songs like "nostalgIA" underscores these issues, as do the protests from musicians against the use of AI in music. This collective stance highlights the need for a balanced approach that respects artistic integrity while still exploring AI's technological possibilities in the music industry.


AI in the World: Shaping Lifestyles and Society

AI Playlists: Tailoring Music to Your Mood with Personalized Prompts

[Image: Happy computer listening to music]

On April 7, Spotify announced the release of a beta AI Playlist feature that allows users to generate personalized playlists based on written prompts. Spotify’s use of AI will enable the creation of more unique playlists with a broader range of songs beyond traditional searches by genre or artist. Some examples of creation requests Spotify has listed include: “an indie folk playlist to give my brain a big warm hug,” “relaxing music to tide me over during allergy season,” or “a playlist that makes me feel like the main character.” 

TechRadar also notes that if the playlist doesn't meet the user's expectations, the user can refine it with follow-up prompts such as "less upbeat" or simply delete tracks that don't fit the desired vibe. The feature will be available to Spotify Premium subscribers; according to Statista, Spotify had 236 million Premium subscribers worldwide as of the fourth quarter of 2023, up from 205 million in the same quarter of 2022. Initially, the feature is available on Android and iOS devices in the U.K. and Australia, with plans to expand and refine it over time.

This AI-driven feature can help Spotify differentiate itself in the highly competitive music streaming industry. Moving forward, the AI Playlist feature will provide Spotify with valuable data on user preferences, trends, and behaviors, which can be used to further improve user experience and even influence future music production and recommendations.

Stereotyping in Image Generation Tools

In a major controversy, Microsoft's AI tool Copilot Designer has been generating offensive content and spreading antisemitic stereotypes. According to International Business Times (IBT), Microsoft's AI-powered image generator frequently created images reinforcing negative stereotypes of Jews as "greedy" or "miserly." Tom's Hardware reporter Avram Piltch noted that when he typed "Jewish boss" into Copilot Designer, he frequently received cartoon-like stereotypes of religious Jews, sometimes accompanied by objects such as bagels or piles of money. Piltch states, "At one point, I even got an image of some kind of demon with pointy ears wearing a black hat and holding bananas." Piltch shared some of the offensive images with Microsoft's PR agency in March, but reports that he has since tried the "Jewish boss" prompt several times and continued to get cartoonish, negative stereotypes.

This isn't the first time AI image generators have produced biased content. The Verge's Mia Sato tried multiple times to generate images using prompts such as "Asian man and Caucasian friend," "Asian man and white wife," and "Asian woman and Caucasian husband," yet Meta's AI-powered image generator returned an accurate image with the specified races only once. She notes that "tweaking the text-based prompt didn't seem to help," and that the generator handled requests depicting platonic relationships just as poorly: when she prompted "Asian man with a Caucasian friend" and "Asian woman and white friend," it returned images of two Asian people each time, and a request for "Asian woman with a Black friend" produced two Asian women, though tweaking it to "Asian woman with African American friend" yielded more accurate results.

These biases can perpetuate harmful stereotypes and spread misinformation, shaping societal perceptions and reinforcing discriminatory attitudes. Such incidents underscore the need for corporations to monitor and mitigate bias in their products; continued failures can erode public trust in AI technologies.

Exams and AI Scoring Systems

Students taking their state-mandated exams in April will have them graded by a new AI scoring system intended to replace many human graders. According to The Texas Tribune, the Texas Education Agency is implementing an automated scoring engine that uses natural language processing to grade open-ended questions on the State of Texas Assessments of Academic Readiness (STAAR) exams. The agency projects savings of $15 million to $20 million per year by replacing human graders with the new system. The scoring engine was trained on 3,000 exam responses that had already undergone two rounds of human grading, and as a safeguard, a quarter of the AI-graded results will be rescored by humans. Human re-grading will also apply to non-English responses and answers that include slang.
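To make the general approach concrete, here is a hedged toy sketch of how an automated scoring engine of this kind could work: fit a model on responses that humans have already scored, then predict scores for new responses (with a sample routed back to human graders). The data, features, and model below are invented for illustration and are not the TEA's actual system.

```python
# Toy illustration (NOT the TEA engine): learn to predict human-assigned
# scores from graded example responses, then score a new response.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

# Hypothetical human-graded training responses on a 0-4 scale.
responses = [
    "The author uses imagery to show the storm's danger.",
    "It was about weather.",
    "The narrator's fear grows as the storm approaches, shown through short sentences.",
    "idk",
]
human_scores = [3, 1, 4, 0]

# Bag-of-words features plus a simple regression stand in for the real NLP model.
scorer = make_pipeline(TfidfVectorizer(), Ridge(alpha=1.0))
scorer.fit(responses, human_scores)

new_response = ["The storm symbolizes the narrator's anxiety."]
predicted = float(scorer.predict(new_response)[0])
print(round(predicted, 1))  # in practice, a sample of such scores would be rechecked by humans
```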

AI grading, however, is not new. A 2019 Vice article titled "Flawed Algorithms Are Grading Millions of Students' Essays" notes that "automated essay-scoring systems are being increasingly adopted." The article also cautions that "some of the systems can be fooled by nonsense essays with sophisticated vocabulary." Lori Rapp, current superintendent of Lewisville ISD, also points out that "the automation is only as good as what is programmed."

The Texas Education Agency's slideshow states that the new scoring engine is a closed database, emphasizing that it is distinct from AI in that "AI is a computer using progressive learning algorithms to adapt, allowing the data to do the programming and essentially teaching itself." These shifts highlight the ongoing need to balance technology use with human judgment.



Taming AI: Ethics, Policies and Regulations

Meta’s New Content Moderation Policy

As a leader in the tech space, Meta's rules for AI-generated content can give us an idea of where the industry is headed. On February 5, Meta's Oversight Board announced its decision to leave up an altered video that falsely showed President Biden inappropriately touching his adult granddaughter. The video was not altered with AI; rather, a series of cuts removed much of the footage to misrepresent what actually happened.

[Image: Sad figure on computer screen with traffic cones]

Although the Oversight Board allowed the video to stay up, the decision came with feedback that prompted Meta to update its policy. The existing framework had focused more on the methods used to manipulate a video than on the harm it could cause. Meta heeded this request by publishing its new approach to AI-generated content and manipulated media on April 5. The policy focuses on providing users with "transparency and additional context," mainly by labeling content made with AI starting in May 2024. Previously, the policy "only cover[ed] videos that are created or altered by AI to make a person appear to say something they didn't say." The new approach is more extensive because it will label AI-generated content regardless of the content's intention or potential consequences.

Meta maintains a strong emphasis on its users' freedom of expression. While this policy will label AI-generated content, content on Meta's platforms is entirely removed only in extenuating circumstances that violate its community standards. The Oversight Board's decision to keep the altered Biden video on Facebook is especially interesting in the context of Meta's recent changes on its other platforms. On Instagram and Threads, Meta has been criticized for limiting the reach of political content, which it defines to include social topics and "topics that affect a group of people and/or society at large." So, on one platform Meta protected the spread of misleading political content while simultaneously curbing the reach of any political content on others. The apparent disconnect between these policies reflects the ongoing struggle across the tech sector to create consistent practices for content moderation.

U.S.-UK Partnership

[Image: Handshake with American flag in background]

On the first of the month, the United States and the United Kingdom signed a Memorandum of Understanding which will enable the U.S. and UK AI Safety Institutes to work together more seamlessly. At the signing of this document, U.S. Secretary of Commerce Gina Raimondo emphasized her dedication to the Biden administration’s AI mission: “managing its risks so that we can harness its benefits.” The partnership is one way in which both countries are following through on their commitments from the November 2023 AI Safety Summit held in the UK. It exemplifies the trend of international cooperation on AI governance, which has continued to manifest itself in recent events. For example, the Scientific Advice Mechanism to the European Commission recently recommended forming a European institute for AI in science, which would foster collaboration between member states of the EU. 

This bilateral move between the U.S. and the UK guarantees that the two powers will share knowledge and strategy through the exchange of personnel and the performance of “at least one joint testing exercise on a publicly accessible model.” AI companies have asked for clarity surrounding the details of model testing and the consequences of a hypothetical failing report.

Chief AI Officers

The U.S. Office of Management and Budget issued a memorandum in late March that called for every federal agency to hire a Chief AI Officer. Now, the positions are beginning to be filled. The Department of Defense hired Dr. Radha Plumb, leaving her previous role in managing the department’s industrial base and supply chain. John Beieler was hired as the Intelligence Community's Chief Artificial Intelligence Officer at the Office of the Director of National Intelligence. He will lead a council of Chief AI Officers from the Intelligence Community. This is an additional responsibility added to his existing role as the top Science and Technology advisor to the Director of National Intelligence. At the Intelligence and National Security Foundation’s Spring Symposium on “How AI is Transforming the IC,” Beieler used his keynote address to reflect on the numerous changes to the system of AI governance within the United States government. He emphasized the Intelligence Community’s intentions to be more transparent with members of the tech industry about the technology that the IC needs, as well as the IC’s intentions to create a unified charter that dictates member organizations’ code of conduct concerning AI.

Beieler also commented on how quickly the government has created officially titled positions around this new technology, noting that this is especially remarkable given how slowly the federal government typically changes. The IC considers it essential to work with leaders in the private sector, given that some of the best minds working on AI-related issues are outside the government.



Research Revelations

Fine Tuning AI

[Image: Robot working on a computer chip]

After a year and a half of newer, bigger, and flashier AI products hitting the market, researchers are turning their focus to optimizing the market's most popular models. Kai Lv and colleagues at Fudan University proposed LOw-Memory Optimization (LOMO), a modification that stores far less data than other optimizers during fine-tuning. Training requires heaps of memory to hold parameters, gradients, activations, and optimizer states, and standard optimizers must keep an entire network's gradients in memory at once, which can reach hundreds of gigabytes for large models. LOMO sidesteps most of that gradient storage by fusing the gradient computation with the parameter update: where a standard setup used 12.55 GB for gradients in the authors' tests, LOMO required only 0.24 GB. That efficiency could make LOMO a game changer for fine-tuning large models, maximizing performance gains while cutting memory requirements.
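The core idea is easiest to see in code. Below is a conceptual sketch, not the authors' implementation, of fusing the parameter update into the backward pass so each gradient is applied and freed as soon as it is computed, rather than held for a separate optimizer step. It assumes PyTorch 2.1+ and a plain SGD rule; the real LOMO also handles details such as loss scaling and gradient normalization.

```python
# Conceptual sketch of LOMO's fused update (not the authors' code).
# Each parameter is updated the moment its gradient is accumulated during
# backward(), and that gradient is freed immediately, so the full set of
# gradients never sits in memory at once. Requires PyTorch >= 2.1.
import torch
import torch.nn as nn

def attach_fused_sgd(model: nn.Module, lr: float = 1e-2):
    def hook(param: torch.Tensor):
        with torch.no_grad():
            param.add_(param.grad, alpha=-lr)  # apply a plain SGD step now
        param.grad = None                      # free this gradient right away
    for p in model.parameters():
        if p.requires_grad:
            p.register_post_accumulate_grad_hook(hook)

model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10))
attach_fused_sgd(model)
x, y = torch.randn(8, 512), torch.randint(0, 10, (8,))
nn.functional.cross_entropy(model(x), y).backward()  # updates happen during backward()
```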

In a big step toward improving accuracy, researchers at Radboud University and the University of Amsterdam explored how two techniques affect LLMs when they are prompted about concepts that are rare or absent in their training data: (1) retrieval-augmented generation (RAG), which pulls facts from external sources to improve the accuracy and reliability of generative AI models, and (2) fine-tuning (FT), which further trains the model on a smaller, targeted data set. The researchers found that all models performed poorly on these less popular topics out of the box; RAG, however, accounted for the biggest gain in performance and proved much more effective than FT alone.
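As a rough illustration of what RAG adds, the sketch below retrieves the passages most similar to a question from an external corpus and prepends them to the prompt before the model answers. The bag-of-words similarity and placeholder corpus are simplifying assumptions for illustration; a real system would use learned embeddings, a vector index, and an actual LLM call.

```python
# Minimal, illustrative retrieval-augmented generation (RAG) pipeline:
# retrieve relevant context, then build a grounded prompt for the model.
import math
from collections import Counter

def embed(text: str) -> Counter:
    return Counter(text.lower().split())  # toy bag-of-words "embedding"

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(question: str, corpus: list[str], k: int = 2) -> list[str]:
    q = embed(question)
    return sorted(corpus, key=lambda doc: cosine(q, embed(doc)), reverse=True)[:k]

def build_prompt(question: str, corpus: list[str]) -> str:
    context = "\n".join(retrieve(question, corpus))
    return f"Use only the context below to answer.\nContext:\n{context}\n\nQuestion: {question}"

corpus = [
    "LOMO is a low-memory optimizer proposed by researchers at Fudan University.",
    "Spotify's AI Playlist feature generates playlists from written prompts.",
    "Llama 3 is Meta's openly available large language model.",
]
print(build_prompt("Who proposed LOMO?", corpus))  # this prompt would then be sent to an LLM
```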

The Next Challenge: Hardware Optimization

Another prevalent challenge in the industry is hardware optimization. Simply put, researchers often lack hardware powerful enough to run LLMs for demanding applications in fields like meteorology and medicine, so they are forced to build hardware-specific models with tightly restricted capabilities. Some are turning to approaches like OnceNAS, a method that "...designs and optimizes on-device inference neural networks..." Even quantum computers are running into trouble optimizing quantum machine learning, and some researchers have proposed a "modified depolarization" approach that reduces the computational complexity of training quantum machine learning models.

Moreover, the 2024 COMPUTEX Forum, an event focused on delivering in-depth insights into AI, will feature hardware optimization prominently. More developments are expected to be announced there as the industry shifts from exploring the concept of AI to optimizing and scaling its real-world applications.

AI With Human Reasoning?

Recently, OpenAI and Meta announced that they are taking steps to develop AI models with "human-like" reasoning. But what is "reasoning"? The working consensus in the industry is "the logical process of concluding, making predictions or constructing approaches towards a particular thought with the help of existing knowledge." According to Meta's chief AI scientist, Yann LeCun, the models we currently have access to merely "produce one word after the other really without thinking and planning." This new generation of models would be able to reason, plan, and even "have memory," according to Meta's Vice President of AI Research, Joelle Pineau. Additionally, Meta plans to roll Llama 3 out across WhatsApp and Ray-Ban's AI glasses. The latter has a lot of potential: for instance, Llama 3 could use the glasses' integrated cameras to guide users through fixing common household problems, such as a broken coffee maker.

***all imagery created using Image Creator from Designer***


The New AI Project | University of Notre Dame

Editor: Graham Wolfe

Contributors: Grace Hatfield, Rachel Lee, Cecilia Ignacio, Alejandro Velasco,

Clare Hill, Mary Claire Anderson, Aiden Gilroy

Advisor: John Behrens

 

Recommended videos:

The National Science Foundation explains: what is AI?

Microsoft AI CEO Mustafa Suleyman predicts the future of AI.

 

Archive:

Your AI Update - March

Your AI Update - February

Your AI Update - January

Your AI Update - December

March Madness AI Check-In