Keep up with recent changes in new technologies, along with AI impacts in the workplace and society, ethical and regulatory changes, and a touch of science with our research revelations (10m read).
Tech Titans: New Features, Products, and More
Products First: The Enduring Strategy of Big Tech
Rather than dedicating resources to developing successors to models like GPT-4, Llama 2, and Claude 2, the AI market is pushing the big players to improve the underlying technologies and get them into existing or new products. In a January 15th announcement, Microsoft committed to bringing “the full power of Copilot to more people and businesses,” notably launching a dedicated mobile app for the product (formerly Bing Chat) and rebranding its image generator as Image Creator from Designer. The company also announced that its future Surface Laptops will be the first true “AI PCs,” featuring new AI-optimized Snapdragon X chips and a hardware-integrated Copilot key on the keyboard, among other features. Copilot is a market-challenging AI chatbot developed by Microsoft and intended to be integrated into the Microsoft 365 suite of products. Initially launched last February as ‘Bing Chat,’ the product has undergone many changes on its way to becoming Microsoft’s flagship AI product. Similarly, Google is integrating its AI technologies into its Chrome browser and ad services, two cornerstone elements of its business now receiving AI upgrades.
The ChatGPT suite is also receiving a major upgrade with the announcement of the GPT Store. The introduction of ‘GPTs’ back in November, which let users customize ChatGPT for a desired purpose (e.g., cooking buddy, stock whiz, creative writing coach), was a major success. Exclusive to subscribers of ChatGPT Plus, Team, and Enterprise, the feature gives users a no-code way of tailoring the power of AI to their needs. Now, with the January announcement of the GPT Store, users can browse the world of custom GPTs and seamlessly put them into practice. Anyone can create a GPT and upload it to the store for other users, and the company plans to launch a revenue program sometime in Q1 that pays creators based on usage and popularity.
Flying too Close to the Sun
Google launched Gemini, which it considers its most capable model yet. A ‘working’ demo of the product posted to social media in December drew millions of viewers. However, the company later admitted that the seamless interaction between the narrator and the model was faked, calling it merely an “illustrative depiction of the possibilities of interacting with Gemini, based on real multimodal prompts and outputs from testing.” The term multimodal in this quote refers to Gemini’s purported ability to understand both visual and linguistic prompts. Google’s credibility has doubtless taken a hit from this high-visibility controversy, and some have accused the company of “playing catch-up with hype.”
The Hype around Video Generation
Since Midjourney released its groundbreaking AI image generator in June 2022, we’ve all been wondering when AI’s capabilities would extend to video and organic sound, music, and speech. In early January, Midjourney announced precisely what has been so long anticipated: a text-to-video generator, to be released in the coming months. Shortly after, OpenAI CEO Sam Altman confirmed that video generation capabilities will be coming to ChatGPT within a year or so. Microsoft also broke ground by announcing Copilot’s compatibility with Suno, a plug-in that lets users create songs from text prompts. Google contributed to the frenzy by announcing VideoPoet, a large language model (LLM) that can generate video. As with any stride in AI capability, risks like perpetuating the web’s misinformation remain to be evaluated. Currently, only brief (several-second) video generation is widely available, through Meta’s Emu, Stability AI’s Stable Video Diffusion, and Runway’s suite of video and image tools.
Read more about…
…the gimmicky new AI-powered augmented reality pin: Humane’s AI Pin will start shipping in March
…Google’s take on AI image generation: Google Will Soon Add An AI Image Generator To Bard
AI at Work: Novel Uses, Recommendations and Impact on Labor
The Future of Work with AI
The conversation around the future of work has changed due to recent developments in AI capability and novel insights from a variety of industries. New York Times contributor Peter Coy warns of potential job displacement by AI, citing estimates that “if science continues undisrupted, the chance of unaided machines outperforming humans in every possible task was estimated at 10 percent by 2027, and 50 percent by 2047.” Coy advocates for directing AI to complement human skills, as proposed by MIT's Shaping the Future of Work Initiative, which emphasizes the importance of human agency in guiding AI to enhance, rather than replace, work. Adding to Coy’s argument, Ana Kreacic, Chief Knowledge Officer of Oliver Wyman, emphasized the need to retrain employees to help them adjust to roles changed by AI. At the World Economic Forum in Davos, Kreacic stated that “40% of executives believe their workforce needs training or retraining,” while 98% of workers say they need more training, highlighting the gap between the two perspectives.
Meanwhile, Forbes economist Bill Conerly asserts that AI's impact on job sectors will vary with the importance of location and the tasks involved. For office workers, generative AI like ChatGPT is expected to enhance productivity, potentially reducing demand and wage growth in certain sectors. As Kreacic observed, “when you look at history, many of the productivity gains that we've seen historically have gone purely to the employer” rather than to workers. In contrast, hands-on work, such as construction and manufacturing, may see less immediate impact from AI. Conerly emphasizes the task-specific nature of the future job landscape, predicting increased demand and higher wages for hands-on workers. As AI continues to shape the future of work, Conerly and Coy converge on the idea that balancing technological innovation with worker well-being requires intentional human choices.
Is AI able to replace Animators?
As AI image generators have improved over the past few years, animators have grown apprehensive about the future of their work. David Crownson, a comic book writer and publisher of Kingwood Comics, voiced concerns about shrinking job opportunities for animators, especially those from ethnic minority backgrounds. Because AI can produce animations much faster than human artists, Jeffrey Katzenberg, co-founder of DreamWorks SKG, predicted that 90% of animator jobs could be replaced by AI, particularly as studios and major publishers seek to cut costs in the aftermath of the Hollywood writers’ strike.
AI-generated comic artwork is not protected by US copyright laws, prompting animators to petition for the rights to their artwork and to safeguard animator jobs. While some artists fear that AI diminishes the creativity of animations, others see AI as a tool to expand graphic design possibilities, as noted by Dave Jesteadt, the president of animated film company Gkids. With new apps such as Animaker AI and Deepmotion, animation is more accessible to less experienced individuals. Even experienced artists are integrating their work with AI, as seen with the band Guns N' Roses partnering with Dan Potter, the creative director at Creative Works, to create a blend of real-life concerts with animations for their music video for “The General.”
AI on the Roads
Government agencies are leveraging artificial intelligence to mitigate traffic congestion and enhance road safety. In the state of Florida, AI is employed to manage traffic by analyzing data from diverse sources, including cell phones, cameras, and connected vehicles. The application of AI to transportation data extends its utility to supporting emergency service vehicles, law enforcement, and public transportation. Florida's AI system plays a crucial role in identifying traffic issues, expediting responses to incidents, and aiding law enforcement in tasks such as license plate recognition. In New York City, the Fire Department collaborates with the C2SMARTER consortium to utilize AI for analyzing traffic patterns and optimizing routes for emergency vehicles, addressing challenges posed by obstacles and evolving city landscapes. Additionally, in the UK, FirstBus, a nationwide bus service provider, employs AI to update bus timetables, leading to more accurate arrival times.
Read more about…
…layoffs at an iconic language-learning app: Duolingo sheds some human workers
…less work for developer and hardware teams: Google cuts over 1,000 jobs
AI in Life: Social, Political, and Economic Impacts
Breaking OpenAI’s Store Policy: AI Girlfriend Bots
In the fourth quarter of 2023, OpenAI announced that everyone would be able to make their own GPTs, regardless of coding experience. To facilitate this initiative, OpenAI launched the GPT Store on January 10, enabling users to build more personalized versions of ChatGPT. Within two days of the store’s opening, however, users were already disregarding OpenAI’s policy, which had been updated at the store’s launch. According to Quartz, a search for the word “girlfriend” on the GPT Store surfaces at least eight romantic chatbots, ranging from “Korean Girlfriend” to “Your girlfriend Scarlett.” OpenAI explicitly states that they “don’t allow GPTs dedicated to fostering romantic companionship or performing regulated activities.”
Last summer, a Surgeon General Advisory was released to call attention to the country’s lack of social connection, and the popularity of relationship chatbots is likely a symptom of that alarming rise in loneliness and isolation in the United States. That OpenAI’s store policy was broken within 48 hours by romantic companionship bots underscores both the scale of the mental health problem and the need for careful, proper regulation of GPTs.
Smart Home Appliances Leverage AI for Sustainability
Home appliance manufacturers are looking to AI to help design more sustainable home products that save consumers both time and money. According to Zafer Ustuner, CEO of Arcelik Hitachi Home Appliances and Asia-Pacific CCO of Arcelik Global, AI is being leveraged in product design, the development process, and the communications team to boost productivity. Ustuner is also interested in building longer-lasting appliances, noting that advancements in technology make production more efficient and emphasizing that durability is key to a desirable home appliance.
Moving forward, it is likely that AI will continue to shape production. Consumers today are becoming more environmentally conscious about their purchases, with about 75% of millennial consumers stating that they consider sustainability when shopping. The use of AI will influence how companies design products and cater to customers’ needs and values.
ChatGPT in School Curriculums
Governments around the world are developing frameworks to incorporate AI and make education more effective. For instance, according to the Guardian, ChatGPT will formally be rolled out in all Australian schools, with an explicit requirement that curriculums cover its “potential limitations and biases.” Since ChatGPT’s release in 2022, the country’s Department of Education has created an AI chatbot called EdChat with built-in safeguards to protect privacy and limit inappropriate content. The chatbot is currently being tested at select public schools and will help further examine how AI should be implemented in school curriculums.
OpenAI Goes to College
On January 19, Arizona State University (ASU) announced a collaboration with OpenAI to launch ChatGPT Enterprise, which offers higher-grade security and privacy measures, faster response times, and the capacity to process more tokens. Throughout the semester, ASU faculty and staff will be encouraged to share new and creative uses of ChatGPT in hopes of enhancing research and organizational structures while boosting efficiency and student success. ASU is the first higher-education institution to partner with OpenAI and could set an example for other institutions. The implementation officially begins in February 2024.
Read more about…
…how AI impacted the largest commercial tech event in the world: at the consumer electronics show, AI gets companies on the same page
…how AI is impacting global GDP: AI Will Transform the Global Economy
Taming AI: Ethics, Policies and Regulations
We’ve Got Your Attention Now: Swifties Take Action
Currently, creating and sharing deepfakes is completely legal under federal law. Even the threats of inciting violence through political impersonation or of generating child sexual abuse material have not brought about regulation, but Taylor Swift fans, or Swifties, may bring us closer to ethical deepfake laws. Recently, explicit photos of Taylor Swift were fabricated and circulated online, receiving as many as 47 million views before being removed from social platforms. This raises concerns not only for celebrities like Taylor Swift but for all women, as most deepfakes depict women pornographically. Some U.S. states have begun legislating on the issue, as has the UK with its 2023 Online Safety Act. Advocates hope that pressure from fans and a surge in media attention will push Congress to address the phenomenon at the national level.
AI and the Presidential Election
The enduring problem of misinformation in presidential campaigns has recently been exacerbated by deepfakes (videos in which a person’s face or body has been digitally altered) becoming exponentially easier and cheaper to create. In fact, fake messaging has already begun. In late January, a robocall imitating Joe Biden encouraged voters to skip a New Hampshire primary election, feeding false information about the voting process to thousands of voters in an act of voter suppression. The same month, OpenAI banned the use of its technology in a presidential campaign for the first time, after a bot was created that resembled Democratic candidate Dean Phillips. National efforts to maintain free and fair elections will likely draw more attention to ethical AI usage throughout the election cycle.
Open[ing]AI to the Government
OpenAI has lifted its ban on the use of ChatGPT and other AI tools by the U.S. military. OpenAI’s policy still prohibits its AI tools from being used for harm or weaponry; however, the company has removed the specific references to the military. According to a recent interview with CEO Sam Altman at the World Economic Forum, the change stems from a partnership with the U.S. Department of Defense on open-source cybersecurity technology. While there may be benefits to using AI in this context, Big Tech employees have historically raised concerns about, and even protested, tech contracts with the military. That history of employee pushback casts unease over OpenAI opening its policy to military use.
In related news, American tech companies will have greater communication with the U.S. government moving forward, and the Biden administration is likely to invoke the Defense Production Act to ensure it. The act would require tech companies like OpenAI and Google to brief the government when they use large amounts of computing power to train new AI models. The notification threshold is currently set slightly above the compute used to train ChatGPT, so the change would chiefly regulate future, more powerful models.
The Pope is on it
Pope Francis chose “Artificial Intelligence and Peace” as the theme of this year’s World Peace Day. With this, the Catholic Church sent a powerful message: the future of global peace will go hand-in-hand with the developing world of artificial intelligence. AI can help promote peace through its integration with education, for example, but it can also be counterproductive, as with discrimination and bias in algorithms. Additionally, having been a target of deepfakes himself (see the “Pope in Puffer” pictures), the Pope is wary of how AI can be a source of misinformation. Overall, the Pope’s message examines the future of this technology and asks whether AI could ultimately be wielded in the global pursuit of peace.
Read more about…
…how AI is automating warfare: Israel is using an AI system to find targets in Gaza
…the EU’s ‘AI Act’: a blueprint for holistic regulation
Research Revelations
Geometry, Solved
Google’s AI research subsidiary DeepMind announced a breakthrough in AI’s toughest theoretical challenge: high-school level geometry problems. The new model can solve the world’s toughest geometry problems presented at the International Mathematical Olympiad, not only rivaling the world’s top young mathematical minds but even presenting proofs previously unknown to human understanding. The system combines traditional computer programming logic with the newer generative large language models (similar to ChatGPT). Essentially, it uses the language model to interpret and reframe the problem in a way that traditional logic-based programming can handle. Mixing the two paradigms yields results that neither approach could attain alone and points toward a future of computing in which different systems are assigned to the corresponding parts of complex problems.
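The hybrid loop can be sketched in miniature. Everything below is illustrative: the rule format, the function names, and the hard-coded “construction” heuristic stand in for DeepMind’s actual symbolic deduction engine and trained language model.

```python
# Toy sketch of a neuro-symbolic loop: a symbolic engine forward-chains
# over known facts until it stalls, then a stand-in "language model"
# proposes an auxiliary construction that may unlock further deduction.

def deduce(facts, rules):
    """Forward-chain: apply rules until no new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

def propose_construction(facts):
    """Stand-in for the language model: suggest one auxiliary object.
    The real system samples constructions from a model trained on
    synthetic proofs; here it is a hard-coded heuristic."""
    if "midpoint M of BC" not in facts:
        return "midpoint M of BC"
    return None  # out of ideas

def prove(goal, facts, rules, max_constructions=3):
    """Alternate symbolic deduction with proposed constructions."""
    facts = set(facts)
    for _ in range(max_constructions + 1):
        facts = deduce(facts, rules)
        if goal in facts:
            return True
        construction = propose_construction(facts)
        if construction is None:
            return False
        facts.add(construction)
    return False

# Caricatured geometry rules: (premises, conclusion).
rules = [
    (frozenset({"triangle ABC", "AB = AC"}), "angle B = angle C"),
    (frozenset({"triangle ABC", "AB = AC", "midpoint M of BC"}), "AM ⊥ BC"),
]

# Deduction alone stalls; the proposed midpoint construction completes it.
print(prove("AM ⊥ BC", {"triangle ABC", "AB = AC"}, rules))  # True
```

The division of labor is the point: the deterministic engine never hallucinates a step, while the generative component supplies the creative leap (here, “add a midpoint”) that pure rule-application cannot make on its own.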
Building Better Robots
The field of robotics has seen a major leap in potential as developers train ‘intelligently automated’ models that bridge mechanical capability and computational prowess. DeepMind introduced AutoRT, a synthesis of a large language model and a robot control model that together enable robots to gather training data in novel environments. For humanoid robots, more data means more organic, human-like behavior. In addition to AutoRT, the lab introduced the neural network-enabled Sara-RT and RT-Trajectory to “improve real-world robot data collection, speed, and generalization.” With all three systems combined, DeepMind envisions a new foundation for the future of robotics.
***all imagery created using Image Creator from Designer***
The New AI Project | University of Notre Dame
Editor: Graham Wolfe
Contributors: Cecilia Ignacio, Grace Hatfield, Rachel Lee, Graham Wolfe
Advisor: John Behrens