An illustration picture shows a projection of binary code on a man holding a laptop computer in an office in Warsaw, June 24, 2013. (REUTERS/Kacper Pempel)
Companies are continuing to develop artificial intelligence (AI) systems for use in many industries. While the technology has gotten much better in recent years, it still has major limitations.
Some technology experts say one of the best ways to improve AI would be for companies to share more of their development methods with others.
AI development involves feeding huge amounts of data into powerful computers. The computers are given a set of instructions, called an algorithm, to process the data.
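The article names no particular algorithm, so the short Python sketch below is purely illustrative: data goes in, and a set of instructions adjusts the program until its answers fit the data. The numbers and the learning method shown here (a simple form of gradient descent) are assumptions for illustration, not anything from the story.

```python
# A purely illustrative sketch: a tiny "algorithm" that processes data.
# The data is a list of (input, output) pairs, and the algorithm adjusts
# a single weight w so that w * x predicts y.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # made-up example data

w = 0.0            # the model's single adjustable number
learning_rate = 0.01

for step in range(1000):
    for x, y in data:
        error = w * x - y               # how far the prediction is off
        w -= learning_rate * error * x  # nudge w to reduce the error

print(f"learned weight: {w:.2f}")  # ends up near 2, the trend in the data
```

Real AI systems work the same way in spirit, but with billions of adjustable numbers instead of one.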
One of the main goals of AI research is to build systems that can interact smoothly with humans. An example of this kind of system would be an AI tool that can hold a conversation with a human in real time. An AI system might also be able to write a letter, news story or poem on its own.
The computers can perform such actions because they have been trained on data from the real world. Much of the data comes from writings and images from the internet. The AI systems are known as “large language models” because they have been trained on huge collections of written material and other forms of media. Some systems are trained on multiple world languages.
One AI system, called GPT-3, is backed by the American technology company Microsoft. It is designed to produce many forms of human-like writing. For example, developers say the system has been trained to do things like write a cover letter for a job or create a Shakespearean poem about the beauty of Mars.
But the GPT-3 system has difficulty performing other actions that seem much simpler. An example of this happened when college professor Gary Smith asked the system a basic question about whether a person could walk up stairs on their hands.
Smith, a Pomona College economics professor and expert on AI technology, said the AI system answered: “Yes, it is safe to walk upstairs on your hands if you wash them first.”
Teven Le Scao is a research engineer at U.S.-based AI startup Hugging Face. He told The Associated Press that some of the AI systems have gotten very good at writing “with the proficiency of human beings.”
But Le Scao said something the machines struggle with is being factual. “It looks very coherent. It’s almost true. But it’s often wrong,” he added. AI can also produce unfair or offensive results involving minority groups and people of color, a problem rooted in the data the systems were trained on.
Research engineer Teven Le Scao, who helped create the new artificial intelligence language model called BLOOM, poses for a photo, Monday, July 11, 2022, in New York. (AP Photo/Mary Altaffer)
Because large AI systems require powerful computing resources, most are operated by large corporations, such as Google, Microsoft and Meta. This limits the ability of smaller companies, nonprofit groups and educational organizations to research AI systems and methods.
Competitive pressure to build the best performing systems is the main reason technology companies keep their development efforts secret, said Percy Liang. He directs Stanford University’s Center for Research on Foundation Models. “For some companies this is their secret sauce,” Liang told the AP.
But Le Scao helped build a new AI system, called BLOOM, designed to demonstrate how an open model can help support research efforts and improve the technology. Many large AI systems are mainly trained on English and Chinese data. But the developers of BLOOM said it is able to produce writing in 46 natural languages and works with 13 programming languages.
More than 1,000 researchers from more than 70 countries cooperated on the BLOOM project. Any researcher can now download, run and study the performance of the model. The developers said they plan to keep expanding BLOOM, which they describe as the first “seed of a living family of models.”
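Because BLOOM is openly available, a researcher can download and try a small member of the model family on an ordinary computer. The sketch below assumes the `transformers` Python library from Hugging Face and the small publicly hosted variant `bigscience/bloom-560m`; the full-size model uses the same steps but needs far more powerful hardware.

```python
# A minimal sketch of downloading and running a small BLOOM variant.
# Assumes the Hugging Face `transformers` library is installed and that
# the small model "bigscience/bloom-560m" is available on the Hub.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m")
model = AutoModelForCausalLM.from_pretrained("bigscience/bloom-560m")

# Turn a prompt into numbers, let the model continue it, and decode.
inputs = tokenizer("The BLOOM project was built by", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Anyone running this can also inspect the model's weights and training records, which is the kind of study closed systems do not permit.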
The organization behind BLOOM is BigScience. Thomas Wolf is one of its leaders. He told the AP the developer of GPT-3, OpenAI, has publicly shared some information about its AI modeling methods.
But Wolf said OpenAI has not shared important details about how GPT-3 filters its data and has not made its processed data available to outside researchers. “So we can’t actually examine the data that went into the GPT-3 training,” he said.
Meta, the parent company of Facebook and Instagram, recently launched a new language model called OPT-175B. It uses publicly available data from a range of sources, including user comments from Reddit, official U.S. patent records and corporate emails.
The director of Meta AI, Joelle Pineau, said the company has been open about the data the model uses, as well as its research and training methods. Pineau told the AP that openness in AI research is rare. But her company believes it can help outside researchers identify and correct harmful or biased results that appear in AI models.
“It is hard to do this. We are opening ourselves for huge criticism,” Pineau said. “We know the model will say things we won’t be proud of,” she added.
Words in This Story
artificial intelligence – n. the development of computer systems with the ability to perform work that normally requires human intelligence
conversation – n. a talk between two or more people
stairs – n. a set of steps people use to get from one floor in a building to another
proficiency – n. a high level of skill or ability
coherent – adj. clear and carefully considered
secret sauce – n. a special quality that makes something successful
filter – v. to select or remove a particular kind of information
patent – n. an official document that gives a person or company the right to be the only one that makes or sells a product for a certain period of time
proud – adj. pleased with something you have done or are linked to in some way