The Ticker

Anthropic debuts artificial intelligence chatbot Claude 2 to public

Razia

The artificial intelligence startup Anthropic publicly released its AI language model Claude 2 in the United States and the United Kingdom on July 11. The chatbot launched less than two months after the company revealed it had received $450 million in a funding round led by venture capital firm Spark Capital.

More than 350,000 people signed up for Claude 2’s waitlist, requesting access to the chatbot’s application programming interface and consumer offering.
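
For developers coming off the waitlist, API access meant they could call the model from their own programs. The sketch below shows roughly what that looked like at launch, assuming the completions-style interface Anthropic’s Python SDK offered at the time; the API key is a placeholder.

```python
# Minimal sketch of a Claude 2 API call, assuming the completions-style
# interface Anthropic's Python SDK exposed at launch (pip install anthropic).
from anthropic import Anthropic, HUMAN_PROMPT, AI_PROMPT

client = Anthropic(api_key="YOUR_API_KEY")  # placeholder credential

response = client.completions.create(
    model="claude-2",
    max_tokens_to_sample=300,
    prompt=f"{HUMAN_PROMPT} Summarize today's campus news in two sentences.{AI_PROMPT}",
)
print(response.completion)
```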

The language model is a successor to the company’s still-operational Claude 1.3 and has posted strong results on several benchmark tests.

It scored 71.2% on a Python coding test, 76.5% on the multiple-choice section of the bar exam and 88% on a set of grade-school math problems. For comparison, the previous Claude model scored 56%, 73% and 85.2% on the same tests, respectively.

Claude 2 can also analyze prompts of up to 150,000 words, double Claude 1.3’s capacity.

OpenAI’s ChatGPT, a competing chatbot, can analyze up to 3,000 words, while Google’s Bard has a limit of 4,000 characters.

Although Claude can interpret more words, it is limited to analyzing text. GPT-4, OpenAI’s most recent model, can respond to both text and images.
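
To put those limits side by side, the illustrative snippet below checks a sample document against each figure cited above. It counts words and characters as a crude stand-in for the token counting real services perform, and Claude 1.3’s limit is inferred from the “double” comparison.

```python
# Rough comparison of the context limits cited in the article. Real
# services measure tokens rather than words, so these counts only
# convey scale, not exact cutoffs. Claude 1.3's limit is inferred
# from the "double" comparison above.
LIMITS = {
    "Claude 2": ("words", 150_000),
    "Claude 1.3": ("words", 75_000),
    "ChatGPT": ("words", 3_000),
    "Bard": ("characters", 4_000),
}

def report_fit(document: str) -> None:
    words = len(document.split())
    chars = len(document)
    for model, (unit, limit) in LIMITS.items():
        size = words if unit == "words" else chars
        verdict = "fits" if size <= limit else "too long"
        print(f"{model}: {size:,} {unit} vs. {limit:,} limit -> {verdict}")

report_fit("lorem " * 10_000)  # a 10,000-word sample document
```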

Anthropic said Claude 2 is less likely to produce harmful responses because it was trained with “constitutional AI.” Under this machine learning method, an AI model is given a written list of principles, its constitution, and instructed to follow it. A second AI model then evaluates how closely the first adheres to the constitution and revises responses as needed.
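
As a rough illustration of that critique-and-revise setup, the toy loop below mirrors the two-model arrangement in Python. The generator and critic here are trivial stand-ins written for this article, not Anthropic’s actual models or training code.

```python
# Toy sketch of the constitutional AI loop described above. The
# "models" are trivial stand-ins: a canned generator and a
# keyword-based critic, not Anthropic's systems.
CONSTITUTION = [
    "Do not help with dangerous or illegal activities.",
    "Be helpful, honest and harmless.",
]

def primary_model(prompt: str) -> str:
    # Stand-in for the first model: produces a draft answer.
    return f"Draft answer to: {prompt}"

def critic_model(answer: str, principle: str) -> str:
    # Stand-in for the second model: checks the draft against one
    # principle and rewrites it on a violation. The keyword test is
    # only here to make the loop runnable.
    if "dangerous" in principle and "dangerous" in answer.lower():
        return "I can't help with that, but here is a safer alternative."
    return answer

def constitutional_pass(prompt: str) -> str:
    answer = primary_model(prompt)
    for principle in CONSTITUTION:
        answer = critic_model(answer, principle)  # revise if needed
    return answer

print(constitutional_pass("How do I stay safe online?"))
```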

According to the startup, the result is a self-policing chatbot that “misbehaves” less frequently.

The company was founded in 2021 by a group of former OpenAI employees concerned about the overcommercialization and dangers of large AI models. Anthropic began as a public benefit corporation in the hope that the structure would let it pursue social responsibility alongside profitability.

New York Times reporter Kevin Roose tried to test Claude 2’s limits.

“[Claude] seemed scared to say anything at all,” Roose said.

“In fact, my biggest frustration with Claude was that it could be dull and preachy, even when it’s objectively making the right call,” he continued. “Every time it rejected one of my attempts to bait it into misbehaving, it gave me a lecture about my morals.”

Anthropic President Daniela Amodei said the San Francisco-based company was focused on making safe AI models for businesses.

“We really feel that this is the safest version of Claude that we’ve developed so far, and so we’ve been very excited to get it in the hands of a wider range of both businesses and individual consumers,” she said in a CNBC interview.

The consumer version is free, though there are plans to monetize Claude in the future.

Anthropic said it is working with other businesses such as Notion, Zoom and AI image generator Midjourney to build customized models for commercial use.

Amodei, who co-founded the startup with her brother Dario Amodei, also acknowledged the flaws found in AI models, including “hallucinations,” the tendency to generate incorrect answers.

“No language model is 100% immune from hallucinations, and Claude 2 is the same,” she said.

As of now, Claude 2 is only available in the U.S. and the U.K., but Anthropic plans to expand its availability in the upcoming months.
