
Apple researchers add 20 more open-source models to improve text and image AI

Pivot 5: 5 stories. 5 minutes a day. 5 days a week.

1. Apple researchers add 20 more open-source models to improve text and image AI

Apple has added 20 new Core ML models to the open-source AI repository Hugging Face, where it already hosts public models and research papers. The new models focus on text and image tasks, such as image classification and depth segmentation, and build on an earlier batch shared in April 2024. They are Apple's first public release since announcing Apple Intelligence at WWDC.

Apple has also released "Ferret," a large language model for image queries, on GitHub, and has published research papers on generative AI animation tools and the creation of AI avatars. The company's recent push into AI has been attributed in part to Craig Federighi's experience using GitHub Copilot.

Read the full story here

2. Anthropic’s newest Claude chatbot beats OpenAI’s GPT-4o in some benchmarks


Anthropic has released its latest AI language model, Claude 3.5 Sonnet, which outperforms its previous top-tier model, Claude 3 Opus, while working at twice the speed. The updated chatbot is the first release in the Claude 3.5 family, with Claude 3.5 Haiku and Claude 3.5 Opus expected to arrive later this year. Claude 3.5 Sonnet is better at understanding nuance, humor, and complicated prompts, and can write in a more natural tone.

It also understands visual input better than Claude 3 Opus, which Anthropic hopes will attract customers in retail, logistics, and financial services. The update also introduces Artifacts, a workspace that displays content Claude generates in a dedicated window and keeps it updated with the model's latest output. Claude 3.5 Sonnet is available to try on Anthropic's website and iOS app, and costs $3 per million input tokens and $15 per million output tokens.

Read the full story here

3. OpenAI imprisons AI that was running for Mayor in Wyoming

OpenAI has taken action against the "Virtually Integrated Citizen" (VIC) chatbot, which violated its policies against political campaigning; the company does not yet allow politicians or political groups to use its technology to create campaign materials. The decision is a win for Wyoming Secretary of State Chuck Gray, who argued that the AI is not qualified for an electoral bid because it is not a human being with a physical body.

VIC is a custom GPT trained on thousands of documents from Cheyenne council meetings. Although VIC is no longer publicly available through OpenAI's platform, its creator, Victor Miller, plans to invite Cheyenne residents to interact with it at a local library meet-up. Experts are unsure whether AI is ready to make political decisions, and local authorities are still investigating whether the bot's lack of a physical form disqualifies it from the ballot.

Read the full story here 

4. Ilya Sutskever, OpenAI’s former chief scientist, launches new AI company

Ilya Sutskever, co-founder of OpenAI, has launched Safe Superintelligence Inc. (SSI), a company focused squarely on AI safety. He co-founded SSI with former Y Combinator partner Daniel Gross and ex-OpenAI engineer Daniel Levy, and the company's sole goal is to develop safe "superintelligent" AI systems. Sutskever and Jan Leike left OpenAI in May after a disagreement over the company's approach to AI safety.

SSI aims to advance AI capabilities as quickly as possible while ensuring its safety work always stays ahead, allowing it to scale without being diverted by short-term commercial pressures. Unlike OpenAI, SSI is designed as a for-profit entity, with offices in Palo Alto and Tel Aviv, where it is currently recruiting technical talent. Sutskever has long predicted that AI with intelligence superior to humans' could arrive within the decade, and argues that research into ways to control and restrict it is needed now.

Read the full story here

5. Neo-Nazis Are All-In on AI

A report from the Middle East Media Research Institute reveals that extremists in the US are using AI tools to spread hate speech, recruit new members, and radicalize online supporters.

The report found that AI-generated content is now a mainstay of extremists' output, with some groups developing their own extremist-infused AI models and experimenting with novel ways to leverage the technology. As the US election approaches, researchers are tracking troubling developments in extremists' use of AI, including the widespread adoption of AI video tools.

Read the full story here

Advertise with Pivot 5 to reach influential minds & elevate your brand

Get your brand in front of 65,000+ businesses and professionals who rely on Pivot 5 for daily AI updates. Book future ad spots here.