Stability AI Reveals New Efficient Language Models
Pivot 5: 5 stories. 5 minutes a day. 5 days a week.
1. Stability AI Reveals New Efficient Language Models
Stability AI has launched two new LLMs, FreeWilly1 and FreeWilly2, causing a stir in the AI community. Built on Meta's open-source LLaMA foundation models and trained on a compact, synthetic dataset generated in the style of Microsoft's Orca approach, the models showcase the potential for high performance with far less training data.
The FreeWilly models were released under a non-commercial license, aimed at encouraging research and promoting open access within the AI community. This action highlights Stability AI's dedication to advancing the field through open-source contributions.
Despite being trained on just 600,000 data points, roughly 10% of the size of the original Orca dataset, the FreeWilly models have demonstrated exceptional performance. The smaller dataset also made them cheaper and less energy-intensive to train, suggesting a promising direction for future AI development.
Read the full story here
2. Microsoft's Bing Chat Expands to Chrome and Safari
Microsoft's AI chatbot, Bing Chat, is expanding beyond Microsoft's own products into non-Microsoft browsers, including Google Chrome and Apple's Safari. The rollout, currently in a testing phase for select users, is a significant step toward making Microsoft's ChatGPT-like AI chatbot accessible to a broader audience.
The Bing Chat experience, powered by OpenAI's GPT-4 model, has received praise for its capabilities. However, early testers have reported some limitations when using Bing Chat in non-Microsoft browsers, such as fewer messages per conversation and a lower character limit per prompt.
Despite these initial hurdles, Bing Chat's expansion signifies Microsoft's commitment to making its AI technologies more widely available. As Bing Chat continues to integrate with other Microsoft products, its reach and impact are set to increase, marking an exciting development in the field of AI chatbot technology.
Read the full story here
3. MIT's PhotoGuard: A New Defense Against Malicious AI Edits
MIT CSAIL has introduced a new technique, "PhotoGuard," aimed at protecting images from unauthorized edits by AI. As AI systems gain the ability to edit and create images, concerns over unauthorized manipulation and theft of online artwork and images have escalated. PhotoGuard addresses these concerns by disrupting an AI's ability to interpret an image.
PhotoGuard modifies select pixels in an image, creating "perturbations" that are invisible to the human eye but detectable by machines. The technique employs two methods: an "encoder" attack, which targets the AI model's latent representation of the image, and a "diffusion" attack, which makes the AI perceive the protected image as an entirely different one.
Despite its innovative approach, PhotoGuard is not infallible. Malicious actors could potentially strip the protection by adding digital noise or otherwise altering the image. Nevertheless, PhotoGuard represents a significant advance in the effort to safeguard digital images from unauthorized AI manipulation.
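For a concrete sense of how an "encoder" attack works, below is a minimal sketch of a projected-gradient perturbation against a generic differentiable image encoder. The `encode` function, the target latent, and the step sizes are assumptions made for illustration; this is not PhotoGuard's actual code.

```python
# Minimal sketch of an encoder-style perturbation attack (projected gradient descent).
# Assumes a generic differentiable image encoder `encode`; all names and
# hyperparameters here are illustrative, not PhotoGuard's implementation.
import torch
import torch.nn.functional as F

def encoder_attack(image, encode, target_latent, eps=8 / 255, step=1 / 255, iters=200):
    """Return image + delta with ||delta||_inf <= eps, chosen so that the
    encoder maps the perturbed image close to `target_latent`."""
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(iters):
        perturbed = torch.clamp(image + delta, 0.0, 1.0)
        loss = F.mse_loss(encode(perturbed), target_latent)
        loss.backward()
        with torch.no_grad():
            delta -= step * delta.grad.sign()   # descend on the latent distance
            delta.clamp_(-eps, eps)             # project back into the eps-ball
        delta.grad.zero_()
    return torch.clamp(image + delta, 0.0, 1.0).detach()

# Hypothetical usage: "immunize" an image so a latent-space editor
# effectively sees a decoy (e.g. a plain gray image) instead of the original.
# protected = encoder_attack(img, vae_encode, vae_encode(gray_img).detach())
```

The sketch keeps the perturbation imperceptibly small (bounded in L-infinity norm) while steering the encoder's latent toward a decoy, which is the core idea behind the encoder attack described above.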
Read the full story here
4. OpenAI's ChatGPT Android App Set for Launch
OpenAI has announced the impending launch of its eagerly awaited ChatGPT application for Android devices. The app, currently available for pre-registration in the Google Play Store, is slated to roll out soon, marking a significant milestone in OpenAI's efforts to broaden access to its AI tools.
The ChatGPT Android app offers a range of features for interacting with AI on the go. It synchronizes user history across multiple devices and gives users access to OpenAI's latest model improvements. The app aims to provide instant answers, tailored advice, and creative inspiration at the touch of a button.
In response to data safety concerns, OpenAI has detailed its approach to data collection, sharing, and handling. The company emphasizes that the ChatGPT Android app does not share user data with third parties and adheres to strict privacy practices, including data encryption in transit.
The launch of the ChatGPT Android app comes amid speculation about future AI developments from tech giants like Apple and Google. As AI becomes increasingly integrated into operating systems, the success of standalone apps like ChatGPT may depend on their ability to offer reliable and accurate AI tools.
Read the full story here
5. Musk's xAI Joins Top Labs in Focusing on AI Existential Risk
Elon Musk's new startup, xAI, has appointed Dan Hendrycks, director of the nonprofit Center for AI Safety, as an advisor. The move marks a substantial shift in the AI industry: four of the world's most prominent AI research labs (OpenAI, DeepMind, Anthropic, and now xAI) are now engaging with existential risk, or x-risk, concerns about advanced AI systems.
However, this "doomer" narrative has sparked controversy among top AI researchers and computer scientists. Critics argue that the focus on existential risk distracts from the real, present-day effects of today's AI, both positive and negative.
The influence of the Effective Altruism (EA) movement on the field of AI is also a point of contention. Critics argue that the movement, supported by tech figures like FTX's Sam Bankman-Fried, is shaping the field of AI and its priorities in ways that may not benefit all of humanity.
Despite these controversies, the focus on existential risk underscores the potential impact of AI on humanity and the need for responsible AI development. As AI continues to evolve, the debate over its potential risks and benefits is likely to intensify.
Read the full story here