Nick Clegg of Meta Deems Current Models 'Quite Stupid'

Pivot 5: 5 stories. 5 minutes a day. 5 days a week.

1. Meta Releases Llama 2: Next-gen Text-Generating Models Unveiled

Meta announced Llama 2, a new generation of generative text models that outperforms its predecessor and is designed to power chatbots in the vein of OpenAI's ChatGPT and Bing Chat. The suite of models, trained on publicly accessible data, is openly available for research and commercial use, a marked change from Meta's previously restrictive stance.

Llama 2's availability on platforms including AWS, Azure, and Hugging Face adds to its appeal, even though it slightly underperforms high-profile rivals such as GPT-4 and PaLM 2 on benchmarks. Even so, human evaluators rated Llama 2 as "helpful" as ChatGPT, based on responses to a set of prompts probing for "helpfulness" and "safety".

This release raises questions about the implications for the AI industry, especially around model accessibility and performance. As the AI field evolves, the impact of Llama 2 will be closely scrutinized.

2. Nick Clegg of Meta Deems Current Models 'Quite Stupid'

In a surprising statement, Nick Clegg, Meta's president of global affairs, described current AI models as "quite stupid", suggesting that fears about AI dangers have outpaced the technology itself. He emphasized that today's models fall "far short" of the point where AI gains autonomous cognition.

The remark coincides with Meta's announcement that it will open-source its large language model, Llama 2, a decision that has drawn divided reactions in the tech sector. Critics raise concerns about potential misuse, but Clegg argues that Meta's open-sourced models are safer than others on the market.

The move signals a significant shift in the AI industry, prompting discussion of its wider implications for the sector. As the AI landscape evolves, the effect of open-sourcing AI models on safety and regulation will be a focus of intense scrutiny.

3. Civil Liberties Debate Ignites Over AI Surveillance in Paris Olympics

AI surveillance is slated for the streets of Paris during the 2024 Olympics to detect suspicious activity, stirring concern among civil rights groups that fear infringements on liberties. Despite these concerns, François Mattens, a representative of one of the AI companies bidding for the contract, defends the technology, clarifying that humans make the final decisions based on AI alerts.

Opponents, however, worry that the setup could become a permanent fixture. Noémie Levain of the digital rights campaign group La Quadrature du Net voiced fears that surveillance deployed for the Olympics could become normalized.

The implementation of AI surveillance during the Paris 2024 Olympics represents a pivotal moment for AI usage in security during large-scale events. As debates continue around AI surveillance and civil liberties, global attention will be focused on how this technology is managed and regulated.

4. SAP's Strategic Venture into AI Sector with Startup Investments

SAP, the German enterprise software giant, has made a strategic foray into the AI realm, investing in three AI startups: Cohere, Anthropic, and Aleph Alpha. The investments, whose size was not disclosed, signal SAP's commitment to integrating generative AI into its suite of business applications.

This follows a $1 billion commitment to generative AI startups by SAP-backed Sapphire Ventures, signaling a surge of interest in AI across the business sector. The existing 26,000-strong customer base for SAP's Business AI products suggests a robust market appetite.

The integration of generative AI into SAP's applications could significantly impact global businesses and customers. As SAP continues to innovate and invest in AI, its strategic moves will likely play a crucial role in shaping the future of business applications.

5. AI Advancements Pose Threat to Outsourced Coders in India

Stability AI's CEO, Emad Mostaque, predicts significant job losses for outsourced programmers in India as AI advances. He likens AI models to "exceptionally talented grads" that let companies build software with fewer people.

The impact of AI on jobs, however, will differ across countries because of varying labor laws. France's stronger labor protections might shield developers, but India, home to more than 5 million software programmers and a popular destination for outsourced work, is particularly vulnerable.

The rise of AI in software development signals a significant industry shift. As AI becomes more advanced, its influence on the job market, especially in software programming, will remain a hot topic, and balancing AI advancement with job security will be a critical issue to address.
