
G7 introduces AI code of conduct for safety

Pivot 5: 5 stories. 5 minutes a day. 5 days a week.

1. G7 introduces AI code of conduct for safety

The Group of Seven (G7) has unveiled the International Code of Conduct for Organizations Developing Advanced AI Systems. This voluntary guidance seeks to ensure AI's safety, security, and trustworthiness, building on the earlier "Hiroshima AI Process."

The 11-point framework provides directives for responsible AI development, emphasizing risk mitigation, transparency, and data protection. Conceived as a living document, the initiative has drawn broad international support, highlighting the collective push for ethical AI practices.

Read the full story here 

2. Leica camera offers proof-of-authenticity feature

Leica has unveiled its M11-P camera, which comes equipped with a feature called Content Credentials. The system, designed to combat the rise of AI-altered content, embeds cryptographically signed metadata into photos, recording details such as the photo's location, capture time, and any subsequent editing history.

When the feature is activated, each image receives a digital signature that attests to its authenticity. In a world increasingly wary of AI manipulation, tools like Content Credentials aim to restore trust in shared images by providing a verifiable chain of provenance.
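The underlying pattern is simple to sketch: hash the image, bind the capture metadata to that hash, and sign the combined record so any later change is detectable. The Python toy below illustrates that pattern only; it is not Leica's implementation. The C2PA standard behind Content Credentials uses hardware-backed certificates and public-key signatures, whereas this sketch substitutes a symmetric HMAC key, and every name and value in it is hypothetical.

    import hashlib
    import hmac
    import json

    # Stand-in for the camera's signing key. The real system uses a
    # hardware-backed certificate, not a shared secret.
    SIGNING_KEY = b"hypothetical-device-key"

    def sign_capture(image_bytes, metadata):
        """Bind capture metadata to the image hash and sign the record."""
        record = {
            "image_sha256": hashlib.sha256(image_bytes).hexdigest(),
            "metadata": metadata,
        }
        serialized = json.dumps(record, sort_keys=True).encode()
        record["signature"] = hmac.new(SIGNING_KEY, serialized, hashlib.sha256).hexdigest()
        return record

    def verify_capture(image_bytes, record):
        """Recompute the signature; any edit to pixels or metadata breaks it."""
        unsigned = {k: v for k, v in record.items() if k != "signature"}
        if unsigned["image_sha256"] != hashlib.sha256(image_bytes).hexdigest():
            return False  # pixels no longer match the signed hash
        serialized = json.dumps(unsigned, sort_keys=True).encode()
        expected = hmac.new(SIGNING_KEY, serialized, hashlib.sha256).hexdigest()
        return hmac.compare_digest(record["signature"], expected)

    photo = b"...raw sensor bytes..."
    record = sign_capture(photo, {"time": "2023-10-26T09:00Z", "location": "Wetzlar"})
    print(verify_capture(photo, record))            # True: untouched image
    print(verify_capture(photo + b"edit", record))  # False: pixels were altered

In the production standard, the signed record travels inside the image file itself, and compliant editing tools append their own signed entries, which is what produces the verifiable editing history described above.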

Read the full story here 

3. Google DeepMind co-founder sees 50% chance of AGI by 2028

Shane Legg, a co-founder of Google DeepMind, has reaffirmed his decade-old prediction that there is a 50% chance of achieving artificial general intelligence (AGI) by 2028. Drawing inspiration from Ray Kurzweil's vision of superhuman AI, Legg points to the continued growth in computational power and training data.

However, Legg notes that AGI is hard to define precisely because human intelligence itself is so complex. He also stresses the need for more scalable training methods, given the energy demands of current systems. Despite these hurdles, he remains optimistic, if cautious, about the timeline.

Read the full story here

4. Emotion-detecting AI goes open-source with LAION initiative

LAION, the nonprofit behind the open datasets used to train the text-to-image model Stable Diffusion, has launched the Open Empathic project, which aims to democratize emotion-detecting AI. The initiative seeks to give open-source AI systems greater empathy and emotional intelligence. Christoph Schuhmann, a co-founder of LAION, highlighted the significant gap in the open-source ecosystem around emotional AI.

The project encourages volunteers to submit audio clips to help AI better understand human emotions. While the goal is to enhance human-AI interactions, challenges like potential bias and misuse of such technology remain. LAION emphasizes community contributions and open collaboration to ensure data quality and authenticity.

Read the full story here 

5. Pigeons' problem-solving mirrors AI methods

A recent study has shed light on the remarkable intelligence of pigeons, revealing that their problem-solving capabilities closely resemble the techniques used in artificial intelligence. When presented with various visual tasks, pigeons made decisions that mirrored the predictive methods employed by AI models.

The research showed that pigeons learn through trial and error, improving their accuracy on tasks over time. These findings underscore the value of studying associative learning in animals and how closely it can parallel the mechanisms used in AI.
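To make the parallel concrete, here is a minimal sketch of that kind of trial-and-error associative learning: a simulated learner starts with flat stimulus-category associations, responds, and nudges each used association toward the reward it receives. The stimuli, categories, and learning rate are all hypothetical, and the delta-rule update is a generic illustration of associative learning, not the study's actual model.

    import random

    # Hypothetical stimuli mapped to their correct categories.
    STIMULI = {"stripes": "A", "dots": "B", "waves": "A", "grid": "B"}
    LEARNING_RATE = 0.2

    # Association strength between each stimulus and each category, initially flat.
    strengths = {s: {"A": 0.5, "B": 0.5} for s in STIMULI}

    def choose(stimulus):
        """Respond with the more strongly associated category (random tie-break)."""
        weights = strengths[stimulus]
        return max(weights, key=lambda c: (weights[c], random.random()))

    def trial(stimulus):
        """One trial: respond, observe the reward, update the association."""
        response = choose(stimulus)
        reward = 1.0 if response == STIMULI[stimulus] else 0.0
        # Delta-rule update: move the used association toward the reward.
        old = strengths[stimulus][response]
        strengths[stimulus][response] = old + LEARNING_RATE * (reward - old)
        return reward == 1.0

    stimuli = list(STIMULI)
    for block in range(5):
        correct = sum(trial(random.choice(stimuli)) for _ in range(100))
        print(f"block {block}: {correct}/100 correct")  # accuracy climbs over blocks

Accuracy improves across blocks purely through reinforced associations, with no explicit rules, which is the kind of mechanism the researchers argue pigeons share with AI models.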

Read the full story here