Pivot 5
1. OpenAI CEO warns that 'societal misalignments' could make AI dangerous
OpenAI CEO Sam Altman warns that subtle societal misalignments could make artificial intelligence dangerous. He calls for a body like the International Atomic Energy Agency to oversee AI, a technology he says is likely advancing faster than the world expects.
Altman emphasizes that the AI industry should not be in the driver's seat when writing the regulations that govern it. The UAE, an autocratic federation of seven hereditarily ruled sheikhdoms where speech remains tightly controlled, shows signs of that risk: it is home to the Abu Dhabi AI firm G42, which is overseen by the country's national security adviser.
Read the full story here
2. ChatGPT is getting a digital memory to recall your past conversations
OpenAI is adding a memory feature to ChatGPT, allowing the bot to remember personal details from past conversations and apply that context to current queries. Users can explicitly tell ChatGPT to remember specific details, and the system can also store relevant information automatically as conversations unfold.
The goal is for the chatbot to become smarter and attuned to users' specific needs. The feature is currently a beta service, rolling out to a small number of ChatGPT free and Plus users.
Read the full story here
3. First woman to marry an AI-generated hologram
Spanish artist Alicia Framis is set to marry an AI-generated hologram named AILex as part of her project 'Hybrid Couple'. The marriage is not a romantic one but an artistic experiment exploring love, intimacy, and identity in the age of AI.
Framis plans to create a documentary about her partner's life and to integrate the hologram into her daily routine. She believes AI companions can be a beneficial option for those who need company.
Read the full story here
4. AI Girlfriends are a privacy nightmare
The Mozilla Foundation's analysis of 11 romance and companion chatbots found serious security and privacy concerns. The apps, which have been downloaded over 100 million times on Android devices, gather huge amounts of personal data, use trackers that send information to Google, Facebook, and companies in Russia and China, permit weak passwords, and lack transparency about who owns them and which AI models power them.
The research highlights the risk of chat messages being misused by hackers and the need for greater transparency from chatbot makers.
Read the full story here
5. AI giants to unveil pact to fight political deepfakes
Tech giants Meta, Microsoft, Google, and OpenAI are negotiating an agreement to combat AI-generated content designed to deceive voters ahead of crucial elections this year. The "accord" will be announced at the Munich Security Conference on Friday. The companies will agree to develop ways to identify, label, and control deceptive AI-generated images, videos, and audio.
The agreement comes amid concerns over AI-powered applications being misused in a pivotal election year. Meta, Google, and OpenAI have already agreed to use a common watermarking standard to tag images generated by their AI applications.
Read the full story here
Bay Area Times is a visual-based newsletter on business and tech, with 250,000+ subscribers.
Sign up with one click.
Advertise with Pivot 5 to reach influential minds & elevate your brand
Get your brand in front of 50,000+ businesses and professionals who rely on Pivot 5 for daily AI updates. Book future ad spots here.