1. Meta will label AI-generated images across its platforms
Meta plans to expand labeling of AI-generated images across Facebook, Instagram, and Threads to make clear when visuals are artificial. The company is building tools to detect content created with third-party AI generators by reading the provenance signals they embed, in line with the C2PA and IPTC technical standards.
Meta expects to detect images from Google, OpenAI, Microsoft, Adobe, Midjourney, and Shutterstock, all of which embed generative-AI metadata in their products' output. Users will also be expected to disclose such content themselves, and Meta is working on making it harder to alter or strip the invisible markers from generative-AI content.
Read the full story here
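The invisible markers Meta describes are, in practice, provenance metadata written into the image file itself, such as the IPTC digital-source-type value "trainedAlgorithmicMedia" and C2PA manifest data. As a rough illustration only (this is not Meta's detection pipeline, and the file name is hypothetical), the sketch below scans a file's raw bytes for those markers; note that re-encoding or screenshotting an image strips this metadata, which is exactly the weakness Meta says it is trying to harden against.

    # Rough sketch, not Meta's detector: scan an image file's raw bytes for the
    # provenance markers that C2PA/IPTC-compliant generators embed in their output.
    from pathlib import Path

    PROVENANCE_MARKERS = (
        b"trainedAlgorithmicMedia",  # IPTC DigitalSourceType value for generative-AI media
        b"c2pa",                     # label used inside C2PA Content Credentials manifests
    )

    def looks_ai_labeled(image_path: str) -> bool:
        """Return True if any known provenance marker appears anywhere in the file."""
        data = Path(image_path).read_bytes()
        return any(marker in data for marker in PROVENANCE_MARKERS)

    # Usage (hypothetical file name):
    # print(looks_ai_labeled("downloaded_image.jpg"))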
2. OpenAI is adding new watermarks to DALL-E 3
OpenAI is adding watermarks to DALL-E 3, as more companies support standards from the Coalition for Content Provenance and Authenticity (C2PA). The watermarks will appear in images generated on the ChatGPT website and the API for the DALL-E 3 model. Mobile users will receive the watermarks by February 12th.
The watermarks include an invisible metadata component and a visible CR symbol that appears in the top-left corner of each image. OpenAI says adding the watermark metadata will not affect the quality of image generation, though it will slightly increase file sizes for some tasks. The C2PA, whose members include Adobe and Microsoft, has been pushing its Content Credentials watermark as a way to identify the provenance of content.
Read the full story here
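In a JPEG, the invisible part of a Content Credential is a C2PA manifest carried in APP11 (JUMBF) segments, alongside the visible CR badge drawn on the image. The sketch below is a simplified presence check rather than a C2PA verifier: it walks the JPEG marker table and reports whether an APP11 segment exists; actually validating the signed manifest would require a C2PA library or tool, and the file name is hypothetical.

    # Simplified check, not a full C2PA verifier: walk a JPEG's marker segments and
    # report whether an APP11 segment (where C2PA stores its JUMBF manifest) exists.
    import struct

    def has_app11_segment(path: str) -> bool:
        data = open(path, "rb").read()
        if data[:2] != b"\xff\xd8":           # SOI marker: not a JPEG
            return False
        i = 2
        while i + 4 <= len(data):
            if data[i] != 0xFF:
                break                         # lost sync with the marker table
            marker = data[i + 1]
            if marker == 0xDA:                # SOS: compressed image data follows
                break
            if marker == 0xEB:                # APP11: segment C2PA uses for JUMBF payloads
                return True
            length = struct.unpack(">H", data[i + 2:i + 4])[0]
            i += 2 + length                   # skip to the next marker segment
        return False

    # Usage (hypothetical file name):
    # print(has_app11_segment("dalle3_output.jpg"))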
3. Hackers steal $25 million by deepfaking finance boss
A multinational company was scammed out of $25.6 million by fraudsters who tricked employees at its Hong Kong branch into believing that digitally recreated versions of its chief financial officer and other video-conference participants were real. The scam, believed to be the first of its kind, shows how far deepfake technology has advanced.
The scammers used publicly available footage to create deepfake likenesses of the staff, and some of the fake video calls had only a single real person on the line. Hong Kong police senior superintendent Baron Chan Shun-ching said two to three employees were targeted. Deepfakes have also been used by kidnappers to demand ransoms, as in a recent case in northern China.
Read the full story here
4. Apple releases ‘MGIE’, a revolutionary AI model for instruction-based image editing
Apple has released MGIE, an open-source AI model that edits images based on natural-language instructions. MGIE, short for MLLM-Guided Image Editing, uses multimodal large language models (MLLMs) to interpret user commands and perform pixel-level manipulations. It can handle Photoshop-style modifications, global photo optimization, and local editing.
MGIE is a collaboration between Apple and researchers from the University of California, Santa Barbara. It integrates MLLMs into the image editing process by deriving explicit instructions from terse user input and generating a visual imagination of the intended edit. MGIE is available as an open-source project on GitHub.
Read the full story here
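MGIE's actual pipeline (an MLLM that expands terse commands into explicit instructions, plus a diffusion model that renders the edit) requires GPUs and model checkpoints from the GitHub release. Purely to illustrate the two-step control flow described above, the toy sketch below substitutes a hard-coded lookup for the MLLM step and Pillow enhancers for the pixel-level edit; none of this is Apple's code, and the file names are hypothetical.

    # Toy illustration of instruction-based editing (not Apple's MGIE code):
    # step 1 expands a terse command into an explicit instruction (MGIE uses an
    # MLLM here; a lookup table stands in), step 2 applies a pixel-level edit
    # (MGIE uses a diffusion model; Pillow enhancers stand in).
    from PIL import Image, ImageEnhance

    EXPANSIONS = {
        "make it pop": ("boost contrast and saturation",
                        lambda im: ImageEnhance.Color(
                            ImageEnhance.Contrast(im).enhance(1.3)).enhance(1.3)),
        "brighten it": ("raise overall brightness",
                        lambda im: ImageEnhance.Brightness(im).enhance(1.25)),
    }

    def edit(image_path: str, command: str) -> Image.Image:
        explicit, apply_edit = EXPANSIONS[command]   # step 1: derive explicit instruction
        print(f"{command!r} -> {explicit!r}")
        return apply_edit(Image.open(image_path))    # step 2: pixel-level manipulation

    # Usage (hypothetical file names):
    # edit("photo.jpg", "make it pop").save("photo_edited.jpg")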
5. Two Texas companies were behind the AI Joe Biden robocalls
Two Texas companies, Lingo Telecom and Life Corporation, have been linked to a robocall campaign using an AI voice clone of President Joe Biden to persuade New Hampshire residents not to vote.
The robocalls began on January 21st, two days before the New Hampshire presidential primary. Authorities have issued cease-and-desist orders and subpoenas to both companies. The state is investigating whether the robocall campaign violated election and consumer protection laws.
Read the full story here
Advertise with Pivot 5 to reach influential minds & elevate your brand
Get your brand in front of 50,000+ businesses and professionals who rely on Pivot 5 for daily AI updates. Book future ad spots here.