Apple Restricts Employees from Using AI Tools Including OpenAI’s ChatGPT Over Confidentiality Concerns

1. Apple Restricts Employees from Using AI Tools Including OpenAI’s ChatGPT Over Confidentiality Concerns

Apple has restricted its employees from using AI tools such as OpenAI's ChatGPT and GitHub Copilot, citing concerns that confidential company information could be leaked or collected.

The decision follows a report by The Wall Street Journal and confirmation from Bloomberg reporter Mark Gurman. OpenAI retains users' ChatGPT conversations by default, and although it added a setting in April that lets users turn chat history off, conversations are still kept for 30 days so they can be reviewed for abuse before being deleted.

Apple's move mirrors similar restrictions at companies such as JPMorgan, Verizon, and Amazon, and comes even as OpenAI rolls out an official ChatGPT app for iOS.

Read the full story here

2. Google Plans to Utilize AI for Ad Creation, Customer Support, and YouTube Video Ideas

According to CNBC, Google plans to use artificial intelligence (AI) to help companies generate advertisements.

Internal documents reveal that Google's PaLM 2 language model will be used to help advertisers produce creative assets for their ads. The company is also exploring additional applications for the model, including suggesting video ideas to YouTube creators and adding AI chatbots for customer support to products such as the Play Store, Gmail, and Maps.

Tech giants such as Meta and Amazon are also developing AI toolsets for advertisers; Meta recently launched its AI Sandbox, a testing ground for early versions of AI-powered advertising tools.

Read the full story here

3. Meta Unveils New AI Infrastructure Projects to Support Next-Generation Applications

Meta, formerly Facebook, has announced a range of new hardware and software initiatives at the AI Infra @ Scale event.

The company's AI-optimized data center design, built for both training and inference, will use Meta's own silicon, the Meta Training and Inference Accelerator (MTIA). Meta has also built the Research SuperCluster (RSC), an AI supercomputer with 16,000 GPUs for training large language models. The projects reflect Meta's long-term effort to expand the use of AI across its platforms.

Other tech giants like Microsoft, IBM, and Google are also focusing on purpose-built AI infrastructure to meet growing demands.

Read the full story here

4. Nvidia CEO Jensen Huang Highlights Impact of AI and Accelerated Computing in Chip Manufacturing

Jensen Huang, CEO of Nvidia, delivered a keynote address at the ITF World 2023 semiconductor conference, discussing the significant role of accelerated computing and artificial intelligence (AI) in the chip manufacturing industry.

Huang showcased how Nvidia's accelerated computing and AI solutions are transforming the technology sector, particularly in chipmaking processes. He emphasized the need for innovative approaches to meet rising computing demands while considering net-zero goals.

Huang also showcased Nvidia's advances in embodied AI, including the VIMA multimodal agent and the Earth-2 climate digital twin project, efforts aimed at understanding the physical world and supporting sustainable energy.

Read the full story here

5. Revealing the Proprietary and Offensive Websites Used to Train AI Chatbots

AI chatbots have gained immense popularity recently, showcasing impressive capabilities in tasks like writing term papers and engaging in lucid conversations.

However, their abilities are derived from vast amounts of text data, mostly scraped from the internet. The Washington Post conducted an analysis of Google's C4 data set, revealing the types of proprietary, personal, and offensive websites that contribute to the training data. The analysis unveiled websites from various categories, including news, business, technology, religion, and even controversial platforms associated with piracy, hate speech, and conspiracy theories.
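For readers curious how such an analysis is even possible, the C4 corpus is publicly mirrored, so a rough tally of which websites it draws from can be sketched in a few lines of Python. The sketch below is only an illustration, not the Post's methodology; it assumes the allenai/c4 mirror on the Hugging Face Hub, the datasets library, and an arbitrary sample of 100,000 documents.

```python
from collections import Counter
from itertools import islice
from urllib.parse import urlparse

from datasets import load_dataset  # Hugging Face `datasets` library

# Stream the English split of C4 so the multi-hundred-GB corpus is never
# downloaded in full; documents arrive one at a time.
c4 = load_dataset("allenai/c4", "en", split="train", streaming=True)

# Tally the source domain of each document's URL over a small sample
# (100,000 documents is an arbitrary choice for illustration).
domain_counts = Counter(
    urlparse(example["url"]).netloc
    for example in islice(c4, 100_000)
)

# Show the most frequently scraped domains in the sample.
for domain, count in domain_counts.most_common(20):
    print(f"{count:6d}  {domain}")
```

Streaming keeps the corpus off disk, so only the sampled documents ever pass through memory; a full-corpus count like the Post's would simply extend the same tally over every record.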

The findings raise concerns about the privacy and ethical implications of AI training data.

Read the full story here