
Microsoft launches a Pro plan for Copilot

Pivot 5: 5 stories. 5 minutes a day. 5 days a week.

1. Microsoft launches a Pro plan for Copilot

Microsoft has launched a consumer-focused paid Copilot plan, Copilot Pro, priced at $20 per user per month, to broaden the base of potential paying Copilot customers while making Microsoft's existing services more attractive through AI features. Subscribers who also have a Microsoft 365 Personal or Family plan get access to Copilot GenAI features across Word, Excel, PowerPoint, Outlook, and OneNote on PC, Mac, and iPad.

Copilot Pro subscribers also get 100 boosts per day in Designer, improved generation quality, and priority access to the newest GenAI models underpinning Copilot, including OpenAI's GPT-4 Turbo. Copilot for Microsoft 365 customers can also access expanded customization options via Copilot Studio, a souped-up version of Copilot GPT Builder. Microsoft is also expanding the number of languages Copilot supports in the first half of 2024.

Read the full story here 

2. How OpenAI is approaching 2024 worldwide elections

OpenAI is preparing for 2024 worldwide elections by focusing on preventing abuse, providing transparency on AI-generated content, and improving access to accurate voting information. The company is working to prevent abuse such as misleading "deepfakes" and scaled influence operations, and is refining its Usage Policies for ChatGPT and the API to prevent applications that deter participation in democratic processes.

OpenAI is also working on improved transparency around image provenance, integrating with existing sources of information, and improving access to authoritative voting information. The company is partnering with the National Association of Secretaries of State to direct users to CanIVote.org when they ask procedural election-related questions.

Read the full story here 

3. The FDA has reportedly approved an AI product that predicts cognitive decline

The US Food and Drug Administration has approved BrainSee, AI-based memory loss prediction software from San Francisco-based brain imaging analytics company Darmiyan. BrainSee predicts memory loss progression using clinical brain MRIs and cognitive tests, both of which are already standard for patients concerned about early signs of decline. After analyzing the imaging and cognitive assessments, BrainSee assigns a predictive score indicating the patient's odds of memory deterioration within the following five years.

This could lead to early treatment for some patients and peace of mind for others. The FDA's "De Novo" designation means the product has no clear market predecessors but has proven its effectiveness and safety in clinical trials. BrainSee is fully automated and provides results on the same day the scans and cognitive test scores are entered.

Read the full story here

4. The future of work starts with trust

Workday research reveals that only 62% of business leaders welcome AI, and the same share are confident their organization will ensure responsible AI implementation; employees show even deeper skepticism than their leadership counterparts. The World Economic Forum emphasizes the importance of transparency, consistency, and meeting user expectations to establish trust in AI systems. Workday believes that AI should elevate humans and that trust in AI must be earned through transparency. However, three in four employees say their organization is not collaborating on AI regulation, and four in five say their company has not shared guidelines on responsible AI use.

Organizations should establish a responsible AI (RAI) program based on four pillars: principles, practices, people, and policy. Principles guide ethical foundations, ensuring fairness, transparency, accountability, and privacy. Practices include building responsible infrastructure, using robust tools, promoting transparency, and empowering customers. People create a culture of trust, requiring leadership commitment, dedicated resources, and cross-company support. A dedicated AI officer and a multidisciplinary team are essential for overseeing AI development, ethical reviews, and training.

Read the full story here 

5. GenAI risks and the role of the Chief Legal Officer

GenAI presents numerous risks alongside its opportunities, including compliance, operational, reputational, and regulatory risks. Key concerns include quality control, misinformation, and the accuracy of AI models. Chief Legal Officers play a crucial role in advising boards on AI-related risks and developing enterprise-wide mitigation policies. Intellectual property risks include the unauthorized use of copyrighted material to train large language models and licensing restrictions on AI-generated output.

The fragmented nature of the AI regulatory landscape presents risks for enterprises operating across diverse sectors and geographies. Companies can actively shape best practices by embracing responsible AI compliance programs, due diligence, and documentation. Building trust is essential for maximizing AI's economic and humanitarian potential, so companies must prioritize transparency, privacy, and risk-based frameworks that address the entire value chain. Chief Legal Officers can play a central role in steering organizations toward trust, accountability, and positive impact in this era of AI innovation.

Read the full story here 
