Singapore tops global rankings in AI skill adoption
Pivot 5: 5 stories. 5 minutes a day. 5 days a week.
1. Singapore tops global rankings in AI skill adoption
Workers in Singapore are the fastest in the world at adopting AI skills, according to LinkedIn's latest Future of Work report. The report, which analyzed data from 25 countries, found that Singapore has the highest "diffusion rate" of AI skills: the share of members adding AI skills to their profiles has grown twentyfold since January 2016, far above the global average of eightfold. Finland, Ireland, India, and Canada follow Singapore in the top five countries with the highest rates of AI skills diffusion.
Pooja Chhabria, career expert and Asia-Pacific head of editorial at LinkedIn, attributes Singapore's rapid adoption of AI skills to the country's robust digital infrastructure, strong intellectual property protection framework, and thriving venture capital ecosystem. "We have seen rapid growth in AI development and adoption fueled by startups and businesses over the years, in their efforts to carve out new niches or achieve greater competitive advantage," Chhabria said.
The report also highlights the fastest-growing AI-related skills added to member profiles, hinting at the emergence of generative AI. Skills such as question-answering, classification, and recommender systems have seen significant growth. Generative AI's ability to create text, images, and other content in response to human input has sparked new fears of jobs being replaced by technology. However, LinkedIn's analysis found that many skills, such as lesson planning and curriculum development for teachers, could be augmented by generative AI, potentially lightening workloads and allowing professionals to focus on the most important parts of their jobs.
Read the full story here
2. Viome secures $86.5M in funding, partners with CVS for microbiome testing
Viome, a microbiome startup, has raised $86.5 million in a Series C funding round co-led by Khosla Ventures and Bold Capital. The company, based in Bellevue, Washington, assesses customers' microbiomes, applies AI to the data, and provides them with supplements and other guidance based on the findings. Viome claims that its RNA sequencing technology, originally developed out of research at the Los Alamos National Laboratory, is clinically validated and capable of analyzing biological samples in at least 1,000 times greater detail than other technologies.
Since its founding in 2016, Viome has run tests for some 350,000 consumers from 106 countries, amounting to some 600,000 samples that feed and inform its algorithms. The company plans to use the funding to expand its existing business, which includes tests based on blood, stool, and saliva samples, vitamin supplements, and dietary assessments. Viome also aims to break into new areas, including new product lines around mouth and dental health, and retail partnerships.
As part of its retail expansion, Viome has inked a deal with CVS that will see the pharmacy chain offer Viome tests in some 200 stores in the U.S. The CVS deal involves a revenue share agreement, with CVS buying kits from Viome for sale in its stores. Viome CEO and founder Naveen Jain said that CVS believes more people are becoming gut-health conscious and wants to offer Viome's products to address that demand.
However, companies in the microbiome space, including Viome, have faced challenges and criticisms. Dr. Jonathan Eisen, a professor at the University of California, Davis, and a specialist in medical microbiology and genomics, has questioned the scientific validity of Viome's claims. Eisen has called the company the "Theranos of Microbiome Studies" and has criticized its marketing as misleading and scientifically inaccurate.
Viome's raise and plans to grow come amid a critical moment for companies playing in the crossover of healthcare, technology, biotech, and changing consumer sentiments. As the microbiome field continues to evolve, the need for rigorous scientific validation and evidence-based approaches remains paramount.
Read the full story here
3. UK government invests £100m in AI chips amid global race
The UK government is investing £100 million to gain a foothold in the global race to produce computer chips that power AI. Taxpayer money will be used as part of a drive to build a national AI resource in Britain, similar to those under development in the US and other countries. The funds will be used to order key components from major chipmakers Nvidia, AMD, and Intel. However, an official briefed on the plans expressed concerns that the £100 million investment is far too low compared to the investments made by peers in the EU, US, and China.
The government is in advanced stages of an order of up to 5,000 graphics processing units (GPUs) from Nvidia, a company that initially focused on building processing capacity for computer games but has seen a sharp increase in its value as the AI race has heated up. GPUs, also known as graphics cards, provide the parallel processing capacity that is critical for running the complex computations AI requires.
Despite these efforts, concerns are rising in the industry and Whitehall that the UK government's actions may prove too little, too late. The UK accounts for just 0.5% of global semiconductor sales. In May, Rishi Sunak's government revealed plans to invest £1 billion over 10 years in semiconductor research, design, and production, a step dwarfed by the US's $52 billion Chips Act and EU subsidies of €43 billion.
The UK's relatively weak investment could leave the country exposed amid mounting geopolitical tensions over AI chip technology. The White House recently moved to ban US investment in advanced Chinese semiconductors, and China declared chips from US manufacturer Micron a security risk.
As part of the government's drive, the UK is set to hold an AI summit this autumn aimed at establishing shared standards for technology that some believe could pose an existential risk to humanity. UK Research and Innovation, a funding body, is leading the effort to get the UK's orders in place with major chip manufacturers alongside the Department for Science, Innovation, and Technology.
Read the full story here
4. MIT funds projects exploring human-computer interaction in modern workspaces
The MIT Stephen A. Schwarzman College of Computing has awarded seed grants to seven interdisciplinary projects exploring how AI and human-computer interaction can enhance modern workspaces for better management and higher productivity. Funded by Andrew W. Houston ’05 and Dropbox Inc., these projects bring together researchers from computing, social sciences, and management to conduct research in this rapidly evolving area.
One of the selected projects, "LLMex: Implementing Vannevar Bush’s Vision of the Memex Using Large Language Models," led by Patti Maes of the Media Lab and David Karger of the Department of Electrical Engineering and Computer Science (EECS) and the Computer Science and Artificial Intelligence Laboratory (CSAIL), proposes to design and implement memory prosthetics using large language models. The AI-based system aims to intelligently help individuals keep track of vast amounts of information, accelerate productivity, and reduce errors by automatically recording their work actions and meetings, supporting retrieval based on metadata and vague descriptions, and suggesting relevant, personalized information proactively based on the user’s current focus and context.
Another project, "Using AI Agents to Simulate Social Scenarios," led by John Horton of the MIT Sloan School of Management and Jacob Andreas of EECS and CSAIL, envisions the ability to easily simulate policies, organizational arrangements, and communication tools with AI agents before implementation. Tapping into the capabilities of modern LLMs to serve as a computational model of humans makes this vision of social simulation more realistic and potentially more predictive. These projects, along with others awarded seed grants, have the potential to significantly impact AI-augmented management and productivity in various fields.
Read the full story here
5. Hybrid AI system proposed to tackle LLM shortcomings
Large language models (LLMs) face significant challenges in practical applications, including unpredictability, lack of reasoning, and uninterpretability. In a recent paper, cognitive scientist Gary Marcus and AI pioneer Douglas Lenat argue that the required capabilities for a trustworthy general AI mostly come down to knowledge, reasoning, and world models, which are not well handled within LLMs. They propose an alternative AI approach that could theoretically address these limitations: "AI educated with curated pieces of explicit knowledge and rules of thumb, enabling an inference engine to automatically deduce the logical entailments of all that knowledge."
Marcus and Lenat believe that LLM research can learn and benefit from Cyc, a symbolic AI system that Lenat pioneered more than four decades ago. Cyc is a knowledge-based system that provides a comprehensive ontology and knowledge base that the AI can use to reason. Unlike current AI models, Cyc is built on explicit representations of real-world knowledge, including common sense, facts, and rules of thumb. It includes tens of millions of pieces of information entered by humans in a way that can be used by software for quick reasoning.
In their paper, Lenat and Marcus outline 16 capabilities that AI must have to be trusted in critical settings, where the cost of error is high. LLMs struggle in most of these areas. For example, AI should be able to recount its line of reasoning behind any answer it gives and trace the provenance of every piece of knowledge and evidence that it brings into its reasoning chain. Deductive, inductive, and abductive reasoning, as well as analogies and theory of mind, are also important capabilities for AI systems.
The authors propose a synergy between a knowledge-rich, reasoning-rich symbolic system such as Cyc and LLMs. They suggest both systems can work together to address the "hallucination" problem, which refers to statements made by LLMs that are plausible but factually false. Cyc can provide LLMs with knowledge and reasoning tools to explain their output step by step, enhancing their transparency and reliability.
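To make the hybrid idea concrete, here is a minimal, hypothetical sketch of the verification pattern described above: a small symbolic knowledge base deduces the logical entailments of curated facts via forward chaining, and a candidate statement (such as one produced by an LLM) is accepted only if it is entailed by that knowledge. The facts, rule, and claims below are illustrative placeholders and do not reflect Cyc's actual knowledge base or API.

```python
# Curated facts as (relation, subject, object) triples -- placeholders.
FACTS = {("parent", "alice", "bob"), ("parent", "bob", "carol")}

def forward_chain(facts):
    """Deduce entailments with one rule:
    parent(X, Y) and parent(Y, Z) => grandparent(X, Z)."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        new = set()
        for (r1, x, y) in derived:
            for (r2, y2, z) in derived:
                if r1 == "parent" and r2 == "parent" and y == y2:
                    fact = ("grandparent", x, z)
                    if fact not in derived:
                        new.add(fact)
        if new:
            derived |= new
            changed = True
    return derived

def verify(statement, facts):
    """Accept a candidate (e.g. LLM-generated) claim only if the
    knowledge base entails it; otherwise flag it as unsupported."""
    return statement in forward_chain(facts)

# An entailed claim is accepted; a plausible-but-unsupported claim
# (a "hallucination") is rejected.
print(verify(("grandparent", "alice", "carol"), FACTS))  # True
print(verify(("grandparent", "alice", "dave"), FACTS))   # False
```

Because every accepted statement traces back to explicit facts and a named rule, the symbolic side can recount the line of reasoning behind an answer, which is the transparency property the authors argue LLMs lack on their own.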
Marcus and Lenat advocate for hybrid AI systems that bring together neural networks and symbolic systems. The combination of Cyc and LLMs can be one of the ways that the vision for hybrid AI systems comes to fruition. "There have been two very different types of AI's being developed for literally generations," the authors conclude, "and each of them is advanced enough now to be applied — and each is being applied — on its own; but there are opportunities for the two types to work together, moving us one step further toward a general AI which is worthy of our trust."
Read the full story here
Listen to the daily Pivot 5 podcast on Spotify, Apple Podcasts, and more