
BrainCore: Next generation cognitive technology

BrainCore is an intelligent system that can read text, understand its meaning, store information in a structured form and make logical decisions. It is used to create intelligent software for specific business domains that interprets text-based information and acts on its understanding. The system has three main parts: comprehension, the model of the world and decision-making.

Comprehension

The comprehension part of the system derives meaning from text sentences and maps it into the model of the world, regardless of how the sentences are phrased. The model incorporates a pretrained artificial neural network that determines context and enables correct interpretation of word meanings.

For example, from the sentence “I work as a product marketing manager at Google” the system can derive several meanings: a) the person is employed, b) the person's role is product marketing manager, and c) the employer is Google. All of these are pre-defined meanings in the model of the world, so the information can be channeled to the right data fields within the overall data structure.
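
As an illustrative sketch of this mapping (assuming a toy pattern-based extractor rather than BrainCore's actual NLU pipeline, and hypothetical field names), the sentence above could be channeled into structured fields like this:

    import re
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class EmploymentFact:
        """Structured meaning derived from an employment statement (hypothetical schema)."""
        is_employed: bool
        role: Optional[str] = None
        employer: Optional[str] = None

    def comprehend_employment(sentence: str) -> Optional[EmploymentFact]:
        """Toy extractor: maps 'I work as a <role> at <employer>' to structured fields.
        A real system would rely on a pretrained NLU model instead of a regular expression."""
        match = re.search(r"work as an? (?P<role>.+?) at (?P<employer>[\w .&-]+)",
                          sentence, re.IGNORECASE)
        if not match:
            return None
        return EmploymentFact(
            is_employed=True,
            role=match.group("role").strip(),
            employer=match.group("employer").strip(),
        )

    print(comprehend_employment("I work as a product marketing manager at Google"))
    # EmploymentFact(is_employed=True, role='product marketing manager', employer='Google')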

Creating comprehension capabilities became possible only recently, when modern natural language understanding (NLU) algorithms reached human-level accuracy. In January 2021, two systems achieved scores of 90.2 and 90.3 on the SuperGLUE benchmark, above the human baseline of 89.8. This benchmark for evaluating NLU models consists of a wide range of tasks, including question answering, natural language inference, co-reference resolution, word sense disambiguation, and others. It is a more complex version of the original General Language Understanding Evaluation (GLUE) benchmark. NLU models reached human-level performance on GLUE in July 2019, which triggered the need for a more stringent set of tests.

Achieving human-level language understanding is as important a milestone as computer vision first surpassing human performance in ImageNet image classification in 2015. That moment was like opening the eyes of self-driving cars, which could now recognize objects as accurately as humans; they then needed to be trained in driving skills to pass the driving test. Five years later, in 2020, the National Highway Traffic Safety Administration (NHTSA) oversaw self-driving pilots run by ten companies in seventeen cities (including Dallas, Washington, D.C., San Francisco and Austin, Texas) across nine US states.

Within knowledge work there is a subset of tasks that requires only a relatively simple level of language understanding. Nor does it carry the kind of significant risk that self-driving cars do, such as killing a person in an accident. This makes it possible to automate a large number of tasks today, and as algorithms grow more sophisticated and we get closer to Artificial General Intelligence (AGI), even more complex tasks can be automated in the coming years.

Model of the world

The model of the world is the central part of the system. It captures various types of relevant business information (a code sketch follows the list):

  • people with their multiple roles as students, employees, household members, taxpayers or volunteers
  • organizations of different types, such as companies, government agencies, non-profits and educational institutions
  • relationships between people and organizations, in certain cases involving products or services
  • products manufactured by suppliers and sold to other businesses directly or via retail
  • services delivered by vendors and consumed by businesses or households
  • named entities such as cities, regions and other geographies
  • company organization into ownership structure, management layers, departments, functions, employee roles and responsibilities
  • industries and market segments as aggregations of certain products and services
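
As a minimal sketch of how such a model of the world could be represented (the class and field names below are illustrative assumptions, not BrainCore's actual schema):

    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class Organization:
        """A company, government agency, non-profit or educational institution."""
        name: str
        org_type: str                    # e.g. "company", "non-profit"
        industry: Optional[str] = None

    @dataclass
    class Person:
        """A person can hold several roles at once (employee, student, taxpayer, ...)."""
        name: str
        roles: List[str] = field(default_factory=list)

    @dataclass
    class Relationship:
        """Links a person to an organization, for example an employment relationship."""
        person: Person
        organization: Organization
        relation: str                    # e.g. "employed_by", "customer_of"
        role: Optional[str] = None       # e.g. "product marketing manager"

    google = Organization(name="Google", org_type="company", industry="technology")
    person = Person(name="Alex", roles=["employee", "taxpayer"])
    employment = Relationship(person, google, relation="employed_by",
                              role="product marketing manager")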

OneForce uses leading industry experts to pre-define the baseline data structure for different domains. It covers industries, core functions within a company (strategy, leadership, innovation, research and development, product design) and typical non-core functions (marketing, sales, finance, human resources, legal, etc.). Additional algorithms expand the model of the world with new meanings derived programmatically from sentences. Both layers of the data structure are later used for logical decision-making.

The model of the world has a built-in hierarchical organization for geographies, industries, functions, roles and responsibilities. The information is organized using ontologies, knowledge graphs and relational databases. The data structure supports a data enrichment process in which information collected from multiple different sources is intelligently combined in one place.
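
In simplified form, such a data enrichment step could merge partial records about the same entity collected from several sources; the merge policy below (first non-empty scalar wins, lists are unioned) is an illustrative assumption:

    from typing import Any, Dict, List

    def enrich(records: List[Dict[str, Any]]) -> Dict[str, Any]:
        """Combine partial records about the same entity from multiple sources.
        Scalar fields: the first non-empty value wins; list fields are unioned."""
        merged: Dict[str, Any] = {}
        for record in records:
            for key, value in record.items():
                if value in (None, "", []):
                    continue
                if isinstance(value, list):
                    existing = merged.setdefault(key, [])
                    merged[key] = existing + [v for v in value if v not in existing]
                elif key not in merged:
                    merged[key] = value
        return merged

    # Partial views of the same company from two hypothetical sources
    crm_record = {"name": "Acme Corp", "industry": "manufacturing", "products": ["widgets"]}
    registry_record = {"name": "Acme Corp", "hq_city": "Dallas", "products": ["widgets", "gears"]}
    print(enrich([crm_record, registry_record]))
    # {'name': 'Acme Corp', 'industry': 'manufacturing', 'products': ['widgets', 'gears'], 'hq_city': 'Dallas'}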

Decision-Making

The decision-making part of the system collects relevant algorithms and applies them to data to reach accurate conclusions. It uses both rules created by experts and rules derived from data.

There is sophisticated functionality for transferring knowledge from experts to the machine. The knowledge acquisition process defines both the rules and the ontologies. Over time, the system learns from annotated datasets and from user decisions made at each step of the data analysis process.
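
A minimal sketch of how expert-authored and data-derived rules could be applied side by side (the rule names, fields and thresholds are hypothetical):

    from typing import Callable, Dict, List, Optional

    Rule = Callable[[Dict], Optional[str]]

    # Expert-authored rule: encodes domain knowledge directly.
    def rule_enterprise_account(record: Dict) -> Optional[str]:
        if record.get("employees", 0) > 1000:
            return "route_to_enterprise_sales"
        return None

    # Data-derived rule: the 0.7 threshold stands in for a value learned from
    # annotated past decisions; it is purely illustrative here.
    def rule_high_conversion_score(record: Dict) -> Optional[str]:
        if record.get("conversion_score", 0.0) >= 0.7:
            return "prioritize_follow_up"
        return None

    def decide(record: Dict, rules: List[Rule]) -> List[str]:
        """Apply every rule to the record and collect the conclusions that fire."""
        return [c for c in (rule(record) for rule in rules) if c is not None]

    account = {"name": "Acme Corp", "employees": 5200, "conversion_score": 0.82}
    print(decide(account, [rule_enterprise_account, rule_high_conversion_score]))
    # ['route_to_enterprise_sales', 'prioritize_follow_up']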

Additional data mining algorithms are used for categorization, scoring, prioritization, correlation of information and other analytical functions. The knowledge discovery process derives patterns from large volumes of data, extracts concepts from artifacts and determines dependencies from databases.
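
As one simplified example of scoring and prioritization (the features and weights below are assumptions for illustration, not the algorithms BrainCore actually uses):

    from typing import Dict, List

    # Illustrative feature weights; a real system would derive these from data.
    WEIGHTS = {"revenue_millions": 0.5, "employee_growth_pct": 0.3, "web_mentions": 0.2}

    def score(record: Dict) -> float:
        """Weighted sum of the selected features, used to rank records."""
        return sum(weight * float(record.get(feature, 0.0))
                   for feature, weight in WEIGHTS.items())

    def prioritize(records: List[Dict]) -> List[Dict]:
        """Order records from highest to lowest score."""
        return sorted(records, key=score, reverse=True)

    companies = [
        {"name": "Acme Corp", "revenue_millions": 120, "employee_growth_pct": 5, "web_mentions": 40},
        {"name": "Globex", "revenue_millions": 80, "employee_growth_pct": 25, "web_mentions": 90},
    ]
    for company in prioritize(companies):
        print(company["name"], round(score(company), 1))
    # Acme Corp 69.5
    # Globex 65.5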

OneForce takes an applied AI approach to building BrainCore technology. It is built from multiple AI capabilities available on the market, integrated into a single, unified intelligent system. The system takes advantage of the latest NLP algorithms, such as BERT and other transformer models. Custom deep learning models are created when no existing solution fits. Certain algorithms were developed to process large amounts of data more cost-efficiently.
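
As one example of reusing an off-the-shelf capability, a pretrained transformer can be applied through the Hugging Face transformers library; the model checkpoint and candidate labels below are illustrative assumptions, not the models BrainCore ships:

    # Requires: pip install transformers torch
    from transformers import pipeline

    # Zero-shot classification with a pretrained transformer model.
    classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

    sentence = "I work as a product marketing manager at Google"
    labels = ["employment", "purchase", "complaint"]

    result = classifier(sentence, candidate_labels=labels)
    print(result["labels"][0], round(result["scores"][0], 2))  # expected top label: "employment"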

References

  • Microsoft DeBERTa surpasses human performance on the SuperGLUE benchmark
  • SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems
  • Time for AI to cross the human performance range in ImageNet image classification
  • New autonomous vehicle map shows on-road test sites
