Our Goal

Our goal is to empower machines in such a way that they are capable of human levels of intelligence, characterized by:

  • Learning using only a minimal number of experiences of a given situation, e.g. a visual scene, which are often widely distributed in time and space
  • Transferring knowledge learnt from one experience to new situations which have not previously been encountered
  • Selecting the aspects of a situation, e.g. parts of a visual scene, which are most salient for a given task, and focusing information processing onto these aspects whilst maintaining the ability to switch attention to new objects of interest as the need arises
  • Minimizing energy consumption by responding principally to abnormal, unexpected situations, whilst simultaneously using predictive, learnt knowledge to maintain responses to normal situations.

Our Objectives

To achieve this goal, we are targeting the following key objectives:

  • Develop fundamental probabilistic graph-based generative neural networks, which use spatio-temporally defined neuronal processing, and sparse feed-forward, lateral and feedback connectivity to achieve the desired functionality
  • Seamlessly integrate deep generative neural network models for learning spatio-temporal features with graph-based knowledge representation for storing knowledge and inferring information from prior experiences
  • Deploy our neural networks and graph-based knowledge networks on a massively parallel, many-core distributed processing/memory hardware platform, using event-based asynchronous processing, in order to achieve fast, scalable, low energy computation
  • Develop processes for online, continuous learning and inference, in real-time on the same computing platform
  • Use event-based sensors and actuators to create end-to-end event-based systems and products which conserve the low energy characteristics of the algorithms and processing platform throughout the system/product.
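The event-based, asynchronous processing model behind these objectives can be sketched in a few lines. The following is an illustrative toy example, not our implementation: neurons are updated only when an input event arrives, rather than on a fixed clock, which is the mechanism that keeps energy consumption proportional to activity. All names, thresholds and delays are assumed for illustration.

```python
import heapq

class EventDrivenNeuron:
    """Toy leaky integrate-and-fire neuron, updated only on events."""

    def __init__(self, threshold=1.0, leak=0.1):
        self.potential = 0.0
        self.threshold = threshold
        self.leak = leak          # potential decay per unit time
        self.last_update = 0.0

    def receive(self, t, weight):
        # Decay the membrane potential for the elapsed interval,
        # then integrate the incoming event's weight.
        dt = t - self.last_update
        self.potential = max(0.0, self.potential - self.leak * dt)
        self.last_update = t
        self.potential += weight
        if self.potential >= self.threshold:
            self.potential = 0.0  # reset after firing
            return True           # emit an output event
        return False

def run(events, fan_out, neurons):
    """Process a time-ordered stream of (t, neuron_id, weight) events.

    No work is done between events: computation is driven entirely
    by the sparse event queue, not by a global clock tick.
    """
    queue = list(events)
    heapq.heapify(queue)
    fired = []
    while queue:
        t, nid, w = heapq.heappop(queue)
        if neurons[nid].receive(t, w):
            fired.append((t, nid))
            # Propagate the output event to downstream neurons
            # with a small illustrative transmission delay.
            for target, weight in fan_out.get(nid, []):
                heapq.heappush(queue, (t + 0.001, target, weight))
    return fired
```

On many-core hardware, each neuron's event queue can be handled by a local core next to its own memory, which is what makes this style of computation scale with low energy cost.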

Our STAR technology: Select, Track And Recognise

To demonstrate the power of our neuromorphic approach to machine intelligence, we are developing an end-to-end event-based machine vision system for fast emergency pedestrian collision avoidance, for deployment in automotive advanced driver assistance systems. The system will be capable of the rapid Selection of objects of interest in the visual scene, predictive Tracking of those objects to determine if they are moving into a danger zone, And accurate Recognition of each object to decide the action required.

  • Using Dynamic Vision Sensors (DVS) in combination with our neuromorphic algorithms and computing hardware, our end-to-end neuromorphic system will be sufficiently fast to meet emergency response time requirements, robust to both poor lighting and high-contrast lighting conditions, and highly energy efficient.
  • The system will use low-latency (microsecond) spatio-temporal motion event data from the DVS to detect visual events of interest in the real world, select and focus attention on them, and track their spatial and temporal features. By using these features to recognise the nature of each event, it will make rapid and accurate predictions about the potential for dangerous situations to arise.
  • Experience-based learning on graphical knowledge bases will allow the system to achieve continuous and persistent improvements in performance, whilst the use of validation techniques developed for cryptocurrency applications will ensure data integrity and that stringent safety criteria are met.
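To make the Selection step concrete: a DVS emits a sparse stream of events, each typically a tuple of pixel coordinates, timestamp and polarity. The sketch below shows one simple (assumed, not our production algorithm) way to focus attention: accumulate recent events into a coarse grid and select the most active cell as the region of interest. The sensor resolution and grid size are illustrative values.

```python
import numpy as np

# Assumed sensor resolution and attention grid cell size (pixels).
SENSOR_W, SENSOR_H = 640, 480
GRID = 16

def select_roi(events):
    """Select a region of interest from a batch of DVS events.

    events: iterable of (x, y, t, polarity) tuples.
    Returns (cx, cy): the pixel centre of the grid cell that
    received the most events, i.e. the current focus of attention.
    """
    counts = np.zeros((SENSOR_H // GRID, SENSOR_W // GRID), dtype=int)
    for x, y, t, p in events:
        counts[int(y) // GRID, int(x) // GRID] += 1
    # The busiest cell is where the scene is changing fastest,
    # which for a DVS usually means a moving object.
    gy, gx = np.unravel_index(np.argmax(counts), counts.shape)
    return gx * GRID + GRID // 2, gy * GRID + GRID // 2
```

Because a DVS only emits events where brightness changes, the event count itself is a cheap motion-saliency signal; tracking and recognition can then be restricted to the selected region rather than the full frame.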

Copyright © 2018