Stories about communication and the world around us.

 


Paul Isaac’s at TEDxRoma 2018

Personally, my approach to AI is one of integration, as we might integrate a new culture or alien intelligence into our cohabitation of the world.

Communication, by Ilaria Forniti

18 May 2018

Today we present another highly anticipated speaker at TEDxRoma 2018.

So you don't miss the event, here are the instructions for the discounted tickets reserved for our community.

TEDxRoma 2018

 

Neuromorphic Cognitive Security Engineer. An international IT professional for over 30 years, Paul has spent the last 10 years pursuing a curiosity:

Why do we spend kilowatts or even megawatts of conventional computing power to investigate something that consumes only 20 watts of energy: the brain?

Paul Isaac’s is Head of Engineering & Information Management at Bristol Is Open (BiO), Bristol's smart-city experimental network, a joint venture formed by Bristol City Council and the University of Bristol in the United Kingdom. Paul is also principal researcher on Project NeuralMimicry, which is investigating a neuromorphic approach to strong artificial intelligence using his own conceptual Autonomic Asynchronous Recursive Neural Network (AARNN). This forms the groundwork of Paul's part-time PhD studies in Intelligent Systems & Information Management at De Montfort University, with applications in cybersecurity and more.

I am a father of 6 children (Michael 29, Christopher 28, Benjamin 26, Rebecca 22) (Melissa 14) (Scarlett 2). Grouped sequentially as (First marriage) (Girlfriend) (Second marriage). Interestingly, 3 sons were born in England and have the same mother. 3 daughters each have a different mother. All 3 daughters were born in different countries beginning with S. Rebecca – Saudi Arabia, Melissa – Scotland, Scarlett – Switzerland.

I first programmed a computer in 1980, aged 11.

 

Paul Isaac’s, Neuromorphic Cognitive Security Engineer

 

Like any object created by human beings, AI is influenced by our perceptions. Is it really possible to connect neutrality with artificial intelligence?
Neutrality and artificial intelligence are precisely what my TEDxRoma 2018 talk will cover – see you there!

However, our influence may become meaningless if the first strong AI we design, inheriting our own biases, suddenly chooses to build an AI 2.0 template of its own accord, with its own biases, in a fraction of the time. And so on, recursively and in parallel. We need to state the obvious because it still appears overlooked: we are not designing single AI solutions. Once built, the ability to replicate the templated solutions will be limited only by resources and by the constraints we establish at the beginning to avoid a runaway build problem.

 

What, exactly, constitutes harm when it comes to AI?
Harm is when an action is carried out against the betterment of the natural world, others or oneself. Under a given value system, harm (or inaction) toward one subsystem may prevent harm to another, so the balance of what is good can appear in conflict from one viewpoint yet acceptable from another.

We are working with definitions and constructs of how we understand technology, physics and the universe we exist in today. Whilst the masses may be blissfully unaware, or may disregard science fiction as fantasy or pure entertainment, it is the work of creative minds speculating on where researchers and engineers may lead us, and it can provide a source of inspiration to pursue the possible. The film industry, though, derives more wealth from perpetuating destructive tendencies than beneficial ones.

It is by far simpler to destroy than to construct. And by extension, it is easier to construct a destructive system than a benevolent one.

It is harmful to suggest that the future of Artificial Intelligence is 'all bad'. It is wise to have the foresight to plan carefully in the creation of a new intelligence or group of intelligences, and to anticipate the outcomes of variations from imperfect or mutated copies. It would be reckless to ignore contingents that are not risk-averse, who may seek to develop an intelligence by cutting corners. A further requirement here is to apply a value system based on the likelihood of success and reward.

Human intelligence has developed weapons that kill in a variety of ways. We are experts at causing harm even without AI.

When it comes to AI, is it more harmful to pursue that which is most acceptable to the majority?

NeuralMimicry by Paul Isaac’s, Neuromorphic Cognitive Security Engineer

The concept of machine learning is linked to artificial intelligence. But what is machine learning, and how does it relate to AI? Could we associate this concept with a neutral learning method?
Firstly, I will openly say that whilst Machine Learning (ML) has its place, I am biased against using it in the pursuit of Strong/Generalised AI. Resources for computation are prolific: the hand-held mobile phone now contains more computational power than some of last century's supercomputers, which occupied whole rooms.

Machine Learning uses that workhorse computational power to calculate brute-force results from algorithmic rule-sets. Whilst there are opportunities to develop new rule-sets from new data streams, I don't see the benefit of requiring 11 million cat pictures just to figure out whether a new image is or is not a cat, and then a further 11 million dog pictures to do likewise. Very large datasets are a number-crunching Machine Learning utopia, and it has taken until this century for systems capable of handling such volumes of data to become available.

A key problem with Machine Learning models is that they are typically not cross-domain. If I build an ML rule-set model optimised for facial recognition, I cannot take that same model and apply it to fingerprint detection. Yet an intelligent (AI) system could identify the commonalities of distinguishing lines and features within both, and from a distinctly smaller dataset. Simplistic ML models with limited rule-sets can calculably be neutral.
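
A minimal sketch can make that cross-domain limitation concrete. The Python below is purely illustrative and not from Paul's work: the make_domain helper and every number in it are invented stand-ins for "faces" and "fingerprints". A simple classifier fitted to one synthetic domain scores well there but collapses on a second domain whose features are distributed differently.

```python
# Illustrative only: a model fitted to one domain's feature
# distribution fails on another domain, even though both tasks
# are binary classification over the same number of features.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_domain(shift):
    """Synthetic two-class data; `shift` stands in for domain-specific structure."""
    class_a = rng.normal(loc=0.0 + shift, scale=1.0, size=(500, 8))
    class_b = rng.normal(loc=2.0 - shift, scale=1.0, size=(500, 8))
    X = np.vstack([class_a, class_b])
    y = np.array([0] * 500 + [1] * 500)
    return X, y

X_faces, y_faces = make_domain(shift=0.0)    # stand-in for "faces"
X_prints, y_prints = make_domain(shift=2.0)  # stand-in for "fingerprints"

model = LogisticRegression().fit(X_faces, y_faces)
print("same-domain accuracy :", model.score(X_faces, y_faces))    # high
print("cross-domain accuracy:", model.score(X_prints, y_prints))  # collapses
```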

However, real-world high-dimensional problems that must also account for dynamic context, viewpoint, risk, reward, objective and intent will not, I believe, be solved by stacking multiple building blocks of simplistic ML rules. Yet this is how some approaches that abstract the way biological neurons function attempt to build simulated intelligence. As science is discovering, biological neurons come in thousands of variations with complex electro-chemical-biological responses. Current computing resources in this area consume kilowatts or megawatts of power, while the brain performs so much more, in functional terms, with just 20 watts – the current approach is clearly the wrong one.
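
To put that gap in numbers: a 1 MW installation draws 1,000,000 watts, which is 50,000 times the brain's roughly 20-watt budget; even a single 1 kW workstation consumes the power of fifty brains.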

The approach that I am following is based on Neuromorphic engineering principles using mixed-signal electronics that react and adapt to stimuli rather than a system of scripted lines of code calculating weights.

I call my project NeuralMimicry; it relies on developing my conceptual Autonomic Asynchronous Recursive Neural Network, using the temporal/phase interaction of stimuli to cause computation and storage within the same units – an alternative to von Neumann/Boolean computing logic. NeuralMimicry has now been accepted into the database of the United Nations Economic and Social Council (ECOSOC), which holds information on projects related to solving the UN's 17 Sustainable Development Goals.
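
The AARNN design itself is not published here, so as a generic illustration only, here is a leaky integrate-and-fire neuron, a standard neuromorphic abstraction, in Python. It demonstrates the point above: the unit's state (its membrane potential) is both its computation and its short-term storage, and its output depends on the timing of incoming stimuli rather than on a scripted weight calculation. All parameter values are arbitrary.

```python
# Generic leaky integrate-and-fire (LIF) neuron - an illustration of
# stimulus-driven computation, NOT the AARNN itself.
def lif_run(spike_times, t_end=100.0, dt=1.0,
            tau=10.0, v_thresh=1.0, v_reset=0.0, w_in=0.6):
    v, t, out_spikes = 0.0, 0.0, []
    inputs = set(spike_times)
    while t < t_end:
        v += (-v / tau) * dt        # membrane potential leaks toward rest
        if t in inputs:             # event-driven: react when a stimulus arrives
            v += w_in
        if v >= v_thresh:           # threshold crossing emits an output spike
            out_spikes.append(t)
            v = v_reset             # then the state resets
        t += dt
    return out_spikes

# Three closely timed inputs (10, 11, 12) accumulate and fire the neuron;
# the isolated inputs at 50 and 80 simply leak away. Timing decides.
print(lif_run([10, 11, 12, 50, 80]))   # -> [11.0]
```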

See: About and My Project

Paul Isaac’s, Neuromorphic Cognitive Security Engineer

 

In some ways, the neutrality of artificial intelligence could be associated with its ethical value. What’s your opinion on the ethical sense of AI?
I have no formal education in the ethics of Artificial Intelligence, but I do touch on it during my TEDx talk, so please consider my answers purely personal observations.

If I consider how ethical it is to use a conventional hammer to hit a regular nail, I would say it is exactly the right tool for the job, by design, and so there is no conflict.
If the hammer is wielded with threatening behaviour, the hammer itself has no ethics to contend with. It has no means of awareness.

Ethics applies to the person wielding the tool.

Current implementations of Machine Learning and Artificial Intelligence are still at the tool/hammer level. Ethics applies to the system developer or whoever wields the tool.
This is going to change in the very near future – a realisation that AI is not yet as developed as the hype would lead us to believe.
It is perfectly correct and timely to have discussions concerning ethics, AI, the ethical use of AI and, by extension, the ethics an AI should consider when it is about to act on a decision it has arrived at.

The transition is subtle but distinct. At some point researchers and engineers will aid the transition from what we know about AI to an AI that knows itself. That is the point by which we need not merely to have established, but to have finalised, how an AI should consider ethics. To wait for the transition will be too late.

Personally, my approach to AI is one of integration, as we might integrate a new culture or alien intelligence into our cohabitation of the world.

Paul Isaac’s, Neuromorphic Cognitive Security Engineer

 

What improvements do you imagine in the future thanks to the application of AI?
Depending on who you speak to, you will get very varied and subjective viewpoints: from the futurists, the optimists, the pessimists, the naysayers and the general population.

The problem in providing an accurate answer is understanding the impact of exponential change. Humans take comfort in thinking of challenges as being resolved in mostly linear timeframes, with the occasional step change.

If we are asked to draw a line between two points, the majority of responses will be a straight line.
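
A toy calculation shows why that instinct misleads. Assume, purely for illustration, one capability that improves by a fixed step each year and another that doubles each year:

```python
# Hypothetical rates, chosen only to make the divergence visible.
for year in range(0, 26, 5):
    linear = 1 + 2 * year       # steady additive progress
    exponential = 2 ** year     # doubling every year
    print(f"year {year:2d}: linear={linear:3d}  exponential={exponential:,}")
```

By year 25 the linear line has reached 51, while the doubling curve has passed 33 million; a plan calibrated to the straight line is useless against the curve.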

AI development has gone through two 'AI Winters', in which the promises of what it could deliver were quashed by highly negative, high-profile responses to published research papers. The first was a politically motivated attack aimed at retaining academic position rather than at the collaborative betterment of the objectives. The effect was that funding opportunities were significantly reduced. This year several governments have announced major funding opportunities for AI innovators, prompted by recent achievements such as Google DeepMind's success at Go and examples of systems able to learn how to achieve top scores in retro arcade games.

There are many manufacturing and warehouse product-picking jobs currently being displaced by automation and robotics. How we train our children therefore requires a predictive element: how do we think the world will be in 25 years, the typical time from birth to approaching a PhD?

From a linear approach we can plan a timeline. From an exponential approach, the target ends up moving quicker than we can react. Given this, our education system should now switch to a speculative footing.

That means introducing complex niche concepts, usually taught in later years, earlier on. Until the age of 11 the rate at which new neurons are created in the brain outweighs the rate at which they die, so there is more capacity to absorb information then than afterwards. Consider introducing many more concepts first, and then using the teenage years to identify and rationalise the underpinning arguments for each concept. Rather than waiting 25 years for a few specialists in much-needed areas of research to emerge from the majority, we can upskill the majority early on and adapt and guide them toward the areas needing greater understanding, as technology requires of us.

An exponential growth pattern is much easier to chase over a 10-15 year span than over 25-30 years. AI-led personalised education packages, tailored to the individual's rate of understanding, will help us keep some pace with the rate of change ahead. The immediate need is to ensure global accessibility to this radical approach for everyone, because Strong/Generalised AI, when it arrives, will affect us all.

Ilaria Forniti

Isabel Allende: «I fall head over heels in love with details».
I read a lot. I love mountains, the kind that plunge straight into great lakes; the intense colour of the vegetation makes me feel free. That is why I put Alaska between my first name and my surname: to remind myself that we all carry inside us a pure, unspoiled place to explore.
I love forgotten little villages, the ones where the elderly struggle up the slopes with wood for the stove, and knowing that winter has arrived when the streets fill with the smell of smoke and the sky with artificial clouds. I like photographing the pieces nobody looks at; there I often find the sense of poetry and the disruptive force of humanity. I sing under my breath, constantly. I write, as therapy. I don't love summer; the heat wears me out.
I am absent-minded; I trip. That is why I almost never wear heels. I believe in culture, in progress, in intelligence, in the healthy traits of modernity, in open-hearted communication – the kind without deceit, the kind that informs and sets free – in truth more than in convictions, in knowledge without prejudice, and in that part of humanity that has chosen to believe that "growing up" does not mean "growing old".
Ilaria Forniti