Let’s cultivate our gardens!

General artificial intelligence is the Holy Grail of IT, and it is being researched at all major development centres today.

We now know that machine learning will greatly change our lives, but it’s not yet clear how we can arrive at general artificial intelligence. After outlining the main schools of thought, we’ll look at a concept I’ve developed myself: the method of algofarming.


Is it possible to build an artificial entity that can be freely questioned about any field of life and give sensible answers? A system capable of combining the elements of the knowledge stored inside it to reach a level of complexity that no human could ever achieve? The question is decades old. Looking purely at the field of modern IT, without considering the philosophical scenarios proposed over the course of hundreds of years, we can still find experts who have been building the ontologies and knowledge graphs required by machine learning for over forty years, as well as experts who have been pondering the training of neural networks since the fifties.



Currently, there are five schools of artificial intelligence research. The first is symbolism, which gathers fundamental statements and regularities about how the world works into ontologies. One of the best-known proponents of this approach, Doug Lenat, has been developing the common-sense engine Cyc since the 1980s; it includes statements such as “you can’t be in multiple places at the same time” or “there’s no need to lean down to pick something up if you aren’t close to it”. Knowledge graphs like Cyc contain the notions characteristic of human thinking, together with their relations, and these serve as the basis of later operations and conclusions.
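To make this concrete, here is a toy sketch, not Cyc itself: common-sense facts stored as subject-relation-object triples, with a trivial rule that chains “is_a” links so the system can answer a question the facts never state directly. The facts and the “can” helper are invented for illustration.

```python
# Toy knowledge graph: facts stored as (subject, relation, object) triples.
facts = {
    ("cat", "is_a", "mammal"),
    ("mammal", "is_a", "animal"),
    ("animal", "can", "move"),
}

def can(entity, ability):
    """An entity can do whatever any of its ancestor categories can do."""
    if (entity, "can", ability) in facts:
        return True
    # otherwise, climb the is_a hierarchy and ask the same question there
    return any(can(parent, ability)
               for (subject, relation, parent) in facts
               if subject == entity and relation == "is_a")

print(can("cat", "move"))  # True: cat -> mammal -> animal, and animals can move
```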


The second main school of machine learning is that of the connectionists, who see the construction and training of neural networks as the most effective route to artificial intelligence. Neural networks are complex, multi-layered systems in which each layer performs a long series of calculations. By processing a great number of photographs, such systems can be trained to identify the species of the animals in the pictures or to categorise objects, and, trained on text samples, even to write sci-fi novels. “Training” is an apt expression, as these networks don’t “understand” the essence of their task, don’t operate with human concepts, and are therefore unable to “justify” the results of their calculations and decisions.
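As a minimal illustration of what “training” means here (far simpler than the networks described above), the following sketch trains a tiny two-layer network on the XOR problem using only NumPy; the layer size, learning rate and number of iterations are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# the XOR problem: four input pairs and their labels
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# two layers of weights: 2 inputs -> 8 hidden units -> 1 output
W1, b1 = rng.normal(size=(2, 8)), np.zeros((1, 8))
W2, b2 = rng.normal(size=(8, 1)), np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(5000):
    # forward pass: compute the network's current answers
    hidden = sigmoid(X @ W1 + b1)
    out = sigmoid(hidden @ W2 + b2)

    # backward pass: nudge every weight to reduce the squared error
    d_out = (out - y) * out * (1 - out)
    d_hidden = (d_out @ W2.T) * hidden * (1 - hidden)
    W2 -= lr * hidden.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_hidden
    b1 -= lr * d_hidden.sum(axis=0, keepdims=True)

print(np.round(out, 2))  # should approach [0, 1, 1, 0]
```

Note that nothing in the trained weights explains why the answers are correct, which is exactly the opacity discussed below.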


A major disadvantage of the connectionist approach is that it’s easy to “mis-train” a network. A well-known example is the case in which a neural network created to recognise animal species classified an image of a lynx as a wolf with 99 percent certainty.


[Image: the lynx that was classified as a wolf]


As it’s impossible to inspect the operations going on in the depths of the system, such mistakes are very hard to correct; only after many experiments did the developers realise what they had failed to take into consideration while training the system. The images used to train the network to recognise wolves mostly had a snowy background, and the image of the lynx also happened to feature a snowy landscape. The example clearly shows how much care and rigour training a neural network requires, and how much its correct operation can hinge on incidental, seemingly minor factors.


[Image: Super Mario]


The school of evolutionism applies the logic of biological evolution, seeking solutions by mutating algorithms and letting them develop over generations. If an algorithm achieves partial results on a particular problem, it is run again with slight adjustments, or crossed with other successful algorithms, again and again until, many generations later, an algorithm achieves the desired result. Although Google’s DeepMind division is nowadays mostly in the limelight for its neural networks, the evolutionist approach is also yielding increasingly spectacular results.
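A minimal sketch of this idea, under toy assumptions: a population of random strings is repeatedly scored against a target phrase, the best candidates are kept, and their mutants refill the population until the target appears. The target, the population size and the mutation rate are invented for illustration.

```python
import random
import string

TARGET = "hello world"
ALPHABET = string.ascii_lowercase + " "

def fitness(candidate):
    # number of characters already in the right place
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate, rate=0.1):
    # each character has a small chance of being replaced at random
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in candidate)

# generation zero: completely random candidates
population = ["".join(random.choice(ALPHABET) for _ in TARGET)
              for _ in range(200)]

generation = 0
while TARGET not in population:
    population.sort(key=fitness, reverse=True)
    parents = population[:20]                      # keep the most successful
    population = parents + [mutate(random.choice(parents))
                            for _ in range(180)]   # refill with their mutants
    generation += 1

print(f"reached the target after {generation} generations")
```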

The two other schools of artificial intelligence research won’t be discussed in detail on this occasion, as they are less relevant to the subject at hand, but in brief: the Bayesian school uses probability calculations to confirm or refute hypotheses, while the analogising school seeks solutions to seemingly unknown problems through the similarities between cases.
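For completeness, a one-step illustration of the Bayesian idea, with made-up numbers: a prior belief in a hypothesis is updated after observing a piece of evidence.

```python
# all probabilities below are invented for the sake of the example
prior = 0.01            # P(hypothesis) before seeing any evidence
p_e_given_h = 0.90      # P(evidence | hypothesis is true)
p_e_given_not_h = 0.05  # P(evidence | hypothesis is false)

# Bayes' rule: P(h | e) = P(e | h) * P(h) / P(e)
p_evidence = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
posterior = p_e_given_h * prior / p_evidence
print(round(posterior, 3))  # belief rises from 1% to roughly 15%
```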


What new ideas can algofarming bring to the table?


The answer, in short, is: nothing. The concept I propose is based on elements that were already partly or fully known to the proponents of one tradition or another.

Algofarming is based on ontologies and evolution. An ontology contains the information on the problem to be solved: among other things, the basic entities, operations and relations characteristic of the problem. This knowledge graph determines the framework within which the system operates, establishing the ground rules that the algorithms will work with. The system then creates a predetermined number of algorithms from the elements listed in, or derived from, the graph, and these start working on the problem simultaneously.
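As a purely hypothetical sketch of this first step (the toy problem, the operation names and the candidate format are all invented here, not taken from any real algofarming implementation), a small set of permitted operations stands in for the knowledge graph, and candidate algorithms are random compositions of those building blocks.

```python
import random

# the "ontology" of a toy arithmetic problem: the operations a candidate may use
OPERATIONS = {
    "add_one": lambda x: x + 1,
    "double":  lambda x: x * 2,
    "square":  lambda x: x * x,
}

def random_candidate(max_len=4):
    # a candidate algorithm is simply a readable sequence of operation names
    return [random.choice(list(OPERATIONS))
            for _ in range(random.randint(1, max_len))]

def run(candidate, value):
    for name in candidate:
        value = OPERATIONS[name](value)
    return value

population = [random_candidate() for _ in range(50)]
print(population[0], "->", run(population[0], 3))
```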


After a set number of steps, or a set amount of time, the system examines which of the algorithms have achieved results, retaining the most successful ones and discarding the rest. Then, following a method known from the evolutionist school, derivatives or mutants of the more successful algorithms are created to fill the vacated positions. This production of algorithms is repeated until the system arrives at the simplest process that successfully solves the problem. The end result is an algorithm that, unlike an opaque neural network, can be read, interpreted, corrected and, if necessary, extended by human beings.
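Continuing the hypothetical sketch, restated here so it runs on its own: the loop below scores every candidate on a toy task (turn 3 into 64), keeps the most successful, refills the vacated slots with their mutants, and stops once a candidate solves the task exactly; shorter candidates win ties, which stands in for the “simplest process” mentioned above. The task, the scoring and the mutation scheme are all invented for illustration.

```python
import random

OPERATIONS = {"add_one": lambda x: x + 1,
              "double":  lambda x: x * 2,
              "square":  lambda x: x * x}

def random_candidate(max_len=4):
    return [random.choice(list(OPERATIONS))
            for _ in range(random.randint(1, max_len))]

def run(candidate, value):
    for name in candidate:
        value = OPERATIONS[name](value)
    return value

def score(candidate):
    # closer to the goal is better; shorter candidates break ties
    return -abs(run(candidate, 3) - 64) - 0.01 * len(candidate)

def mutate(candidate):
    mutant = list(candidate)
    mutant[random.randrange(len(mutant))] = random.choice(list(OPERATIONS))
    return mutant

population = [random_candidate() for _ in range(50)]
for generation in range(200):
    population.sort(key=score, reverse=True)
    if run(population[0], 3) == 64:
        break                                      # an exact solution was found
    survivors = population[:10]                    # keep the most successful
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(40)]  # refill with mutants

# unlike a trained network's weights, the result is readable by a human
print(population[0])  # e.g. ['add_one', 'double', 'square']: 3 -> 4 -> 8 -> 64
```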


The most effective machine-learning concept could yield a true breakthrough for future economies. A reliable and versatile system could find correlations in chaotic collections of data such as meteorological measurements or stock market figures, and such technologies could also give fresh impetus to the self-driving car industry.

Therefore, it’s worth being a good gardener and cultivating our algorithm farms!

Written by Laci Németh
Software Development Engineer
2020-01-17
