
AI Quintessence

It’s Deep

Today, we are trying Deep Neural Networks on many [previously] unsolved problems. Image and language recognition with CNNs and LSTMs has become standard. Machines can classify images, speech, and text faster, better, and for much longer than humans.

There has been a breakthrough in real-time computer vision, capable of identifying objects and object segments. That is very impressive: it enables self-driving cars and indoor positioning without radio beacons or other infrastructure. The machine sees more than a human, because it sees everything in 360 degrees. And it sees more details simultaneously, while a human overlooks the majority of them.

We have created a new kind of intelligence that is similar to human intelligence, yet very different from it. Let's call this AI Another Intelligence. A program is able to recognize and identify more than one billion human faces. This is not equivalent to what humans can do. How many people could you recognize or remember? A few thousand? Maybe several thousand? Fewer than ten thousand for sure (the size of a small town); so 1,000,000,000 vs. 10,000 is impressive, and definitely another type of intelligence.

DNNs are loved and applied to almost any problem, even ones previously solved with different tools. In many cases DNNs outperform those tools. DNNs have become a hammer, and the problems have become the nails. In my opinion, there is overconfidence in the new tool, and it runs pretty deep. Maybe it slows us down on the way to reverse engineering common sense and consciousness…

The Man Without a Brain

DNNs were inspired by neuroscience, and we were confident that we were digitally recreating the brain. Here is a cold shower: a man with a tiny brain, about 10% the size of a normal human brain. The man was considered normal by his relatives and friends. He lived a normal life. The condition was discovered accidentally, and it shocked medical professionals and scientists. There are hypotheses attempting to explain what we don't understand.

There are other brain-related observations that challenge our current understanding of the brain. Some birds are pretty intelligent: parrots, with tiny brains, could challenge dolphins, which have human-sized brains, and even some chimps. A bird's brain is structured differently from the mammalian brain. Does size matter? Elephants have a huge brain, with 3x more neurons than humans, though the vast majority of those neurons sit in a different part of the brain (the cerebellum) than in humans.

All right, the structure of the brain matters more than its size. So are we modeling the correct brain structure with DNNs?

The Structure And The Function

Numenta has been working on reverse engineering the neocortex for a decade. Numenta's machine intelligence technology is built on its own computational theory of the neocortex. It deals with hierarchical temporal memory (HTM), sparse distributed memory (SDM), sparse distributed representations (SDR), and self-organizing maps (SOM). The network topologies are different from mainstream deep perceptrons.
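To make the SDR part concrete, here is a minimal sketch (my own illustration, not Numenta's code) of sparse distributed representations: long binary vectors with only a few active bits, compared by counting the bits they share. All sizes and names below are hypothetical.

```python
import numpy as np

def random_sdr(size=2048, active_bits=40, rng=None):
    """A sparse binary vector with a fixed, small number of active bits."""
    if rng is None:
        rng = np.random.default_rng()
    sdr = np.zeros(size, dtype=np.uint8)
    sdr[rng.choice(size, active_bits, replace=False)] = 1
    return sdr

def overlap(a, b):
    """Similarity of two SDRs = number of bits active in both."""
    return int(np.sum(a & b))

rng = np.random.default_rng(42)
cat = random_sdr(rng=rng)
dog = random_sdr(rng=rng)
# Unrelated random SDRs share only a handful of bits; a real encoder
# would make semantically similar inputs share many more.
print(overlap(cat, cat), overlap(cat, dog))
```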

This is fresh material from a scientific paper published in the open-access Frontiers journal; check it out for the missing link between structure and function: “… remarkably intricate and previously unseen topology of synaptic connectivity. The synaptic network contains an abundance of cliques of neurons bound into cavities that guide the emergence of correlated activity. In response to stimuli, correlated activity binds synaptically connected neurons into functional cliques and cavities that evolve in a stereotypical sequence toward peak complexity. We propose that the brain processes stimuli by forming increasingly complex functional cliques and cavities.”

One-Shot Learning

When a human is shown a new symbol from a previously unseen alphabet, that is usually enough to recognize other instances of the symbol when it is shown again later, even mixed in with other known and unknown symbols. When a human is shown a new object for the first time, like a Segway or a hoverboard, that is enough to recognize all future Segways and hoverboards. This is called one-shot learning. You are given only one shot at something new: you understand that it is new to you, you remember it, and you recognize it in all future encounters. The training set consists of only one sample. One sample. One.

Check out this scientific paper on human concept learning, with the Segway and the "zarc" symbol as examples. DNNs require millions and billions of training samples, while learning is possible from a single sample. Are we modeling our brain differently? Or are we building a different intelligence while trying to reverse engineer our own?
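As a rough illustration of the gap (my own sketch, not the paper's Bayesian program learning approach), one common engineering shortcut to one-shot recognition is nearest-neighbor matching in an embedding space: store the single example and classify new inputs by similarity to it. The encoder and data below are hypothetical stand-ins.

```python
import numpy as np

# Hypothetical encoder: in practice this would be a network pretrained
# on *other* symbols/objects; here it just normalizes raw pixels.
def embed(image: np.ndarray) -> np.ndarray:
    v = image.astype(np.float32).ravel()
    return v / (np.linalg.norm(v) + 1e-9)

class OneShotClassifier:
    """Remember one example per class; classify by cosine similarity."""
    def __init__(self):
        self.prototypes = {}  # label -> embedding of the single example

    def learn(self, label, image):
        self.prototypes[label] = embed(image)  # one sample registers a class

    def predict(self, image):
        q = embed(image)
        return max(self.prototypes, key=lambda lbl: float(q @ self.prototypes[lbl]))

rng = np.random.default_rng(0)
clf = OneShotClassifier()
zarc = rng.random((8, 8))                # a single example of a new symbol
clf.learn("zarc", zarc)
clf.learn("segway", rng.random((8, 8)))
print(clf.predict(zarc + 0.05 * rng.random((8, 8))))  # a noisy view of "zarc"
```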

God Doesn’t Build In Straight Lines

These are two models of the same kart, created differently. On the left is the human-designed model. On the right is the machine-designed model, generated within given constraints and desired parameters (gathered via telemetry from the real kart on the track). It is a paradigm shift, from constructed to grown. Many things in nature grow; they have a lifecycle. The same is becoming true for artificial things. Grown models tend to be more efficient (lighter, stiffer, even visually more elegant) than constructed ones.

geneai

Take a look at these DNN architectures, including GoogLeNet and MS ResNet. On the left we have the human-designed [constructed] models. Imagine what could be machine-generated [grown] on the right…

genai2

How do we generate them? A good start would be evolutionary programming, with known primitives for cells and layers, though it is not easy to get right. By evolving an imaginary creature that moves left, right, forward, and back, it is easy to end up with asymmetrical blocks handling the left and the right. Even with long evolutionary runs, it may be nearly impossible to achieve the symmetry observed in the real world and treated as common sense, e.g. that a creature has very similar or identical ears, hands, and feet. How do we fix the evolution? By bringing in domain knowledge. When we know that left and right must be symmetrical, we can enforce this during the evolution, as in the sketch below.
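A minimal sketch of that idea (a toy of my own, assuming the genome is just a list of numbers): the genome encodes only the left half of the creature, and the right half is produced by mirroring, so every evolved individual is symmetrical by construction.

```python
import random

GENES = 8  # parameters for the LEFT side only (hypothetical)

def build_creature(left_genes):
    """Domain knowledge baked in: the right side mirrors the left side."""
    return left_genes + list(reversed(left_genes))

def fitness(creature):
    # Toy objective: just prefer larger parameter values.
    return sum(creature)

def evolve(pop_size=50, generations=100, mutation=0.1):
    pop = [[random.random() for _ in range(GENES)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda g: fitness(build_creature(g)), reverse=True)
        parents = pop[: pop_size // 2]
        children = [[g + random.uniform(-mutation, mutation) for g in p]
                    for p in parents]
        pop = parents + children
    return build_creature(pop[0])

best = evolve()
print(best[:GENES] == best[GENES:][::-1])  # True: symmetry is guaranteed
```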

The takeaway from this section: we are already using three approaches to AI programming simultaneously, domain rules, evolution, and deep learning via backpropagation. All together. No single one of them is enough for the best possible end result. Actually, we don't even know what the best possible result is. We are just building a piece of technology for specific purposes.

The Master Algorithm

The above approach of combining domain rules, evolution, and deep learning via backpropagation might still not be capable of solving the one-shot learning problem. How could that kind of problem be solved? Maybe via Bayesian learning. Here is another paper on a Bayesian framework that allows learning something new from a few samples. Together with Bayes we have four AI approaches. There is a work on AI identifying five [tribes] of them.
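As a tiny, hedged illustration of why Bayesian learning is sample-efficient (a textbook Beta-Binomial update, not the framework from the paper): we start from a prior belief about an unknown probability, and a single observation already shifts the posterior usefully.

```python
# Belief about an unknown probability p, starting from a uniform prior Beta(1, 1).
alpha, beta = 1.0, 1.0

def update(alpha, beta, success: bool):
    """Bayesian update from one observation."""
    return (alpha + 1, beta) if success else (alpha, beta + 1)

def posterior_mean(alpha, beta):
    return alpha / (alpha + beta)

print(posterior_mean(alpha, beta))        # 0.5   (prior: no idea yet)
alpha, beta = update(alpha, beta, True)   # a single positive example
print(posterior_mean(alpha, beta))        # 0.667 (belief already shifted)
```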

tribes

The essence is in learning to learn. Without moving the design of AI to the level where AI learns to learn, we are designing throw-away pieces, like we did with Perl scripts and like we do with Excel spreadsheets. Yes, we construct and train the networks, and then throw them away. They are not reused, even though they are potentially reusable (for example, by substituting the final layers for custom classification). Just observe what people are doing: almost everyone trains from scratch. That is the level of learning, not of learning to learn, i.e. the throw-away level. People are reusable, they can train again; networks, in practice, are not.
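The reuse mentioned above, substituting the final layers, is what is now called transfer learning. A minimal sketch of the idea, assuming TensorFlow/Keras is available and ImageNet weights can be downloaded (the dummy data and class count are hypothetical): freeze the pretrained base and train only a new head.

```python
import numpy as np
import tensorflow as tf

# Pretrained, reusable feature extractor (downloads ImageNet weights).
base = tf.keras.applications.MobileNetV2(
    input_shape=(96, 96, 3), include_top=False, weights="imagenet", pooling="avg")
base.trainable = False  # freeze: reuse the learned features as-is

# New head: the only part trained for the custom task.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(3, activation="softmax"),  # e.g. 3 custom classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Dummy data just to show the training call; real images would go here.
x = np.random.rand(8, 96, 96, 3).astype("float32")
y = np.random.randint(0, 3, size=(8,))
model.fit(x, y, epochs=1, verbose=0)
```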

The Master Algorithm is a work that appeals to AI creators who are open-minded enough to try to break through to the next level of abstraction: to use multiple AI paradigms in different combinations. It is the design of design: you design how you will design the next AI thing, then apply that design to actually build it. Most probably, good AI must be built with a particular combination of those approaches and of the tools within each of them. Listen to Pedro Domingos tell the story. Grasp the AI quintessence.


10,000x faster

We wanted to know it

Once mankind developed some good intelligence, we [people] immediately started to explore our world. We walked on foot as far as we could reach. We domesticated big animals, horses, and rode them to reach even further, horizontally and vertically. Then we reached the water. Horses could not carry us across the seas and oceans. We had to create a new technology that could carry people over the water: ships.

Fantasy map of a flat earth — Image by © Antar Dayal/Illustration Works/Corbis

Shipbuilding required quite a lot of calculation itself. And a ship alone is not sufficient to get there; some navigation is needed. We developed measurement and calculation for wood and nails, the measurement of time, and navigation by the stars and the cardinal directions. That was a kind of computing. Not the earliest computing ever, but good enough computing to let us spread the knowledge and vision of our [flat] world.

Wooden computing

An early device for computing was the abacus. Though it is usually called a calculating tool or counting frame, we use the word computing, because this topic is about computing technology. The abacus as a computing technology was designed at a size bigger than a man and smaller than a room. Then the wooden computing technology miniaturized to desktop size. This is important: it emerged at a size between 1 and 10 meters, and got smaller over time to fit onto a desktop. We could call it manual wooden computing too. Wooden computing technology is still in use today in African countries, China, and Russia.

01_abacus

Mechanical computing

Metal computing emerged after wooden. Charles Babbage designed his Analytical Engine from metal gears, more precisely from Leibniz wheels. That animal was bigger than a man and smaller than a room. Below is a juxtaposition of the inventor with his creation (on the left). Metal computing technology miniaturized over time and fit into a hand.

Curt Herzstark made a really small mechanical calculator and named it Curta (on the right). The Curta also lived long, well into the second half of the 20th century. Nowadays the Curta is a favorite collectible, priced at $1,000 minimum on eBay, with the majority of price tags around $10,000 for a good working device, built in Liechtenstein.

02_babbage_curta

Electro-mechanical computing

The Babbage machine became a gym device when Konrad Zuse designed the first fully automatic electro-mechanical machine, the Z3. Its clock speed was 5-10 Hz. The Z3 was used to model wing flutter for military aircraft in Nazi Germany. The original Z3 was destroyed during a bombardment. The Z3 was bigger than a man and smaller than a room (left photo). Then electro-mechanical computing miniaturized to desktop size, e.g. the Lagomarsino semi-automatic calculating machine (right photo).

03_Z3

Here something new happened: growth beyond the size of a room. The Harvard Mark I was a big electro-mechanical machine installed in a big hall. The Mark I served the Manhattan Project. There was a problem of how to detonate the atomic bomb, and the well-known von Neumann computed the explosive lens design on it. The Mark I was funded by IBM, under Watson Sr.

03_mark_I

So, electro-mechanical computing started at a size bigger than a man and smaller than a room, and then evolved in two directions: it miniaturized to desktop size and grew to the size of a small stadium.

Electrical Vacuum Tube computing

At some point, mechanical parts were redesigned as electrical ones, and the first fully electrical machine was created: ENIAC. It used vacuum tubes. Its size was bigger than a man and smaller than a big room (left photo). Fully electrical computing technology on vacuum tubes was then miniaturized to desktop size (right photo).

04_electrical_vacuum

The miniaturization was very interesting and beautiful. Even vacuum tubes could be small and nice. Furthermore, there were many women in the industry at the time of electrical vacuum tube computing. Below are the famous “ENIAC girls”, with evidence of the miniaturization of modules from left to right; smaller is better. Side question: why did women leave programming?

04_ENIAC_girls

ENIAC was very difficult to program. Here is a tutorial on how to code the modulo function. There were six programmers who could do it really well. ENIAC was intended for ballistics computations. But the same well-known von Neumann from the atomic bomb project got access to it and ordered the first ten programs, for the hydrogen bomb.

04_SAGE

Fully automatic electrical machines grew big, very big, bigger than the Mark I, II, III, etc. They were used for military purposes and space programs. The IBM SAGE is in the photo; its size is like a mid-sized stadium.

Electrical Transistor computing

The first fully transistorized machine was probably built by IBM, though here is a photo of the European [second] machine, called CADET (left photo). There were no vacuum tubes in it anymore. Transistor technology is still alive, and very well miniaturized to desktop and hand size (right photo).

05_transistor

Miniaturization of transistor computing went even further than hand size. Think of contact lenses, small robots in veins, brain implants, spy devices, and so on. And transistors are getting smaller and smaller; today 14nm is not a big deal. There are a number of silicon foundries capable of doing FinFET at such a scale.

05_titan

Transistor computers grew really big, to the size of a stadium. The Earth is being covered by data centers the size of multiple stadiums. The Titan computer is in the photo, capable of crunching data at a rate of more than 17 petaFLOPS. The most powerful supercomputer today is the Chinese Sunway TaihuLight, at about 93 petaFLOPS.

But let me repeat the point: electrical transistor computing was designed at a size bigger than a man and smaller than a room, and then evolved into tiny robots and huge supercomputers.

Quantum computing

Designed at a size bigger than a man and smaller than a room.

06_quantum_dwave

Everything is a fridge. The magic happens at the edge of that vertical structure, framed by the doorway, 1 meter above the floor. There is a silicon chip, designed by D-Wave and built by Cypress Semiconductor, cooled to near absolute zero (about -273°C). Superconductivity emerges. Quantum physics starts its magic. All you need is to shape your problem into one that the quantum machine can run.

It's a somewhat complicated exercise, like the modulo function for the first fully automatic electrical machines on vacuum tubes years ago. But it is possible. You have to take your time, paper and pen or pencil, and reduce your problem to an equivalent Ising model. Then it is easy: give the input to the quantum machine, switch it on, switch it off, take the output. Do not watch while the machine is on, because you will kill the wave features of the particles. A sketch of such a reduction is below.
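As a minimal, hedged illustration of what "reduce your problem to an Ising model" means (a toy solved by brute force on a classical machine, not D-Wave's API): the problem is encoded as an energy E(s) = Σ h_i·s_i + Σ J_ij·s_i·s_j over spins s_i ∈ {-1, +1}, and the answer is the lowest-energy configuration; an annealer searches the same energy landscape.

```python
from itertools import product

# Toy Ising problem: 3 spins, each s_i in {-1, +1}.
h = {0: 0.5, 1: -1.0, 2: 0.25}    # local fields (hypothetical values)
J = {(0, 1): 1.0, (1, 2): -0.75}  # couplings    (hypothetical values)

def energy(spins):
    e = sum(h[i] * s for i, s in enumerate(spins))
    e += sum(J[(i, j)] * spins[i] * spins[j] for (i, j) in J)
    return e

# Brute force over all 2^3 configurations; a (quantum or simulated)
# annealer would look for the same minimum-energy state.
best = min(product((-1, +1), repeat=3), key=energy)
print(best, energy(best))
```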

Today, D-Wave solves some problems 10,000x faster than transistor machines. There is potential to make it 50,000x faster. Cool times ahead!

Motivation

Why do we need such huge computing capabilities? Who cares? I personally care. Maybe others similar to me, and me similar to them. I want to know who we are, what the world is, and what it's all about.

Nature does not compute the way we do with transistor machines. As my R&D colleague said about a piece of metal: “You raise the temperature, and a solid brick of metal instantly goes liquid. Nature computes it at the atomic level, and does it very, very fast.” Recently, one of the Chinese supercomputers, Tianhe-1A, computed the behavior of 110 billion atoms over 500,000 time steps… Is that much? It corresponded to only 0.1 nanosecond of real time, and took three hours of computing.

Let's do another comparison for the same number of atoms. It was about 10^11 atoms. If each step were computed at the rate of 1 millisecond, the run would take only 500 seconds, less than 10 minutes. My body has tens of trillions of cells, or about 10^28 atoms. Hence, to simulate all of me for those 10 minutes at the level of individual atoms, we would need about 10^17x more Tianhe-1A supercomputers… Obviously our current computing is the wrong way of computing. We need to invent further. But to invent further, we have to adopt a new way of computing: quantum computing.
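The back-of-envelope arithmetic behind those numbers, as a small sketch (the atom counts are the rough estimates used above, not measured values):

```python
# Rough scaling estimate, using the figures quoted above.
steps = 500_000                    # simulation steps in the Tianhe-1A run
s_per_step = 1e-3                  # hypothetical 1 millisecond per step
print(steps * s_per_step)          # 500.0 seconds, i.e. under 10 minutes

atoms_simulated = 1.1e11           # ~110 billion atoms
atoms_in_a_body = 1e28             # order-of-magnitude estimate
print(atoms_in_a_body / atoms_simulated)  # ~1e17 times more work (machines)
```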

Who needs such simulations? Here is a counter question: what is Intelligence? Intelligence is our capability to predict the future (Michio Kaku). We could compute the future at the atomic level and know it for sure. The stronger the intelligence, the more detailed and precise our vision into the future. As we come to know the future as well as the past, our understanding of time changes. With really powerful computing, we would know what will happen in the future as accurately as we know what happened in the past. The distant future is more complicated to compute than the distant past. But it is possible, and this is what Intelligence does. It uses computing to know time. And to move in time. In both directions.

Conclusion

All computing technologies together, on one graph, show a pattern. Horizontally we have time, from past (left) to future (right). Vertically we have the scale of sizes, logarithmic, in meters. The red dot shows quantum computing. It is already designed, bigger than a man and smaller than a room. The upper limits are projected to be bigger than modern transistor supercomputers. The lower limit is unknown. It's OK that transistor and quantum computing technologies coexist and complement each other for a while.

07_dragon

All right, take a look at those charts, imagine the continuation of the quantum lines, and what do you see? It is “Software is eating the World.” The dragon's tail is on the left, the body is in the middle, and the huge mouth is on the right. And this Software Dragon is eating us at all scales. Some call it Digitization.

07_software_eating_world

Software is eating the World, guys. And it's OK. Right now we can do some computing 10,000x faster on quantum machines. Soon we'll be able to do it 50,000x faster. Intelligence is evolving: our ability to see the future and the past. Our pathway to a time machine.

 


Building AI: Another Intelligence

https://skillsmatter.com/skillscasts/8326-building-ai-another-intelligence

How AI tools can be combined with the latest Big Data concepts to increase people's productivity and build more human-like interactions with end users. The Second Machine Age is coming. We're now building thinking tools and machines to help us with mental tasks, in the same way that mechanical robots already help us with physical work. Older technologies are being combined with newly created smart ones to meet the demands of the emerging experience economy. We are now in between two computing ages: the older, transactional computing era and a new cognitive one.

In this new world, Big Data is a must-have resource for any cutting-edge enterprise project. And this Big Data serves as an excellent resource for building intelligence of all kinds: artificial smartness, intelligence as a service, emotional intelligence, invisible interfaces, and attempts at true general AI. However, with new projects you often have no data to begin with. So the challenge is: how do you acquire or produce data? During this session, Vasyl will discuss the process of creating new technology to solve business problems, and strategies for approaching the “No Data Challenge”, including:

  • Using software and hardware agents capable of recording new types of data;
  • The Five Sources of Big Data;
  • The Six Graphs of Big Data as strategies for modern solutions; and
  • The Eight Exponential Technologies.

This new era of computing is all about the end user or professional user, and these new AI tools will help to improve their lifestyle and solve their problems.


 
