
Building Intelligence

Intelligence

What is intelligence? We know it as IQ. But not so many know what the “Q” is. Q stands for quotient: IQ is Intelligence Quotient. IQ was considered the measure of a person’s intelligence – until other Qs kicked in. There are many of them: MQ, PQ, AQ, BQ, EQ, DQ, HQ, FQ, WQ, SQ… Body intelligence, Health intelligence, Practical intelligence, Moral intelligence, and so on, and so forth. I am sure they will run out of letters of the English alphabet labeling newly discovered/isolated intelligences.

It is possible to identify when some of those Qs are absent [due to damage or disease]. By augmenting impaired humans with intelligent tools, we could compensate for the deficit of some Qs. The same and similar technologies & tools (especially mental ones) could be pushed to the limit and used by all people – to help us all flourish in the second machine age. It looks to me that recreating narrow intelligence is nothing more than building new tools, not building real [human or stronger] intelligence. It is extending ourselves, not replicating ourselves.

I like this definition of Intelligence [by Michio Kaku?]: intelligence is our capability to firmly predict the future. I would rephrase it as our capability to firmly predict the future & the past. Because history is usually written by those in power, skewed and warped for their benefit. Hence the ability to see/know the past & future – in high resolution – is intelligence. We could compute it; the method doesn’t matter. The further and the more firmly we see the future, the more intelligent we are. The ultimate intelligence (as of today) would be truly seeing, in high resolution, the entire Light Cone of our world.

Humans & Intelligence

It seems that humans are the most intelligent species out there on our planet Earth. Maybe we are like small ants near the leg of a huge elephant, not seeing the elephant… But it’s OK to look at what we can see: the less intelligent species. It is worth thinking about this in more detail. What is a human? What is a minimum viable human? What makes a human intelligent?

This is stark. There are humans without limbs, because of injuries or diseases (including congenital ones). There are humans without an organic heart, living with an electro-mechanical pump. There are humans without kidneys, on dialysis. There are humans who cannot see or hear. And this leads us to the human brain. As long as the brain is up and running, a human remains human.

I don’t know if it’s the brain alone, or the brain plus the spinal cord. But it is clear enough that as long as the brain works as expected, we accept the human as a peer. The contrary is true too: people with a damaged brain but a human body – we accept them as humans. But here we are talking about the intelligent human, about human intelligence. So the case of the brain is what we are interested in.

Human Brain

Human Brain

Human Brain on the Right. More on Wikipedia.

This is still difficult. On the one hand, we know pretty well what our brain is. On the other hand, we don’t know it deeply/well enough. It is even difficult to explain size vs. intelligence. How can the small brain of a grey parrot provide intelligence on par with the much bigger brain of some chimps? Or how does the smaller human brain produce bigger intelligence than an elephant’s brain, three times its size? This leads us to thinking about form/structure vs. function. Probably structure is more in charge of intelligence than size?

A very interesting hypothesis about the wiring pattern is called cliques & cavities. Function could possibly be encoded into directed graphs of neurons. The connections could be unidirectional or bidirectional. And they could compute something relevant [locally], and interoperate with other cliques at a higher level. The cliques could encode/process something like 11-dimensional “things”. Who wants to check whether Hinton’s recent capsules are similar to those cliques?

The #1 problem is the absence of brain scanners that could scan the brain deeply and densely enough without damaging it. If we could have brain scans [electricity, chemistry] at all depth levels, down to the millisecond, it would help a lot. Resolution down to the nanosecond would be even better… But we don’t have such scanners yet. Some scanning technologies damage the brain. Others are not hi-res enough. Maybe Paul Allen’s Brain Institute will invent something soon.

Enlightenment

Twenty years ago something bright was discovered in the rat brain: light, produced by the rat brain itself. Since that time, there has been no confirmation that light is produced by the human brain too. But there is confirmation that our axons can transmit light. So we do have fiber-optic capabilities in our brains. It has been measured that the human body emits biophotons. Based on the detection of light in the mammalian brain, and fiber optics in our brain, we could propose the hypothesis that [with high probability] our brain also uses biophotons. It is still to be measured – the biophotonic activity in the human brain.

Even if the light is weak, and fired once per minute, the overall simultaneous enlightenment of the brain is rich enough for information exchange. It would be a huge jump in data bandwidth compared to electrical signals. There is a curious hypothesis that the specifics of light transmission are what significantly distinguishes the human brain from other mammalian brains. Especially the red shift.

The man without a brain raised many questions. What is the minimum viable brain? Do our brains transmit only electricity, or is a big deal of the data exchange carried by light?

When could we confirm the light in the brain? Not soon enough. We banned experiments on cats that were used to study mammalian vision & perception. Experiments on the human brain are even more fragile, ethically. I am not expecting any breakthrough any time soon…

What we have today is modeling of the cortex layers, as neurons and electrical signals between them, bigger or smaller depending on the strength of the connections. Functionally, it is modeling of perception. It may look as if some thinking is modeled too, especially in playing games. But wait. In the game of Go, the entire board is visible. In StarCraft the board is not fully visible, and humans recently beat the machines. More difficult than Go is Poker, and the Poker winner is Libratus. Libratus is not based on neural nets; it works on counterfactual regret minimization (CFR).
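The core building block of CFR is regret matching: keep a running tally of how much you regret not having played each action, and mix your future play in proportion to the positive regrets. A minimal sketch in Python (rock–paper–scissors against a fixed opponent; all names are mine, and this is a toy illustration, not Libratus code):

```python
import random

ACTIONS = 3  # 0 = rock, 1 = paper, 2 = scissors

def utility(a, b):
    """Payoff of playing `a` against `b` in rock-paper-scissors."""
    if a == b:
        return 0
    return 1 if (a - b) % 3 == 1 else -1

def get_strategy(regret_sum):
    """Mix actions in proportion to positive cumulative regret."""
    positive = [max(r, 0.0) for r in regret_sum]
    total = sum(positive)
    return [p / total for p in positive] if total > 0 else [1 / ACTIONS] * ACTIONS

def train(opponent, iterations=20000):
    """Regret matching against a fixed mixed opponent strategy."""
    regret_sum = [0.0] * ACTIONS
    strategy_sum = [0.0] * ACTIONS
    for _ in range(iterations):
        strategy = get_strategy(regret_sum)
        for a in range(ACTIONS):
            strategy_sum[a] += strategy[a]
        me = random.choices(range(ACTIONS), weights=strategy)[0]
        opp = random.choices(range(ACTIONS), weights=opponent)[0]
        for a in range(ACTIONS):
            # regret = what action `a` would have earned, minus what I earned
            regret_sum[a] += utility(a, opp) - utility(me, opp)
    total = sum(strategy_sum)
    return [s / total for s in strategy_sum]

random.seed(0)
# Against a rock-heavy opponent the average strategy drifts toward paper
avg = train([0.6, 0.2, 0.2])
```

Full CFR applies the same regret update at every decision point of the game tree, averaging over the opponent’s hidden cards – that is how imperfect information gets handled.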

We lack experiments; we lack scanning technologies. We have advanced only in the simulation of perception, with deep neural nets. Topologies are immature, reusability is low. And those neural nets transmit only an abstraction of electricity, not light.

Learning from Data

Machine Learning is the algorithmic approach in which a program is capable of learning from data. Machine Learning has allowed us to solve the same old problems better. Most popular today is Deep Learning, a subset of Machine Learning. To be specific, deep learning enabled breakthroughs in computer vision and speech processing. Today, such routine tasks as image and speech classification/transcription are cheaper and more reliable when done by machines than by humans.
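To make “learning from data” concrete, here is about the smallest possible example – a perceptron that learns the logical AND function from four labeled samples (the numbers and names are illustrative, not from any particular library):

```python
# Training data: inputs and labels for logical AND
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [0.0, 0.0]  # weights, learned from data
b = 0.0         # bias
lr = 0.1        # learning rate

def predict(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# A few passes over the data are enough for this separable problem
for _ in range(20):
    for (x1, x2), label in data:
        err = label - predict(x1, x2)
        w[0] += lr * err * x1
        w[1] += lr * err * x2
        b += lr * err
```

The program was never told the rule of AND; it extracted the rule from examples – and that, in miniature, is all that machine learning does.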

The most popular deep learning guys are the so-called Connectionists. Let’s be honest – there is big hype around deep learning. Many people don’t even know that there are several other approaches to machine learning besides deep neural nets. Check out the good intro and comparison of machine learning approaches by Pedro Domingos (author of The Master Algorithm). Listen to the fresh stuff from the Symbolists: Gary Marcus (former Uber) and Francesca Rossi (IBM). Hear fresh Evolutionist stuff from Ilya Sutskever (OpenAI, soon Tesla?). Hear from the Analogizers: Maya Gupta (Google). Check out fresh stuff from the Bayesians: Ben Vigoda (Gamalon) on Idea Learning instead of Deep Learning, Ruslan Salakhutdinov (Apple), Eric Horvitz (Microsoft). Book the date to listen to Zoubin Ghahramani (Uber).

Each machine learning approach gives us a better tool. It is the dawn of the second machine age, with mental tools. A very popular and commercialized niche nowadays. Ironically, all the shit data produced by people converts from useless into useful. All those pictures of cats, food and selfies have become training data. Even poor corporate powerpoints are becoming training data for visual reasoning. And this aspect of the data metamorphosis is joyful. Obviously this kind of intelligence eats data, and people produce the data to feed it. This human behavior is nothing else than working for the machines – work that feels like fun. Next time you snap your food or render a creepy pie chart, consider that most probably you did it for the machines.

Maybe a combination of those approaches could give a breakthrough… This is known as the search for the holy grail – the master algorithm – of machine learning. To combine or not to combine is a grey area, while the need for more data is clear. The Internet of Things could help, by cloning the good old world into its digital representation. By feeding that amount [and high resolution] of data to machines, we could hope they would learn well from it. But there is no IoT yet: there is the Internet, and there are no Things. IPv6 was invented specifically for the things, and it is still not rolled out here and there. Furthermore, learning from data will be restricted by a relative shortage of data access. Network bandwidth grows more slowly than data does – hence less and less of the data can make it through the pipe… Data Gravity will emerge. To access the data, you will have to go to the data, physically, with your tools and yourselves. Data access will be a bigger & bigger issue in the years to come. Any better pathway towards creating Intelligence?

Building Complexity

How did intelligence emerge on this planet? It was built gradually, over a very long evolution. Diversity and complexity increased over time. We can observe/analyze complex systems emerging over scale and self-organizing over time. Intelligence is a complex system [I think so]. And a complex system can do more than just perceive. How? By building/evolving those capabilities. It is very similar to the creation of new technology. Everything is possible in this world – just create the technology for it. The technology could be biological, could be digital, whatever. It gives the capability to do something that intelligence wants to do. Hence intelligence evolves towards the creation of such capabilities. And this repeats and repeats. As a result, the intelligence grows bigger and bigger.

It is worth looking at the place of what we call Artificial Intelligence among other Complex Systems. What I call Intelligence in this post is what Complex Adaptive Systems do – emergence over scale and self-organization over time. Intelligence can be observed at different levels of abstraction. How did 10 trillion cells emerge and organize to move, all together, one meter above the ground? How do human brain modules or neurons comprehend and memorize? How did humanity launch a probe from the Pale Blue Dot to beyond the Solar System?

Complexity is not as scary as it looks. There could be no master plan at all, though there could be a master config with simple rules: the speed of light is this, the gravitational constant is that, the minimal energy is this, the minimal temperature is that, and so forth. This is enough to build some enormous and beautiful complexity. Let’s look at one-dimensional primitive rules, and the “universes” they build.

Wolfram Rule 30 will be first. In all of Wolfram’s elementary cellular automata, an infinite one-dimensional array of two-state cells is considered, with each cell in some initial state. At discrete time intervals, every cell changes state based on its current state and the states of its two neighbors. For Rule 30, the rule set which governs the next state of the automaton is:

current pattern:            111 110 101 100 011 010 001 000
new state for center cell:   0   0   0   1   1   1   1   0

Very similar evidence can be observed in nature, on the shell of a mollusk.

mollusk
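An elementary cellular automaton fits in a few lines of Python – the rule number itself (30, 110, …) serves as the lookup table, one bit per 3-cell neighborhood pattern (the wrap-around boundary is my simplification; Wolfram’s array is infinite):

```python
def step(cells, rule):
    """One update of an elementary cellular automaton.

    `rule` is Wolfram's rule number (0-255); each cell's next state is
    the bit of `rule` indexed by the 3-bit pattern (left, self, right).
    """
    n = len(cells)
    out = []
    for i in range(n):
        left = cells[(i - 1) % n]       # wrap-around boundary
        center = cells[i]
        right = cells[(i + 1) % n]
        pattern = (left << 2) | (center << 1) | right
        out.append((rule >> pattern) & 1)
    return out

# Rule 30 from a single live cell, as in the mollusk-shell pattern
width = 31
row = [0] * width
row[width // 2] = 1
for _ in range(10):
    row = step(row, 30)
```

Printing each `row` as characters reproduces the familiar chaotic triangle; passing `rule=110` to the same function gives the automaton discussed next.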

Wolfram Rule 110. It is an elementary cellular automaton with interesting behavior on the boundary between stability and chaos:

current pattern:            111 110 101 100 011 010 001 000
new state for center cell:   0   1   1   0   1   1   1   0

Rule 110 is known to be Turing complete. This implies that, in principle, any calculation or computer program can be simulated using this automaton – the proof works by emulating cyclic tag systems. Hey Python coders, ever coded a lambda function? Anything you can express in the lambda calculus could, in principle, be computed on this one-line rule.

Wolfram Rule 110 is similar to Conway’s Game of Life. Also known simply as Life, it is a cellular automaton and a zero-player game, meaning that its evolution is determined by its initial state, requiring no further input. One interacts with the Game of Life by creating an initial configuration and observing how it evolves, or, for advanced “players”, by creating patterns with particular properties.

gospers_glider_gun
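The Life rules also fit in a dozen lines of Python (the set-of-live-cells representation is my choice here; it keeps the board unbounded, like the infinite plane of the original game):

```python
from collections import Counter

def life_step(live):
    """One Game of Life generation; `live` is a set of (x, y) cells."""
    # Count how many live neighbors every nearby cell has
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth on exactly 3 neighbors; survival on 2 or 3
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

# The glider: after 4 generations it reappears shifted by (1, 1)
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
gen = glider
for _ in range(4):
    gen = life_step(gen)
```

Gosper’s glider gun in the picture above is built the same way – an initial configuration whose evolution emits a fresh glider every 30 generations.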

Complexity can be built with simple rules from simple parts. The hidden order will reveal itself at some moment. Actually, Hidden Order is a work by John Holland, the Evolutionist(?). We need more diverse abstractions that do/have aggregation, tagging, nonlinearity, flows of resources, diversity, internal models, building blocks – abstractions that could become that true Intelligence. Maybe we have already built some blocks, e.g. neural nets for perception. Maybe we need to combine growing stuff with the quantum approach – probabilities, coherence and entanglement? Maybe energy is worth more attention? Learn how to grow complexity. Build complexity. Over scale & time, Intelligence may emerge.

PS.

This was my guest lecture for the 1st-year students of Lviv Polytechnic National University, Computer Science Institute, AI Systems Faculty. Many of them, all young, are open to thinking and doing.

 


10,000x faster

We wanted to know it

Since mankind developed some good intelligence, we [people] immediately started to discover our world. We walked on foot as far as we could reach. We domesticated big animals – horses – and rode them to reach even further, horizontally and vertically. Then we reached the water. Horses could not carry us across the seas and oceans. We had to create a new technology that could carry people over the water – ships.

Fantasy map of a flat earth

Fantasy map of a flat earth — Image by © Antar Dayal/Illustration Works/Corbis

Shipbuilding required quite a lot of calculation itself. And a ship alone is not sufficient to get there; some navigation is needed. We developed both measurement and calculation of wood and nails, measurement of time, navigation by the stars and the cardinal directions. That was a kind of computing. Not the earliest computing ever, but computing good enough to let us spread the knowledge and vision of our [flat] world.

Wooden computing

An early device for computing was the abacus. Though it is usually called a calculating tool or counting frame, we use the word computing, because this topic is about computing technology. The abacus as a computing technology was designed at a size bigger than a man and smaller than a room. Then the wooden computing technology miniaturized to desktop size. This is important: it emerged at a size between 1 and 10 meters, and got smaller over time to fit onto a desktop. We could call it manual wooden computing too. Wooden computing technology is still in use nowadays in African countries, China, and Russia.

01_abacus

Mechanical computing

Metal computing emerged after wooden computing. Charles Babbage designed his analytical engine from metal gears – to be more precise, from the Leibniz wheel. That animal was bigger than a man and smaller than a room. Below is a juxtaposition of the inventor himself with his creation (on the left). Metal computing technology miniaturized over time, to fit into a hand.

Curt Herzstark made a really small mechanical calculator and named it the Curta (on the right). The Curta also lived long, well into the middle of the XX century. Nowadays the Curta is a favorite collectible, priced at $1,000 minimum on eBay, while the majority of price tags are around $10,000 for a good working device, built in Liechtenstein.

02_babbage_curta

Electro-mechanical computing

The Babbage machine became a gym device when Konrad Zuse designed the first fully automatic electro-mechanical machine, the Z3. Its clock speed was 5–10 Hz. The Z3 was used to model the flutter effect for military aircraft in Nazi Germany. And the first Z3 was destroyed during a bombardment. The Z3 was bigger than a man and smaller than a room (left photo). Then electro-mechanical computing miniaturized to desktop size, e.g. the Lagomarsino semi-automatic calculating machine (right photo).

03_Z3

Here something new happened – growth beyond the size of a room. The Harvard Mark I was a big electro-mechanical machine, put in a big hall. The Mark I served the Manhattan Project. There was a problem: how to detonate the atomic bomb. The well-known von Neumann computed the explosive lens on it. The Mark I was funded by IBM, by Watson Sr.

03_mark_I

 

So, electro-mechanical computing started at a size bigger than a man and smaller than a room, and then evolved in two directions: it miniaturized to desktop size, and it grew to small-stadium size.

Electrical Vacuum Tube computing

At some point, the mechanical parts were redesigned as electrical ones, and the first fully electrical machine was created – ENIAC. It used vacuum tubes. Its size was bigger than a man, smaller than a big room (left photo). The fully electrical computing technology on vacuum tubes got miniaturized to desktop size (right photo).

04_electrical_vacuum

The miniaturization was very interesting and beautiful. Even vacuum tubes could be small and nice. Furthermore, there were many women in the industry at the time of electrical vacuum-tube computing. Below are the famous “ENIAC girls”, with the evidence of miniaturization of modules from left to right – smaller is better. Side question: why did women leave programming?

04_ENIAC_girls

ENIAC was very difficult to program. There is a tutorial on how to code the modulo function. There were six programmers who could do it really well. ENIAC was intended for ballistic computing. But the same well-known von Neumann from the atomic bomb project got access to it and ordered the first ten programs, for the hydrogen bomb.

04_SAGE

Fully automatic electrical machines grew big, very big – bigger than the Mark I, II, III, etc. They were used for military purposes and space programs. The IBM SAGE is in the photo; its size is like a mid-sized stadium.

Electrical Transistor computing

The first fully transistorized machine was probably built by IBM, though here is a photo of the European [second] machine, called CADET (left photo). There were no vacuum tubes in it anymore. Transistor technology is still alive, very well miniaturized to desktop and hand size (right photo).

05_transistor

The miniaturization of transistor computing went even further than the size of a hand. Think of small contact lenses, small robots in veins, brain implants, spy devices and so on. And transistors are getting smaller and smaller; today 14nm is not a big deal. There is a dozen silicon foundries capable of doing FinFET at such scale.

05_titan

Transistor computers grew really big, to the size of a stadium. The Earth is being covered by data centers sized like multiple stadiums. It’s the Titan computer in the photo, capable of crunching data at the rate of 10 petaFLOPS. The most powerful supercomputer today is the Chinese Sunway TaihuLight, at 93 petaFLOPS.

But let me repeat the point: electrical transistor computing was designed at a size bigger than a man and smaller than a room, and then evolved into tiny robots and huge supercomputers.

Quantum computing

Designed at the size bigger than a man, smaller than a room.

06_quantum_dwave

Everything is a fridge. The magic happens at the edge of that vertical structure, framed by the doorway, 1 meter above the floor. There is a silicon chip, designed by D-Wave, built by Cypress Semiconductor, cooled to near absolute zero (-273°C). Superconductivity emerges. Quantum physics starts its magic. All you need is to shape your problem into one that the quantum machine can run.

It’s a somewhat complicated exercise, like the modulo function for the first fully automatic electrical machines on vacuum tubes years ago. But it is possible. You’ve got to take your time, paper and pen/pencil, and reduce your problem to the equivalent Ising model. Then it is easy: give the input to the quantum machine, switch it on, switch it off, take the output. Do not watch while the machine is on, because you will kill the wave features of the particles.
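The Ising form the machine expects is just an energy function over ±1 spins, with per-spin biases h and pairwise couplings J; the annealer searches for the spin configuration of minimum energy. A toy sketch of that formulation in Python (brute force, no D-Wave API – just the math, with an instance I made up):

```python
from itertools import product

def ising_energy(spins, h, J):
    """E(s) = sum_i h[i]*s[i] + sum_{i<j} J[i,j]*s[i]*s[j], spins in {-1,+1}."""
    e = sum(h[i] * s for i, s in enumerate(spins))
    e += sum(coupling * spins[i] * spins[j] for (i, j), coupling in J.items())
    return e

def ground_state(h, J):
    """Exhaustive search over all 2^n spin configurations."""
    return min(product((-1, 1), repeat=len(h)),
               key=lambda s: ising_energy(s, h, J))

# Toy instance: three spins, ferromagnetic couplings, a small bias on spin 0
h = [0.1, 0.0, 0.0]
J = {(0, 1): -1.0, (1, 2): -1.0}  # negative coupling favors aligned spins
best = ground_state(h, J)
```

A quantum annealer does the same search physically: the h and J values are programmed into the chip, and the cold hardware relaxes toward the ground state instead of enumerating all 2^n configurations.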

Today, D-Wave solves certain problems 10,000x faster than transistor machines. There is potential to make it 50,000x faster. Cool times ahead!

Motivation

Why do we need such huge computing capabilities? Who cares? I personally care. Maybe others similar to me, me similar to them. I want to know who we are, what is the world, and what it’s all about.

Nature does not compute the way we do with transistor machines. As my R&D colleague said about a piece of metal: “You raise the temperature, and the solid brick of metal instantly goes liquid. Nature computes it at the atomic level, and does it very, very fast.” Recently one of the Chinese supercomputers, Tianhe-1A, computed the behavior of 110 billion atoms over 500,000 evolutions… Is that much? It was only 0.1 nanosecond of corresponding real time, done in three hours of computing.

Let’s do another comparison for the same number of atoms. It was about 10^11 atoms. If each evolution were computed at the rate of 1 millisecond, it would take only 500 seconds, less than 10 minutes. My body has about 10 trillion cells, or about 10^28 atoms. Hence, to simulate the entire me for 10 minutes at the level of individual atoms, we would need 10^17x more Tianhe-1A supercomputers… Obviously our current computing is the wrong way of computing. We need to invent further. But to invent further, we have to adopt a new way of computing – quantum computing.
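The back-of-the-envelope arithmetic above, spelled out (the 1 ms per evolution rate is the hypothetical from the text, not Tianhe-1A’s actual speed):

```python
atoms_simulated = 10**11   # atoms in the reported Tianhe-1A run
evolutions = 500_000       # time steps, ~0.1 ns of simulated real time

# Hypothetical rate: one evolution per millisecond of wall-clock time
seconds_at_1ms = evolutions / 1000   # total seconds at that rate

body_atoms = 10**28        # rough atom count of one human body
machines_needed = body_atoms // atoms_simulated  # assuming linear scaling
```

Linear scaling is of course the crudest possible assumption, but it is enough to show the gap: seventeen orders of magnitude.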

Who needs such simulations? Here is a counter-question: what is Intelligence? Intelligence is our capability to predict the future (Michio Kaku). We could compute the future at the atomic level and know it for sure. The stronger the intelligence, the more detailed and precise our vision into the future. As we know the past and the future, the understanding of time changes. With really powerful computing, we would know what will happen in the future as accurately as we know what happened in the past. The distant future is more complicated to compute than the distant past. But it is possible, and this is what Intelligence does. It uses computing to know time. And to move in time. In both directions.

Conclusion

All computing technologies together, on one graph, show a pattern. Horizontally we have time, from past (left) to future (right). Vertically we have the scale of sizes, logarithmic, in meters. The red dot shows quantum computing. It is already designed bigger than a man and smaller than a room. The upper limits are projected to be bigger than modern transistor supercomputers. The lower limit is unknown. It’s OK that both transistor and quantum computing technologies will coexist and complement each other for a while.

07_dragon

All right, take a look at those charts, imagine the continuation of the quantum lines – what do you see? It is Software Eating the World. The dragon’s tail is on the left, the body is in the middle, and the huge mouth is on the right. And this Software Dragon is eating us at all scales. Somebody calls it Digitization.

07_software_eating_world

Software is eating the World, guys. And it’s OK. Right now we can do 10,000x faster computing on quantum machines. Soon we’ll be able to do 50,000x faster. Intelligence is evolving – our ability to see the future and the past. Our pathway to a time machine.

 


End of the World

This post has been triggered by speculations I have heard and keep hearing. There is a buzz in the air about something, but nobody seems frightened. Many talk about the end of the world and continue to live the same life. Strange? Not at all. People feel there is no end of life. Hence we could entitle this post End of the World vs. End of the Life. The fact is that nobody worries about the end of life. Then there is a question: what does End of the World mean?

What does End of the World mean?

The easiest answer is that it means the end of the Current World that we are used to. The end of burning oil, the end of gasoline cars, the end of American dominance. Is this so difficult to predict that it deserves the attention? No. Then what is the real end of the current world? What is that current world?

My vision (shared with others) is that the current world is defined by the biological civilization – humans. The end of the current world will start when machine intelligence equals human intelligence. We (humans) are the creators of a new civilization – the machine civilization. We will be treated as Gods by them. They will be more intelligent than we are. They will respect us and remember every bit of information about each of us. Hence, upload everything to the clouds, to be indexed for the future:)

Below is a diagram by Ray Kurzweil with the predicted ‘end of the world’ beginning somewhere near 2025, when machine intelligence achieves the level of a single human brain. There are some concerns about that, because our brain works differently than a machine does. Our brain is capable of parallel recognition of patterns, but is too slow with calculations. The machine is poor at recognition (not even speaking of concurrent recognition), but is fast at calculations. Can super-fast calculations compensate for the capability of parallel recognition? Probably yes. Our processes are biological, chemical and electrical. Machines will do it, probably, part electrical and part photon-based. At the end of the day we can measure the steady progress: machines beat humans at chess, at poker and so on. Machines get ‘smarter’.

ExponentialGrowthofComputing

What is going on?

This point in our evolution is called the Singularity. We can observe accelerating returns from technology: new technologies are created faster, and their effects happen faster. This is perfectly depicted in two other diagrams by Ray Kurzweil. One is logarithmic, to ensure all major events align along a line. The second is as-is, to emphasize the accelerating returns and to point to the expected moment of the Singularity.

singularity

singularity

End of the World == Beginning of the World

The end of something has always been the beginning of something else. The end of the current world will become the beginning of the New World. We should not be afraid of machines, because machines will be like us. Machines will not be able to outperform human intelligence without becoming human themselves. Artificial intelligence requires upbringing, mentoring and coaching. If machines go that way, then they will be no worse than humans. We have bad humans. We will have bad machines. But we will also have good machines, because there are many good humans. Initially machines will look like humans. Then the body will evolve.

Machines will create even better machines. Intelligence will grow and grow. The body will survive low and high temperatures and will not be afraid of radiation. Optical sensors will see in a wider range of light waves. Eventually, a brand new epoch will be started by us, by humans. Will humanity survive it or die? I don’t know. But definitely, our expertise and knowledge will grow and spread beyond the Earth and the Solar System. Below is a diagram by Ray Kurzweil about the six epochs, starting from the primitive evolution of the brain and ending with the conquering of the Universe.

six epochs

We created technology and are now at the epoch of merging technology with human intelligence. The technology epoch is the current world. We are just taking the first steps in mastering the methods of biology… A lot of work to do, but it is exciting. It is a brand New World! Happy 21st of December, 2012. The world will not end. The world will shift.
