The most intelligent version of Curiosio launched on 01/23, labeled Beta3. It is as smart as Beta2 but 100x faster. Intro and details are at the Ingeenee Engineering Blog.
Today, we are trying Deep Neural Networks on many [previously] unsolved problems. Image and language recognition with CNNs and LSTMs has become standard. Machines can classify images/speech/text faster, better, and for much longer than humans.
There is a breakthrough in real-time computer vision, capable of identifying objects and object segments. That is very impressive: it enables self-driving cars, and indoor positioning without radio beacons or other infrastructure. The machine sees more than a human, because the machine sees it all in 360 degrees. And the machine sees more details simultaneously, while a human overlooks the majority of them.
We have created a new kind of intelligence that is similar to human intelligence, but also very different from it. Let's call this AI Another Intelligence. A program is able to recognize and identify more than one billion human faces. This is not equivalent to what humans are capable of. How many people could you recognize/remember? A few thousand? Maybe several thousand? Fewer than ten thousand for sure (that's the size of a small town); so 1,000,000,000 vs. 10,000 is impressive, and definitely is another type of intelligence.
DNNs are loved and applied to almost any problem, even problems previously solved with different tools. In many cases DNNs outperform the previous tools. DNNs have become a hammer, and the problems have become the nails. In my opinion, there is overconfidence in the new tool, and it's pretty deep. Maybe it slows us down on the way to reverse engineering common sense, consciousness…
DNNs were inspired by neuroscience, and we were confident that we were digitally recreating the brain. Here is a cold shower: a man with a tiny brain, 10% the size of a normal human brain. The man was considered normal by his relatives and friends. He lived a normal life. The issue was discovered accidentally, and it shocked medical professionals and scientists. There are hypotheses for how to explain what we don't understand.
There are other brain-related observations that threaten the modern theory of how the brain works. Birds: some birds are pretty intelligent. Parrots, with tiny brains, can challenge dolphins, which have human-sized brains, and even some chimps. A bird's brain is structured differently from the mammalian brain. Does size matter? Elephants have a huge brain, with 3x more neurons than humans. Though the vast majority of those neurons are in a different part of the brain, in comparison to humans.
All right, the structure of the brain matters more than the size of the brain. So are we using/modeling the correct brain structure with DNNs?
Numenta has been working on reverse engineering the neocortex for a decade. Numenta's machine intelligence technology is built on its own computational theory of the neocortex. It deals with hierarchical temporal memory (HTM), sparse distributed memory (SDM), sparse distributed representations (SDR), and self-organizing maps (SOM). The network topologies are different from the mainstream deep perceptrons.
It's fresh stuff from a scientific paper published in the open-access Frontiers journal; check it out for the missing link between structure and function. “… remarkably intricate and previously unseen topology of synaptic connectivity. The synaptic network contains an abundance of cliques of neurons bound into cavities that guide the emergence of correlated activity. In response to stimuli, correlated activity binds synaptically connected neurons into functional cliques and cavities that evolve in a stereotypical sequence toward peak complexity. We propose that the brain processes stimuli by forming increasingly complex functional cliques and cavities.”
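The notion of a “clique” from the quoted paper can be made concrete in a few lines of code. Here is a sketch on a made-up toy graph (not data from the paper): neurons are nodes, synaptic links are edges, and a k-clique is a set of k neurons that are all pairwise connected.

```python
from itertools import combinations

# Toy undirected "connectivity" graph: neurons 0-5, edges = synaptic links.
# The graph is invented for illustration.
edges = {(0, 1), (0, 2), (1, 2), (1, 3), (2, 3), (3, 4), (4, 5), (3, 5)}

def connected(a, b):
    return (a, b) in edges or (b, a) in edges

def cliques(nodes, k):
    """All k-cliques: subsets of k neurons that are all pairwise connected."""
    return [c for c in combinations(nodes, k)
            if all(connected(a, b) for a, b in combinations(c, 2))]

print(cliques(range(6), 3))  # the triangles of the toy graph
```

In the paper's terms, stimuli would bind such cliques into the larger “cavities” whose evolution they track; counting cliques is just the first structural step.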
When a human is shown a new symbol from a previously unseen alphabet, it is usually enough to recognize other such symbols when they are shown again later, even in a mix with other known and unknown symbols. When a human is shown a new object for the first time, like a segway or a hoverboard, it is enough to recognize all other future segways and hoverboards. This is called one-shot learning. You are given only one shot at something new; you understand that it is new for you, you remember it, and you recognize it during all future shots. The training set consists of only one sample. One sample. One.
Check out this scientific paper on human concept learning with the segway and the Zarc symbol. DNNs require millions and billions of training samples, while learning is possible from only one sample. Do we model our brain differently? Or are we building a different intelligence on the way to reverse engineering our brain?
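For contrast with million-sample training, here is a minimal sketch of the one-shot setting: a single labeled example per class, and classification by nearest neighbor in a feature space. The feature vectors and class names are invented for illustration; in practice the features would come from a pretrained embedding.

```python
# One-shot classification via nearest neighbor; all vectors are made up.
def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

# One labeled example per class -- the "one shot".
support = {
    "segway":     [0.9, 0.1, 0.0],
    "hoverboard": [0.1, 0.8, 0.1],
    "bicycle":    [0.0, 0.2, 0.9],
}

def classify(features):
    """Label a new observation by its closest single stored example."""
    return min(support, key=lambda label: distance(support[label], features))

print(classify([0.8, 0.2, 0.1]))  # closest to the single segway example
```

The hard part, of course, is not this lookup but learning a feature space in which one example per class is enough, which is exactly what the paper's Bayesian program induction addresses.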
These are two models of the same kart, created differently. On the left is a human-designed model. On the right is a machine-designed model, generated within given restrictions and desired parameters (gathered via telemetry from the real kart on the track). It is a paradigm shift, from constructed to grown. Many things in nature grow; they have a lifecycle. It's true for artificial things too. A grown model tends to be more efficient (lighter, stiffer, even visually more elegant) than a constructed one.
How to generate? A good start would be to use evolutionary programming, with known primitives for cells and layers. Though it is not easy to get right. By evolving an imaginary creature that moves left, right, ahead, and back, it is easy to get asymmetrical blocks handling the left and the right. Even by running long evolutions, it could be hardly possible to achieve the desired symmetry observed in the real world and considered common sense, e.g. that a creature has very similar or identical ears, hands, and feet. What can fix the evolution? Bringing in domain knowledge. When we know that left and right must be symmetrical, we can enforce this during the evolution.
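The fix described above, enforcing left/right symmetry as a domain rule inside the evolutionary loop, can be sketched as follows. The genome, fitness function, and all numbers are toy assumptions, not a real morphology model.

```python
import random

random.seed(0)
TARGET = [3, 1, 4, 4, 1, 3]  # a symmetrical "body plan" we hope to evolve

def enforce_symmetry(genome):
    # Domain rule: mirror the left half onto the right half.
    half = genome[: len(genome) // 2]
    return half + half[::-1]

def fitness(genome):
    # Closer to TARGET is better; 0 is a perfect match.
    return -sum(abs(g - t) for g, t in zip(genome, TARGET))

def evolve(generations=200, pop_size=20):
    pop = [enforce_symmetry([random.randint(0, 9) for _ in range(6)])
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # keep the fitter half
        children = []
        for p in parents:
            child = p[:]
            child[random.randrange(6)] = random.randint(0, 9)  # mutate
            children.append(enforce_symmetry(child))  # re-apply domain rule
        pop = parents + children
    return max(pop, key=fitness)

print(evolve())
```

Because `enforce_symmetry` is applied to every candidate, no amount of mutation can produce an asymmetrical creature; the evolution searches only the space the domain knowledge allows.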
The takeaway from this section: we are already using three approaches to AI programming simultaneously: domain rules, evolution, and deep learning via backpropagation. All together. No one of them alone is enough for the best possible end result. Actually, we don't even know what the best possible result is. We are just building a piece of technology for specific purposes.
The above approach of using domain rules, evolution, and deep learning via backpropagation altogether might not be capable of solving the one-shot learning problem. How could that kind of problem be solved? Maybe via Bayesian learning. Here is another paper on a Bayesian framework that allows learning something new from a few samples. Together with Bayes we have four AI approaches. There is work on AI identifying five [tribes] of them.
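A tiny example of what “learning from a few samples” means in Bayesian terms, sketched with the textbook Beta-Bernoulli conjugate update (this is standard Bayesian machinery, not the specific framework from the paper):

```python
# Updating a posterior over a coin's bias after just three observations.
def posterior_mean(prior_a, prior_b, observations):
    """Beta-Bernoulli conjugate update: each 1/0 observation shifts the
    Beta(a, b) prior; the posterior mean is a / (a + b)."""
    a = prior_a + sum(observations)
    b = prior_b + len(observations) - sum(observations)
    return a / (a + b)

# Uniform prior Beta(1, 1); three samples already move the estimate.
print(posterior_mean(1, 1, [1, 1, 0]))
```

The point is that the prior carries knowledge into the problem, so a handful of samples suffices to update a belief, instead of millions of samples training weights from scratch.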
The essence is in how to learn to learn. Without moving the design of AI to the level where AI learns to learn, we are designing throw-away pieces, like we did with Perl programs, like we do with Excel spreadsheets. Yes, we construct and train the networks, and then throw them away. They are not reused, even though they are potentially reusable (e.g. by substituting the final layers for custom classification). Just observe what people are doing: they are all training from the very beginning. That is the level of learning, not of learning to learn, i.e. the throw-away level. People are reusable, they can train again; networks are not reused.
The Master Algorithm is a work that appeals to AI creators who are open-minded enough to try to break through to the next level of abstraction: to use multiple AI paradigms, in different combinations. It is design of design: you design how you will design the next AI thing, then apply that design to actually build it. Most probably, good AI must be built with a special combination of those approaches and of the tools within each of them. Listen to Pedro Domingos for his story, please. Grasp the AI quintessence.
Since mankind developed some good intelligence, we [people] immediately started to discover our world. We walked on foot as far as we could reach. We domesticated big animals, horses, and rode them to reach even further, horizontally and vertically. So we reached the water. Horses could not carry us across the seas and oceans. We had to create a new technology that could carry people over the water: ships.
Shipbuilding itself required quite a lot of calculation. And a ship alone is not sufficient to get there; some navigation is needed. We developed measurement and calculation for wood and nails, measurement of time, navigation by the stars and the cardinal directions. That was a kind of computing. Not the earliest computing ever, but computing good enough to let us spread the knowledge and vision of our [flat] world.
An early device for computing was the abacus. Though it is usually called a calculating tool or counting frame, we use the word computing, because this topic is about computing technology. The abacus as computing technology was designed at a size bigger than a man and smaller than a room. Then this wooden computing technology miniaturized to desktop size. This is important: it emerged at a size between 1 and 10 meters, and got smaller over time to fit onto a desktop. We could call it manual wooden computing too. Wooden computing technology is still in use nowadays in African countries, China, and Russia.
Metal computing emerged after wooden. Charles Babbage designed his Analytical Engine from metal gears, to be more precise, from Leibniz wheels. That animal was bigger than a man and smaller than a room. Below is a juxtaposition of the inventor himself with his creation (on the left). Metal computing technology miniaturized over time, until it fit into a hand.
Curt Herzstark made a really small mechanical calculator and named it the Curta (on the right). The Curta also lived long, well into the middle of the 20th century. Nowadays the Curta is a favorite collectible, priced at $1,000 minimum on eBay, while the majority of price tags are around $10,000 for a good working device, built in Liechtenstein.
The Babbage machine became a gym device when Konrad Zuse designed the first fully automatic electro-mechanical machine, the Z3. Its clock speed was 5-10 Hz. The Z3 was used to model the flutter effect for military aircraft in Nazi Germany. The first Z3 was destroyed during a bombardment. The Z3 was bigger than a man and smaller than a room (left photo). Then electro-mechanical computing miniaturized to desktop size, e.g. the Lagomarsino semi-automatic calculating machine (right photo).
Here something new happened: growth beyond the size of a room. The Harvard Mark I was a big electro-mechanical machine, put in a big hall. The Mark I served the Manhattan Project. There was a problem of how to detonate the atomic bomb; the well-known von Neumann computed the explosive lens on it. The Mark I was funded by IBM, under Watson Sr.
So, electro-mechanical computing started at a size bigger than a man and smaller than a room, and then evolved in two directions: it miniaturized to desktop size, and it grew to small-stadium size.
At some point, mechanical parts were redesigned as electrical ones, and the first fully electrical machine was created: ENIAC. It used vacuum tubes. Its size was bigger than a man, smaller than a big room (left photo). Fully electrical computing technology on vacuum tubes was then miniaturized to desktop size (right photo).
The miniaturization was very interesting and beautiful. Even vacuum tubes could be small and nice. Furthermore, there were many women in the industry at the time of electrical vacuum tube computing. Below are the famous “ENIAC girls”, with the evidence of miniaturization of modules from left to right; smaller is better. A side question: why did women leave programming?
ENIAC was very difficult to program. Here is a tutorial on how to code the modulo function. There were six programmers who could do it really well. ENIAC was intended for ballistic computing. But the well-known von Neumann, the same one from the atomic bomb project, got access to it and ordered the first ten programs for the hydrogen bomb.
Fully automatic electrical machines grew big, very big, bigger than the Mark I, II, III, etc. They were used for military purposes and space programs. IBM SAGE is in the photo; its size is like a mid-size stadium.
The first fully transistorized machine was built probably by IBM, though here is a photo of a European [second] machine, called CADET (left photo). There were no vacuum tubes in it anymore. Transistor technology is still alive, very well miniaturized to the desktop and the hand (right photo).
Miniaturization of transistor computing went even further than the size of the hand. Think of smart contact lenses, small robots in veins, brain implants, spy devices and so on. And transistors are getting smaller and smaller; today 14nm is not a big deal. There is a dozen silicon foundries capable of doing FinFET at such scale.
Transistor computers grew really big, to the size of a stadium. The Earth is being covered by data centers, sized as multiple stadiums. It's the Titan computer in the photo, capable of crunching data at more than 10 petaFLOPS. The most powerful supercomputer today is the Chinese Sunway TaihuLight at 93 petaFLOPS.
But let me repeat the point: electrical transistor computing was designed at a size bigger than a man and smaller than a room, and then evolved into tiny robots and huge supercomputers.
Designed at a size bigger than a man, smaller than a room.
Everything is a fridge. The magic happens inside that vertical structure, framed by the doorway, 1 meter above the floor. There is a silicon chip, designed by D:Wave, built by Cypress Semiconductor, cooled to near absolute zero (about -273°C). Superconductivity emerges. Quantum physics starts its magic. All you need is to shape your problem into one that the quantum machine can run.
It's a somewhat complicated exercise, like the modulo function for the first fully automatic electrical machines on vacuum tubes years ago. But it is possible. You have to take your time, paper and pen/pencil, and reduce your problem to an equivalent Ising model. Then it is easy: give input to the quantum machine, switch on, switch off, take the output. Do not watch while the machine is on, because you will kill the wave features of the particles.
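To make the “reduce your problem to an equivalent Ising model” step concrete, here is a sketch that phrases a tiny max-cut problem as an Ising energy and solves it by brute force. A real quantum annealer would anneal instead of enumerating, but the problem-shaping step is the same idea; the graph and weights are made up.

```python
from itertools import product

# Tiny weighted graph; a cut assigns each node spin -1 or +1.
edges = {(0, 1): 1.0, (1, 2): 1.0, (0, 2): 1.0, (2, 3): 1.0}

def ising_energy(spins):
    """Ising energy H = sum over edges of J_ij * s_i * s_j, with J_ij = edge
    weight. Minimizing H maximizes the cut: every cut edge contributes -w."""
    return sum(w * spins[i] * spins[j] for (i, j), w in edges.items())

# Enumerate all spin assignments s_i in {-1, +1} (4 spins -> 16 states).
best = min(product([-1, 1], repeat=4), key=ising_energy)
cut = [(i, j) for (i, j) in edges if best[i] != best[j]]
print(best, cut)
```

The triangle 0-1-2 is frustrated, so at most two of its three edges can be cut; the ground state cuts those two plus the edge (2, 3).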
Today, D:Wave solves some problems 10,000x faster than transistor machines. There is potential to make it 50,000x faster. Cool times ahead!
Why do we need such huge computing capabilities? Who cares? I personally care. Maybe others are similar to me, and me to them. I want to know who we are, what the world is, and what it's all about.
Nature does not compute the way we do with transistor machines. As my R&D colleague said about a piece of metal: “You raise the temperature, and the solid brick of metal instantly goes liquid. Nature computes it at the atomic level, and does it very, very fast.” Today one of the Chinese supercomputers, Tianhe-1A, computed the behavior of 110 billion atoms during 500,000 evolutions… Is that much? It was only 0.1 nanosecond of corresponding real time, done in three hours of computing.
Let's do another comparison for the same number of atoms, about 10^11. If each evolution were computed at the rate of 1 millisecond, the run would take only 500 seconds, less than 10 minutes. My body has about 10 trillion cells, or about 10^28 atoms. Hence, to simulate my entire body for those 10 minutes at the level of individual atoms, we would need 10^17 times more Tianhe-1A supercomputers… Obviously our current way of computing is the wrong way. We need to invent further. But to invent further, we have to adopt a new way of computing: quantum computing.
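The arithmetic above can be checked in a couple of lines; all the inputs are the rough figures from the text, not measurements.

```python
# Back-of-the-envelope check of the scaling claims.
atoms_simulated = 1e11       # Tianhe-1A run: ~110 billion atoms
steps = 500_000              # "evolutions"
target_rate_s = 1e-3         # hypothetical 1 millisecond per evolution
print(steps * target_rate_s)            # 500 seconds, under 10 minutes

atoms_in_body = 1e28         # rough atom count of a human body
print(atoms_in_body / atoms_simulated)  # ~1e17x more machines needed
```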
Who needs such simulations? Here is a counter-question: what is Intelligence? Intelligence is our capability to predict the future (Michio Kaku). We could compute the future at the atomic level and know it for sure. The stronger the intelligence, the more detailed and precise our vision into the future. As we know the past and know the future, the understanding of time changes. With really powerful computing, we would know what will happen in the future as accurately as we know what happened in the past. The distant future is more complicated to compute than the distant past. But it is possible, and this is what Intelligence does. It uses computing to know time. And to move in time. In both directions.
All computing technologies together, on one graph, show a pattern. Horizontally we have time, from past (left) to future (right). Vertically we have a logarithmic scale of sizes, in meters. The red dot shows quantum computing. It is already designed, bigger than a man, smaller than a room. The upper limits are projected bigger than modern transistor supercomputers. The lower limit is unknown. It's OK that both transistor and quantum computing technologies coexist and complement each other for a while.
All right, take a look at those charts, imagine the continuation of the quantum lines; what do you see? It is Software Eating the World. The dragon's tail is on the left, the body is in the middle, and the huge mouth is on the right. And this Software Dragon is eating us at all scales. Some call it Digitization.
Software is eating the World, guys. And it's OK. Right now we can do 10,000x faster computing on quantum machines. Soon we'll be able to do 50,000x faster. Intelligence is evolving: our ability to see the future and the past. Our pathway to a time machine.
How can AI tools be combined with the latest Big Data concepts to increase people's productivity and build more human-like interactions with end users? The Second Machine Age is coming. We're now building thinking tools and machines to help us with mental tasks, in the same way that mechanical robots already help us with physical work. Older technologies are being combined with newly-created smart ones to meet the demands of the emerging experience economy. We are now in between two computing ages: the older, transactional computing era and a new cognitive one.
In this new world, Big Data is a must-have resource for any cutting-edge enterprise project. And this Big Data serves as an excellent resource for building intelligence of all kinds: artificial smartness, intelligence as a service, emotional intelligence, invisible interfaces, and attempts at true general AI. However, often with new projects you have no data to begin with. So the challenge is, how do you acquire or produce data? During this session, Vasyl will discuss the process of creating new technology to solve business problems, and strategies for approaching the “No Data Challenge”, including:
This new era of computing is all about the end user or professional user, and these new AI tools will help to improve their lifestyles and solve their problems.
…Then the Pterodactyl burst upon the world in all his impressive solemnity and grandeur, and all Nature recognized that the Cainozoic threshold was crossed and a new Period open for business, a new stage begun in the preparation of the globe for man. It may be that the Pterodactyl thought the thirty million years had been intended as a preparation for himself, for there was nothing too foolish for a Pterodactyl to imagine, but he was in error, the preparation was for Man… — Mark Twain
The Man. The man who won the Tour de France seven times. Having reached the human limit of physical capabilities, he [and others] extended them. He did blood doping (taking EPO and other drugs, storing his own blood in the fridge, and infusing it before competition to boost the number of red blood cells, and thus performance). He [and others] took anti-asthmatic drugs to increase endurance performance. And so on, and so on. There are Yes or No answers from Lance himself in Oprah's interview.
Is Lance a cheater? Or is Lance a hero? I consider him a hero for two reasons. First, he competed against the same or similar. Second, he went beyond the human limits: cutting-edge thinking, cutting-edge behavior, scientific sacrifice, calculated or even bold risk.
What could be said about all the other sportsmen? I think sports pharmacology is an evolutionarily logical stage for humankind: to outperform our ancestors, to break the records, to win, and to continue winning. If sportsmen specialize in competing, and society wants them competing, then everything is all set. Evolution goes on; the biological meets the artificial chemical. It improves the function, it solves the problem. Though it slightly distances our biological selves from what we thought we were.
It happens that people lose body parts. The right way to go is to give them the missing parts. It's still very complicated, the technologies involved are still not fully there, but good progress has been made. There are new materials, new mechanics, new production (digital manufacturing, 3D printing), new bio-signal processing (complex myogram readings), new software designed (with AI), and all together it gives a tangible result. Take a look at this robot, integrated with the man:
Some ethical questions emerge. Is the man with a prosthetic body part still a biological being? What is the threshold between biological parts and synthetic parts for being considered a human being? There are people without arms and legs, because of injuries or genetic diseases, like the Torso Man. We could and should re-create the missing parts and continue living as before, using our new parts. Bionic parts must evolve until they feel and perform identically to the original biological parts.
It relates to invisible organs too. The heart, which happens to be a pump, not a soul keeper. People live with artificial hearts. Look at the man walking out of the hospital without a human heart. The kidneys, which are served by external hemodialysis machines. New research is being done to embed kidney robots into the body. The ethical questions continue: where is the boundary of what we call ‘human’? Is it the head? Or the brain only? What makes us human to other humans?
We are defined by our genes. Our biological capabilities are in our genes. Then we learn and train to build on top of our given foundation. We are different by genes; hence something that is easy for one could be difficult for another. E.g. since childhood, sportsmen usually have better metabolism in comparison to those who grow into ‘office plankton’.
There are diseases caused by harmful mutations in genes. Actually, any mutation is risky, because of unpredictable results in the first generation with the new mutant [gene]. But some mutations are bad from generation to generation; these are called genetic diseases. It is possible to track many diseases down to the genes. There are genome browsers allowing you to look into the genome down to the DNA chain. Take a look at the CFTR gene: the first snapshot is a high-level view, with chromosome number and position; the second is zoomed to the base level, with the ACGT chain visible.
If parents with a genetic disease want to prevent their child from having that disease, they may want to fix the known gene. Everything else [genetically] will remain naturally biological; only that one mutant will be fixed to normal. The kid will not have the disease of the ancestors, which is good. A question emerges: is this kid fully biological? How does that genetic engineering impact established social norms?
What if parents are fans of Lance Armstrong and decide to edit more genes, to make their future kid a good sportsman?
Digging down to the DNA level, it is very interesting to figure out what is possible there to improve ourselves, and what life is at all. How do we recognize life? How would we recognize life on Mars, if it's present there?
Here is the definition from Wikipedia: “The definition of life is controversial. The current definition is that organisms maintain homeostasis, are composed of cells, undergo metabolism, can grow, adapt to their environment, respond to stimuli, and reproduce.” The very first sentence resonates with the questions we are asking…
Craig Venter led a team of scientists who extracted the genetic material from a cell (Mycoplasma genitalium), instrumented its genome by inserting the names of 20 scientists and a link to a web site, implanted the edited material back into the cell, and observed the cell reproducing many times. Their result, Mycoplasma laboratorium, reproduced billions of times, passing the encoded info through generations. The cell had ~470 genes.
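How text can be watermarked into a genome is easy to illustrate: map bits to bases. The 2-bits-per-base scheme below is a common textbook illustration, not Venter's actual encoding.

```python
# Encode ASCII text as DNA bases, 2 bits per base, and decode it back.
BASES = "ACGT"

def encode(text):
    """Pack each byte into 4 bases (2 bits per base)."""
    out = []
    for byte in text.encode("ascii"):
        for shift in (6, 4, 2, 0):
            out.append(BASES[(byte >> shift) & 0b11])
    return "".join(out)

def decode(dna):
    """Reverse the packing: every 4 bases become one byte."""
    chars = []
    for i in range(0, len(dna), 4):
        byte = 0
        for base in dna[i : i + 4]:
            byte = (byte << 2) | BASES.index(base)
        chars.append(chr(byte))
    return "".join(chars)

print(encode("HI"))              # each letter becomes 4 bases
print(decode(encode("VENTER")))  # round-trips back to the original text
```

A real watermark would additionally have to avoid sequences the cell interprets as functional genes, which is part of why the actual engineering is hard.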
What is the absolute minimum number of genes, and which genes are they, to create life? Is it 150? Or fewer? And which ones exactly? What are their specializations/functions? It's a current, ongoing experiment… Good luck, guys! Looking forward to your research success, and to learning what a Minimum Viable Life (MVL) is. BTW, by doing this experiment, scientists designed new technologies and tools allowing them to model genes programmatically, and then synthesize them at the molecular level.
While some are digging into the genome, others are trying to replicate humans (and other creatures) at the macro level. The most successful with humanoid machines is Boston Dynamics.
How far are we from making them indistinguishable from humans? It seems pretty far. The weight, the center of gravity, motion, gestures, look & feel are still not there. I bet that humanoids will first be created for the military and for porn. The military will need robots to operate plenty of outdated military equipment, and to serve and fight in hazardous environments; it's only old weaponry that requires manned control, while new weapons are designed to operate unmanned. Porn will evolve to the level that we will fuck the robots. For the military it's more of an economic need. For our leisure it's a romantic need and personal experience.
The size and shape of robots doing mechanical work vary widely, from tunnel-drilling monsters to robots for blood vessels…
If we look for commonality among the mentioned (and several unmentioned) disrupting technologies, we can select 8 of them (extended and reworked from the 8 directions of Singularity University), which stand out:
As we have slightly covered Biology, Medicine, and Robotics already, more is to be said about the rest. But before that, a few words about Biotech. We could program new behavior into biomass by engineering what the cells must produce, and use those biorobots to clean the landfills around the cities, the sewerage, rivers, seas, maybe the air. Biorobots could also clean our organisms, inside and outside. Specially engineered micro biorobots could eat Mars stones and produce an atmosphere there. Not so fast, but feasible.
Well, more words about the other disrupting technologies. Networks and Sensors are next. First of all, it's about networks between human & human, machine & machine, human & machine. The network effect happens within the network, known as Metcalfe's Law. Networks are wired and wireless, synchronous and asynchronous, local and geographically distributed, static and dynamic mesh, etc. Very promising are Mesh Networks, allowing us to avoid Thing-Cloud aka Client-Server architectures, despite all the cloud providers pushing for the latter. Architecturally (and by common sense) it's better to establish the mesh locally, with redundancy and specialization of nodes, and relay the data between the mesh and the cloud via some edge device, which could be dynamically selected.
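Metcalfe's Law, mentioned above, says the value of a network grows with the number of unique connections, n(n-1)/2, i.e. roughly with n^2, while the cost of the network grows roughly with n (the number of nodes):

```python
# Number of unique pairwise connections in a network of n nodes.
def metcalfe_links(n):
    return n * (n - 1) // 2

for n in (10, 100, 1000):
    print(n, metcalfe_links(n))
```

Growing a network 100x grows its potential connections roughly 10,000x, which is the economic argument behind every network effect, mesh or otherwise.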
Sensors will be everywhere: within interiors, on the body, as infrastructure on the streets, in the ambient environment, in food, etc. Our life is improved when we sense/measure and proactively prepare. We are used to weather forecasts, which are very precise for a day or two. That's because of a huge number of land sensors, air sensors, and satellite imagery. Body sensors are gaining popularity as wearables for the quantified self. There are lifestyle recommendations based on your body readings. It's early and primitive today, but it will dramatically improve with more data recorded and analyzed. Modern transportation requires more sensors within/along the roads and streets, and in cars. It's evolving. Miniaturization shapes them all. Those sensors must be invisible to the eye, and fully integrated into clothes, machines, and the environment.
3D Printing. The biggest change is related to ownership of intellectual property. The 3D model will be the thing, while its replication at any location, on demand, on any printer, will be a commodity function. Many things have become digital: books, photos, movies, games. Many things are becoming digital: hard goods, food, organs, the genome. It's a matter of time until we have cheap technology capable of synthesizing at the atomic and molecular level. New materials are needed everywhere, especially for human augmentation, for energy storage, and for computing.
Nanotech. We are learning to engineer at the scale of 10^-9 meters. From non-stick cookware and self-restoring paint (for cars), to sunscreen and nanorobots for cleaning our veins, to new computing chips. Nano & Bio are closely related, as purification and cleanup processes for industry and the environment are being redesigned at the nano level. Nano & 3D Printing are related too, as the ultimate result will be an affordable nanofactory for everyone.
Computing. We're approaching disruption here. Moore's Law is still there, but it's slowing down and the end is visible. Some breakthrough is required. The hegemony of Intel is being challenged by IBM with POWER8 (and the almost ready POWER9) and by ARM (v8 chips). Google is experimenting with POWER and ARM. Qualcomm is pushing ARM-based servers. D:Wave is pioneering Quantum Computing (actually superconductivity computing). There is a good intro in my Quantum Hello World post. IBM recently opened access to its own quantum analog. The bottom line is that we need more computing capacity, it must be elastic, and we want it cheaper.
Artificial Intelligence. AI deserves a separate chapter. Here it is.
The purpose of AI was machines making decisions (as maximization of a reward function). But being better at making decisions != making better decisions. A machine decides how to translate English to Ukrainian without speaking either language. Those programs (and machines) are super screwdrivers: they don't want anything themselves; we want them to do things, and we put our want into them.
AI is a different intelligence: a human cannot recognize 1 billion humans, even having really seen them all many times. AI is Another Intelligence so far. The shape of thinking machines is not human at all: Deep Blue, the chess winner, is a tall black box; Watson, the Jeopardy! winner, is 2 units of 5 racks of 10 POWER7 servers between noisy refrigerators in nice alien blue light (watch from 2:20); Facebook Faces, programs and machines recognizing billions of human faces, is probably big racks in a data center; Google Images, describing the context of an image, is a big part of a data center (detection of a cat took 16,000 servers several years ago); Space Probes are totally different from both humans and the tall black boxes in the data centers.
BTW, if somebody really spots a UFO visiting our planet, don't expect green men, as organics are poor for space travel, because of the dangerous +200/-200 Celsius temperature range, ultraviolet and other radiation, and the time needed for travel (even through a wormhole)… That UFO is most probably a robot. Or an intelligence on a non-biological carrier, which means a post-biological species (which is worse for us if so).
Our wet brain operates at 100 Watts, while a simulation of the same number of cells requires 10^12 Watts. Where on Earth will we get 1 trillion watts just for the equivalent of one human intelligence? And that's not even intelligence, just the connectivity of the neurons. Isn't it a ridiculous pseudo-architecture? We still have not isolated what we call consciousness, and we don't know its structure well enough to properly model it. Brain scanning is in progress, especially for the deeper brain. And the Eureka moment, like the one we got with DNA, is still to come.
We remain at the center, creating and using machines for mental work, like we created and use machines for physical work. Humans with new mental tools should perform better than without them. Google is a typical memory machine, a memory prosthesis. Watson as a lawyer or a doctor is a reality.
Back from the future to the present: we already have intelligent machines, namely governments and corporations. We created those artificial bodies many years ago, and just don't realize they are truly intelligent machines. They are integrated into/with society, with law evolved through precedents and legislation, tailored to different locations and cultures. Culture itself is a natural artificial intelligence. A global biological artificial intelligence emerged from politicians, lawyers, and organizations like the United Nations and hundreds of smaller international ones. They are all candidates for substitution by programs and machines.
An interesting observation is that the most intelligent humans are neither harmful nor rulers of others. Hence we could assume that really smart AI will not be harmful to humans, while AI remains approximately at our level. But it's uncertain for accelerated and grown AI later in time. Evolution will shape AI too, continuing from the invisible interfaces with machines we have right now. We could stop clicking, typing and tapping into machines, and talk to them like we do between ourselves. Today we have three streams of AI: < 3yo AI, Artificial Smartness, Intelligence as a Service.
We are what we eat; hence they will have to eat us? Hm… Real AI will not reveal itself. And most probably they will leave, like we left our cradle, Africa…
There were some concerns that we had slowed down, based on observation and perception of daily facts. But it's also visible that several technologies are booming and disrupting our lives almost on a weekly basis: the eight technologies mentioned earlier in the section It All Together. Those technologies are developing exponentially.
Companies are specializing deeply within their niches while performing at global scale. The global economy is changing. A few best providers of a narrow function do it worldwide. E.g. Google serves search globally, with two others far behind (Baidu and Bing, plus the artificial restriction of Google in China). Illumina chips are used for gene sequencing (90 percent of DNA data produced). Intel chips are the primary host processors in servers. Nvidia chips are the primary coprocessors, and so on. A few companies fulfill 95+ percent of the needs within a niche. Where this has not happened yet, big disruption is expected soon.
This is pure specialization of work at global scale, a shift from normal distribution to power distribution. Some may say that it's a path to global monopolism, with artificially held high prices. But in fact it is not: Google search is free, Illumina is promising a full human genome sequenced for under $1,000, and Intel still ships new chips according to Moore's Law, 2x productivity per $1 every 1.5 years.
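That cadence compounds quickly. As a back-of-the-envelope sketch, taking the 2x-per-$1-every-1.5-years figure above at face value:

```python
def perf_per_dollar(years, doubling_period=1.5, start=1.0):
    """Compound 2x performance per dollar every `doubling_period` years."""
    return start * 2 ** (years / doubling_period)

# 15 years is 10 doubling periods: 2**10 = 1024x per dollar.
print(perf_per_dollar(15))   # 1024.0
print(perf_per_dollar(1.5))  # 2.0
```

Ten doublings in fifteen years is a three-orders-of-magnitude gain per dollar, which is why a price that looks prohibitive today stops mattering within a product generation or two.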
Global specialization reduces global costs, because the same functions and products are produced more efficiently with the same resources; that is good for our planet, with its limited resources. But here another thing happens: we are not preserving resources, we are using them to create new technologies, which are expensive, unique, disruptive. The provider of such a new technology (or product, or service) is not a monopolist, because of small scale/capacity at the beginning. Either they scale or others replicate it, and the true leader emerges and makes it global. New sources of energy are found too, from sun and wind, and new nuclear as well. We're creating more wealth.
Scaling globally is dramatically easier and cheaper for digital products and services than for physical or hybrid ones. It is the main motivator for the digitization of everything. Software is eating the world because it is simply cheaper to deliver software than hardware. Everything will become software, except the hardware to run the software and the power plants to power the hardware.
Real life is becoming digital very fast. Why are we taking photos of our meals and rooms, our faces and legs, beautiful and creepy landscapes, compositions? Why do we check in, express statuses, emote over others' statuses, comment, troll and even fight digitally? We also vote, declare, report, learn, cure, buy, consume and entertain ourselves digitally. Sometimes we live digitally more than physically. Notice how people record an event looking at the small screen of their smartphone instead of looking at the big stage and experiencing it better. Some motivation drives us to record it on multiple phones, from multiple locations, aspects, angles and distances, push it onto the internet, and share it with others. Then we watch it all from those recordings, our own and theirs. Why is this happening? Why are we shifting to digital over natural? Or is digital the new natural, as evolution goes on?
Kit Harington was stopped by a cop for speeding. The cop gave him an ultimatum: either the driver pays the fine, or he tells whether Jon Snow is alive in the next season. The driver avoided the speeding ticket by telling the virtual/digital story to the cop. For the cop, the digital virtual was more important than the physical biological. Isn't that a natural shift to a new, better reality?
Many people live in virtual worlds today. Take an American and ask about ISIS. Take a Syrian and ask about ISIS. Take a Ukrainian and ask about Crimea and Donbass. Take a Russian and ask about Crimea and Donbass. Same for Israel and Palestine. People will tell you the opposite of everything. People are already living in virtual worlds, created by digital television and the internet. Digitization of life is here already, and we are there already.
Specialization is observed at all levels. Molecules specialized into water, gases, salts, acids. Bigger molecules specialized into proteins and DNA. Then we have cells: the stem cell and its specialization into connective tissue, soft tissue, bone and so on. Next are organs. Then body parts. Specialization is present at each abstraction level. At the level of people, specialization is known as roles and professions. Between businesses and countries it is industries. Between nations it is economics and politics.
It looks like we are part of a bigger machine, which is evolving with acceleration. We are like cells, good and bad, specialized from vision to thinking. Roads and pipes are like transportation systems for other cells and payload. The internet (copper and fiber) is more like a nervous system. Connectivity is a true phenomenon: we are now fully disconnected (and useless) without a smartphone, or without a digital social network in any form. Kevin Kelly once called it the One. The Earth of many people will evolve into an Earth of augmented people and machines; they will all specialize and unite into the One.
And since the One, it all looks like just a beginning. I foresee another One, and more cell-Ones, organizing something more complex and intelligent out of themselves. If our cells could specialize and unite into 10 trillion, and walk, think, write, why couldn't it be possible with bigger cells like the One, at a bigger scale like the Galaxy?
Man is not the last smart species on Earth. In other words, there will be a day when the Last [current] Man on Earth goes extinct. What will happen faster: the transhuman, or true AI that can replicate and grow? I bet on the transhuman. Better for humanity too. For now.
This post is about the delivery and consumption of information, about the front-end. The big-picture introduction was given in the previous post Advanced Analytics, Part I.
It will be neither the UI of current gadgets nor new gadgets. The ideal would be HCI leveled up to human-human communication, with a visually consistent live look, speech, motion and other aspects of real-life comms. AI will finally be built, probably by 2030. Whenever it happens, machines will try to mimic humans, and humans will be able to communicate with machines in a really natural way. The machines will deliver information. Imagine a boss asking his/her assistant how things are going, and she says "Perfectly!" and then adds a portion of summaries and exceptions in a few sentences. If the answers are empathic and proactive enough, there may be no follow-up question like "So what?"
The first such humanized comms will be asynchronous messaging and semi-synchronous chats. If the peer on the other end is indistinguishable (human vs. machine), and the value and quality of information is high, delivered onto mobile and wearable gadgets in real time, then it's the first good implementation of the front-end for advanced analytics. The interaction interface is writing and reading. The second leveling up is speech. It's technically more complicated to switch from writing-reading to listening-talking, but as soon as the same valuable information is delivered that way, it will mean advanced analytics has made a phase shift. Such speaking humanized assistants will be everywhere around us, in business and in life. The third leveling up is visual: as soon as we can see and perceive the peer as a human, with look, speech and motion, we are almost there. Further leveling up is related to touch, smell and other aspects that mimic real life. That's the Turing test, with a shift towards information delivery for business performance and decision making.
As highlighted in books on dashboard design and taught by renowned professionals, most important is a personalized short message, supported by summaries and exceptions. Today we are able to deliver such information in text, chart, table, map, animation, audio and video form onto a mobile phone, wristband gadget, glasses, car infotainment unit, TV panel and a number of other non-humanized devices. With present technologies it's possible to cover the first and partially the second levels described in "The Ideal" section earlier. The third, visual, is still premature, but there are interesting and promising experiments with 3D holograms. As it gets cheaper, we will be able to project whatever look of a business assistant we need.
Most challenging is the personalization of an ad-hoc real-time answer to an inquiry. Empathy is important to tune to biological specifics. Context and continuity with previous comms are important to add value on top of previously delivered information. Interests, current intentions, recent connections and real-time motion could help shape the context properly. That data could be abstracted into data and knowledge graphs for further processing. Some details on those graphs are present in Six Graphs of Big Data.
Summary is the art of fitting the big picture onto a single pager. Somebody still doesn't understand why the single pager matters (even the UX Magazine guys). Here is a tip: anthropologically we've got a body and two arms, and the length of the arms, the distance between the arms, the distance between the eyes and what we hold in the hands are predefined. There is simply no way to change those anthropological restrictions. Hence a single page (A4 or Letter size) is the most ergonomic and proven size of artifact for the hands. Remember, we are talking about the summaries now, hence some space is needed to represent them [the summaries]. Summaries should be structured into an Inverted Pyramid information architecture, to optimize the process of information consumption by the decision maker.
Exceptions must be proactively communicated, because they mean we've got an issue with predictability and expectations. There can be positive exceptions for sure, but if they were not expected, they must be addressed descriptively and explanatorily (reason, root cause, consequences, repeatability and further expectations). Both summaries and exceptions shall fit into a single pager or even smaller space.
On one hand, the main message, summaries and exceptions are too generic, too high-level as guidelines. On the other hand, prescriptive, predictive and descriptive analytics is too technical a classification. Let's add some soul. For software projects we could introduce more understandable categories of classification. "Projects exist only in two states: either too-early-to-tell or too-late-to-change." That was said by Edward Tufte during a discussion of executive dashboards. Other, more detailed recommendations on information organization are listed below; they are based on Edward Tufte's and Peter Drucker's experience and vision, reused from Tufte's forum.
Everything is clear with the single-sentence personalized real-time message. The Interest Graph, Intention Graph, Mobile Graph and Social Graph might help to compile such a message.
Summaries could be presented as Vital Signs. Just as we measure a medical patient's temperature, blood pressure, heart rate and other parameters, we could measure the vital signs of a business: cash flow, liquidity projections, sales, receivables, ratios.
Other indicators of business performance could be productivity, innovation in the core competency, ABC, the human factor, and value and value-add. Productivity should go together with predictability. There is an excellent blog post by Neil Fox, named The Two Agile Programming Metrics that Matter. Activity-based costing (aka ABC) could show where there is fat that could be cut. Very often ABC is bound to the human factor. Another interesting relation exists between productivity and the human factor too, known as emotional intelligence or engagement. Hence we've got an interdependent graph of measurements. The core competency defines the future of the particular business, hence innovation shall take place within the core competency. It's possible to track and measure the innovation rate, but it's vital to do it for the right competency, not for multiple ones. And finally, value and value-add. In the transforming economy we are switching from wealth orientation towards the happiness of users/consumers. In the Experience Economy we must measure and project the delivery of happiness to every individual. More details are available in my older post Transformation of Consumption.
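The vital-signs idea, with a short summary plus proactively surfaced exceptions, can be sketched in a few lines. All metric names, values and normal ranges below are hypothetical illustrations, not recommendations:

```python
# Business "vital signs" with normal ranges; out-of-range readings
# become proactively reported exceptions, the rest folds into a summary.
# Every name, value and range here is a made-up example.
VITAL_SIGNS = {
    "cash_flow":   {"value": 1.2e6, "normal": (0.5e6, 2.0e6)},  # USD/month
    "liquidity":   {"value": 0.9,   "normal": (1.0, 3.0)},      # current ratio
    "sales":       {"value": 3.4e6, "normal": (3.0e6, 5.0e6)},  # USD/quarter
    "receivables": {"value": 45,    "normal": (0, 60)},         # days outstanding
}

def check_vital_signs(signs):
    exceptions = []
    for name, s in signs.items():
        lo, hi = s["normal"]
        if not lo <= s["value"] <= hi:
            exceptions.append(f"{name} out of range: {s['value']}")
    summary = ("All vital signs normal" if not exceptions
               else f"{len(exceptions)} vital sign(s) need attention")
    return summary, exceptions

summary, exceptions = check_vital_signs(VITAL_SIGNS)
print(summary)     # 1 vital sign(s) need attention
print(exceptions)  # ['liquidity out of range: 0.9']
```

The point of the sketch is the shape of the answer: one personalized sentence first, exceptions next, detail only on demand.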
Finally for this post, we have to distinguish between executive and operational information. They should be designed/architected differently. More in the next posts. It's recommended to read Peter Drucker's "The Essential Drucker" to unlock the wisdom of what executives really need, what is absent from the market, and how to design it for the modern perfect storm of technologies and growing business intelligence needs.
This story is a logical continuation of the previously published Wearable Technology.
Here I will show how two different wearable gadgets complement each other for the Quantified Self. To begin, we need two devices: one wearable on yourself, the second wearable by your bike.
The first device is BodyMedia, the world's most precise calorie meter. It takes 5,000 data snapshots per minute from galvanic skin response, heat flux, skin temperature and a 3-axis accelerometer. You can read more about BodyMedia's sensors online. BodyMedia uses extensive machine learning to classify your activity as cycling, then measures calories burned according to the cycling Big Data set used during learning. Check out the paper Machine Learning and Sensor Fusion for Estimating Continuous Energy Expenditure for an excellent description of how the AI works.
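A toy sketch of the underlying idea: classify the activity from sensor features, then apply an activity-specific burn model. The thresholds and calorie rates below are invented for illustration and have nothing to do with BodyMedia's actual learned model:

```python
# Toy version of sensor fusion for energy expenditure: a crude rule-based
# stand-in for the learned activity classifier, plus per-activity burn rates.
# All thresholds and rates are made-up illustrative numbers.
def classify_activity(accel_variance, heat_flux):
    """Guess the activity from accelerometer variance and heat flux."""
    if accel_variance > 2.0 and heat_flux > 50:
        return "cycling"
    if accel_variance > 0.5:
        return "walking"
    return "resting"

CAL_PER_MIN = {"cycling": 9.0, "walking": 4.0, "resting": 1.2}

def calories(minute_samples):
    """Each sample: (accelerometer variance, heat flux) for one minute."""
    return sum(CAL_PER_MIN[classify_activity(a, h)] for a, h in minute_samples)

ride = [(3.1, 80), (2.8, 75), (0.1, 20)]  # two minutes cycling, one resting
print(calories(ride))  # 19.2
```

The real device replaces the if-chain with a model trained on a large labeled data set, which is exactly why its calorie numbers are trustworthy and a GPS-only guess is not.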
The second device is the Garmin Edge 500, a simple and convenient bike computer. It has GPS, a barometric altimeter, a thermometer, motion detection and more features for workouts. You can read more about the Garmin Edge 500 spec online. My gadgets are pictured herein.
The route was proposed by Mykola Hlibovych, a distinguished bike addict. So I put my gadgets on and measured it all. Below is info about the route. Summary info such as distance, time, speed, pace, temperature and elevation is provided by Garmin. It tries to guess the calories too, but it is really poor at that. You should know there is no "silver bullet" and understand what to use for what. Garmin is one of the best GPS trackers; just don't try to measure calories with it.
The juxtaposition of elevation vs. speed and temperature vs. elevation is interesting for comparison. Both charts are plotted by distance (rather than time). A 2D route on the map is a pretty standard thing. Garmin uses Bing Maps.
Let's look at BodyMedia and redraw the Garmin charts of speed, elevation and temperature along time (instead of distance) and stack them together for comparison and analysis. All three charts are aligned along the horizontal time line. The upper chart is real-time calorie burn, also measured in METs. The vertical axis reflects Calories per Minute. Several times I burned at a rate of 11 cal/min, which was really hot. The big downtime between 1PM and 2:30PM was lunch.
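Stacking charts recorded by different devices requires resampling them onto one shared time axis first. A minimal sketch with linear interpolation; the data points are invented for illustration:

```python
# Resample two differently-sampled series (e.g. Garmin speed and BodyMedia
# cal/min) onto a common time grid so stacked charts line up vertically.
def interp(t, series):
    """Linear interpolation of [(time, value), ...] at time t."""
    for (t0, v0), (t1, v1) in zip(series, series[1:]):
        if t0 <= t <= t1:
            return v0 + (v1 - v0) * (t - t0) / (t1 - t0)
    return series[-1][1]  # clamp past the last sample

speed   = [(0, 0.0), (10, 20.0), (20, 15.0)]  # (minutes, km/h)
cal_min = [(0, 1.2), (5, 9.0), (20, 11.0)]    # (minutes, cal/min)

grid = range(0, 21, 5)  # shared 5-minute time axis
aligned = [(t, interp(t, speed), interp(t, cal_min)) for t in grid]
for row in aligned:
    print(row)
```

Once both series sit on the same grid, each row pairs a moment in time with every sensor's reading, which is what makes side-by-side observations (like the lunch downtime, or the temperature settling) possible.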
An interesting fact is observable on the Temperature chart: the Garmin itself was warm and was cooling down to the ambient temperature. Only after that did it start to record the temperature correctly. Another moment is the small spike in speed during the downtime window. That was Zhenia Novytskyy trying my bike to compare it with his.
For detailed analysis of performance on the route there is an animated playback, published on Garmin Cloud. You just need to have Flash Player. Click this link if WordPress does not render the embedded route player from Garmin Cloud. There is an iframe instruction below. You may experience some ads from them, I think (because the service is free)…
Wearable technology works in different conditions:)