The most intelligent version of Curiosio launched on 01/23. It is labeled Beta3: as smart as Beta2, but 100x faster. Intro and details are at the Ingeenee Engineering Blog.
What is intelligence? We know it as IQ. But not so many know what the “Q” is. Q stands for quotient: IQ is Intelligence Quotient. IQ was long considered the measure of a person's intelligence, until other Qs kicked in. There are many of them: MQ, PQ, AQ, BQ, EQ, DQ, HQ, FQ, WQ, SQ… Body intelligence, Health intelligence, Practical intelligence, Moral intelligence, and so on, and so forth. I am sure they will run out of letters of the English alphabet while labeling newly discovered/isolated intelligences.
It is possible to identify when some of those Qs are absent [due to damage or disease]. By augmenting impaired humans with intelligent tools, we could compensate for some Q deficits. The same and similar technologies & tools (especially mental ones) could be pushed to the limit and used by all people, to help us all flourish in the second machine age. It looks to me that recreation of narrow intelligence is nothing else than the building of new tools, not the building of real [human or stronger] intelligence. It is extending ourselves, not replicating ourselves.
I like this definition [by Michio Kaku?] of intelligence: intelligence is our capability to firmly predict the future. I would rephrase it as our capability to firmly predict the future & the past. Because history is usually written by those in power, skewed and warped for the sake of their benefit. Hence the ability to see/know the past & future, in high resolution, is intelligence. We could compute it; the method doesn't matter. The further and the more firmly we see the future, the more intelligent we are. The ultimate intelligence (as of today) would be truly seeing, in high resolution, the entire light cone of our world.
It seems that humans are the most intelligent species out there on our planet Earth. Maybe we are like small ants near the leg of a huge elephant, not seeing the elephant… But it's OK to look at what we do see, the less intelligent species. It is relevant to think about this in more detail. What is a human? What is a minimum viable human? What makes the human intelligent?
This is strict. There are humans without limbs, because of injuries or diseases (including congenital ones). There are humans without an organic heart, living with an electro-mechanical pump. There are humans without kidneys, on dialysis. There are humans who cannot see or hear. This leads us to the human brain. As long as the brain is up and running, it makes a human the human.
I don't know if it's the brain alone, or the brain plus the spinal cord. But it is clear enough: as long as the brain works as expected, we accept the human as a peer. The contrary is right too: people with a damaged brain but a human body, we accept them as humans as well. But here we are talking about the intelligent human, about human intelligence. So the case with the brain is what we are interested in.
This is still difficult. On the one side, we know pretty well what our brain is. On the other side, we don't know it deeply enough. It is even difficult to explain size vs. intelligence. How could the small brain of a grey parrot provide intelligence on par with the much bigger brain of some chimps? Or how does the smaller human brain produce bigger intelligence than the 3x bigger elephant brain? This leads us to thinking about form/structure vs. function. Probably the structure is more in charge of intelligence than the size?
A very interesting hypothesis about the wiring pattern is called cliques & cavities. The function could possibly be encoded into directed graphs of neurons. The connections could be unidirectional or bidirectional. They could compute something relevant [locally], and interoperate with other cliques at a higher level. The cliques could encode/process something like 11-dimensional “things”. Who wants to check whether Hinton's recent capsules are similar to those cliques?
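To make the “directed clique” idea concrete, here is a toy sketch in Python. The four-neuron graph is made up for illustration; the definition used (every pair connected, with an ordering such that all edges point forward) follows the usual notion of a directed simplex from that line of work.

```python
from itertools import combinations, permutations

def directed_cliques(edges, nodes, k):
    """Count k-node directed cliques: subsets of nodes that admit an
    ordering v1..vk with an edge vi -> vj for every i < j."""
    edge_set = set(edges)
    count = 0
    for subset in combinations(nodes, k):
        # a subset qualifies if some ordering makes all edges point "forward"
        if any(all((p[i], p[j]) in edge_set
                   for i in range(k) for j in range(i + 1, k))
               for p in permutations(subset)):
            count += 1
    return count

# A tiny made-up "circuit": a fully ordered 3-clique plus one extra edge.
edges = [(1, 2), (1, 3), (2, 3), (3, 4)]
print(directed_cliques(edges, [1, 2, 3, 4], 3))  # only {1,2,3} qualifies: 1
```

Real cortical microcircuits have tens of thousands of neurons, so the brute-force enumeration above is only for building intuition, not for scale.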
The #1 problem is the absence of brain scanners that could scan the brain deeply and densely enough, without damaging it. If we had brain scans [electricity, chemistry] at all depth levels, down to the millisecond, it would help a lot. Resolution down to the nanosecond would be even better… But we don't have such scanners yet. Some scanning technologies damage the brain; others are not high-resolution enough. Maybe Paul Allen's Brain Institute could invent something any time soon.
20 years ago something bright was discovered in the rat brain: light, produced by the rat brain itself. Since that time, there is still no confirmation that light is produced by the human brain too. But there is confirmation that our axons can transmit light. So we do have fiber-optic capabilities in our brains. It was measured that the human body emits biophotons. Based on the detection of light in the mammalian brain, and fiber optics in our brain, we could propose the hypothesis that [with big probability] our brain also uses biophotons. It is still to be measured, the biophotonic activity in the human brain.
Even if the light is weak, and fired once per minute, the overall simultaneous enlightenment of the brain is rich enough for information exchange. It would be a huge jump in data bandwidth, in comparison to electrical signals. There is a curious hypothesis that the specifics of light transmission are what significantly distinguishes the human brain from other mammalian brains. Especially the red shift.
The man without a brain introduced many questions. What is the minimum viable brain? Do our brains transmit only electricity, or is a big deal of the data exchange carried by light?
When could we confirm the light in the brain? Not soon enough. We banned experiments on cats that were used to study mammalian vision & perception. Experiments on the human brain are even more fragile, ethically. I am not expecting any breakthrough any time soon…
What we have today is modeling of the cortex layers: neurons and electrical signals between them, bigger or smaller depending on the strength of the connections. Functionally, it is modeling of perception. It may look as if some thinking is modeled too, especially in playing games. But wait. In the game of Go, the entire board is visible. In StarCraft the board is not fully visible, and humans recently won against the machines. More difficult than Go is Poker, and the Poker winner is Libratus. Libratus is not based on neural nets; it works on counterfactual regret minimization (CFR).
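Full CFR is a big machine, but its core update, regret matching, fits on one page. Here is a sketch on rock-paper-scissors against an invented, rock-heavy opponent (all numbers are made up for illustration): the learner accumulates regret for each action and plays in proportion to positive regret, and its average strategy drifts toward the best response, paper.

```python
import random

def regret_matching(iters=20000, seed=0):
    """Regret matching, the building block of CFR, on rock-paper-scissors
    against a fixed opponent who over-plays rock."""
    rng = random.Random(seed)
    payoff = [[0, -1, 1], [1, 0, -1], [-1, 1, 0]]  # payoff[mine][theirs]
    opp_probs = [0.4, 0.3, 0.3]                    # hypothetical leaky opponent
    regret = [0.0, 0.0, 0.0]
    strategy_sum = [0.0, 0.0, 0.0]

    def current_strategy():
        pos = [max(r, 0.0) for r in regret]
        total = sum(pos)
        return [p / total for p in pos] if total else [1 / 3] * 3

    for _ in range(iters):
        strat = current_strategy()
        strategy_sum = [s + p for s, p in zip(strategy_sum, strat)]
        me = rng.choices(range(3), weights=strat)[0]
        opp = rng.choices(range(3), weights=opp_probs)[0]
        for a in range(3):  # regret: how much better action a would have done
            regret[a] += payoff[a][opp] - payoff[me][opp]
    total = sum(strategy_sum)
    return [s / total for s in strategy_sum]

avg = regret_matching()
print(avg)  # the weight of paper (index 1) dominates
```

CFR runs this same kind of update at every decision point of the game tree; Libratus adds much more on top, so this is the flavor, not the system.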
We lack experiments, we lack scanning technologies. We have advanced in simulation of perception only, with deep neural nets. Topologies are immature, reusability is low. And those neural nets transmit only an abstraction of electricity, not light.
Machine Learning is the algorithmic approach in which a program is capable of learning from data. Machine Learning allowed us to solve the same old problems better. Most popular today is Deep Learning, a subset of Machine Learning. To be specific, deep learning enabled breakthroughs in computer vision and speech processing. Today, such routine tasks as image and speech classification/transcription are cheaper and more reliable when done by machines than by humans.
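A minimal illustration of “a program that learns from data”, with a made-up linear rule: nobody tells the program that y = 2x + 1, it recovers the rule from examples by gradient descent.

```python
# Fit y = w*x + b by stochastic gradient descent on squared error.
data = [(x, 2.0 * x + 1.0) for x in range(10)]  # hidden rule: y = 2x + 1
w, b, lr = 0.0, 0.0, 0.01
for _ in range(5000):
    for x, y in data:
        err = (w * x + b) - y   # prediction error on this sample
        w -= lr * err * x       # nudge the weight against the error
        b -= lr * err           # nudge the bias against the error
print(round(w, 2), round(b, 2))  # recovers roughly w=2, b=1
```

Deep learning is this same idea, scaled up to millions of weights and nonlinear layers.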
The most popular deep learning guys are so-called Connectionists. Let's be honest: there is big hype around deep learning. Many people don't even know that there are several other approaches to machine learning besides deep neural nets. Check out the good intro and comparison of machine learning approaches by Pedro Domingos (author of The Master Algorithm). Listen to the fresh stuff from Symbolists Gary Marcus (former Uber) and Francesca Rossi (IBM). Hear fresh Evolutionist stuff from Ilya Sutskever (OpenAI, soon Tesla?). Hear from the Analogizers, Maya Gupta (Google). Check out fresh stuff from the Bayesians: Ben Vigoda (Gamalon) on Idea Learning instead of Deep Learning, Ruslan Salakhutdinov (Apple), Eric Horvitz (Microsoft). Book the date to listen to Zoubin Ghahramani (Uber).
Each machine learning approach gives us a better tool. It is the dawn of the second machine age, with mental tools. It is a very popular and commercialized niche nowadays. Ironically, all the shit data produced by people converts from useless into useful. All those pictures of cats, food and selfies have become training data. Even poor corporate powerpoints are becoming training data for visual reasoning. And this aspect of the data metamorphosis is joyful. Obviously this kind of intelligence eats data, and people produce the data to feed it. This human behavior is nothing else than working for the machines, and it feels like fun. Next time you snap your food or render a creepy pie chart, think: most probably you did it for the machines.
Maybe a combination of those approaches could give a breakthrough… This is known as the search for the holy grail of machine learning, the master algorithm. To combine or not to combine is a grey area, while the need for more data is clear. The Internet of Things could help, by cloning the good old world into its digital representation. By feeding that amount [and resolution] of data to machines, we could hope they would learn well from it. But there is no IoT yet: there is the Internet, and there are no Things. IPv6 was invented specifically for the things, and it is still not rolled out here and there. Furthermore, learning from data will be restricted by the relative shortage of data access. The network bandwidth growth rate is slower than the data growth rate, hence less and less data can make it through the pipe… Data Gravity will emerge. To access the data, you will have to go to the data, physically, with your tools and yourselves. Data access will be a bigger & bigger issue in the years to come. Any better pathway towards creating Intelligence?
How did intelligence emerge on this planet? It was built gradually, during a very long evolution. Diversity and complexity increased over time. We can observe and analyze complex systems emerging over scale and self-organizing over time. Intelligence is a complex system [I think so]. And a complex system can do more than just perceive. How? By building/evolving those capabilities. It is very similar to the creation of new technology. Everything is possible in this world, just create the technology for it. Technology could be biological, could be digital, whatever. It gives the capability to do something that intelligence wants to do. Hence intelligence evolves towards the creation of such capabilities. And this repeats and repeats. As a result, the intelligence grows bigger and bigger.
It is worth looking at the place of what we call Artificial Intelligence among other Complex Systems. What I call Intelligence in this post is what Complex Adaptive Systems do: emergence over scale and self-organization over time. Intelligence can be observed at different levels of abstraction. How did trillions of cells emerge and organize to move, all together, one meter above the ground? How do human brain modules or neurons comprehend and memorize? How did humanity launch a probe from the Pale Blue Dot beyond the Solar System?
Complexity is not as scary as it looks. There could be no master plan at all, though there could be a master config with simple rules: the speed of light is this, the gravitational constant is that, the minimal energy is this, the minimal temperature is that, and so forth. This is enough to build some enormous and beautiful complexity. Let's look at one-dimensional primitive rules, and the “universes” they build.
Wolfram Rule 30 will be first. In all of Wolfram's elementary cellular automata, an infinite one-dimensional array of cells with only two states is considered, each cell in some initial state. At discrete time steps, every cell changes state based on its current state and the states of its two neighbors. For Rule 30, the rule set which governs the next state of the automaton is: current pattern 111 110 101 100 011 010 001 000, new state for the center cell 0 0 0 1 1 1 1 0. Very similar patterns can be observed in nature, on the shell of a mollusk.
Wolfram Rule 110 is an elementary cellular automaton with interesting behavior on the boundary between stability and chaos. Current pattern 111 110 101 100 011 010 001 000, new state for the center cell 0 1 1 0 1 1 1 0. Rule 110 is known to be Turing complete: in principle, any calculation or computer program can be simulated using this automaton, which makes it as powerful as lambda calculus. Hey Python coders, ever coded a lambda function? The universality proof works by emulating cyclic tag systems.
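Both rule tables above can be run with a few lines of Python. The sketch below implements one update step of any elementary cellular automaton directly from its Wolfram rule number: the binary digits of the number give the new state for each 3-cell neighborhood, exactly as in the tables for Rule 30 and Rule 110.

```python
def step(cells, rule):
    """One update of an elementary cellular automaton (wrap-around ends).
    Bit p of `rule` is the new center-cell state for the neighborhood
    whose pattern, read as a binary number, equals p."""
    table = {tuple(int(b) for b in f"{p:03b}"): (rule >> p) & 1
             for p in range(8)}
    n = len(cells)
    return [table[(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])]
            for i in range(n)]

# Rule 30 from a single live cell, a few generations:
row = [0] * 15
row[7] = 1
for _ in range(5):
    print("".join(".#"[c] for c in row))
    row = step(row, 30)
```

Swap 30 for 110 in the last line and the familiar left-leaning Rule 110 triangles appear instead.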
Wolfram Rule 110 is similar in spirit to Conway's Game of Life. Also known simply as Life, it is a cellular automaton and a zero-player game, meaning that its evolution is determined by its initial state, requiring no further input. One interacts with the Game of Life by creating an initial configuration and observing how it evolves, or, for advanced “players”, by creating patterns with particular properties.
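The whole game fits in a few lines; the “blinker”, a row of three cells that oscillates with period two, is the classic first pattern to try.

```python
from collections import Counter

def life_step(alive):
    """One generation of Conway's Life; `alive` is a set of (x, y) cells."""
    counts = Counter((x + dx, y + dy)
                     for (x, y) in alive
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # birth on exactly 3 neighbors; survival on 2 or 3
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in alive)}

blinker = {(0, 1), (1, 1), (2, 1)}
print(sorted(life_step(blinker)))                 # flips to a vertical bar
print(life_step(life_step(blinker)) == blinker)   # True: period two
```

The sparse set-of-live-cells representation means the board is effectively infinite, which suits a zero-player game that can run forever.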
Complexity can be built with simple rules from simple parts. The hidden order will reveal itself at some moment. Actually, Hidden Order is a work by John Holland, the Evolutionist. We need more diverse abstractions that do/have aggregation, tagging, nonlinearity, flows of resources, diversity, internal models, building blocks, abstractions that could become true Intelligence. Maybe we have already built some blocks, e.g. neural nets for perception. Maybe we need to combine growing stuff with the quantum approach: probabilities, coherence and entanglement? Maybe energy is worth more attention? Learn how to grow complexity. Build complexity. Over scale & time, Intelligence may emerge.
This was my guest lecture for the 1st-year students of Lviv Polytechnic National University, Computer Science Institute, AI Systems Faculty. Many of them, all young, open to thinking and doing.
Today we are trying Deep Neural Networks on many [previously] unsolved problems. Image and language recognition with CNNs and LSTMs has become a standard. Machines can classify images/speech/text faster, better and for much longer than humans.
There are breakthroughs in real-time computer vision, capable of identifying objects and object segments. That's very impressive; it enables self-driving cars, and indoor positioning without radio beacons or other infrastructure. The machine sees more than a human, because the machine sees it all in 360 degrees. And the machine sees more details simultaneously, while a human overlooks the majority of them.
We created some new kind of intelligence, similar to human, yet very different from it. Let's call this AI Another Intelligence. A program is able to recognize and identify more than one billion human faces. This is not equivalent to what humans are capable of. How many people could you recognize/remember? A few thousand? Maybe several thousand? Less than ten thousand for sure (it's the size of a small town); so 1,000,000,000 vs. 10,000 is impressive, and definitely another type of intelligence.
DNNs are loved and applied to almost any problem, even ones previously solved with different tools. In many cases DNNs outperform the previous tools. DNNs have started to be a hammer, and the problems have started to be the nails. In my opinion, there is overconfidence in the new tool, and it's pretty deep. Maybe it slows us down on the way to reverse engineering common sense and consciousness…
DNNs were inspired by neuroscience, and we were confident that we were digitally recreating the brain. Here is a cold shower: a man with a tiny brain, 10% the size of the normal human brain. The man was considered normal by his relatives and friends. He lived a normal life. The issue was discovered accidentally, and it shocked medical professionals and scientists. There are hypotheses trying to explain what we don't understand.
There are other brain-related observations that threaten the modern theory of brain understanding. Birds: some birds are pretty intelligent. Parrots, with tiny brains, could challenge dolphins, with human-sized brains, and some chimps. The bird brain is structured differently from the mammalian brain. Does size matter? Elephants have a huge brain, with 3x more neurons than humans. Though the vast majority of those neurons sit in a different block of the brain (the cerebellum), in comparison to humans.
All right, the structure of the brain matters more than its size. So are we using/modeling the correct brain structure with DNNs?
Numenta has been working on reverse engineering the neocortex for a decade. Numenta's machine intelligence technology is built on its own computational theory of the neocortex. It deals with hierarchical temporal memory (HTM), sparse distributed memory (SDM), sparse distributed representations (SDR), self-organizing maps (SOM). The network topologies are different from the mainstream deep perceptrons.
Here is fresh stuff from a scientific paper published in the open-access Frontiers journal; check it out for the missing link between structure and function. “… remarkably intricate and previously unseen topology of synaptic connectivity. The synaptic network contains an abundance of cliques of neurons bound into cavities that guide the emergence of correlated activity. In response to stimuli, correlated activity binds synaptically connected neurons into functional cliques and cavities that evolve in a stereotypical sequence toward peak complexity. We propose that the brain processes stimuli by forming increasingly complex functional cliques and cavities.”
When a human is shown a new symbol from a previously unseen alphabet, it is usually enough to recognize other such symbols when shown again later, even in a mix with other known and unknown symbols. When a human is shown a new object for the first time, like a segway or a hoverboard, it is enough to recognize all other future segways and hoverboards. It is called one-shot learning. You are given only one shot at something new: you understand that it is new for you, you remember it, you recognize it during all future shots. The training set consists of only one sample. One sample. One.
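The simplest way to see one-shot learning as a computation is nearest-neighbor matching: store a single example per class, then assign any new input to the closest stored example. The feature vectors below are hypothetical hand-made descriptors, invented purely for illustration; this is the flavor of the setting, not the method of the paper discussed next.

```python
def classify(x, memory):
    """One-shot classification: nearest stored example wins.
    `memory` holds exactly one feature vector per class."""
    def dist(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    return min(memory, key=lambda label: dist(x, memory[label]))

memory = {                      # one sample per symbol, one. (made-up features)
    "segway": (2.0, 1.0, 0.0),
    "zarc":   (0.0, 3.0, 1.0),
}
print(classify((1.8, 1.2, 0.1), memory))  # closest to the segway example
```

The hard part, of course, is the feature space: humans seem to bring rich prior structure to it, which is exactly what plain DNNs trained from scratch lack.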
Check out this scientific paper on human concept learning, with the segway and the zarc symbol. DNNs require millions and billions of training samples, while learning is possible from only one sample. Do we model our brain differently? Or are we building a different intelligence on the way to reverse-engineering our brain?
These are two models of the same kart, created differently. On the left is a human-designed model. On the right is a machine-designed model, generated within given restrictions and desired parameters (gathered via telemetry from the real kart on the track). It is a paradigm shift: from constructed to grown. Many things in nature do grow; they have a lifecycle. It's true for artificial things too. A grown model tends to be more efficient (lighter, stiffer, even visually more elegant) than a constructed one.
How to generate? A good start would be to use evolutionary programming, with known primitives for cells and layers. Though it is not easy to get right. By evolving an imaginable creature that moves left, right, ahead, back, it is easy to get asymmetrical blocks handling the left and the right. Even by running long evolutions, it is hardly possible to achieve the desired symmetry observed in the real world and considered common sense, e.g. that a creature has very similar or identical ears, hands, feet. What to do to fix the evolution? Bring in the domain knowledge. When we know that left and right must be symmetrical, we can enforce this during the evolution.
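A toy sketch of that fix, with all numbers invented: a 6-gene “body plan” is evolved toward a target, and the domain rule “left mirrors right” is injected by symmetrizing every genome before selection, so asymmetric mutants never survive as-is.

```python
import random

random.seed(1)
TARGET = [3, 1, 4, 4, 1, 3]      # the desired (symmetric) body plan

def symmetrize(g):               # the domain-knowledge constraint
    half = g[:len(g) // 2]
    return half + half[::-1]

def fitness(g):                  # closer to TARGET is better (max is 0)
    return -sum((a - b) ** 2 for a, b in zip(g, TARGET))

pop = [[random.randint(0, 9) for _ in range(6)] for _ in range(30)]
for _ in range(200):
    pop = [symmetrize(g) for g in pop]          # enforce symmetry
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                           # select the fittest
    children = [[gene + random.choice((-1, 0, 1))  # mutate a parent copy
                 for gene in random.choice(parents)]
                for _ in range(20)]
    pop = parents + children
best = symmetrize(max(pop, key=fitness))
print(best, fitness(best))
```

Without the `symmetrize` step, the same loop happily converges to asymmetric plans; with it, symmetry is free and evolution only searches half the genome.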
The takeaway from this section: we are already using three approaches to AI programming simultaneously: domain rules, evolution, and deep learning via backpropagation. Altogether. None of them alone is enough for the best possible end result. Actually, we don't even know what the best possible result is. We are just building a piece of technology, for specific purposes.
The above approach of using domain rules, evolution and deep learning via backpropagation altogether might not be capable of solving the one-shot learning problem. How could that kind of problem be solved? Maybe via Bayesian learning. Here is another paper, on a Bayesian Framework that allows learning something new from a few samples. Together with Bayes we have four AI approaches. There is a work on AI identifying five [tribes] of them.
The essence is in how to learn to learn. Without moving the design of AI to the level where AI learns to learn, we are designing throw-away pieces, like we did with Perl programming, like we do with Excel spreadsheets. Yes, we construct and train the networks, and then throw them away. They are not reused, even if they are potentially reusable (like substituting the final layers for custom classification). Just observe what people are doing: they all are training from the very beginning. It is the level of learning, not of learning to learn, i.e. it's the throw-away level. People are reusable, they can train again; networks are not.
The Master Algorithm is a work that appeals to the AI creators who are open-minded enough to try to break through to the next level of abstraction: to use multiple AI paradigms, in different combinations. It is design of design: you design how you will design the next AI thing, then apply that design to actually build it. Most probably, good AI must be built with a special combination of those approaches and the tools within each of them. Listen to Pedro Domingos for his story, please. Grasp the AI quintessence.
Start from this cool comparison of Mathematics and Physics by Richard Feynman. Physicists are always about the special case; mathematicians are always about the general case. Physicists reverse engineer the world, recreating the technologies available in the Universe. Physicists even think beyond the Universe…
Continue with these ruminations about Mathematics by Stephen Wolfram: was mathematics invented or discovered? He thinks that the math is already there; we just need to get to those spaces.
Here are details on the Computing Theory of Everything, by Stephen Wolfram. Like Galileo Galilei built a telescope to observe and discover far space, Wolfram invented and keeps inventing tools to discover the math, all those spaces. It is not a combinatoric mess, as the spaces can be shaped nicely, depending on the laws within. Look at this amazing Rule 30, look at this annoying Rule 184.
Think of the forthcoming Quantum Computing, which is closer to what Feynman foresaw about machinery without mathematics (watch the first video, from 6:00 to 7:30). Why do we need infinite computational power, based on mathematics & logic, to figure out what happens in a tiny place in space? A pretty modern supercomputer needs a few hours to simulate 10^11 individual atoms, which is ~10^11 times fewer than the number of atoms in only 1 gram of iron (Fe)…
But think about simulating new worlds, at the level of the individual atom. We could build a simulation, and it will go with mathematics. We just need to squeeze the computational power from the physical universe.
Are mathematics and physics converging?
PS. Everything above physics is understood: chemistry deals with bigger sizes, and so on upwards to huge sizes… till the edge of the Universe, which we still don't understand. But maybe the Math will help here?
Since mankind developed some good intelligence, we [people] immediately started to discover our world. We walked on foot as far as we could reach. We domesticated big animals, horses, and rode them to reach even further, horizontally and vertically. So we reached the water. Horses could not carry us across the seas and oceans. We had to create a new technology that could carry people over the water: ships.
Shipbuilding required pretty much calculation itself. And a ship alone is not sufficient to get there; some navigation is needed. We developed both measurement and calculation of wood and nails, measurement of time, navigation by the stars and the cardinal directions. That was a kind of computing. Not the earliest computing ever, but good enough computing that let us spread the knowledge and vision of our [flat] world.
An early device for computing was the abacus. Though it is usually called a calculating tool or counting frame, we use the word computing, because this topic is about computing technology. The abacus as a computing technology was designed at a size bigger than a man and smaller than a room. Then the wooden computing technology miniaturized to desktop size. This is important: it emerged at a size between 1 and 10 meters, and got smaller in time to fit onto a desktop. We could call it manual wooden computing too. Wooden computing technology is still in use nowadays in African countries, China, Russia.
Metal computing emerged after wooden. Charles Babbage designed his analytical engine from metal gears, to be more precise, from Leibniz wheels. That animal was bigger than a man, and smaller than a room. Below is a juxtaposition of the inventor himself with his creation (on the left). Metal computing technology miniaturized in time, and fit into a hand.
Curt Herzstark made a really small mechanical calculator and named it Curta (on the right). Curta also lived long, well into the middle of the 20th century. Nowadays Curta is a favorite collectible, priced at $1,000 minimum on eBay, while the majority of price tags are around $10,000 for a good working device, built in Liechtenstein.
Babbage's machine was left far behind when Konrad Zuse designed the first fully automatic electro-mechanical machine, the Z3. Its clock speed was 5-10 Hz. The Z3 was used to model the flutter effect for military aircraft in Nazi Germany. The original Z3 was destroyed during a bombardment. The Z3 was bigger than a man, and smaller than a room (left photo). Then electro-mechanical computing miniaturized to desktop size, e.g. the Lagomarsino semi-automatic calculating machine (right photo).
Here something new happened: growth beyond the size of a room. The Harvard Mark I was a big electro-mechanical machine, put in a big hall. Mark I served the Manhattan Project: there was a problem of how to detonate the atomic bomb, and the well-known von Neumann computed the explosive lens on it. Mark I was funded by IBM, by Watson Sr.
So, electro-mechanical computing started at a size bigger than a man and smaller than a room, and then evolved in two directions: miniaturized to desktop size, and grown to small stadium size.
At some point, mechanical parts were redesigned as electrical ones, and the first fully electrical machine was created: ENIAC. It used vacuum tubes. Its size was bigger than a man, smaller than a big room (left photo). The fully electrical computing technology on vacuum tubes got miniaturized to desktop size (right photo).
The miniaturization was very interesting and beautiful. Even vacuum tubes could be small and nice. Furthermore, there were many women in the industry at the time of electrical vacuum tube computing. Below are the famous “ENIAC girls”, with the evidence of miniaturization of modules, from left to right, smaller is better. Side question: why did women leave programming?
ENIAC was very difficult to program. Here is a tutorial on how to code the modulo function. There were six programmers who could do it really well. ENIAC was intended for ballistic computing. But the same well-known von Neumann from the atomic bomb project got access to it and ordered the first ten programs, for the hydrogen bomb.
Fully automatic electrical machines grew big, very big: bigger than Mark I, II, III, etc. They were used for military purposes and space programs. The IBM SAGE is on the photo; its size is like a mid-sized stadium.
The first fully transistorized machine was built probably by IBM, though here is a photo of the European [second] machine, called CADET (left photo). There were no vacuum tubes in it anymore. Transistor technology is still alive, very well miniaturized to desktop and hand (right photo).
Miniaturization of transistor computing went even further than the size of a hand. Think of small contact lenses, small robots in veins, brain implants, spy devices and so on. And transistors are getting smaller and smaller; today 14nm is not a big deal. There is a dozen silicon foundries capable of doing FinFET at such scale.
Transistor computers grew really big, to the size of a stadium. The Earth is being covered by data centers, sized as multiple stadiums. It's the Titan computer on the photo, capable of crunching data at the rate of 10 petaFLOPS. The most powerful supercomputer today is the Chinese Sunway TaihuLight at 93 petaFLOPS.
But let me repeat the point: electrical transistor computing was designed at a size bigger than a man, smaller than a room, and then evolved into tiny robots and huge supercomputers.
Designed at the size bigger than a man, smaller than a room.
Everything is a fridge. The magic happens at the edge of that vertical structure, framed by the doorway, 1 meter above the floor. There is a silicon chip, designed by D-Wave, built by Cypress Semiconductor, cooled to near absolute zero (-273°C). Superconductivity emerges. Quantum physics starts its magic. All you need is to shape your problem into one that the quantum machine can run.
It's a somewhat complicated exercise, like the modulo function for the first fully automatic electrical machines on vacuum tubes years ago. But it is possible. You have to take your time, paper and pen/pencil, and bring your problem to an equivalent Ising model. Then it is easy: give input to the quantum machine, switch it on, switch it off, take the output. Do not watch while the machine is on, because you would kill the wave features of the particles.
Today, D-Wave solves some problems 10,000x faster than transistor machines. There is potential to make it 50,000x faster. Cool times ahead!
Why do we need such huge computing capabilities? Who cares? I personally care. Maybe others similar to me, and me similar to them. I want to know who we are, what the world is, and what it's all about.
Nature does not compute the way we do with transistor machines. As my R&D colleague said about a piece of metal: “You raise the temperature, and the solid brick of metal instantly goes liquid. Nature computes it at the atomic level, and does it very, very fast.” Today one of the Chinese supercomputers, Tianhe-1A, computed the behavior of 110 billion atoms during 500,000 evolutions… Is it much? It was only 0.1 nanosecond of corresponding real time, done in three hours of computing.
Let's do another comparison for the same number of atoms. It was about 10^11 atoms. If each evolution step were computed in 1 millisecond, the whole run would take only 500 seconds, less than 10 minutes. My body has tens of trillions of cells, or about 10^28 atoms. Hence, to simulate the entire me during those 10 minutes at the level of individual atoms, we would need 10^17 times more Tianhe-1A supercomputers… Obviously our current computing is the wrong way of computing. We need to invent further. But to invent further, we have to adopt a new way of computing: quantum computing.
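The scaling arithmetic, checked (using the order-of-magnitude figures above, which are themselves rough):

```python
# Back-of-the-envelope check of the scaling claim above.
atoms_simulated = 1e11        # the Tianhe-1A run
atoms_in_body = 1e28          # ~order of magnitude for a human body
scale = atoms_in_body / atoms_simulated
print(f"{scale:.0e}")         # 1e+17 more machines for the same wall-clock time
```

Even granting Moore's-law-style doubling every two years, 17 orders of magnitude is over a century away, which is the whole argument for a different substrate.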
Who needs such simulations? Here is a counter-question: what is Intelligence? Intelligence is our capability to predict the future (Michio Kaku). We could compute the future at the atomic level and know it for sure. The stronger the intelligence, the more detailed and precise our vision into the future. As we know the past, and know the future, the understanding of time changes. With really powerful computing, we would know what will happen in the future as accurately as we know what happened in the past. The distant future is more complicated to compute than the distant past. But it is possible, and this is what Intelligence does. It uses computing to know time. And to move in time. In both directions.
All computing technologies together, on one graph, show a pattern. Horizontally we have time, from past (left) to future (right). Vertically we have the scale of sizes, logarithmic, in meters. The red dot shows quantum computing. It has already been designed bigger than a man, smaller than a room. The upper limits are projected bigger than modern transistor supercomputers; the lower limit is unknown. It's OK that both transistor and quantum computing technologies coexist and complement each other for a while.
All right, take a look at those charts and imagine the quantum lines’ continuation – what do you see? It is Software eating the World. The dragon’s tail is on the left, the body is in the middle, and the huge mouth is on the right. And this Software Dragon is eating us at all scales. Some call it Digitization.
Software is eating the World, guys. And it’s OK. Right now we could do 10,000x faster computing on quantum machines. Soon we’ll be able to do 50,000x faster. Intelligence – our ability to see the future and the past – is evolving. Our pathway to a time machine.
Keeping 97 percent of a market is very lucrative. But it is also fragile, because nothing lasts forever. Intel keeps 97 (or even 98) percent of the server processor market. It was the result of a brilliant strategy some time ago, when Intel decided to adopt a power-saving strategy while AMD was pursuing clock speed. So we got plenty of Intels: Intel here, Intel there, Intel everywhere (like in the So What song). Saving power was a good strategy. AMD is off the server processor market.
During that time, outside of datacenters, smartphones and other handheld gadgets became mainstream consumer devices. They were running on other [simpler] chips, like ARM and ARM derivatives. Multiple companies produced them. Anybody could license the ARM design, add their own stuff, customize it, and make their own chip.
About 10 years ago, we got the smartphone, equal in computing power to the Apollo mission. Today our smartphones are computers that can phone, rather than phones that can compute. Mobile processors grew bigger and bigger, and co-processors emerged. As a result we got a pretty close equivalent of a personal computer in a small form factor.
Why not use fat mobile processors in the datacenter, instead of the more complicated and power-hungry Intel ones? That was logical, and somebody started to look there. Not for general use, but for high-performance computing, where the GPU is not suitable (because of its too-small cores and the inconvenience of memory copying). Also for storage, especially cold storage.
Calxeda made big waves some time ago, back in 2011. They designed really small servers built on their EnergyCore chips, which could be tightly packed into a 1U or 2U rack. After failing to sign a deal with HP, funding was cut, and Calxeda shut down. That sucks, because there was a need in the market (OK, there was at least logical evidence of one). We could have had a 480-core server, consisting of 120 quad-core ARM Cortex-A9 CPUs, if it all hadn’t flopped. Most probably their processor was not juicy enough, hence declined by HP.
Others tried at the same time too. AppliedMicro announced the X-Gene chip also back in 2011. The roadmap is long. Today we have X-Gene 2, and the powerful X-Gene 3, which could battle the Xeon E5, is scheduled for the second half of 2017. Slowly but reliably it has started: ARM is eating the datacenter with 64-bit ARMv8. Same performance at lower power consumption, and in a significantly smaller form factor [of the entire server].
The coolest part is the SoC. All those ARMs are actually a CPU plus infrastructure like memory channels, slots for disks, and networking. It allows reducing the size of the entire server board dramatically. A third of a 1U rack could contain 6 ARM SoCs, each with 50-70 cores, which equals 300-400 cores per sled. Or almost 1,000 cores in 1U. Each core with a good clock speed, at 2.5GHz or so. With 2-3x less power consumption than Xeons. HP is building experimental ARM servers, not on Calxeda chips but on AppliedMicro ones – check out Moonshot.
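The density arithmetic above can be sketched out. All the figures below are the text’s rough estimates (6 SoCs per sled, 50-70 cores each, 3 sleds per 1U), not vendor specifications:

```python
# Rack-density sketch using rough figures from the text, not vendor specs.
socs_per_sled = 6
cores_per_soc_low, cores_per_soc_high = 50, 70
sleds_per_1u = 3

# Cores per one-third-of-1U sled:
cores_per_sled = (socs_per_sled * cores_per_soc_low,
                  socs_per_sled * cores_per_soc_high)
print(f"cores per sled: {cores_per_sled[0]}-{cores_per_sled[1]}")   # 300-420

# Cores per full 1U of rack space:
cores_per_1u = (sleds_per_1u * cores_per_sled[0],
                sleds_per_1u * cores_per_sled[1])
print(f"cores per 1U: {cores_per_1u[0]}-{cores_per_1u[1]}")         # 900-1260
```

Even at the low end of the estimates, that is roughly 900 cores in a single 1U of rack space – which is where the “almost 1,000 processors in 1U” claim comes from.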
Cavium had produced chips for network and storage appliances, then suddenly released the juicy ThunderX chip, and jaws dropped. It was a 48-core 64-bit ARMv8 with a 2.5GHz clock speed. One of the biggest datacenters in the world – OVH – is running on ThunderX already. Recently Cavium completely redesigned it into ThunderX2: a 54-core SoC for high-density racks, not bad at all.
Intel builds Xeon Phi. They started from a co-processor and are moving to a host/bootable processor, named Knights Landing. Still to be released. It should have ~260 cores, each core like a small Pentium. So compatibility with the Wintel era must be retained. For good or for bad? Compatibility was always a burden, but it was always needed by the market. How do you continue to run all those apps? SAP or Oracle or Windows may not run well on ARM today.
Intel also produced the less power-hungry Xeon-D, especially for the needs of Facebook, Microsoft and Google. But it is really interesting what that Knights Landing aka KNL will turn out to be. There were some screenshots of the green screen and the motherboard available. Premium equipment maker Penguin Computing announced both ThunderX and Xeon Phi support in their highly dense sleds. Check out the ThunderX and Knights Landing sleds.
What should Intel do? They definitely have big plans, because spending ~$17B on Altera was well thought out. Though is the FPGA & IoT strategy well aligned with keeping datacenter hegemony? Good ruminations are assembled in the post by Cringely.
Without Qualcomm it is difficult to tell how it all will unfold. Some companies tried and flopped, like Calxeda – $130 million did not help. Some unusual players came in, like Cavium, and made noticeable waves. AppliedMicro decided to build its own processor. Amazon bought Annapurna (for ~$370 million) to build its own processor for the AWS cloud. It is still uncertain what Amazon has already made from that acquisition.
Qualcomm has made some non-technical announcements, but still has to deliver the product. From that point, ARM eating the datacenter could accelerate and go mainstream. So we are waiting for the aha moment. It must happen by mid-2017 or sooner. It is going to be at 10nm. And it is thrilling – what comes from Qualcomm?
I did not address POWER8 and POWER9 here, because nobody makes them except IBM themselves (though select semiconductor firms say on their sites that they make POWER processors). Google experimented with POWER; RackSpace experimented with POWER. But RackSpace delayed its Barreleye servers. Google also experimented with ARMs, and was not so excited. Perhaps because that test chip was quite big and had only 24 cores.
It all points towards ARM as the new platform for general-purpose, HPC (where the GPU is not applicable) and storage servers. And it all points to Qualcomm; they will be a cornerstone of the datacenter revolution.
htop needs a redesign, to properly display the 500 processors that the OS sees.
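To see how many logical processors the OS actually reports – the count htop would have to lay out – here is a quick check. The `/proc/cpuinfo` path is Linux-specific; on other systems only the first line applies:

```python
import os

# How many logical CPUs does the OS see?
print("os.cpu_count():", os.cpu_count())

# On Linux, /proc/cpuinfo has one "processor" block per logical CPU:
try:
    with open("/proc/cpuinfo") as f:
        n = sum(1 for line in f if line.startswith("processor"))
    print("logical CPUs in /proc/cpuinfo:", n)
except FileNotFoundError:
    pass  # not a Linux system
```

On a 1U box packed with ARM SoCs, both counts would land in the hundreds – far beyond the per-core meter layout htop was designed around.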
How can AI tools be combined with the latest Big Data concepts to increase people’s productivity and build more human-like interactions with end users? The Second Machine Age is coming. We’re now building thinking tools and machines to help us with mental tasks, in the same way that mechanical robots already help us with physical work. Older technologies are being combined with newly created smart ones to meet the demands of the emerging experience economy. We are now in between two computing ages: the older, transactional computing era and a new cognitive one.
In this new world, Big Data is a must-have resource for any cutting-edge enterprise project. And this Big Data serves as an excellent resource for building intelligence of all kinds: artificial smartness, intelligence as a service, emotional intelligence, invisible interfaces, and attempts at true general AI. However, with new projects you often have no data to begin with. So the challenge is: how do you acquire or produce data? During this session, Vasyl will discuss the process of creating new technology to solve business problems, and strategies for approaching the “No Data Challenge”, including:
This new era of computing is all about the end user or professional user, and these new AI tools will help to improve their lifestyle and solve their problems.