10,000x faster

We wanted to know it

Once mankind developed some good intelligence, we [people] immediately started to explore our world. We walked on foot as far as we could reach. We domesticated big animals – horses – and rode them to reach even further, horizontally and vertically. Eventually we reached the water. Horses could not carry us across the seas and oceans. We had to create a new technology that could carry people over the water – ships.

Fantasy map of a flat earth — Image by © Antar Dayal/Illustration Works/Corbis

Shipbuilding itself required quite a lot of calculation. And a ship alone is not sufficient to get there – some navigation is needed. We developed both measurement and calculation of wood and nails, measurement of time, navigation by the stars and the cardinal directions. That was a kind of computing. Not the earliest computing ever, but computing good enough to let us spread the knowledge and vision of our [flat] world.

Wooden computing

An early device for computing was the abacus. Though it is usually called a calculating tool or counting frame, we use the word computing, because this topic is about computing technology. The abacus as a computing technology was designed at a size bigger than a man and smaller than a room. Then wooden computing technology miniaturized to desktop size. This is important: it emerged at a size between 1 and 10 meters, and got smaller over time to fit onto a desktop. We could call it manual wooden computing too. Wooden computing technology is still in use today in African countries, China, and Russia.

01_abacus

Mechanical computing

Metal computing emerged after wooden. Charles Babbage designed his Analytical Engine from metal gears – to be more precise, from Leibniz wheels. That animal was bigger than a man and smaller than a room. Below is a juxtaposition of the inventor himself with his creation (on the left). Metal computing technology miniaturized over time and fit into a hand.

Curt Herzstark made a really small mechanical calculator and named it Curta (on the right). The Curta also lived long, well into the middle of the 20th century. Nowadays the Curta is a favorite collectible, priced at $1,000 minimum on eBay, while the majority of price tags are around $10,000 for a good working device, built in Liechtenstein.

02_babbage_curta

Electro-mechanical computing

The Babbage machine became a gym device when Konrad Zuse designed the first fully automatic electro-mechanical machine, the Z3. Its clock speed was 5-10 Hz. The Z3 was used to model the flutter effect for military aircraft in Nazi Germany. The first Z3 was destroyed during a bombardment. The Z3 was bigger than a man and smaller than a room (left photo). Then electro-mechanical computing miniaturized to desktop size, e.g. the Lagomarsino semi-automatic calculating machine (right photo).

03_Z3

Here something new happened – growth beyond the size of a room. The Harvard Mark I was a big electro-mechanical machine, placed in a big hall. The Mark I served the Manhattan Project. There was a problem: how to detonate the atomic bomb. The well-known von Neumann computed the explosive lens on it. The Mark I was funded by IBM, under Watson Sr.

03_mark_I

 

So electro-mechanical computing started at a size bigger than a man and smaller than a room, and then evolved in two directions: it miniaturized to desktop size and grew to small-stadium size.

Electrical Vacuum Tube computing

At some point mechanical parts were redesigned as electrical ones, and the first fully electrical machine was created – ENIAC. It used vacuum tubes. Its size was bigger than a man and smaller than a big room (left photo). Fully electrical computing technology on vacuum tubes then miniaturized to desktop size (right photo).

04_electrical_vacuum

The miniaturization was very interesting and beautiful. Even vacuum tubes could be small and nice. Furthermore, there were many women in the industry at the time of electrical vacuum tube computing. Below are the famous “ENIAC girls”, with evidence of the miniaturization of modules from left to right – smaller is better. Side question: why did women leave programming?

04_ENIAC_girls

ENIAC was very difficult to program. Here is a tutorial on how to code the modulo function. There were six programmers who could do it really well. ENIAC was intended for ballistic computing. But the same well-known von Neumann from the atomic bomb project got access to it and ordered the first ten programs for the hydrogen bomb.

04_SAGE

Fully automatic electrical machines grew big, very big – bigger than the Mark I, II, III etc. They were used for military purposes and space programs. IBM SAGE is on the photo; its size is like a mid-sized stadium.

Electrical Transistor computing

The first fully transistor machine was built, probably by IBM, though here is a photo of the European [second] machine, called CADET (left photo). There were no vacuum tubes in it anymore. Transistor technology is still alive, miniaturized very well to desktop and hand size (right photo).

05_transistor

Miniaturization of transistor computing went even further than the size of a hand. Think of small contact lenses, small robots in veins, brain implants, spy devices and so on. And transistors are getting smaller and smaller – today 14nm is not a big deal. There is a dozen silicon foundries capable of doing FinFET at such a scale.

05_titan

Transistor computers grew really big, to the size of a stadium. The Earth is being covered by data centers sized as multiple stadiums. It’s the Titan computer on the photo, capable of crunching data at a rate of about 17 petaFLOPS. The most powerful supercomputer today is the Chinese Sunway TaihuLight at about 93 petaFLOPS.

But let me restate the point: electrical transistor computing was designed at a size bigger than a man and smaller than a room, and then evolved into both tiny robots and huge supercomputers.

Quantum computing

Designed at a size bigger than a man and smaller than a room.

06_quantum_dwave

Everything is a fridge. The magic happens at the edge of that vertical structure, framed by the doorway, 1 meter above the floor. There is a silicon chip, designed by D-Wave and built by Cypress Semiconductor, cooled to near absolute zero (about -273°C). Superconductivity emerges. Quantum physics starts its magic. All you need is to shape your problem into one that the quantum machine can run.

It’s a somewhat complicated exercise, like the modulo function for the first fully automatic electrical machines on vacuum tubes years ago. But it is possible. You have to take your time, paper, and pen/pencil, and reduce your problem to an equivalent Ising model. Then it is easy: give the input to the quantum machine, switch it on, switch it off, take the output. Do not watch while the machine is on, because you will kill the wave features of the particles.
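
For intuition, here is a minimal sketch of that Ising objective in Go (the couplings and spins below are a toy example of mine, not a D-Wave API): spins s[i] take values -1 or +1, the couplings J and biases h encode your problem, and the annealer searches for the spin assignment with the lowest energy E(s) = -Σ J[i][j]·s[i]·s[j] - Σ h[i]·s[i].

    package main

    import "fmt"

    // isingEnergy computes E(s) = -Σ J[i][j]*s[i]*s[j] - Σ h[i]*s[i].
    // Encoding a problem for an annealer means choosing J and h so that
    // the lowest-energy spin assignment is the answer you want.
    func isingEnergy(s []int, J [][]float64, h []float64) float64 {
        e := 0.0
        for i := range s {
            for j := i + 1; j < len(s); j++ {
                e -= J[i][j] * float64(s[i]*s[j])
            }
            e -= h[i] * float64(s[i])
        }
        return e
    }

    func main() {
        // Toy 3-spin problem: positive (ferromagnetic) couplings favor aligned spins.
        J := [][]float64{{0, 1, 1}, {0, 0, 1}, {0, 0, 0}}
        h := []float64{0.5, 0, 0}
        fmt.Println(isingEnergy([]int{1, 1, 1}, J, h)) // -3.5, the ground state here
    }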

Today, D-Wave solves certain problems 10,000x faster than transistor machines. There is potential to make it 50,000x faster. Cool times ahead!

Motivation

Why do we need such huge computing capabilities? Who cares? I personally care. Maybe others are similar to me, and me to them. I want to know who we are, what the world is, and what it’s all about.

Nature does not compute the way we do with transistor machines. As my R&D colleague said about a piece of metal: “You raise the temperature, and a solid brick of metal instantly goes liquid. Nature computes it at the atomic level, and does it very, very fast.” Recently one of the Chinese supercomputers, Tianhe-1A, computed the behavior of 110 billion atoms over 500,000 evolution steps… Is that much? It corresponded to only 0.1 nanosecond of real time, done in three hours of computing.

Let’s do another comparison for the same number of atoms. It was about 10^11 atoms. If each evolution step were computed in 1 millisecond, the whole run would take only 500 seconds, less than 10 minutes. My body has about 10 trillion cells, or roughly 10^28 atoms. Hence, to simulate the entire me during those 10 minutes at the level of individual atoms, we would need about 10^17 times more Tianhe-1A supercomputers… Obviously our current computing is the wrong way of computing. We need to invent further. But to invent further, we have to adopt a new way of computing – quantum computing.
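
The scaling arithmetic, written out (assuming the computation grows linearly with the number of atoms):

    10^28 atoms (one body) / 10^11 atoms (the Tianhe-1A run) = 10^17

so about 10^17 Tianhe-1A-class machines would be needed to simulate one human at that same rate.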

Who needs such simulations? Here is a counter-question – what is Intelligence? Intelligence is our capability to predict the future (Michio Kaku). We could compute the future at the atomic level and know it for sure. The stronger the intelligence, the more detailed and precise our vision into the future. As we know the past and know the future, our understanding of time changes. With really powerful computing, we would know what will happen in the future as accurately as we know what happened in the past. The distant future is more complicated to compute, just like the distant past. But it is possible, and this is what Intelligence does. It uses computing to know time. And to move in time. In both directions.

Conclusion

All computing technologies together, on one graph, show a pattern. Horizontally we have time, from past (left) to future (right). Vertically we have the scale of sizes, logarithmic, in meters. The red dot shows quantum computing. It is already designed, bigger than a man and smaller than a room. The upper limit is projected to be bigger than modern transistor supercomputers. The lower one is unknown. It’s OK that transistor and quantum computing technologies will coexist and complement each other for a while.

07_dragon

All right, take a look at those charts, imagine the continuation of the quantum lines – what do you see? It is Software eating the World. The dragon’s tail is on the left, the body is in the middle, and the huge mouth is on the right. And this Software Dragon is eating us at all scales. Some call it Digitization.

07_software_eating_world

Software is eating the World, guys. And it’s OK. Right now we can do computing 10,000x faster on quantum machines. Soon we’ll be able to do it 50,000x faster. Intelligence is evolving – our ability to see the future and the past. Our pathway to a time machine.


End of Intel’s Reign

Saving Power

Keeping 97 percent of a market is very lucrative. But it is also fragile, because nothing lasts forever. Intel holds 97 (or even 98) percent of the server processor market. It was the result of a brilliant strategy some time ago, when Intel decided to pursue power saving while AMD was pursuing clock speed. So we got plenty of Intels – Intel here, Intel there, Intel everywhere (like in the song “So What”). Saving power was a good strategy. AMD is off the server processor market.

Growing Phones

During that time, outside of datacenters, smartphones and other handheld gadgets became mainstream consumer devices. They were running on other [simpler] chips, like ARM and ARM derivatives. Multiple companies produced them. Anybody could license an ARM design, add their own stuff, customize it all and make their own chip.

About 10 years ago we got the smartphone, equal in computing power to the Apollo mission. Today our smartphones are computers that can phone, rather than phones that can compute. Mobile processors grew bigger and bigger, and co-processors emerged. As a result we got pretty much the equivalent of a personal computer in a small form factor.

Intending to Eat the Datacenter

Why not use fat mobile processors in the datacenter instead of the more complicated and power-greedy Intel ones? That was logical, and somebody started to look there. Not for general use, but for high-performance computing where the GPU is not suitable (because of its too-small cores and memory-copying inconveniences). Also for storage, especially cold storage.

Calxeda made big waves some time ago, in 2011. They designed really small servers named EnergyCore, which could be tightly packed into a 1U or 2U rack. After failing to sign a deal with HP, funding was cut and Calxeda shut down. That sucks, because there was a need in the market (OK, there was at least logical evidence of one). We could have had a 480-core server consisting of 120 quad-core ARM Cortex-A9 CPUs if it all hadn’t flopped. Most probably their processor was not juicy enough, hence it was declined by HP.

Eating the Datacenter

Others tried too at the same time. AppliedMicro announced the X-Gene chip back in 2011. The roadmap is long. Today we have the X-Gene 2, and the powerful X-Gene 3, which could battle the Xeon E5, is scheduled for the second half of 2017. Slowly but reliably it has started: ARM has begun to eat the datacenter with 64-bit ARMv8. Same performance at lower power consumption, and in a significantly smaller form factor [for the entire server].

What is coolest is the SoC. All those ARMs are actually a CPU plus infrastructure like memory channels, slots for disks, and networking. It allows the size of the entire server board to be reduced dramatically. One third of a 1U rack could contain 6 ARM SoCs of 50-70 cores each, which equals 300-400 cores per sled, or almost 1,000 cores in 1U. Each core with a good clock speed, at 2.5GHz or so. With 2-3x less power consumption than Xeons. HP is building experimental ARM servers, not on Calxeda chips but on AppliedMicro – check out Moonshot.

Cavium produced chips for network and storage appliances, then suddenly released the juicy ThunderX chip, and jaws dropped. It was a 48-core 64-bit ARMv8 with a 2.5GHz clock speed. One of the biggest datacenter operators in the world – OVH – is running on ThunderX already. Recently Cavium redesigned it completely into the ThunderX2: a 54-core SoC for high-density racks, not bad at all.

What Does Intel Do?

Intel builds the Xeon Phi. They started with a co-processor and are moving to a host/bootable processor named Knights Landing. It is still to be released. It should have up to 72 cores (about 288 hardware threads), each core a small Pentium. So compatibility with the Wintel era will be retained. For good or for bad? Compatibility was always a burden, but it was always demanded by the market. How else to continue to run all those apps? SAP or Oracle or Windows may not run well on ARM today.

Intel produced the less power-greedy Xeon D, especially for the needs of Facebook, Microsoft and Google. But it is really interesting what Knights Landing aka KNL will be. There are some screenshots of the green screen and motherboard available. Premium equipment maker Penguin Computing announced both ThunderX and Xeon Phi support in their highly dense sleds. Check out the ThunderX and Knights Landing sleds.

What should Intel do? They definitely have big plans, because spending ~$17B on Altera was well thought out. Though is an FPGA & IoT strategy well aligned with keeping datacenter hegemony? Good ruminations are assembled in the post by Cringely.

Aha Moment

Without Qualcomm it is difficult to tell how it will all unfold. Some companies tried and flopped, like Calxeda – $130 million did not help. Some unusual players came in, like Cavium, and made noticeable waves. AppliedMicro decided to build its own processor. Amazon bought Annapurna (for ~$370 million) to build its own processor for the AWS cloud. There is still some uncertainty about what Amazon has already made from that acquisition.

Qualcomm has made some non-technical announcements, but still has to deliver the product. From that point, ARM eating the datacenter could accelerate and go mainstream. So we are waiting for the aha moment. It should happen by mid-2017 or sooner. It is going to be at 10nm. And it is thrilling – what will come from Qualcomm?

Who got the POWER?

I did not address POWER8 and POWER9 here, because nobody makes them except IBM themselves (though select semiconductor firms say on their sites that they make POWER processors). Google experimented with POWER; Rackspace experimented with POWER. But Rackspace delayed its Barreleye servers. And Google also experimented with ARMs and was not so excited – perhaps because that test chip was quite big and had only 24 cores.

It all points towards ARM for the new general-purpose, HPC (where the GPU is not applicable) and storage servers. And it all points to Qualcomm – they could be the cornerstone of the datacenter revolution.

htop

htop needs a redesign to properly display the 500 processors that the OS sees.


Building AI: Another Intelligence

https://skillsmatter.com/skillscasts/8326-building-ai-another-intelligence

How AI tools can be combined with the latest Big Data concepts to increase people's productivity and build more human-like interactions with end users. The Second Machine Age is coming. We’re now building thinking tools and machines to help us with mental tasks, in the same way that mechanical robots already help us with physical work. Older technologies are being combined with newly-created smart ones to meet the demands of the emerging experience economy. We are now in-between two computing ages: the older, transactional computing era and a new cognitive one.

In this new world, Big Data is a must-have resource for any cutting-edge enterprise project. And this Big Data serves as an excellent resource for building intelligence of all kinds: artificial smartness, intelligence as a service, emotional intelligence, invisible interfaces, and attempts at true general AI. However, often with new projects you have no data to begin with. So the challenge is: how do you acquire or produce data? During this session, Vasyl will discuss the process of creating new technology to solve business problems, and strategies for approaching the “No Data Challenge”, including:

  • Using software and hardware agents capable of recording new types of data;
  • The Five Sources of Big Data;
  • The Six Graphs of Big Data as strategies for modern solutions; and
  • The Eight Exponential Technologies.

This new era of computing is all about the end user or professional user, and these new AI tools will help to improve their lifestyle and solve their problems.


Last Man on Earth

Pterodactyl

…Then the Pterodactyl burst upon the world in all his impressive solemnity and grandeur, and all Nature recognized that the Cainozoic threshold was crossed and a new Period open for business, a new stage begun in the preparation of the globe for man. It may be that the Pterodactyl thought the thirty million years had been intended as a preparation for himself, for there was nothing too foolish for a Pterodactyl to imagine, but he was in error, the preparation was for Man…  — Mark Twain

Lance Armstrong

The Man. The man who won the Tour de France seven times. Having reached the human limit of physical capabilities, he [and others] extended them. He did blood doping (taking EPO and other drugs, storing his own blood in the fridge, and infusing it before competition to boost the number of red blood cells, and thus performance). He [and others] took anti-asthmatic drugs to increase endurance performance. And so on and so on. There are Yes or No answers from Lance himself in Oprah’s interview.

Is Lance a cheater? Or is Lance a hero? I consider him a hero for two reasons. First, he competed against others doing the same or similar. Second, he went beyond the human limits: cutting-edge thinking, cutting-edge behavior, scientific sacrifice, calculated or even bold risk.

What could be said about all other sportsmen? I think sports pharmacology is an evolutionarily logical stage for humankind to outperform our ancestors, to break the records, to win, and to continue winning. If sportsmen specialize in competing, and society wants them competing, then everything is all set. Evolution goes on; the biological meets the artificial chemical. It improves the function, it solves the problem. Though it slightly distances our biological selves from what we thought we were.

Prosthetics

It happens that people lose body parts. The right way to go is to give them the missing parts. It’s still very complicated – the technologies involved are still not quite there – but good progress has been made. There are new materials, new mechanics, new production (digital manufacturing, 3D printing), new bio-signal processing (complex myogram readings), newly designed software (with AI), and all together it gives a tangible result. Take a look at this robot, integrated with the man:

Some ethical questions emerge. Is a man with a prosthetic body part still a biological being? What is the threshold between biological parts and synthetic parts for someone to be considered a human being? There are people without arms and legs, because of injuries or genetic diseases, like the Torso Man. We could and should re-create the missing parts and continue living as before, using our new parts. Bionic parts must evolve until they feel and perform identically to the original biological parts.

This relates to invisible organs too. The heart, which happens to be a pump, not a soul keeper. People live with artificial hearts. Look at the man walking out of the hospital without a human heart. The kidneys, which are served by external hemodialysis machines. New research is being performed to embed kidney robots into the body. The ethical questions continue: where is the boundary of what we call a ‘human’? Is it the head? Or the brain only? What makes us human to other humans?

Genetics

We are defined by our genes. Our biological capabilities are in our genes. Then we learn and train to build on top of that given foundation. We differ by genes, hence something that is easy for one could be difficult for another. E.g. from childhood, sportsmen usually have better metabolism compared to those who grow into ‘office plankton’.

There are diseases caused by harmful gene mutations. Actually, any mutation is risky, because of unpredictable results in the first generation carrying the new mutant [gene]. But some mutations stay bad from generation to generation – these are called genetic diseases. It is possible to trace many diseases down to the genes. There are genome browsers that allow looking into the genome down to the DNA chain. Take a look at the CFTR gene: the first snapshot is a high-level view, with chromosome number and position; the second is zoomed to the base level, with the ACGT chain visible.

CFTR1

CFTR2

If parents with a genetic disease want to protect their child from that disease, they may want to fix the known gene. Everything else [genetically] will remain naturally biological; only that one mutant will be fixed to the normal variant. The kid will not have the disease of the ancestors, which is good. A question emerges: is this kid fully biological? How does that genetic engineering impact established social norms?

What if parents are fans of Lance Armstrong and decide to edit more genes, to make their future kid a good sportsman?

What is Life?

Digging down to the DNA level, it is very interesting to figure out what is possible there to improve ourselves, and what life is at all. How do we recognize life? How would we recognize life on Mars, if it’s present there?

Here is the definition from Wikipedia: “The definition of life is controversial. The current definition is that organisms maintain homeostasis, are composed of cells, undergo metabolism, can grow, adapt to their environment, respond to stimuli, and reproduce.” The very first sentence resonates with the questions we are asking…

Craig Venter led a team of scientists who extracted the genetic material from a cell (Mycoplasma genitalium), instrumented its genome by inserting the names of 20 scientists and a link to a web site, implanted the edited material back into a cell, and observed the cell reproducing many times. Their result – Mycoplasma laboratorium – reproduced billions of times, passing the encoded info through generations. The cell had ~470 genes.

What is the absolute minimum number of genes, and which genes exactly, to create life? Is it 150? Or fewer? What are their specializations/functions? It’s a current ongoing experiment… Good luck, guys! Looking forward to your research success and to learning what Minimum Viable Life (MVL) is. BTW, by doing this experiment, scientists designed new technologies and tools allowing them to model genes programmatically and then synthesize them at the molecular level.

Here Come the Robots

While some are digging into the genome, others are trying to replicate humans (and other creatures) at the macro level. Most successful with humanoid machines is Boston Dynamics.

rxqdzyg4m7cbmyhxl1rj

How far are we from making them indistinguishable from humans? Seemingly pretty far. The weight, the center of gravity, motion, gestures, look & feel are still not there. I bet that humanoids will first be created for the military and for porn. The military will need robots to operate plenty of outdated military equipment and to serve and fight in hazardous environments – it’s only old weaponry that requires manned control, while new weapons are designed to operate unmanned. Porn will evolve to the level that we will fuck the robots. For the military it’s more an economic need. For our leisure it’s a romantic need and personal experience.

The sizes and shapes of robots doing mechanical work vary enormously – from tunnel-drilling monsters to robots for blood vessels…

All 8 Together

If we look for commonality in the mentioned (and several unmentioned) disrupting technologies, we could select 8 of them (an extended and reworked version of Singularity University's 8 directions) which stand out:

  • Biology and Biotech
  • Medicine and Longevity
  • Robotics
  • Network and Sensors
  • Fabrication and 3D Printing
  • Nanotech and Materials
  • Computing
  • Artificial Intelligence

As we have already touched on Biology, Medicine and Robotics, more should be said about the rest. But before that, a few words about Biotech. We could program new behavior into biomass by engineering what the cells must produce, and use those biorobots to clean the landfills around the cities, the sewerage, rivers, seas, maybe the air. Biorobots could also clean our organisms, inside and outside. Specially engineered micro-biorobots could eat the Martian stones and produce an atmosphere there. Not so fast, but feasible.

Now more words about the other disrupting technologies. Networks and Sensors next. First of all, it’s about networks between human & human, machine & machine, human & machine. The network effect happens within the network, as described by Metcalfe’s Law. Networks are wired and wireless, synchronous and asynchronous, local and geographically distributed, static and dynamic mesh, etc. Very promising are mesh networks, which allow avoiding Thing-Cloud aka Client-Server architectures, despite all the cloud providers pushing for those. Architecturally (and by common sense) it’s better to establish the mesh locally, with redundancy and specialization of nodes, and relay data between the mesh and the cloud via some edge device, which could be dynamically selected.
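
For reference, Metcalfe’s Law says the value of a network grows roughly with the square of the number of connected nodes, because that is how the number of possible pairwise links grows:

    value ∝ n × (n − 1) / 2 ≈ n^2 / 2, for n nodes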

Sensors will be everywhere: within interiors, on the body, as street infrastructure, in the ambient environment, in food, etc. Our life improves when we sense/measure and proactively prepare. We are used to weather forecasts, which are very precise for a day or two. That’s because of a huge number of land sensors, air sensors, and satellite imagery. Body sensors are gaining popularity as wearables for the quantified self. There are lifestyle recommendations based on your body readings. It’s early and primitive today, but it will dramatically improve with more data recorded and analyzed. Modern transportation requires more sensors within and along the roads and streets, and in cars. It’s evolving. Miniaturization shapes them all. Those sensors must be invisible to the eye and fully integrated into the clothes and machines and environment.

3D Printing. The biggest change is related to the ownership of intellectual property. The 3D model will be the thing, while its replication at any location, on demand, on any printer will be a commodity function. Many things have become digital: books, photos, movies, games. Many things are becoming digital: hard goods, food, organs, the genome. It’s a matter of time before we have cheap technology capable of synthesizing at the atomic-grid and molecular level. New materials are needed everywhere, especially for human augmentation, for energy storage, and for computing.

Nanotech. We are learning to engineer at the scale of 10^-9 meters: from non-stick cookware and self-restoring paint (for cars), to sunscreen and nanorobots for cleaning our veins, to new computing chips. Nano & Bio are closely related, as purification and cleanup processes for industry and the environment are being redesigned at the nano level. Nano & 3D Printing are related too, as the ultimate result will be an affordable nanofactory for everyone.

Computing. We’re approaching disruption here. Moore’s Law is still holding, but it’s slowing down and the end is visible. Some breakthrough is required. The hegemony of Intel is being challenged by IBM with POWER8 (and the obviously almost-ready POWER9) and by ARM (v8 chips). Google is experimenting with POWER and ARM; Qualcomm is pushing ARM-based servers. D-Wave is pioneering Quantum Computing (actually it’s superconductivity computing). There is a good intro in my Quantum Hello World post. IBM recently opened access to its own quantum analog. The bottom line is that we need more computing capacity, it must be elastic, and we want it cheaper.

Artificial Intelligence. AI deserves a separate chapter. Here it is.

Artificial Intelligence

I blended my thoughts with my impressions from The Second Machine Age, How to Think About Machines that Think, the forthcoming The Inevitable, and various other sources that had an impact on me.

AI

The purpose of AI was machines making decisions (as maximization of a reward function). But being better at making decisions != making better decisions. A machine decides how to translate English to Ukrainian without speaking either language. Those programs (and machines) are super screwdrivers; they don’t want to do anything – we want them to do it, we put our want into them.

AI is a different intelligence; a human cannot recognize 1 billion humans, even having really seen them all many times. AI is Another Intelligence so far. The shape of thinking machines is not human at all: Deep Blue – the chess winner – is a tall black box; Watson – the Jeopardy! winner – 2 units of 5 racks of 10 POWER7 servers between noisy refrigerators in nice alien blue light (watch from 2:20); Facebook Faces – programs and machines recognizing billions of human faces – is probably big racks in a data center; Google Images – describing the context of an image – a big part of a data center (detection of a cat took 16,000 servers several years ago); space probes – totally different from both humans and tall black boxes in data centers.

BTW, if somebody really spots a UFO visiting our planet, don’t expect green men, as organics are poor for space travel, because of the dangerous +200/-200 Celsius temperature range, ultraviolet and radiation, and the time needed for travel (even through a wormhole)… That UFO is most probably a robot. Or an intelligence on a non-biological carrier, which means a post-biological species (which is worse for us if so).

Our wet brain operates at roughly 20 watts, while a simulation of the same number of cells would require about 10^12 watts. Where on Earth will we get a trillion watts just for the equivalent of one human intelligence? And not even intelligence, just the connectivity of the neurons. Isn’t that a ridiculous pseudo-architecture? We still have not isolated what we call consciousness, and we don’t know its structure well enough to properly model it. Brain scanning is in progress, especially for the deeper brain. And that Eureka moment, like the one we got with DNA, is still to come.

We remain at the center, creating and using machines for mental work, just as we created and use machines for physical work. Humans with new mental tools should perform better than without them. Google is a typical memory machine, a memory prosthesis. Watson as a lawyer or a doctor is a reality.

Back from the future: at present we have intelligent machines – governments and corporations. We created those artificial bodies many years ago and just don’t realize they are true intelligent machines. They are integrated into/with society, with law evolved through precedents and legislation, tailored to different locations and cultures. Culture itself is a natural artificial intelligence. A global biological artificial intelligence emerged from politicians, lawyers, and organizations like the United Nations and hundreds of smaller international ones. They are all candidates for substitution by programs and machines.

An interesting observation is that the most intelligent humans are neither harmful nor rulers of others. Hence we could assume that a really smart AI will not be harmful to humans, while AI is approximately at our level. But it’s uncertain for accelerated and grown AI later in time. Evolution will shape AI too, continuing from the invisible interfaces with machines emerging right now. We could stop clicking, typing, and tapping into machines, and talk to them like we do among ourselves. Today we have three streams of AI: <3-year-old AI, Artificial Smartness, and Intelligence as a Service.

We are what we eat – hence they will have to eat us? Hm… A real AI will not reveal itself. And most probably they will leave, like we left our cradle, Africa…

Exponential Today

There have been some concerns that we have slowed down, based on observations and perception of daily facts. But it’s also visible that several technologies are booming and disrupting our lives almost on a weekly basis – those 8 technologies mentioned earlier in the section All 8 Together. Those technologies are developing exponentially.

Companies are specializing highly within their niches, performing at global scale. The global economy is changing. A few best providers of a narrow function do it worldwide. E.g. Google serves search globally, with two others far behind (Baidu and Bing, with the artificial restriction of Google in China). Illumina chips are used for gene sequencing (90 percent of DNA data produced). Intel chips are the primary host processors in servers. Nvidia makes the primary coprocessors, and so on. A few companies fulfill 95+ percent of the needs within a niche. Where this has not happened yet, big disruption is expected soon.

before_now

This is pure specialization of work at global scale – a shift from a normal distribution to a power distribution. Some may say it’s a path to global monopolism, with artificially held high prices. But in fact it is not, as Google search is free, Illumina is promising the full human genome sequenced for under $1,000, and Intel still ships new chips according to Moore’s Law: 2x productivity per $1 every 1.5 years.

As global specialization reduces global costs – because the same functions and products are produced more efficiently on the same resources – it is good for our planet, with its limited resources. But here another thing happens: we are not preserving resources, we are using them to create new technologies, which are expensive, unique, disrupting. The provider of such a new technology (and product, service) is not a monopolist, because of small scale/capacity at the beginning. Either they scale or others replicate it, and a true leader emerges and makes it global. Also, new sources of energy are found, from the Sun and wind, and new nuclear too. We’re creating more wealth.

now

Digitization

Scaling globally is dramatically easier and cheaper for digital products and services than for physical/hard or hybrid ones. That is the main motivator for the digitization of everything. Software is eating the world because it is simply cheaper to deliver software than hardware. Everything will become software, except the hardware to run the software and the power plants to power the hardware.

Real life is becoming digital very fast. Why are we taking photos of our meals and rooms, our faces and legs, beautiful and creepy landscapes, compositions? Why do we check in, express statuses and emotions for others’ statuses, comment, troll, and even fight digitally? We also vote, declare, report, learn, cure, buy, consume, and entertain ourselves digitally. Sometimes we live digitally more than physically. Notice how people record an event looking at the small screen of their smartphone instead of looking at the big stage and experiencing it better. Some motivation drives us to record it on multiple phones, from multiple locations, aspects, angles, distances, to push it onto the internet, and to share it with others – then to see it all from those recordings, our own and theirs. Why is this happening? Why are we shifting to the digital over the natural? Or is digital the new natural, as evolution goes on?

Kit Harington was stopped by a cop for speeding. The cop gave him an ultimatum: either the driver pays the fine, or he tells whether Jon Snow is alive in the next season. The driver avoided the speeding ticket by telling the virtual/digital story to the cop. For the cop, the digital virtual was more important than the physical biological. Isn’t that a natural shift to a new, better reality?

Many people live in virtual worlds today. Take an American and ask about ISIS. Take a Syrian and ask about ISIS. Take a Ukrainian and ask about Crimea and Donbass. Take a Russian and ask about Crimea and Donbass. Same for Israel and Palestine. People will tell you completely opposite things. People are already living in virtual worlds created by digital television and the internet. The digitization of life is already here, and we are already there.

One

Specialization is observed at all levels. Molecules specialized into water, gases, salts, acids. Bigger molecules specialized into proteins and DNA. Then we have cells: the stem cell and its specialization into connective tissue, soft tissue, bone and so on. Next are organs, then body parts. Specialization is present at each abstraction level. At the level of people, specialization is known as roles and professions. Between businesses and countries it is industries. Between nations it is economics and politics.

It looks like we are part of a bigger machine, which is evolving with acceleration. We are like cells, good and bad, specialized from vision to thinking. Roads and pipes are like transportation systems for other cells and payload. The internet (copper and fiber) is more like a nervous system. Connectivity is a true phenomenon. We are now fully disconnected (and useless) without a smartphone, or without a digital social network in any form. Kevin Kelly once called it the One. The Earth of many people will evolve into an earth of augmented people and machines, all specializing and uniting into the One.

one

And with the One, it all looks like just a beginning. I sense another One, and more cell-Ones, organizing something more complex and intelligent out of themselves. If our cells could specialize and unite into 10 trillions and walk, think, and write, why couldn’t it be possible with bigger cells like the One, at a bigger scale like the Galaxy?

Man is not the last smart species on Earth. In other words, there will be a day when the Last [current] Man on Earth goes extinct. Which will happen faster: the transhuman, or a true AI that can replicate and grow? I bet on the transhuman. Better for humanity too. For now.


Two days

Magny-Cours 1992

The young Schumacher crashed badly into Senna. Senna was already a living legend by that time.

Monza 2000

The day Schumacher reached 41 victories, matching Senna, who had been killed at Imola in 1994. Schumacher then significantly outperformed everybody, winning 5 more titles in a row, and became a legend himself.


The World is Flat

The world was already flat

Fantasy map of a flat earth

The world is flat again

Explanation with cats


Quantum Hello World

What?

So What?

10,000 times more productive than Xeons at the same power consumption level. With potential for 50,000x.

Explanation with Cats

Schrödinger’s cat lives. Quantum is not micro anymore, it is macro, it is everything.


We will fuck robots

We already love machines

If you have a car and you don’t use any name to talk to your car, there are two possible reasons: either your car is just another soulless car, or you are special. Usually every car owner loves her/his car and uses some name for it. If your car is underpowered, you would ask her: “come on, come on, car_name” when climbing an ascending road. So you are already talking to the machines, at least to one machine – your car.
The same is true for a boat, a small air boat or a bigger luxurious one. Obviously it is true for bikes. Maybe now it is emerging for flying drones. It is interesting what employees who still work at warehouses call the machines there… Probably they don’t love them. But the fact is that we love our lifestyle machines: cars, bikes, boats, drones.

Exponential technologies

There is technological acceleration nowadays. It is most visible in 8 technologies (check Abundance and Singularity University for deeper details):
* biotech and bioinformatics
* computational systems
* networks and sensors
* artificial intelligence
* robotics
* digital manufacturing
* medicine
* nanotech and nanomaterials

It is very interesting to dig into each of them, or to combine them. But that is not the purpose of this post. There are two other domains, the Big Two, as an umbrella over those eight: military and porn. Much research is applied in the military first, then goes to other industries and lifestyle. Many technological problems are solved in porn, and many opportunities are created there.

Connecting the dots

Materials are needed to reach a realistic experience, to transcend from dolls to human peers. A new Turing test: you smell and touch the skin and hair, and you cannot distinguish between natural/organic and artificial. Maybe we could program existing cells to grow and behave slightly differently, or we will invent new synthetic materials that are indistinguishable from organic ones. Or hybrids, why not? Probably new materials will self-assemble or be manufactured at the atomic level by new 3D printers, connecting the right atoms into the right atomic grids. A lot depends on the connection order: diamonds and ashes are made from the same carbon atoms. The bottom line is that biotech, nanomaterials and 3D printing together empower the creation of cyberskin with a realistic experience.

Computational systems, robotics and artificial intelligence are what’s behind the scenes [read: behind the skin]. Having joints is not enough. All those artificial bones and ligaments must be orchestrated perfectly. It is all about real-time computing – energy-efficient real-time computing, to avoid any wires sticking out, or plenty of heat or noise being released. Everything should be smart enough to be packed into a known volume, to have a known weight, center of mass, and temperature, and to bend and move realistically. Emotional intelligence is important, to have an adequate mental reaction between human and machine.

Rudimentary evidence

Sex dolls, dildos and travel pussies have existed for years. Porn stars have official copies of their realistic genitals and dolls sold. Porn stars might be OK with somebody fucking their rubber shadows – which cannot be said of other celebrities, because of privacy, ethics, morals. There is some real knowledge of beauty: female face research, published in Brain Research, as hypothalamus reaction. Something similar should be available for male faces. And not only faces: the body, the voice, the smell, manners, etc. All of that could be grasped by measurement and machine learning. In the end we need some classification, like those beautiful faces, to know what exists, and then figure out how to use it.

face_research_WP.jpg


Breakthrough

There will simply be better and better sex dolls, indistinguishable from people. The Turing sex test will be passed between the legs. Caleb could have tried it with Ava, and he wanted to – he truly fell in love with Ava [Ex Machina]. It’s just a question of time. What is interesting is how we are going to control/prevent the emergence of copies of ourselves. OK, maybe there is no big demand for copies of you, but there will be big demand for copies of celebrities. And celebrities may not be happy that somebody fucks their realistic clones. That will go underground and grow outside the law. Probably there will be countries or territories worldwide capitalizing on these sex havens, like some did as tax havens. We will have sexual intercourse with robots like Deckard had with Rachael [Blade Runner, Los Angeles 2019], full of emotions and feelings on both sides. And enough people will do sex tourism for the forbidden fruit – to fuck their favorite peers and celebrities. Maybe there will be on-demand manufacturing of a sex clone of anybody via pictures & videos from their social traces… This is how the ultimate experience in porn will evolve. People will fuck their lovely robots.


On Go

Some time ago I noticed somebody was building a #golang complaints list. On Monday I noticed this tweet by @golangweekly. Now I've decided to address the top five complaints, taken from How to complain about Go. So here are those top five:

  1. error handling / no exceptions
  2. no generics
  3. stuck in 70’s
  4. no OOP
  5. too opinionated

Below they are addressed in the numbered sections, one complaint per section.

1. error handling / no exceptions

The majority of programmers are lazy, especially programmers of business logic. Commodity programmers constitute 90-95% of the entire programmer base; the remaining 5-10% are master programmers. Usually commodity programmers write simple business logic, translating requirements into a machine program – nothing mission-critical, usually not challenging – and they get lazy. Some commodity programmers become very lazy. It is normal for them to code only the positive path and totally ignore the negative one. That is not good in any programming language. It's just that some languages tolerate the laziness and allow avoiding explicit coding of the errors.

Go does not tolerate laziness. If you wrote an “if”, please write the “else”. Within the “else”, write retry logic, or switch to default parameters, or try to recover, or stop explicitly. If something can fail, then check for success or failure before moving forward. In Go you could still skip the negative path, but you are forced to acknowledge it explicitly.

Errors are part of the code. Because of the binary nature of information, errors are part of the flow. So it’s better to program both sides: good & bad, light & dark. It’s absolutely good that Go nurtures coding errors as organic pieces of your entire program. Isn’t “if err != nil” annoying? Yes, it is. It is part of the technology; you just do it to use the technology, you adapt to the technology. Skipping errors or obscuring errors is a bigger evil than some idiomatic annoyance.
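
A minimal sketch of that style (the loadConfig function and its failure are made up for illustration): the error is a value, and the negative path – retry, then fall back to defaults – is coded right next to the positive one.

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // loadConfig is a hypothetical loader that can fail.
    func loadConfig(path string) (string, error) {
        return "", errors.New("config service unavailable")
    }

    // loadConfigWithRetry codes both paths explicitly: retry on failure,
    // then fall back to defaults instead of silently ignoring the error.
    func loadConfigWithRetry(path string, attempts int) string {
        for i := 0; i < attempts; i++ {
            cfg, err := loadConfig(path)
            if err != nil {
                // Negative path: log, wait, retry.
                fmt.Println("load failed:", err)
                time.Sleep(100 * time.Millisecond)
                continue
            }
            // Positive path.
            return cfg
        }
        // Explicit fallback to default parameters.
        return "defaults"
    }

    func main() {
        fmt.Println(loadConfigWithRetry("/etc/app.conf", 3))
    }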

Exceptions failed [in other programming languages]. Exceptions were invented to handle really exceptional situations. The majority of C++ coders never read “The Design and Evolution of C++” to understand why language features were introduced to solve real problems in the C++ language. So exceptions failed in C++ during its best times. Java and .NET coders drifted further from exceptions as really exceptional situations, and exceptions became error codes and events. Having exceptions as events is the worst case, because they were not designed for that purpose. The laziness of commodity programmers reached its pinnacle with catch(…){} and catch (Exception e){} and except: and so on. That sucks.

As Go is not a programming language for all purposes – it was designed to rewrite old C++ programs and to do new system programming – there is no need to listen to those complaints from commodity programmers. It is important not to nurture laziness. It is good to enforce checking for errors and coding of both the “if” and the “else”. There is no other way to go: the machine will not create the missing logic for the negative path for you. That code must be written by the programmer.

2. no generics

How did generics appear in other programming languages? Let’s take C++ again. Before generics, coders used preprocessor directives to tweak and glue the words and generate program code, so that the same algorithm/flow could work with different types. That gluing was done only for families of types, e.g. gluing for all numeric types (small integers, signed integers, unsigned integers, long integers, …, same for floats, etc.). There is common sense in that, because the numeric types all together are one cluster vs. other clusters of types. But how did it unfold? You could parameterize everything by everything, even declaring a type on the left and passing it as a parameter on the right: class MainWnd : public CWindowImpl<MainWnd>

Typelists could be considered the pinnacle of generic design and generic programming. Here is the source code, typelist.h, allowing deep nesting of templates. AFAIR the initial implementation allowed 50 levels of nesting. Then the wisdom emerged: the typelists’ author, Andrei Alexandrescu, slams generic programming nowadays.

So it’s OK that the generic paradigm was fashionable and promising in the past. It’s also OK that it did not solve the problems. It’s better to apply commonality and variability analysis during your abstraction exercises and to use interfaces for coding the commonality. Go goes that way, which is good. Go lacks clustering by type families; that's not so bad – just think again about which type you really need and use only that type. It is not a problem of Go, it is your problem [laziness] during program design.
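
A minimal sketch of that interface-based alternative (the Shape example is mine, not from the complaints list): the commonality is coded once against an interface, and each concrete type supplies its own variability – no generics required.

    package main

    import (
        "fmt"
        "math"
    )

    // Shape captures the commonality found during analysis.
    type Shape interface {
        Area() float64
    }

    type Circle struct{ R float64 }
    type Rect struct{ W, H float64 }

    // Each concrete type carries its own variability.
    func (c Circle) Area() float64 { return math.Pi * c.R * c.R }
    func (r Rect) Area() float64   { return r.W * r.H }

    // totalArea works for any Shape, without generics.
    func totalArea(shapes []Shape) float64 {
        sum := 0.0
        for _, s := range shapes {
            sum += s.Area()
        }
        return sum
    }

    func main() {
        fmt.Println(totalArea([]Shape{Circle{R: 1}, Rect{W: 2, H: 3}}))
    }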

3. stuck in 70s

Java was released in 1995, far after the 70s. And we have the idiomatic (or idiotic?) public static void main. Is that an improvement since the 70s? Furthermore, each program is a class. OMG, my single script is a class? My simple dumb print is a class? Was that the escape from the 70s? I prefer to stay with good old ANSI C simplicity and clarity, rather than consuming such an escape from “being stuck”.

For Java lovers… Java is ideal for code generation. If Java is written by machines, it is perfect. That is what usually took place: enterprise suites with visual BPM do generate Java code. Business analysts drag’n’drop, modeling the real-life process with all the tiny details of type mapping, events, conditions, data transformations, transactions, etc. And the suite produces the Java code and runs it on middleware (usually an ESB). For machines, all that formality and strictness with public static void main is OK. But not for the human.

Why did we get so many Java programmers then? Why didn’t machines produce all the Java code? Two reasons. First: not many solutions were designed in BPM, or at least by making some domain-specific language (DSL) and designing the solution with it. So coders took the wrong tools and produced imperfect solutions. Second: not everything is possible in WYSIWYG tools. Some manual polishing is needed, and humans filled the gap left by machines. As machines generated Java code, humans had to complement it with the same Java, to fill in the blanks. Now some of that Java legacy has spread to Android…

OK, back to Go. Go was designed mainly to overcome C++ problems, for modern system programming. What is modern system programming? It’s Docker, for example, not to mention internal Google stuff. C++ was based on C, and C is rooted in the 70s. So everything is OK with the similarity. I do not agree that we are stuck, and I do not agree that the 70s were bad at all. We had the Concorde, introduced in 1976. We were accelerating, while today there is a feeling of deceleration. Go dropped modern overengineered pseudo-features – not to go into the past, but to go into the future.

4. no OOP

This is the most pleasant complaint. Commodity programmers usually don’t know OOD and OOP. Hundreds of times I’ve asked the same question in interviews, and hundreds of times they failed. The question was simple, and in two parts. First: which programming design paradigms do you know? Usually OOP was mentioned. Second: what is OOP, what are its main principles? Almost everybody answered “encapsulation, inheritance and polymorphism”. After that, the interview finished shortly. Few answered “abstraction” and only then those other three.

So, abstraction. Abstraction is the foundation of any programming paradigm, especially of OOD/OOP: how you abstract the real world into a machine world that can be emulated and run by the machine. Master programmers know that sometimes it’s better to abstract into a function, or families of functions; sometimes into data; sometimes into compilation modules, files, namespaces, structures. It is context-dependent.

What did we get with OOP in the programming languages loved by the complainers? Abstraction into classes prevails, though other abstraction mechanisms are supported. And with abstraction into classes we have one God aka Object. Does that mean the entire abstraction of the real-life case that we want to emulate and run on the machine is purely hierarchical? That’s not a relevant mapping, because the real world is not hierarchical.

There are hierarchies at some abstraction levels. But if you look one level up, or one level down, your hierarchy will be gone. You will see a mesh, a grid, a web. Then step one level again, and you will spot a hierarchy again. Go further, and you will get the mesh. Think of the orders of magnitude in both directions, from this scale to bigger, and from this scale to smaller. It’s all about abstraction levels. In a software solution it’s normal that multiple abstraction levels are present, corresponding to the reality, but it’s not normal that only one method is used to deal with them all. OOD is OK only for contexts where you deal with a true, realistic hierarchy. Outside of that, OOD is not the best choice.

Regarding abstraction, the Go language is absolutely normal. You are not imprisoned in the Object pseudo-hierarchy. You can abstract into multiple relevant language tools, depending on the context. I don’t even want to move on to the secondary goodies (encapsulation & Co. – they are OK there too). What is important is that the primary one – abstraction – has been fixed.
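
A minimal sketch of what that freedom looks like (the names are hypothetical): the same small program abstracts into a plain function, a data structure, and an interface, composed together instead of forced into one Object hierarchy.

    package main

    import "fmt"

    // Abstraction into data: a plain struct, no base class.
    type Reading struct {
        Sensor string
        Value  float64
    }

    // Abstraction into behavior: a small interface.
    type Sink interface {
        Publish(r Reading)
    }

    // Abstraction into a function: a stateless transformation.
    func celsiusToFahrenheit(c float64) float64 { return c*9/5 + 32 }

    // ConsoleSink composes behavior without any inheritance.
    type ConsoleSink struct{}

    func (ConsoleSink) Publish(r Reading) {
        fmt.Printf("%s: %.1f\n", r.Sensor, r.Value)
    }

    func main() {
        var out Sink = ConsoleSink{}
        out.Publish(Reading{Sensor: "boiler-F", Value: celsiusToFahrenheit(21.5)})
    }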

For technology creation, Go is good. Just don’t code business logic in Go. For business logic, please design (or select) a DSL/DSEL and build the logic at a higher abstraction level – 3.5/4GL, or visually, or hybrid. A good book on how to think wider is “Multi-Paradigm Design for C++”; it is not tightly bound to C++ and is very useful for other programming tools.

5. too opinionated

So what? Is that bad? Opinion is the skeleton of a design, the philosophy of a lifestyle. If somebody (Rob Pike?) invented the tool/technology to solve C++ problems, then it is cool. If you don’t like some idioms, like the ok idiom for maps, then think about every technology’s PROs and CONs. The first spaceship could fly into space, but there was no restroom in it – the technology could not provide one. So if you needed to fly more than to pee, you selected that technology and flew. If you needed to pee, you went with another technology and did not fly. It’s simple common sense.
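
For readers who haven’t met it, this is the comma-ok idiom for maps that the complaint refers to: the lookup returns a second boolean, so a stored zero value is not confused with a missing key.

    package main

    import "fmt"

    func main() {
        ports := map[string]int{"http": 80, "ssh": 22}

        // The comma-ok idiom: ok is false if the key is absent,
        // so a missing entry is not confused with a stored zero.
        if port, ok := ports["ssh"]; ok {
            fmt.Println("ssh runs on", port)
        } else {
            fmt.Println("ssh is not configured")
        }
    }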

I suggest that many of you read the very thin, useful book “The Practice of Programming”. As soon as you understand that different tools were designed to solve different problems, and that multiple tools are used to solve bigger problems, you will give up religious fanaticism and stop averaging the tools. Each tool must remain as specific and laser-focused on a certain problem as possible. Just don’t be lazy – master many tools, physically and mentally. Consider the future evolution of the tools, like the restroom in the spaceship. And stop blaming Go; use it when you need it, not when you want it. That’s my opinion.

Smartphone Addiction

The addiction to the smartphone screen is really strong. Hopefully it will start to change in 2016 with Invisible Interfaces aka Natural Interfaces.


Photo is mine. Copyright (c) Vasyl Mylko.