
Last Man on Earth

Pterodactyl

…Then the Pterodactyl burst upon the world in all his impressive solemnity and grandeur, and all Nature recognized that the Cainozoic threshold was crossed and a new Period open for business, a new stage begun in the preparation of the globe for man. It may be that the Pterodactyl thought the thirty million years had been intended as a preparation for himself, for there was nothing too foolish for a Pterodactyl to imagine, but he was in error, the preparation was for Man…  — Mark Twain

Lance Armstrong

The Man. The man who won the Tour de France seven times. Having reached the human limit of physical capabilities, he [and others] extended it. He did blood doping (taking EPO and other drugs, storing his own blood in the fridge and infusing it before the competition to boost the number of red blood cells, and thus performance). He [and others] took anti-asthmatic drugs to increase endurance performance. And so on, and so on. There are Yes or No answers from Lance himself in Oprah's interview.

Is Lance a cheater? Or is Lance a hero? I consider him a hero for two reasons. First, he competed against others doing the same or similar. Second, he went beyond human limits: cutting-edge thinking, cutting-edge behavior, scientific sacrifice, calculated or even bold risk.

What could be said about all other sportsmen? I think sporting pharmacology is an evolutionarily logical stage for humankind to outperform our ancestors, to break records, to win, and to continue winning. If sportsmen are specialized in competing, and society wants them competing, then everything is all set. Evolution goes on, the biological meets the artificial chemical. It improves the function, it solves the problem. Though it slightly distances our biological selves from what we thought we were.

Prosthetics

It happens that people lose body parts. Giving them the missing parts back is the right way to go. It is still very complicated, the technologies involved are not fully there yet, but good progress has been made. There are new materials, new mechanics, new production (digital manufacturing, 3D printing), new bio-signal processing (complex myogram readings), new software designed with AI, and all together it gives a tangible result. Take a look at this robot, integrated with the man:

Some ethical questions emerge. Is a man with a prosthetic body part still a biological being? What is the threshold between biological parts and synthetic parts for someone to be considered a human being? There are people without arms and legs, because of injuries or genetic diseases, like Torso Man. We could and should re-create the missing parts and continue living as before, using our new parts. Bionic parts must evolve until they feel and perform identically to the original biological parts.

It relates to invisible organs too. The heart, which happens to be a pump, not a soul keeper. People live with artificial hearts. Look at the man walking out of the hospital without a human heart. The kidneys, which are served by external hemodialysis machines. New research aims to embed kidney robots into the body. The ethical questions continue: where is the boundary of what we call a 'human'? Is it the head? Or the brain only? What makes us human to other humans?

Genetics

We are defined by our genes. Our biological capabilities are encoded in genes. Then we learn and train to build on top of our given foundation. We differ by our genes, hence something that is easy for one could be difficult for another. E.g. sportsmen usually have better metabolism since childhood in comparison to those who grow into 'office plankton'.

There are diseases caused by harmful mutations in genes. Actually any mutation is bad, because of the unpredictable results in the first generation with the new mutant [gene]. But some mutations stay bad from generation to generation; these are called genetic diseases. It is possible to track many diseases down to the genes. There are genome browsers allowing us to look into the genome down to the DNA chain. Take a look at the CFTR gene: the first snapshot is a high-level view, with chromosome number and position; the second is zoomed to the base level, with the ACGT chain visible.

[CFTR snapshot 1: chromosome-level view]

[CFTR snapshot 2: zoomed to the base level, ACGT chain]

If parents with a genetic disease want to protect their child from that disease, they may want to fix the known gene. Everything else [genetically] will remain naturally biological; only that one mutant will be corrected to normal. The kid will not have the disease of the ancestors, which is good. A question emerges: is this kid fully biological? How does such genetic engineering impact established social norms?

What if parents are fans of Lance Armstrong and decide to edit more genes, to make their future kid a good sportsman?

What is Life?

Digging down to the DNA level, it is very interesting to figure out what is possible there to improve ourselves, and what life is at all. How do we recognize life? How would we recognize life on Mars, if it is present there?

Here is the definition from Wikipedia: “The definition of life is controversial. The current definition is that organisms maintain homeostasis, are composed of cells, undergo metabolism, can grow, adapt to their environment, respond to stimuli, and reproduce.” The very first sentence resonates with the questions we are asking…

Craig Venter led a team of scientists who extracted the genetic material from a cell (Mycoplasma genitalium), instrumented its genome by inserting the names of 20 scientists and a link to a web site, implanted the edited material back into the cell, and observed the cell reproducing many times. Their result – Mycoplasma laboratorium – reproduced billions of times, passing the encoded info through generations. The cell had ~470 genes.

What is the absolute minimum number of genes, and what are those genes, to create life? Is it 150? Or less? And which ones exactly? What are their specializations/functions? It is a current, ongoing experiment… Good luck guys! Looking forward to your research success, and to learning what Minimum Viable Life (MVL) is. BTW by doing this experiment, scientists designed new technologies and tools that allow modeling genes programmatically, and then synthesizing them at the molecular level.

Here Come the Robots

While some are digging into the genome, others are trying to replicate humans (and other creatures) at the macro level. The most successful with humanoid machines is Boston Dynamics.


How far are we from making them indistinguishable from humans? Seems pretty far. The weight, the center of gravity, motion, gestures, look & feel are still not there. I bet that humanoids will first be created in the military and in porn. The military will need robots to operate plenty of outdated military equipment, and to serve and fight in hazardous environments; it is only old weaponry that requires manned control, while new weapons are designed to operate unmanned. Porn will evolve to the level where we will fuck the robots. For the military it is more an economic need. For our leisure it is a romantic need and personal experience.

The sizes and shapes of robots doing mechanical work vary a lot, from tunnel-drilling monsters to robots for blood vessels…

All 8 Together

If we look for the commonality in the mentioned (and several unmentioned) disrupting technologies, we could select 8 of them (the 8 directions of Singularity University, extended and reworked) which stand out:

  • Biology and Biotech
  • Medicine and Longevity
  • Robotics
  • Network and Sensors
  • Fabrication and 3D Printing
  • Nanotech and Materials
  • Computing
  • Artificial Intelligence

As we have slightly covered Biology, Medicine and Robotics already, more is to be said about the rest. But before that, a few words about Biotech. We could program new behavior of biomass, by engineering what the cells must produce, and use those biorobots to clean the landfills around the cities, sewerage, rivers, seas, maybe the air. Biorobots could also clean our organisms, inside and outside. Specially engineered micro biorobots could eat the Mars stones and produce an atmosphere there. Not so fast, but feasible.

Well, more words about the other disrupting technologies. Networks and Sensors next. First of all – it is about networks between human & human, machine & machine, human & machine. The network effect happens within the network, known as Metcalfe's Law. Networks are wired and wireless, synchronous and asynchronous, local and geographically distributed, static and dynamic mesh, etc. Very promising are Mesh Networks, which allow avoiding Thing-Cloud aka Client-Server architectures, despite all the cloud providers pushing for that. Architecturally (and by common sense) it is better to establish the mesh locally, with redundancy and specialization of nodes, and relay the data between the mesh and the cloud via some edge device, which could be dynamically selected.
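To make the mesh-plus-edge idea a bit more concrete, here is a minimal Go sketch (node names and the scoring rule are made up, not any real protocol) of dynamically selecting which node should relay data from the local mesh to the cloud:

```go
package main

import "fmt"

// Node is a hypothetical mesh participant with a battery level and an uplink quality.
type Node struct {
	Name    string
	Battery float64 // 0..1
	Link    float64 // 0..1, quality of the uplink to the cloud
}

// pickEdge dynamically selects which node should relay data to the cloud,
// here simply the node with the best combination of battery and uplink.
func pickEdge(nodes []Node) Node {
	best := nodes[0]
	for _, n := range nodes[1:] {
		if n.Battery*n.Link > best.Battery*best.Link {
			best = n
		}
	}
	return best
}

func main() {
	mesh := []Node{
		{"sensor-a", 0.9, 0.2},
		{"sensor-b", 0.7, 0.8},
		{"gateway-c", 0.5, 0.9},
	}
	edge := pickEdge(mesh)
	fmt.Printf("relaying mesh data to the cloud via %s\n", edge.Name)
}
```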

Sensors will be everywhere. Within interiors, on the body, as street infrastructure, in the ambient environment, in the food etc. Our life improves when we sense/measure and proactively prepare. We are used to weather forecasts, which are very precise for a day or two. That is because of the huge amount of land sensors, air sensors, satellite imagery. Body sensors are gaining popularity, as wearables for the quantified self. There are recommendations for your lifestyle, based on your body readings. It is early and primitive today, but it will dramatically improve with more data recorded and analyzed. Modern transportation requires more sensors within/along the roads and streets, and in the cars. It is evolving. Miniaturization shapes them all. Those sensors must be invisible to the eye, and fully integrated into the clothes and machines and environment.

3D Printing. The biggest change is related to the ownership of intellectual property. The 3D model will be the thing, while its replication at any location, on demand, on any printer will be a commodity function. Many things have become digital: books, photos, movies, games. Many things are becoming digital: hard goods, food, organs, the genome. It is a matter of time before we have cheap technology capable of synthesizing at the atomic-grid and molecular level. New materials are needed everywhere, especially for human augmentation, for energy storage and for computing.

Nanotech. We are learning to engineer at the scale of 10^-9 meters. From non-stick cookware and self-restoring paint (for cars), to sunscreen and nanorobots for cleaning our veins, to new computing chips. Nano & Bio are very related, as purification and cleanup processes for industry and the environment are being redesigned at the nano level. Nano & 3D Printing are related too, as the ultimate result will be an affordable nanofactory for everyone.

Computing. We're approaching a disruption here; Moore's Law is still there, but it is slowing down and the end is visible. Some breakthrough is required. The hegemony of Intel is being challenged by IBM with POWER8 (and the almost-ready POWER9) and by ARM (v8 chips). Google is experimenting with POWER and ARM. Qualcomm is pushing with ARM-based servers too. D-Wave is pioneering Quantum Computing (actually it is superconductivity computing). There is a good intro in my Quantum Hello World post. IBM recently opened access to its own quantum analog. The bottom line is that we need more computing capacity, it must be elastic, and we want it cheaper.

Artificial Intelligence. AI deserves separate chapter. Here it is.

Artificial Intelligence

I blended my thoughts and my impressions from The Second Machine Age, How to Think About Machines that Think, the forthcoming The Inevitable, and various other sources that had an impact on me.


The purpose of AI was machines making decisions (as maximization of a reward function). But being better at making decisions != making better decisions. A machine decides how to translate English to Ukrainian without speaking either language. Those programs (and machines) are super screwdrivers: they don't want anything, we want them to do things, we put our want into them.
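A toy illustration of "decision as maximization of a reward function", with made-up actions and scores: the machine simply picks whatever scores highest on the function we put into it, it does not want anything itself.

```go
package main

import "fmt"

// decide picks the action whose reward function value is highest.
// The actions and rewards below are invented for illustration only.
func decide(actions []string, reward func(string) float64) string {
	best, bestR := actions[0], reward(actions[0])
	for _, a := range actions[1:] {
		if r := reward(a); r > bestR {
			best, bestR = a, r
		}
	}
	return best
}

func main() {
	scores := map[string]float64{"translate": 0.8, "skip": 0.1, "ask": 0.4}
	choice := decide([]string{"translate", "skip", "ask"}, func(a string) float64 { return scores[a] })
	fmt.Println("machine picks:", choice) // maximizes our function, nothing more
}
```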

AI is a different intelligence: a human cannot recognize 1 billion humans, even after really having seen them all many times. AI is Another Intelligence so far. The shape of thinking machines is not human at all: Deep Blue – the chess winner – is a tall black box; Watson – the Jeopardy winner – 2 units of 5 racks of 10 POWER7 servers between noisy refrigerators in nice alien blue light (watch from 2:20); Facebook Faces – programs and machines recognizing billions of human faces – probably big racks in a data center; Google Images – describing the context of an image – a big part of a data center (the detection of a cat took 16,000 servers several years ago); space probes – totally different from both humans and the tall black boxes in the data centers.

BTW if somebody really spots a UFO visiting our planet, don't expect green men, as organics are poor for space travel, because of the dangerous +200/-200 Celsius temperature range, ultraviolet and radiation, and the time needed for travel (even through a wormhole)… That UFO is most probably a robot. Or intelligence on a non-biological carrier, which means a post-biological species (which is worse for us if so).

Our wet brain operates at 100 Watts, while a copy simulating the same number of cells requires 10^12 Watts. Where on Earth will we get 1 trillion watts just for the equivalent of one human intelligence? And not even intelligence, just the connectivity of the neurons. Isn't it a ridiculous pseudo-architecture? We still have not isolated what we call consciousness, and we don't know its structure to properly model it. Brain scanning is in progress, especially for the deeper brain. And the Eureka moment, like the one we got with DNA, is still to come.

We remain at the center, creating and using machines for mental work, like we created and use machines for physical work. Humans with new mental tools should perform better than without them. Google is a typical memory machine, a memory prosthesis. Watson as a lawyer or a doctor is a reality.

Back from the future: at present we already have intelligent machines – governments and corporations. We created those artificial bodies many years ago, and just don't realize they are true intelligent machines. They are integrated into/with society, with law evolved through precedents and legislation, tailored to different locations and cultures. Culture itself is a natural artificial intelligence. A global biological artificial intelligence emerged on top of politicians, lawyers, and organizations like the United Nations and hundreds of smaller international ones. They are all candidates for substitution by programs and machines.

An interesting observation is that the most intelligent humans are neither harmful nor rulers of others. Hence we could assume that a really smart AI will not be harmful to humans, while AI is approximately at our level. But it is uncertain for accelerated and grown AI later in time. Evolution will shape AI too, continuing from the invisible interfaces with machines emerging right now. We could stop clicking, typing, tapping into machines, and talk to them like we do between ourselves. Today we have three streams of AI: AI younger than a 3-year-old, Artificial Smartness, and Intelligence as a Service.

We are what we eat; hence will they have to eat us? Hm… A real AI will not reveal itself. And most probably they will leave, like we left our cradle, Africa…

Exponential Today

There were some concerns that we had slowed down, based on observations and perception of daily facts. But it is also visible that several technologies are booming and disrupting our lives almost on a weekly basis. Those are the 8 technologies mentioned earlier in the section All 8 Together. Those technologies are developing exponentially.

Companies are highly specializing within their niches, performing at global scale. The global economy is changing. A few best providers of a narrow function do it world-wide. E.g. Google serves search globally, with two others far behind (Baidu and Bing, plus the artificial restriction of Google in China). Illumina chips are used for gene sequencing (90 percent of DNA data produced). Intel chips are the primary host processors in servers. Nvidia chips are the primary coprocessors, and so on. A few companies fulfill 95+ percent of the needs within a niche. Where this has not happened yet, big disruption is expected soon.


This is pure specialization of work at global scale. A shift from normal distribution to power distribution. Some may say that it is a path to global monopolism, with artificially held high costs. But in fact it is not, as Google search is free. Illumina is promising the full human genome sequenced under $1,000. And Intel still ships new chips according to Moore's Law, 2x productivity per $1 every 1.5 years.

Global specialization reduces global costs, because the same functions and products are produced more efficiently on the same resources, and that is good for our planet with its limited resources. But here another thing happens: we are not preserving resources, we are using them to create new technologies, which are expensive, unique, disrupting. The provider of such a new technology (and product, service) is not a monopolist, because of small scale/capacity at the beginning. Either they scale or others replicate it, and a true leader emerges and makes it global. Also new sources of energy are found, from Sun and wind, and new nuclear too. We are creating more wealth.


Digitization

Scaling globally is dramatically easier and cheaper for digital products and services than for physical/hard or hybrid ones. That is the main motivator for the digitization of everything. Software is eating the world, because it is simply cheaper to deliver software vs. hardware. Everything will become software, except the hardware to run the software, and the power plants to power the hardware.

Real life is becoming digital very fast. Why are we taking photos of our meals and rooms, our faces and legs, beautiful and creepy landscapes, compositions? Why do we check in, post statuses, express emotions about others' statuses, comment, troll and even fight digitally? We also vote, declare, report, learn, cure, buy and consume, and entertain ourselves digitally too. We are sometimes living digitally more than physically. Notice how people record an event looking at the small screen of their smartphone instead of looking at the big stage and experiencing it better. Some motivation drives us to record it on multiple phones, from multiple locations, aspects, angles, distances, push it onto the internet, and share it with others. Then we watch it all from those recordings, our own and theirs. Why is it happening? Why are we shifting to digital over natural? Or is digital the new natural, as evolution goes on?

Kit Harington was stopped by a cop for speeding. The cop gave an ultimatum – either the driver pays the fine, or he tells whether Jon Snow is alive in the next season. The driver avoided the speeding ticket by telling the virtual/digital story to the cop. For the cop, the digital virtual was more important than the physical biological. Isn't it a natural shift to a new, better reality?

Many people live in virtual worlds today. Take an American and ask about ISIS. Take a Syrian and ask about ISIS. Take a Ukrainian and ask about Crimea and Donbass. Take a Russian and ask about Crimea and Donbass. Same for Israel and Palestine. People will tell you completely opposite things. People are already living in virtual worlds, created by digital television and the internet. The digitization of life is here already, and we are there already.

One

Specialization is observed at all levels. Molecules specialized into water, gases, salts, acids. Bigger molecules specialized into proteins and DNA. Then we have cells, stem cells and their specialization into connective tissue, soft tissue, bone and so on. Next are organs. Then body parts. Specialization is present at each abstraction level. At the level of people, specialization is known as roles and professions. Between businesses and countries it is industries. Between nations it is economics and politics.

It looks like we are part of a bigger machine, which is evolving with acceleration. We are like cells, good and bad, specialized from vision to thinking. Roads and pipes are like transportation systems for other cells and payloads. The Internet (copper and fiber) is more like a neural system. Connectivity is a true phenomenon. We are now fully disconnected (and useless) without a smartphone, or without a digital social network in any form. Kevin Kelly once called it the One. The Earth of many people will evolve into an Earth of augmented people and machines, all specializing and uniting into the One.


And with the One, it all looks like just a beginning. I feel another One, and more cell-Ones, organizing something more complex and intelligent out of themselves. If our cells could specialize and unite into 10 trillion and walk, think, write, why can't that be possible with bigger cells like the One, at a bigger scale like the Galaxy?

Man is not the last smart species on Earth. In other words, there will be a day when the Last [current] Man on Earth goes extinct. What will happen faster: the transhuman, or a true AI that could replicate and grow? I bet on the transhuman. Better for humanity too. For now.

 


Two days

Magny Cours 1992

Young Schumacher crashed badly into Senna. Senna was already a living legend by that time.

Monza 2000

That is when Schumacher scored his 41st victory, matching Senna, who had been killed at Imola in 1994. Then Schumacher significantly outperformed everybody, winning 5 more titles in a row, and became a legend himself.


The World is Flat

The world was already flat

Fantasy map of a flat earth

The world is flat again

Explanation on cats


Quantum Hello World

What?

So What?

10,000 times more productive than Xeons at the same power consumption levels. With potential for 50,000x.

Explanation on Cats

Schrödinger’s cat lives. Quantum is not micro anymore, it is macro, it is everything.

 


We will fuck robots

We already love machines

If you have a car, and you don't use any name to talk to your car, then there are two reasons: either your car is just another soul-less car, or you are special. Usually every car owner loves her/his car and uses some name for it. If your car is under-powered, you would ask her: "come on, come on car_name", when climbing an ascending road. So you are already talking to machines, at least to one machine – your car.
The same is true for boats, a small air boat or a bigger luxurious one. Obviously it is true for bikes. Maybe now it is emerging for flying drones. It is interesting what the employees who still work at warehouses call the machines there… Probably they don't love them. But the fact is that we love our lifestyle machines: cars, bikes, boats, drones.

Exponential technologies

There is a technological acceleration nowadays. It is most visible in 8 technologies (check Abundance and Singularity University for deeper details):
* biotech and bioinformatics
* computational systems
* networks and sensors
* artificial intelligence
* robotics
* digital manufacturing
* medicine
* nanotech and nanomaterials

It is very interesting to dig into each of them, or to combine them. But that is not the purpose of this post. There are two others, or just the Big Two, as an umbrella over those eight: military and porn. Much research is applied in the military first, then goes to other industries and lifestyle. Many technological problems are solved in porn, and many opportunities are created there.

Connecting the dots

Materials are needed to reach a realistic experience, to transcend from dolls to human peers. A new Turing test – you smell and touch the skin and hair, and you cannot distinguish between natural/organic and artificial. Maybe we could program existing cells to grow and behave slightly differently, or we will invent new synthetic materials that are indistinguishable from organic. Or hybrids, why not? Probably new materials will self-assemble or be manufactured at the atomic level by new 3D printers, connecting the right atoms into the right atomic grids. A lot depends on the connection order: diamonds and ashes are made from the same carbon atoms. The bottom line is that biotech, nanomaterials and 3D printing are empowering the creation of cyberskin with a realistic experience.

Computational systems, robotics and artificial intelligence are what is behind the scenes [read: behind the skin]. Having joints is not enough. All those artificial bones and ligaments must be orchestrated perfectly. It is all about real-time computing. Energy-efficient real-time computing, to avoid any wires sticking out externally, or plenty of heat or noise being released. Everything should be smart enough to be packed into a known volume, to have a known weight, center of mass and temperature, and to bend and move realistically. Emotional intelligence is important, to have an adequate mental reaction between human and machine.

Rudimentary evidence

Sex dolls, dildos and travel pussies have existed for years. Porn stars have official copies of their realistic genitals and dolls sold. Porn stars might be OK with somebody fucking their rubber shadows. The same cannot be said of other celebrities, because of privacy, ethics, morals. There is some real knowledge of beauty: female face research, published in Brain Research, as a hypothalamus reaction. Something similar should be available for male faces. And not only faces: for the body, the voice, the smell, manners etc. All that could be grasped by measurements and machine learning. In the end we need some classification, like those beautiful faces, to know what exists, and then figure out how to use it.


 

Breakthrough

There will be just better and better sex dolls, eventually indistinguishable from people. The Turing sex test will be passed between the legs. Caleb could try it with Ava, and he wanted to, he truly fell in love with Ava [Ex Machina]. That is just a question of time. What is interesting is how we are going to control/prevent the emergence of copies of ourselves. OK, maybe there is no big demand for copies of you, but there will be big demand for copies of celebrities. And celebrities may not be happy that somebody fucks their realistic clones. That will go underground, and will grow beyond the law. Probably there will be countries or territories worldwide capitalizing on these sex havens, like some did as tax havens. We will have sexual intercourse with robots like Deckard had with Rachael [Blade Runner, Los Angeles 2019], full of emotions and feelings on both sides. And enough people will do sex tourism for the forbidden fruit – to fuck their favorite peers and celebrities. Maybe there will be on-demand manufacturing of a sex clone of anybody via pictures & videos from their social traces… This is how the ultimate experience in porn will evolve. People will fuck their lovely robots.


On Go

Some time ago I noticed somebody was building a #golang complaints list. On Monday I noticed this tweet by @golangweekly. Now I have decided to address the top five complaints, taken from How to complain about Go. So here are those top five:

  1. error handling / no exceptions
  2. no generics
  3. stuck in 70’s
  4. no OOP
  5. too opinionated

Below they are addressed in the numbered sections, one complaint per section.

1. error handling / no exceptions

The majority of programmers are lazy. Especially programmers of business logic. Commodity programmers constitute 90-95% of the whole programmer base. The remaining 5-10% are master programmers. Usually commodity programmers write simple business logic, translating requirements into a machine program, nothing mission critical, usually not challenging, and they get lazy. Some commodity programmers become very lazy. It is normal for them to code only the positive path and totally ignore the negative one. That is not good, in any programming language. It is just that some languages tolerate the laziness and allow avoiding explicit coding of errors.

Go does not tolerate laziness. If you wrote "if", please write "else". Within "else" write retry logic, or switch to default parameters, or try to recover, or stop explicitly. If something can fail, then check for success or failure before moving forward. In Go you could still skip the negative path, but the language pushes you to code it right alongside the positive one.

Errors are part of the code. Because of the binary nature of information, errors are part of the flow. So it is better to program both sides, good & bad, light & dark. It is absolutely good that Go nurtures coding errors as organic pieces of your entire program. Isn't "if err != nil" annoying? Yes, it is. It is part of the technology; you just do it to use the technology, you adapt to the technology. Skipping errors, or obscuring errors, is a bigger evil than some idiomatic annoyance.
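A minimal sketch of what that looks like in practice (the config reading is hypothetical; the shape of the error handling is the point): retry once, then fall back to defaults, both paths coded explicitly.

```go
package main

import (
	"errors"
	"fmt"
	"log"
	"os"
)

// Config and readConfig are hypothetical stand-ins for any operation that can fail.
type Config struct{ Workers int }

func readConfig(path string) (Config, error) {
	if _, err := os.Stat(path); err != nil {
		return Config{}, errors.New("config not found")
	}
	return Config{Workers: 8}, nil
}

func defaultConfig() Config { return Config{Workers: 1} }

// loadConfig codes both paths: the positive one and, in "else", a retry plus a fallback.
func loadConfig(path string) Config {
	cfg, err := readConfig(path)
	if err != nil {
		cfg, err = readConfig(path) // retry once
		if err != nil {
			log.Printf("readConfig failed twice (%v), switching to defaults", err)
			return defaultConfig()
		}
	}
	return cfg
}

func main() {
	fmt.Printf("%+v\n", loadConfig("service.conf"))
}
```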

Exceptions failed [in other programming languages]. Exceptions were invented to handle really exceptional situations. The majority of C++ coders never read "The Design and Evolution of C++" to understand why language features were introduced to solve real problems with the C++ language. So exceptions failed in C++ during its best times. Java and .NET coders moved even further from exceptions as really exceptional situations, and exceptions became error codes and events. Having exceptions as events is the worst case, because they were not designed for that purpose. The laziness of commodity programmers reached its pinnacle with catch(…){}, catch (Exception e){}, bare except: and so on. That sucks.

As Go is not a programming language for all purposes – it was designed to rewrite old C++ programs and to do new system programming – there is no need to listen to those complaints from commodity programmers. It is important not to nurture laziness. It is good to enforce checking for errors, and coding of both "if" and "else". There is no other way to go; the machine will not create the missing logic for the negative path for you. That code must be written by the programmer.

2. no generics

How did generics appear in other programming languages? Let's take C++ again. Before generics, coders used preprocessor directives to tweak and glue words and generate program code, so that the same algorithm/flow could work with different types. That gluing was done only for families of types, e.g. gluing for all numeric types (small integers, signed integers, unsigned integers, long integers, …, the same for floats etc.). There is common sense in that, because the numeric types all together are one cluster vs. other clusters of types. But how did it unfold? You could parameterize everything by everything, even declaring a type on the left and passing it as a parameter on the right: class MainWnd : public CWindowImpl<MainWnd>

Typelists could be considered the pinnacle of generic design and generic programming. Here is the source code, typelist.h, allowing deep nesting of templates. AFAIR the initial implementation allowed 50 levels of nesting. Then the wisdom emerged: typelists' author Andrei Alexandrescu slams generic programming nowadays.

So it is OK that the generic paradigm was fashionable and promising in the past. It is also OK that it did not solve the problems. It is better to apply commonality and variability analysis during your abstraction exercises and use interfaces for coding the commonality. Go goes that way, which is good. Go lacks clustering by type families; that is not so bad, just think again which type you really need and use only that type. It is not a problem of Go, it is your problem [laziness] during program design.
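A small sketch of coding the commonality with an interface instead of generics; the Shape example is made up, but it shows the Go way:

```go
package main

import "fmt"

// Shape captures the commonality (area) once as an interface,
// while each concrete type varies independently – no generics needed here.
type Shape interface {
	Area() float64
}

type Rect struct{ W, H float64 }
type Circle struct{ R float64 }

func (r Rect) Area() float64   { return r.W * r.H }
func (c Circle) Area() float64 { return 3.14159 * c.R * c.R }

// totalArea works for any mix of Shapes.
func totalArea(shapes []Shape) float64 {
	var sum float64
	for _, s := range shapes {
		sum += s.Area()
	}
	return sum
}

func main() {
	fmt.Println(totalArea([]Shape{Rect{2, 3}, Circle{1}}))
}
```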

3. stuck in 70s

Java was released in 1995, far after the 70s. And we have the idiomatic (or idiotic?) public static void main. Is that an improvement over the 70s? Furthermore, each program is a class. OMG, my single script is a class? My simple dumb print is a class? Was that the escape from the 70s? I prefer to stay with good old ANSI C simplicity and clarity, rather than consuming such an escape from "being stuck".

For Java lovers… Java is ideal for code generation. If Java is written by machines, it is perfect. That is usually what took place. Enterprise suites with visual BPM do generate Java code. Business analysts drag'n'drop, modeling the real-life process, with all the tiny details of type mappings, events, conditions, data transformations, transactions etc. And the suite produces the Java code and runs it on middleware (usually an ESB). For machines all that formality and strictness with public static void main is OK. But not for a human.

Why did we get so many Java programmers then? Why didn't machines produce all the Java code? Two reasons. First: not many solutions were designed in BPM, or at least by making some domain-specific language (DSL) and designing the solution with it. So coders took the wrong tools and produced imperfect solutions. Second: not everything is possible in WYSIWYG tools. Some manual polishing is needed, and humans filled the gap left by machines. As machines generated Java code, humans had to complement it with the same Java, to fill in the blanks. Now some of that Java legacy has spread to Android…

OK, back to Go. Go has been designed mainly to overcome C++ problems. For modern system programming. What is modern system programming? It is Docker, for example, not to mention internal Google stuff. C++ was based on C, and C is rooted in the 70s. So everything is OK with the similarity. I do not agree that we are stuck, and I do not agree that the 70s were bad at all. We had Concorde, introduced in 1976. We were accelerating. While today there is a feeling of deceleration. Go dropped modern over-engineered pseudo-features, and that is not going into the past, it is going into the future.

4. no OOP

This is the most pleasant complaint. Commodity programmers usually don't know OOD and OOP. Hundreds of times I have asked the same question in interviews, and hundreds of times they failed. The question was simple, two-part. First: which programming design paradigms do you know? Usually OOP was mentioned. Second: what is OOP, what are its main principles? Almost everybody answered "encapsulation, inheritance and polymorphism". After that the interview finished shortly. Few answered "abstraction" and then those three others.

So, abstraction. Abstraction is the foundation of any programming paradigm, especially of OOD/OOP. It is how you abstract the real world into the machine world, so that it can be emulated and run by the machine. Master programmers know that sometimes it is better to abstract into a function, or families of functions; sometimes into data; sometimes into compilation modules; files; namespaces; structures. It is context dependent.

What did we get with OOP in the programming languages loved by the complainers? Abstraction into classes prevails, though other abstraction mechanisms are supported. And with abstraction into classes we have one God aka Object. Does it mean that the entire abstraction of the real-life case we want to emulate and run on the machine is purely hierarchical? That is not a relevant mapping. Because the real world is not hierarchical.

There are hierarchies, at some abstraction levels. But if you look one level up, or one level down, your hierarchy will be gone. You will see a mesh, grid, web. Then step one level again, and you will spot a hierarchy again. Go further, and you will get the mesh again. Think of the orders of magnitude in both directions, from this to bigger, and from this to smaller. It is all about abstraction levels. In a software solution it is normal that multiple abstraction levels are present, corresponding to reality, but it is not normal that only one method is used to deal with them all. OOD is OK only for a context where you deal with a true, realistic hierarchy. Outside of it, OOD is not the best choice.

Regarding abstraction, the Go language is absolutely normal. You are not imprisoned in the Object pseudo-hierarchy. You could abstract into multiple relevant language tools, depending on the context. I don't even want to move to the secondary goodies (encapsulation & Co, they are OK there too). It is important that the primary one – abstraction – has been fixed.
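A tiny illustration (all names are made up) of picking the abstraction per context in Go – a plain function, plain data, and behavior attached to a simple named type, with no class hierarchy in sight:

```go
package main

import (
	"fmt"
	"strings"
)

// Abstract into a plain function.
func normalize(s string) string { return strings.ToLower(strings.TrimSpace(s)) }

// Abstract into plain data.
type Point struct{ X, Y float64 }

// Attach behavior to any named type, not only to "objects".
type Celsius float64

func (c Celsius) Fahrenheit() float64 { return float64(c)*9/5 + 32 }

func main() {
	fmt.Println(normalize("  Hello GO  "))
	fmt.Println(Point{X: 1, Y: 2})
	fmt.Println(Celsius(21.5).Fahrenheit())
}
```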

For technology creation Go is good. Just don't code business logic in Go. For biz logic please design (or select) a DSL/DSEL, and build the logic at a higher abstraction level, 3.5/4GL or visually or hybrid. A good book on how to think wider is "Multi-Paradigm Design for C++"; it is not tightly bound to C++ and is very useful with other programming tools.

5. too opinionated

So what? Is it bad? Opinion is the skeleton of a design, the philosophy of a lifestyle. If somebody (Rob Pike?) invented the tool/technology to solve C++ problems, then it is cool. If you don't like some idioms, like the ok idiom for the map, then think about every technology's PROs and CONs. The first spaceship could fly into space, but there was no restroom in it. The technology could not provide it. So if you needed to fly more than to pee, you selected the technology and flew. If you needed to pee, you went with another technology and did not fly. It is just common sense.
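For reference, the ok idiom itself: a map lookup tells you whether the key was actually there, so a missing key and a stored zero value don't get confused (the map contents are made up).

```go
package main

import "fmt"

func main() {
	limits := map[string]int{"api": 100, "web": 250}

	// The "ok" idiom: the second return value reports whether the key was present.
	if limit, ok := limits["api"]; ok {
		fmt.Println("api limit:", limit)
	} else {
		fmt.Println("api has no limit configured")
	}
}
```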

I suggest many of you read the very thin, useful book "The Practice of Programming". As soon as you understand that different tools were designed to solve different problems, and that multiple tools are used to solve bigger problems, you will give up religious fanaticism and stop averaging the tools. Each tool must remain as specific and laser-focused on a certain problem as possible. Just don't be lazy, master many tools, physically and mentally. Consider the future evolution of the tools, like the restroom in the spaceship. And stop blaming Go; use it when you need it, not when you want it. That's my opinion.

Smartphone Addiction

The addiction to the smartphone screen is really strong. Hopefully it will start to change in 2016 with Invisible Interfaces aka Natural Interfaces.

  

Photo is mine. Copyright (c) Vasyl Mylko. 

Internet of Things @ Stockholm, Copenhagen

This is an IoT story combining what was delivered during Q1'15 in Stockholm, Copenhagen and Bad Homburg (Frankfurt).

We are stuck in the past

When I first heard Peter Thiel on our technological deceleration, it collided in my head with the technological acceleration of Ray Kurzweil. It seems that both gentlemen are right and we [humanity] follow a multi-dimensional spiral pathway. Kurzweil reveals shorter intervals between spiral cycles. Thiel reveals we are moving in a negative direction within the single current spiral cycle. Let's drill down within the current cycle.

Cars are getting more and more powerful (in terms of horsepower), but we don't move faster with cars. Instead we move slower, because of so many traffic lights, speed limits, traffic jams, gridlocks. It is definitely not cool to stare at a red light and wait. It is not cool either to brake because your green light ended. In Copenhagen the majority of people use bikes. It means they move at a speed of 20 kph or so… Way slower than our modern cars would have allowed. Ain't it strange?

Aircraft are faster than cars, but air travel is slow too. We have strange connections. A trip from Copenhagen to Stockholm takes one full day because you have to fly Copenhagen-Frankfurt, wait, and then fly Frankfurt-Stockholm. That is how airlines work and suck money from your pocket for each air mile. Now add long security lines, weather precautions and weather cancellations of flights. Add union strikes. Dream about the decommissioned Concorde… 12 years ago already.

Smartphone computing power equals Apollo mission levels, so what? The smartphone is mainly used to browse people and play games. At some moment we will start using it as a hub, to connect tens of devices, to process tons of data before submitting them into the Cloud (because data will soon not fit into the Cloud). But for now we under-use smartphones. I am sick of charging every day. I am sick of all those wires and adapters. That is ridiculous.

Cancer, Alzheimer's and HIV are still not defeated. And there is no optimistic mid-term forecast yet.

We [thinking people] admit that we are stuck in the past. We admit our old tools are not capable of bringing us into the future. We admit that we need to build new tools to break into the future. Internet of Things is such a macro trend – building those new tools that would break us through into the future.

Where exactly are we stuck?

We are within the 3rd wave of IoT, called Identification, and at the beginning of the 4th wave of IoT, called Miniaturization. The two slightly overlap.

Miniaturization is evidence of Moore's Law still working. Pretty small devices are capable of running the same calculations as not-so-old desktops. Connecting industrial machinery via a small man-in-the-middle device is on the rise. It is known as Machine-to-Machine (M2M). There are two common scenarios here: wire protocol – break into a dumb machine's wires and hook in there for readings and control; optical protocol – read from analog or digital screens and do optical recognition of the information.

More words about the optical protocol in M2M. Imagine you are running a biotech lab. You have good old centrifuges, doing their layering job perfectly. But they are not connected, so you need to read from the display and push the buttons manually. You don't want to break into the well-working machines, and decide to read from their screens or panels optically, hence doing optical identification of the useful information. The centrifuges become connected. M2M without wires.

Identification is also on the rise in manufacturing. Just put out a small device to identify something like vibration, smoke, volume, motion, proximity, temperature etc. Just attach a small device with the right sensors to a dumb machine and identify/measure what you are interested in. Identification is on the rise in lifestyle too. It is the wearables we put onto ourselves to measure various aspects of our activity. Have you ever wondered how many methods exist to measure temperature? Probably more than 10. Your (or my?) favorite wearables usually have thermistors (BodyMedia) and IR sensors (Scanadu).

Optical identification, as a powerful field within the entire Internet of Things Identification wave, requires a special section. Continue reading.

Optical identification

Why is optical so important? Guess: at what bandwidth do our eyes transmit data into the brain?
It is definitely more than 1 megabit per second and maybe (maybe not) slightly less than 10 megabits per second. For you geeks, that is not-so-old Ethernet speed. With 100 video sensors we end up with 1+ Terabyte during business hours (~10 hours). That is a hell of a lot of data. It is better to extract useful information out of those data streams and continue with information rather than with data. The volume reduction could be 1000x and much more if we deal with relevant information only. Real-time identification is vital for self-driving cars.
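A quick back-of-the-envelope check of that estimate in Go; the per-sensor bitrate is my assumption, somewhere between the 1 and 10 megabits quoted above.

```go
package main

import "fmt"

func main() {
	const (
		sensors      = 100
		mbitPerSec   = 2.5     // assumed average bitrate per video sensor
		hours        = 10.0    // business hours
		bytesPerMbit = 1e6 / 8 // 125,000 bytes in a megabit
	)
	// total bytes produced by all sensors over the working day
	totalBytes := sensors * mbitPerSec * bytesPerMbit * hours * 3600
	fmt.Printf("%.2f TB per day\n", totalBytes/1e12) // ~1.1 TB
}
```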

Even for already accumulated media archives this is all very relevant. How to index a video library? How to index an image library? It is work for machines: crawl and parse each frame and sequence of frames to classify what is there, remember the timestamp and clip location, make a thumbnail, and give this information to the users of the media archives (other apps, services and people). Usually images are identified/parsed by Convolutional Neural Networks (CNN) or an Autoencoder + Perceptron. For various business purposes, a good way to start doing visual object identification right away is the Berkeley Caffe framework.

Ever heard about DeepMind? They are not on Kaggle today. They were there much earlier. One of them, Volodymyr Mnih, won a prize in early 2013. DeepMind invented some breakthrough technology and was bought by Google for $400 million (Facebook was another potential buyer of DeepMind). So what is interesting about them? Well, the acquisition was conditional on Google not abusing the technology. There is a special Ethics Board set up at Google to validate the use of DeepMind technology. We could try to figure out what their secret sauce is. All I know is that they went beyond dumb predefined machine learning by applying more neuroscience, which unlocked learning from one's own experience, with nothing predefined a priori.

Volodymyr Mnih has been featured in a recent (at the moment of this post) issue of Nature magazine, with the affiliation to DeepMind in the references. Read what they did – they built a neural network that learns game strategy, ran it on old Atari games and outperformed human players on 43 games!! It is a CNN, with a time dimension (four chronological frames given as input). Besides the time dimension, another big difference from a classic CNN is the delayed-rewards learning mechanism, i.e. it is a true strategy built from your previous moves. The algorithm is called Deep Q-learning, and the entire network is called a Deep Q-learning Network (DQN). It is a question of time before DQN will be able to handle more complicated graphical screens than old Atari. They have tried Doom already. Maybe StarCraft is next. And soon it will be business processes and workflows…
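For the curious, here is the tabular Q-learning update that DQN approximates with a neural network – a made-up two-state toy in Go, not the DeepMind code:

```go
package main

import "fmt"

func main() {
	const (
		alpha = 0.1 // learning rate
		gamma = 0.9 // discount: how much delayed reward counts
	)
	// Q[state][action]
	Q := map[string]map[string]float64{
		"start": {"left": 0, "right": 0},
		"goal":  {"left": 0, "right": 0},
	}

	// One observed transition: in "start" we took "right", got reward 1, ended in "goal".
	s, a, r, next := "start", "right", 1.0, "goal"

	// max over actions in the next state
	best := 0.0
	for _, v := range Q[next] {
		if v > best {
			best = v
		}
	}

	// Q-learning update: move Q(s,a) toward r + gamma * max_a' Q(next, a')
	Q[s][a] += alpha * (r + gamma*best - Q[s][a])

	fmt.Printf("Q(start, right) = %.2f\n", Q[s][a]) // 0.10 after one update
}
```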

Those who are subscribed to Nature, log in and read the main article, especially the Methods part. Others could check out the reader-friendly New Yorker post. Pay attention to the Nature link there; you might be lucky enough to access the Methods section on the Nature site without a subscription. Check out the DQN code in Lua, DQN + Caffe, and the DQN/Caffe ping pong demo.

Who eats whom?

All right with the importance of optical identification; now it is time to switch back to the high level and continue on the Internet of Things as a macro trend, at global scale. Many of you got used to the statement that Software is eating the World. That is correct in two aspects: hardware flexibility is being shifted to software flexibility; fabricators are making hard goods from digital models.

Shifting flexibility from hardware to software is a huge cost reduction for maintenance and reconfiguration. The evidence of hardware being eaten by software is all those SDX, Software Defined Everything: SDN aka Software Defined Networking, SDR aka Software Defined Radio, SDS aka Software Defined Storage, and so on for the Data Center etc. The Tesla car is a pretty software-defined car.

But this World has not even been eaten by hardware yet! Miniaturization of electric digital devices allows the Hardware to eat the World today and tomorrow. The penetration and reach of devices into previously inaccessible territories is stunning. We set up stationary identification devices (surveillance cameras, weather sensors, industrial meters etc.) and launch movable devices (flying drones, swimming drones, balloons, UAVs, self-driving cars, rovers etc.). Check out the excellent hardware trends for 2015. Today we put plenty of remora devices onto cars and ourselves. Further miniaturization will allow us to take devices inside ourselves. The evidence is that Hardware is eating the World.

Wait, there are fabricators or nanofactories, producing hard goods from 3D models! 3D-printed goods and the 3D-printed hamburger are evidence of Software directly eating the World. Then the conclusion could be that Software is eating the World previously eaten by Hardware, while Hardware is eating the rest of the World at a higher pace than Software is eating it via fabrication.

Who eats whom? Second pass

Things are not so straightforward. We [you and me] are stuck in the silicon world. Those ruminations are true for electrical/digital devices/technologies. Things are not limited to the digital and electrical. The movement of biohackers can't be ignored. Those guys are doing garage bio experiments on $5K equipment exactly as Jobs and Woz did electrical/digital experiments in their garage at the birth of the PC era.

Biohackers are also eating the World. I am not talking about the standard boring initiation [of a biohacker] to make something glow… There are amazing achievements. One of them is night vision. The electrical/digital approach to night vision is an infrared camera, a cooler and an analog optical picture into your eyes, or a radio scanner and an analog/digital reconstruction of the scene for your eyes. The bio approach is an injection of Chlorin e6 drops into your eyes. With the aid of Ce6 you could see in the darkness in the range of 10 to 50 meters. Though there is some controversy around that Ce6 experiment.

The new conclusion for the “Eaters Club” is this:

  • Software is eating the World previously eaten by Hardware
  • Hardware is eating the rest of the World, a much bigger part than what has already been eaten, at a high pace
  • Software is slowly eating the World through fabrication and nanofactories
  • Biohackers are eating the World, ignoring both Hardware & Software eaters

Will the convergence of hardware and bio happen as it happened with software and hardware? I bet yes. For remote devices it could be very beneficial to take energy from the ambient environment, which could potentially be implemented via biological mechanisms.


Blending it all together

Time to put it all together and emphasize the practical consequences. Small and smaller devices are needed to wrap an entire business (machines, people, areas). Many devices are needed, 50 billion by 2020. Networking is needed to connect those 50 billion devices. Data flow will grow from the 50 billion devices and within the network. The Data Gravity phenomenon will become more and more observable, when data attracts apps, services and people to itself. Keep reading for details.

Internet of Things is a sweet spot at the intersection of three technological macro trends: Semiconductors, Telecoms and Big Data. All three parts work together, but have different paces of evolution. That leads to new rules of 'common sense' emerging within IoT.

Remote devices need networking, good networking. And we have got an issue, which will only get stronger. The pace of evolution for semiconductors is 60% annually, while the pace of evolution of networks is 50%. The pace of evolution of storage technology is even faster than 60% annually. It means that newly acquired data will fit into the network less and less over time [fewer chances for data to get into the Cloud]. It means that more and more data will be left outside the network [and outside the Cloud].
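A quick compounding check of that gap, using the 60%/50% rates quoted above (illustrative numbers only):

```go
package main

import (
	"fmt"
	"math"
)

func main() {
	// Data-producing silicon grows ~60% per year, networks ~50% per year.
	// The share of newly produced data that still fits into the pipe shrinks every year.
	for years := 0; years <= 5; years++ {
		data := math.Pow(1.6, float64(years))
		net := math.Pow(1.5, float64(years))
		fmt.Printf("year %d: ~%.0f%% of new data still fits into the network\n", years, 100*net/data)
	}
}
```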

Off-the-Cloud data must be handled in place, at the location of acquisition or nearby. It means huge growth of Embedded Programming. All those small and smaller devices will have to acquire, store, filter, reduce and sync data. It is Embedded Programming with an OS and without an OS. It is distributed and decentralized programming. It is programming of dynamic mesh networks. It is connectivity from device to device without a central tower. It is a new kind of cloud programming, closest to the ground, called Fog. Hence Fog Programming, Fog Computing. Dynamic mesh networks, plenty of DSP, and potentially applicable distributed technologies for a business-logic foundation such as BitTorrent, Telehash, Blockchain. Interesting times in Embedded Programming are coming. This is just the Internet of Things Miniaturization phase. Add smart sensing on those P2P-connected small and smaller devices in the Fog, and the Internet of Things Identification phase will be addressed properly.
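A minimal Fog-style sketch (thresholds and names are made up): the device keeps the raw readings locally and syncs only the reduced, relevant part upstream.

```go
package main

import "fmt"

// Reading is a hypothetical sensor sample acquired on the device.
type Reading struct {
	Sensor string
	Value  float64
}

// reduce keeps only the readings that cross a threshold, shrinking what must travel upstream.
func reduce(raw []Reading, threshold float64) []Reading {
	var out []Reading
	for _, r := range raw {
		if r.Value >= threshold {
			out = append(out, r)
		}
	}
	return out
}

func main() {
	raw := []Reading{{"vibration", 0.2}, {"vibration", 0.9}, {"temp", 0.4}, {"temp", 1.3}}
	toCloud := reduce(raw, 0.8)
	fmt.Printf("acquired %d readings, syncing %d to the cloud\n", len(raw), len(toCloud))
}
```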

The Reference Architecture of IoT is seven-layered (because 7 is a lucky number?).

Conclusion

We are building new tools that we will use to build our future. We are doing it through digitization of the World. Everything physical becomes connected and reflected into its digital representation. Don't over-focus on Software, think about Hardware. Don't over-focus on Hardware, think about Bio. Expect the convergence of software-hardware-bio as the most stable and eco-friendly foundation for those 50 billion devices by 2020.

Recall Peter Thiel and the biz frustrations of nowadays. With a digitized, connected World we will turn from the negative direction within the current spiral cycle into a positive one. And of course we will continue with the long-term acceleration. The future looks exciting.

PS.
Music for reading and thinking: from the near future, Blade Runner, Los Angeles 2019


Svindal: Crash, Training, Recovery

Crash

He had multiple injuries… broken bones in his face and a 15 cm laceration to his groin and abdominal area. Svindal missed the remainder of the 2008 season.

Training

20 minutes on the bike, then balance and stretching on balls, then 140 kilograms… then much other stuff… then again 10 minutes on the bike. Maybe Red Bull helps.

Recovery

His first two victories following his return were a downhill and a super-G at Beaver Creek, on the same Birds of Prey course where he was injured the year before. A remarkable comeback from his crash a year ago, a true testament to mental strength. BTW his focus is so strong that he does not blink once during a 2-minute run…
