Tag Archives: vmylko

Reverse Engineering the Mathematics

Start from this cool comparison of Mathematics and Physics by Richard Feynman. Physicists are always about the special case; mathematicians are always about the general case. Physicists reverse engineer the world, recreating the technologies available in the Universe. Mathematicians even think beyond the Universe…

Continue with these ruminations about Mathematics by Stephen Wolfram: was mathematics invented or discovered? He thinks the math is already there; we just need to get to those spaces.

Here are details on the Computing Theory of Everything by Stephen Wolfram. Just as Galileo Galilei built the telescope to observe and discover far space, Wolfram has built, and keeps building, tools to discover the math in all those spaces. It is not a combinatoric mess, as the spaces can be shaped nicely, depending on the laws within. Look at this amazing Rule 30; look at this annoying Rule 184.
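Rule 30 is one of Wolfram's elementary cellular automata: each new cell depends only on the three cells above it, and the rule number (30 = binary 00011110) encodes the output for each of the eight possible neighborhoods. A toy sketch, with periodic boundaries for simplicity:

```python
# Rule 30 elementary cellular automaton: the output for a neighborhood
# (left, center, right) is the corresponding bit of the rule number 30.
def step(cells, rule=30):
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

def run(width=31, steps=15, rule=30):
    row = [0] * width
    row[width // 2] = 1          # a single seed cell in the middle
    history = [row]
    for _ in range(steps):
        row = step(row, rule)
        history.append(row)
    return history

for row in run():
    print("".join("#" if c else "." for c in row))
```

From one seed cell, Rule 30 produces the famous chaotic triangle; swap `rule=184` in `run()` to see the "annoying" traffic-flow rule instead.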

Think of the forthcoming Quantum Computing, which is closer to what Feynman foresaw about machinery without mathematics (watch the first video, from 6:00 to 7:30). Why do we need infinite computational power, based on mathematics & logic, to figure out what happens in a tiny region of space? A fairly modern supercomputer needs a few hours to simulate 10^11 individual atoms, which is ~10^11 times fewer than the number of atoms in just 1 gram of iron (Fe)…
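A quick back-of-envelope check of that iron figure, using standard constants (Avogadro's number and iron's molar mass):

```python
# Atoms in 1 g of iron vs a 1e11-atom supercomputer simulation.
AVOGADRO = 6.022e23        # atoms per mole
FE_MOLAR_MASS = 55.85      # grams per mole of iron

atoms_in_one_gram = AVOGADRO / FE_MOLAR_MASS   # ~1.08e22 atoms
simulated = 1e11
ratio = atoms_in_one_gram / simulated

print(f"atoms in 1 g of Fe: {atoms_in_one_gram:.2e}")
print(f"the simulation is ~{ratio:.0e} times smaller")
```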

But think about simulating new worlds at the level of the individual atom. We could build such a simulation, and it would run on mathematics. We just need to squeeze the computational power out of the physical universe.

Are mathematics and physics converging?

PS. Everything above physics is understood. Chemistry deals with bigger sizes, and so on upwards to huge sizes… till the edge of the Universe, which we still don't understand. But maybe the Math will help there?


Internet of Things @ Stockholm, Copenhagen

This is an IoT story combined from what was delivered during Q1’15 in Stockholm, Copenhagen and Bad Homburg (Frankfurt).

We are stuck in the past

When I first heard Peter Thiel speak about our technological deceleration, it collided in my head with the technological acceleration described by Ray Kurzweil. It seems both gentlemen are right, and we [humanity] follow a multi-dimensional spiral pathway. Kurzweil reveals shorter intervals between spiral cycles. Thiel reveals that we are moving in a negative direction within the single current spiral cycle. Let's drill down into the current cycle.

Cars are getting more and more powerful (in terms of horsepower), but we don't move faster with cars. Instead we move slower, because of so many traffic lights, speed limits, traffic jams and gridlocks. It is definitely not cool to stare at a red light and wait. It is not cool either to brake because your green light ended. In Copenhagen the majority of people use bikes. It means they move at a speed of 20 kph or so… way slower than our modern cars would allow. Isn't that strange?

Aircraft are faster than cars, but air travel is slow too. We have strange connections. A trip from Copenhagen to Stockholm can take one full day because you have to fly Copenhagen-Frankfurt, wait, and then fly Frankfurt-Stockholm. That's how airlines work and suck money from your pocket for each air mile. Now add long security lines, weather precautions and weather cancellations of flights. Add union strikes. Dream about the decommissioned Concorde… gone 12 years already.

Smartphone computing power is at Apollo-mission levels, so what? The smartphone is mainly used to browse people and play games. At some moment we will start using it as a hub: to connect tens of devices, to process tons of data before submitting them to the Cloud (because Data will soon not fit into the Cloud). But for now we under-use smartphones. I am sick of charging every day. I am sick of all those wires and adapters. That's ridiculous.

Cancer, Alzheimer's and HIV are still not defeated. And there is no optimistic mid-term forecast yet.

We [thinking people] admit that we are stuck in the past. We admit our old tools are not capable of bringing us into the future. We admit that we need to build new tools to break into the future. Internet of Things is such a macro trend: building those new tools that will break us through into the future.

Where exactly are we stuck?

We are within the 3rd wave of IoT, called Identification, and at the beginning of the 4th wave, called Miniaturization. The two slightly overlap.

Miniaturization is evidence that Moore's Law is still working. Pretty small devices are capable of running the same calculations as not-so-old desktops. Connecting industrial machinery via a man-in-the-middle small device is on the rise. It is known as Machine-to-Machine (M2M). Two common scenarios here: a wire protocol – break into a dumb machine's wires and hook in there for readings and control; an optical protocol – read from analog or digital screens and do optical recognition of the information.

More words about the optical protocol in M2M. Imagine you are running a biotech lab. You have good old centrifuges, doing their layering job perfectly. But they are not connected, so you need to read from the display and push the buttons manually. You don't want to break into well-working machines, so you decide to read from their screens or panels optically, doing optical identification of the useful information. The centrifuges become connected: M2M without wires.

Identification is also on the rise in manufacturing. Just attach a small device with the right sensors to a dumb machine and identify/measure whatever you are interested in: vibration, smoke, volume, motion, proximity, temperature etc. Identification is on the rise in lifestyle too. It is the Wearables we put onto ourselves to measure various aspects of our activity. Have you ever wondered how many methods exist to measure temperature? Probably more than 10. Your (or my?) favorite wearables usually have thermistors (BodyMedia) and IR sensors (Scanadu).

Optical identification, as a powerful field within the Identification wave of the Internet of Things, deserves a special section. Continue reading.

Optical identification

Why is optical so important? Guess: at what bandwidth do our eyes transmit data into the brain?
It is definitely more than 1 megabit per second and maybe (maybe not) slightly less than 10 megabits per second. For you geeks, that is not-so-old Ethernet speed. With 100 video sensors we end up with 1+ Terabyte during business hours (~10 hours). That's a hell of a lot of data. It's better to extract useful information out of those data streams and continue with information rather than with data. Volume reduction could be 1000x, and much more if we deal with relevant information only. Real-time identification is vital for self-driving cars.
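The arithmetic behind those numbers, assuming each of the 100 sensors streams at the eye-like upper bound of ~10 Mbit/s:

```python
# Raw data volume for 100 optical sensors over a 10-hour business day.
SENSORS = 100
MBIT_PER_SECOND = 10          # assumed per-sensor bandwidth, eye-like upper bound
HOURS = 10

bits = SENSORS * MBIT_PER_SECOND * 1e6 * HOURS * 3600
terabytes = bits / 8 / 1e12
print(f"~{terabytes:.1f} TB of raw data per business day")

reduction = 1000              # extract information, drop the raw stream
reduced_tb = terabytes / reduction
print(f"~{reduced_tb * 1000:.1f} GB after {reduction}x information extraction")
```

At 10 Mbit/s per sensor that is several terabytes a day, comfortably "1+ Terabyte", and a 1000x reduction brings it down to gigabytes.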

All of this is very relevant even for already-accumulated media archives. How do you index a video library? How do you index an image library? It is work for machines: crawl and parse each frame and each sequence of frames to classify what is there, remember the timestamp and clip location, make a thumbnail and hand this information to the users of the media archives (other apps, services and people). Usually images are identified/parsed by Convolutional Neural Networks (CNN) or an Autoencoder + Perceptron. For various business purposes, a good way to start doing visual object identification right away is Berkeley's Caffe framework.
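The workhorse inside a CNN is the convolution itself: slide a small kernel over the image and produce a feature map. A toy sketch in plain Python (not a real Caffe pipeline), using a vertical-edge kernel on a tiny synthetic image:

```python
# The core CNN operation: 2-D convolution of an image with a small kernel.
def conv2d(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    return [
        [
            sum(image[i + di][j + dj] * kernel[di][dj]
                for di in range(kh) for dj in range(kw))
            for j in range(out_w)
        ]
        for i in range(out_h)
    ]

# Vertical-edge kernel: responds where brightness changes left to right.
kernel = [[-1, 0, 1],
          [-1, 0, 1],
          [-1, 0, 1]]

# Toy image: dark half on the left, bright half on the right.
image = [[0, 0, 0, 9, 9, 9] for _ in range(6)]

for row in conv2d(image, kernel):
    print(row)
```

The feature map lights up exactly at the dark-to-bright boundary; a real CNN learns thousands of such kernels instead of hand-picking them.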

Ever heard about DeepMind? They are not on Kaggle today; they were there much earlier. One of them, Volodymyr Mnih, won a prize in early 2013. DeepMind invented some breakthrough technology and was bought by Google for $400 million (Facebook was another potential buyer of DeepMind). So what is interesting about them? Well, the acquisition was conditional on Google not abusing the technology; a special Ethics Board was set up at Google to validate the use of DeepMind technology. We could try to figure out what their secret sauce is. All I know is that they went beyond dumb predefined machine learning by applying more neuroscience, which unlocked learning from one's own experience, with nothing predefined a priori.

Volodymyr Mnih was featured in a recent (at the moment of this post) issue of Nature magazine, with an affiliation to DeepMind in the references. Read what they did: they built a neural network that learns game strategy, ran it on old Atari games and outperformed human players on 43 games!! It is a CNN with a time dimension (four chronological frames given as input). Besides the time dimension, another big difference from a classic CNN is the delayed-rewards learning mechanism, i.e. a true strategy built from your previous moves. The algorithm is called Deep Q-learning, and the entire network is called a Deep Q-learning Network (DQN). It is only a question of time before DQN can handle more complicated graphical screens than old Atari. They have tried Doom already. Maybe StarCraft is next. And soon it will be business processes and workflows…
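The update rule under DQN is classic Q-learning; stripped of the deep network, it fits in a few lines. A toy sketch on a hypothetical 5-state chain (not Atari, just an illustration): the only reward sits at the rightmost state, so the agent faces exactly the delayed-reward credit-assignment problem, and the update bootstraps value backwards move by move. DQN replaces the table below with a convolutional network.

```python
import random

# Tabular Q-learning on a toy 5-state chain with a single delayed reward.
N_STATES, ACTIONS = 5, [0, 1]          # action 0 = left, 1 = right
ALPHA, GAMMA = 0.5, 0.9                # learning rate, discount factor

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    nxt = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

random.seed(0)
for _ in range(300):                   # episodes with a random behavior policy
    s, done = 0, False
    while not done:
        a = random.choice(ACTIONS)     # explore uniformly (Q-learning is off-policy)
        nxt, r, done = step(s, a)
        # Q-learning update: bootstrap from the best next action.
        best_next = 0.0 if done else max(Q[(nxt, b)] for b in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = nxt

greedy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)]
print(greedy)   # learned greedy policy: always move right
```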

Those who subscribe to Nature, log in and read the main article, especially the Methods part. Others could check out the reader-friendly New Yorker post. Pay attention to the Nature link there; you might be lucky enough to access the Methods section on the Nature site without a subscription. Check out the DQN code in Lua, DQN + Caffe, and the DQN/Caffe ping-pong demo.

Who eats whom?

All right, that covers the importance of optical identification; time to switch back to the high level and continue on the Internet of Things as a macro trend, at global scale. Many of you are used to the statement that Software is eating the World. That's correct in two respects: hardware flexibility is being shifted to software flexibility, and fabricators are making hard goods from digital models.

Shifting flexibility from hardware to software is a huge cost reduction in maintenance and reconfiguration. The evidence of hardware being eaten by software is all those SDX, Software Defined Everything: SDN aka Software Defined Networking, SDR aka Software Defined Radio, SDS aka Software Defined Storage, and so on for the Data Center etc. The Tesla car is a pretty software-defined car.

But this World has not even been eaten by hardware yet! Miniaturization of electric digital devices allows the Hardware to eat the World today and tomorrow. The penetration and reach of devices into previously inaccessible territories is stunning. We install stationary identification devices (surveillance cameras, weather sensors, industrial meters etc.) and launch movable devices (flying drones, swimming drones, balloons, UAVs, self-driving cars, rovers etc.). Check out the excellent hardware trends for 2015. Today we put plenty of remora devices onto our cars and ourselves. Further miniaturization will allow us to take devices inside ourselves. The evidence is that Hardware is eating the World.

Wait, there are also fabricators and nanofactories, producing hard goods from 3D models! 3D printed goods and the 3D printed hamburger are evidence of Software directly eating the World. The conclusion, then, could be that Software is eating the World previously eaten by Hardware, while Hardware is eating the rest of the World at a higher pace than Software is eating it via fabrication.

Who eats whom? Second pass

Things are not so straightforward. We [you and me] are stuck in the silicon world. Those ruminations are true for electrical/digital devices and technologies, but things are not limited to digital and electrical. The movement of biohackers can't be ignored. Those guys are doing garage bio experiments on 5K equipment, exactly as Jobs and Woz did electrical/digital experiments in their garage at the birth of the PC era.

Biohackers are also eating the World. I am not talking about the standard boring initiation [of a biohacker] of making something glow… There are amazing achievements. One of them is night vision. The electrical/digital approach to night vision is an infrared camera, a cooler and an analog optical picture into your eyes, or a radio scanner and analog/digital reconstruction of the scene for your eyes. The bio approach is an injection of Chlorin e6 drops into your eyes. With the aid of Ce6 you could see in darkness at a range of 10 to 50 meters. Though there is some controversy around that Ce6 experiment.

The new conclusion for the “Eaters Club” is this:

  • Software is eating the world previously eaten by Hardware
  • Hardware is eating the rest of the World, a much bigger part than has already been eaten, at a high pace
  • Software is also eating the world slowly, through fabrication and nanofactories
  • Biohackers are eating the world, ignoring both Hardware & Software eaters

Will the convergence of hardware and bio happen as it happened with software and hardware? I bet yes. For remote devices it could be very beneficial to take energy from the ambient environment, which could potentially be implemented via biological mechanisms.


Blending it all together

Time to put it all together and emphasize the practical consequences. Small and smaller devices are needed to wrap an entire business (machines, people, areas). Many devices are needed: 50 billion by 2020. Networking is needed to connect those 50 billion devices. Data flow will grow from the 50 billion devices and within the network. The Data Gravity phenomenon, when data attracts apps, services and people to itself, will become more and more observable. Keep reading for details.

Internet of Things is a sweet spot at the intersection of three technological macro trends: Semiconductors, Telecoms and Big Data. All three parts work together, but evolve at different paces. That leads to new rules of 'common sense' emerging within IoT.

Remote devices need networking, good networking. And we have an issue, which will only get worse. The pace of evolution for semiconductors is 60% annually, while the pace of evolution of networks is 50%. The pace of evolution of storage technology is even faster than 60% annually. It means that, over time, newly acquired data will fit into the network less and less [fewer chances for data to get into the Cloud]. It means that more and more data will be left beyond the network [and beyond the Cloud].
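The compounding effect of that 60% vs 50% gap is easy to underestimate; a quick check over a decade shows the share of fresh data the network can carry shrinking year after year:

```python
# Compounding gap: data volume grows ~60%/year, network capacity ~50%/year.
data, network = 1.0, 1.0
for year in range(1, 11):
    data *= 1.60
    network *= 1.50
    print(f"year {year}: network carries {network / data:.0%} of newly acquired data")
```

After ten years the network can move only about half of each year's new data; the rest stays off-Cloud.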

Off-the-Cloud data must be handled in place, at or near the location of acquisition. That means huge growth in Embedded Programming. All those small and smaller devices will have to acquire, store, filter, reduce and sync data. It is Embedded Programming with an OS and without an OS. It is distributed and decentralized programming. It is programming of dynamic mesh networks. It is connectivity from device to device without a central tower. It is a new kind of cloud programming, closest to the ground, called Fog. Hence Fog Programming, Fog Computing. Dynamic mesh networks, plenty of DSP, and potentially applicable distributed technologies for a business-logic foundation, such as BitTorrent, Telehash and Blockchain. Interesting times in Embedded Programming are coming. And this is just the Internet of Things Miniaturization phase. Add smart sensing on those P2P-connected small and smaller devices in the Fog, and the Internet of Things Identification phase will be addressed properly.
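The Fog pattern in miniature: acquire locally, reduce locally, and sync only a compact summary upstream instead of shipping every raw sample to the Cloud. A hypothetical sketch (all names and values are made up for illustration):

```python
# Fog-computing sketch: the edge device keeps the raw stream to itself
# and ships only aggregates plus a capped list of interesting outliers.
from statistics import mean

def acquire(n=1000):
    """Stand-in for a sensor: raw temperature-like samples."""
    return [20.0 + (i % 50) * 0.01 for i in range(n)]

def reduce_locally(samples, threshold=20.3):
    """Local filtering/reduction before syncing to the Cloud."""
    outliers = [s for s in samples if s > threshold]
    return {
        "count": len(samples),
        "mean": round(mean(samples), 3),
        "max": max(samples),
        "outliers": outliers[:10],     # cap what we ship upstream
    }

summary = reduce_locally(acquire())
print(f"raw samples: {summary['count']}, shipped summary fields: {len(summary)}")
```

A thousand raw samples collapse into a four-field summary; that ratio is the whole point of handling off-the-Cloud data in place.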

The Reference Architecture of IoT is seven-layered (because 7 is a lucky number?).


We are building the new tools that we will use to build our future. We're doing it through digitization of the World. Everything physical becomes connected and reflected into its digital representation. Don't overfocus on Software; think about Hardware. Don't overfocus on Hardware; think about Bio. Expect convergence of software-hardware-bio as the most stable and eco-friendly foundation for those 50 billion devices by 2020.

Recall Peter Thiel and today's business frustrations. With a digitized, connected World we will turn the negative direction within the current spiral cycle into a positive one. And of course we will continue with long-term acceleration. The future looks exciting.

Music for reading and thinking: from the near future, Blade Runner, Los Angeles 2019


A Story behind IoE @ THINGS EXPO

This post is related to the published visuals from my Internet of Everything session at THINGS EXPO in June 2014 in New York City. The story is relevant to the visuals, but there is no firm affinity to particular imagery. By now the story is more of a stand-alone entity.

How many things are connected now?

Guess how many things (devices & humans) are connected to the Internet? And guess who knows? The one who produces the routers that move your IP packets across the global web: Cisco. Just navigate to http://newsroom.cisco.com/ioe and check the counter in the right top corner. The counter doesn't look beautiful, but it's live, it works, and I hope it will continue to work and report the approximate number of connected things on the Internet. Cisco predicts that by 2020 the Internet of Everything has the potential to connect 50 billion things. You can check for yourself whether the counter is tuned to show 50,000,000,000 on the 1st of January 2020…

Internet of Everything is next globalization

Good old globalization was described in The World is Flat. With the rise of smartphones with local sensors (GPS, Bluetooth, Wi-Fi), the flatness of the world has been challenged. Locality unflattened the world. New business models emerged as "the power of local". The picture got mixed: on one hand we see the same burgers, Coca-Cola and blue jeans everywhere, consumed by many; on the other hand we already consume services tailored to locality. Even hard goods are tailored to locality, such as cars for Alaska vs. Florida. Furthermore, McDonald's proposes a locally augmented/extended menu, and Coca-Cola wraps its bottles with locally meaningful images.

Location itself is insufficient for the next big shift in business and lives. Context is the breakthrough personalizer. And that personal experience is achievable via more & smaller electronics, broadband networks without the roaming burden, and analytics from Big Data. The new globalization is all about personal experience, everywhere, for everyone.

Experience economy

Today you have to take your commodities, together with manufactured goods, together with services, bring them all onto the stage and stage a personal experience for the client. It is called the Experience Economy. Nowadays clients/users want experiences: repeatable experiences like in Starbucks, a lobster restaurant, a soccer stadium or a taxi ride. I already have a post on Transformation of Consumption. The healthcare industry is one of the early adopters of IoT, hence it deserves a separate mention: there is a post on Next Five Years of Healthcare.

So you have to get prepared for higher prices… That is the cost of staging a personal experience: a very differentiated offering at a premium price. That's the economic evolution. Just stick to it and think how to fit in with your stuff. Augment business models correspondingly. Allocate hundreds of MB (or soon GB) for user profiles. You will need to store a lot about everybody to be able to personalize.

Remember that it's not all about the consumer. There are many things around the consumer. They are part of the context, the service, the ecosystem. Count them in as well. People use those helper things [machines, software] to improve something in a business process or in lifestyle: cost, quality, time or emotions. Whatever it is, the interaction between people, and between people and machines, is crucial for proper abstraction and design for the new economy.

Six Webs by Bill Joy

Creator of Berkeley Unix, creator of the vi editor, co-founder of Sun Microsystems, partner at KPCB – Bill Joy – outlined six levels of human-human, human-machine and machine-machine interaction. That was about 20 years ago.

  • Hear – human & intimate device like phone, watch, glasses.
  • Near – human & personal while less intimate device like laptop, car infotainment.
  • Far – human & remote machines like TV panels, projections, kiosks.
  • Weird – human-machine via voice & gesture interfaces.
  • B2B – machine-machine at high level, via apps & services.
  • D2D – machine-machine as device to device, mesh networks.

About 10 years ago Bill Joy reiterated the Six Webs. He pointed to "The Hear Web" as the most promising and exciting for innovation.

“The Hear Web” is anatomical graphene

The human body has been anatomically the same for hundreds of years. Hence the ergonomics of wearables and handhelds is predefined. Bracelets, wristwatches and armbands are the gadgets we can wear on our arms for now. The difference is in technology: earlier it was mechanical, now it is electrical.


We are still not there with human augmentation to speak about under-skin chips… but that stuff is being tested already… on dogs & cats. A microchip with information about rabies vaccination is put under the skin. Humans also pioneer some things, but it is still not mainstream enough to talk much about.

For sure "The Hear Web" has been a breakthrough in recent years. The evolution of smartphones was amazing. The emergence of wrist-sized gadgets was pleasant. We have yet to get clarity on what will happen with glasses. Google experiments a lot, but there is a long way to go until the gadget is polished. That's why Google experiments with contact lenses: because Glass still looks awkward…

The brick design of the touch smartphone is not the true final one. I've figured out a significant issue with the iPhone design. LG is experimenting with the bendable Flex, but that's probably not a significantly better option. High hopes are on graphene. Nokia sold its plastic phone business to Microsoft, as Nokia got a multi-billion grant to research graphene wearables. Graphene conducts electricity well and is highly durable, flexible and transparent. It is much better for the new generation of anatomically friendly wearables.

BTW there will probably be no space for full-blown HTML5/CSS3/JavaScript browsers on those new gadgets. Hence forget about SaaS and think about S+S, or client-server, which is called client-cloud nowadays. The programming language could be JavaScript, as it already works on small hardware, without fat browsers running spreadsheets. Check out Tessel. The pathway from the current medium between gadgets & clouds is: smartphone –> Raspberry Pi –> Arduino.


D2D stands for Device-to-Device. There must be standards. High hopes are on Qualcomm. They are a respected chipset & patents maker. In recent years they have proposed AllJoyn, an open source approach for connecting things. All common functionality, such as discovery/onboarding, Wi-Fi comms and data streaming, is to be standardized and adopted by the developer community.

The AllSeen Alliance is an organization of supporters of the open source initiative for IoT. It is good to see names like LG, Sharp, Haier, Panasonic and Technicolor (Thomson) there as premier members, and Cisco, Symantec and HTC as community members. And it is really nice to see one of the exemplars of Wikinomics – Local Motors!

For sure Google will try to push Android onto as many devices as possible, but Google must understand that they are players in plastic gadgets. It's better to invest money into hardware & graphene companies and support the alliance with money and authority. IoT needs standards, especially at the D2D/M2M level.

How to design for new webs?

If you know how, then go ahead. Otherwise, design for personal experience. Internet of Everything includes semiconductors, telecoms and analytics from Big Data.



Assuming you are in the software business, let semiconductors continue with Moore's Law and let telecoms continue with Metcalfe's Law, while you concentrate on Big Data to unlock the analytics potential, for context handling, for staging personal experience. Just consider that Metcalfe's Law can be spread across human users and machines/devices.
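Metcalfe's Law values a network by its possible pairwise connections, n(n-1)/2, and nothing stops "n" from counting devices as well as people. A tiny illustration of how fast that number grows:

```python
# Metcalfe's Law: network value scales with the number of possible
# pairwise connections, n * (n - 1) / 2.
def metcalfe_connections(n):
    return n * (n - 1) // 2

for n in (10, 1_000, 50_000_000_000):   # a team, a town, 50B IoT devices
    print(f"n = {n:,}: {metcalfe_connections(n):,} possible connections")
```

At 50 billion connected things the connection count is on the order of 10^21, which is why device-to-device networking matters as much as device-to-cloud.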

Start the design of the Six Graphs of Big Data from the Five Sources of Big Data. The relation between graphs and sources is many-to-many. Blending of the graphs is not trivial; look into Big Data Graphs Revisited. A conceptualization of the analytics pipeline is available in Advanced Analytics, Part I. The most interesting graphs are Intention & Consumption, because the first is a plan and the second is a fact. When they begin to match, your solution begins to rock. Write it down and follow it: data is the next currency. 23andMe and Uber log so much data beyond the visible service you see and consume…

Where are we today?

There are five clear waves of IoT. Some of those waves overlap, especially around ubiquitous identification of people or things, indoors and outdoors. If an object is big enough to be labeled with an RFID tag or a visual barcode, then it is easy. But small objects are labeled with neither a radio chip nor an optical code. No radio chip because it does not make sense money-wise: e.g. bottles/cans of beer are not labeled because it's too expensive per item. Pallets of beer bottles are labeled for sure, while a single bottle is not. There is no optical code either, so as not to spoil the design/brand of the label. Hence there is a need for an alternative identification method: optical, via image recognition.

The third wave includes image recognition, which is not new but is still tough today. Google has trained the Street View brain to recognize house numbers and car plates at such a high level of accuracy that they could crack captchas now. But you are not Google, and you will get 75-78% with OpenCV (properly tuned) and 79-80% with deep neural networks (if trained properly). Building the training set for deep learning is a PITA. You will need to go to each store & kiosk and take pictures of the beer bottles under different light conditions, rotations, distances etc. Some footage can be made in the lab (like Amazon shoots products from 360 degrees), but plenty of the work is your own.
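Part of that training-set pain can be softened by augmentation: synthesizing brightness and rotation variants from each hand-shot base photo. A toy pure-Python sketch (real pipelines would use OpenCV or PIL; the tiny "image" here is just an illustration):

```python
# Toy data augmentation: multiply one base photo into many variants
# under different brightness levels and 90-degree rotations.
def adjust_brightness(img, factor):
    return [[min(255, round(px * factor)) for px in row] for row in img]

def rotate90(img):
    # Clockwise 90-degree rotation of a nested-list image.
    return [list(row) for row in zip(*img[::-1])]

def augment(img, brightness=(0.6, 1.0, 1.4), rotations=4):
    variants = []
    for factor in brightness:
        rotated = adjust_brightness(img, factor)
        for _ in range(rotations):
            variants.append(rotated)
            rotated = rotate90(rotated)
    return variants

base = [[10, 200], [60, 130]]           # a 2x2 stand-in for a bottle photo
print(len(augment(base)))               # 3 brightness levels x 4 rotations = 12
```

Twelve samples from one photo still won't replace shooting in real stores and kiosks, but it stretches every shot you do take.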



The fourth wave is about total digitization of the world; the newer world will then work with digital things via telepresence & teleoperations. Hopefully we will have dispensed with all those power adapters and wires by that time. "Software is eating the World." All companies become software companies. Probably you are comfortable with digital music (both consuming and authoring), digital publishing, digital photos and digital movies. But you could have concerns with digital goods, where you pay for the 3D model and print it on a 3D printer. While the atomic structure of the printed goods is different, your concern is valid; but as soon as the atomic structure is identical to [or even better than] what the old original good has, your concern becomes groundless. Read more in Transformation of Consumption.

With 3D printing of hard goods it's more or less understandable. Let's switch to 3D printed food. Modern Meadow printed a burger a year ago. It cost $300K, approximately as much as Sergey Brin (Googler) invested into Modern Meadow. Surprised? Think about printing the newest or personal vaccines and so forth…

Who is IoT? Who isn’t?

Is Uber IoT or not? With human drivers it is not. When human-driven cabs are substituted by self-driving cabs, then Uber will become IoT. There is an excellent post by Tim O'Reilly about the Internet of Things & Humans. Box.com CEO Aaron Levie tweeted: "Uber is a $3.5 billion lesson in building for how the world *should* work instead of optimizing for how the world *does* work." IoT is not just more data [though Red Hat said it is]; IoT is how this world should work.

How to become IoT?

  • Yesterday it was from sensors + networks + actuators.
  • Today it becomes sensors + networks + actuators + cloud + local + UX.
  • Tomorrow it should be sensors + networks + actuators + cloud + local + social + interest + intention + consumption + experience + intelligence.

Next 3000 days of the Web

It was a vision for 5,000 days, but today only 3,000 days are left. Check it out.

Next 6000 days of the Web

Check out There Will Be No End of the World. We will build a web so big and smart that we, as humans, will prepare the world for the phase shift. Our minds are limited. Our bodies are weird: they survive in a very narrow temperature range; they are afraid of radiation and gravity. We will not be able to go into deep space to continue the reverse engineering of this World. But we are capable of creating the foundation for a smarter intelligence, who could get there and figure it out. Probably we wouldn't even grasp what it was… But today the IoT pathway brings better experiences, more value, more money and more emotions.


Let's check Cisco's Internet of Things counter. ~300,000 new things have connected to the Internet while I wrote this story.


Mobile Programming is Commodity

This post is about why programming for smartphones and tablets is a commodity.

Mobiles are no longer a novelty.

Mobiles are substituting for PCs. As we programmed in VB and Delphi 15 years ago, the same way we program in Objective-C and Java today. The adoption rate for the cell phone as a technology (in the USA) is the fastest of any technology, and the scale of adoption surpassed 80% in 2005. Smartphones are being adopted at the same pace, surpassing 35% in 2011, just several years after the iPhone revolution happened in 2007. Go check out the evidence from the New York Times since 2008 for cell phones, the evidence from Technology Review since 2010 for smartphones, and more details from Harvard Business Review on accelerated technology adoption.

Visionaries look further. O’Reilly.

The list of the hottest conferences by direction from the visionary O'Reilly:

  • BigData
  • New Web
  • SW+HW
  • DevOps

BigData still matters, matching Gartner's "peak of inflated expectations". Strata, Strata Rx (the healthcare flavor), Strata Hadoop. http://strataconf.com/strata2014 Tap into the collective intelligence of the leading minds in data—decision makers using the power of big data to drive business strategy, and practitioners who collect, analyze, and manipulate data. Strata gives you the skills, tools, and technologies you need to make data work today—and the insights and visionary thinking O'Reilly is known for.

JavaScript got out of the web browser and penetrated all domains of programming, with expectations of, and progress for, HTML5. The Web 2.0 conference was abandoned; Fluent was created. Emerging technologies for the new Web Platform and new SaaS. http://fluentconf.com/fluent2014 O'Reilly's Fluent Conference was created to give developers working with JavaScript a place to gather and learn from each other. As JavaScript has become a significant tool for all kinds of development, there's a lot of new information to wrap your head around. And the best way to learn is to spend time with people who are actually working with JavaScript and related technologies, inventing ways to apply its power, scalability, and platform independence to new products and services.

"The barriers between software and physical worlds are falling." "Hardware startups are looking like the software startups of the previous digital age." The Internet of Things has a longer cycle (according to Gartner's hype cycle), but it is coming indeed: connected machines, machine-to-machine, smart machines, embedded programming, 3D printing and DIY to assemble the machines. Solid. http://solidcon.com/solid2014 The programmable world is creating disruptive innovation as profound as the Internet itself. As barriers blur between software and the manufacture of physical things, industries and individuals are scrambling to turn this disruption into opportunity.

DevOps & Performance are popular. Velocity. Most companies with outward-facing dynamic websites face the same challenges: pages must load quickly, infrastructure must scale efficiently, and sites and services must be reliable, without burning out the team or breaking the budget. Velocity is the best place on the planet for web ops and performance professionals like you to learn from your peers, exchange ideas with experts, and share best practices and lessons learned.

Open Source matters more and more. Open Source is about sharing partial IP for free, according to Wikinomics. OSCON. http://www.oscon.com/oscon2014 OSCON is where all of the pieces come together: developers, innovators, business people, and investors. In the early days, this trailblazing O'Reilly event was focused on changing mainstream business thinking and practices; today OSCON is about how the close partnership between business and the open source community is building the future. That future is everywhere you look.

Digitization of content continues. TOC.

Innovation in leadership and processes. Cultivate.

Visionaries look further. GigaOM.

The list of conferences by direction from GigaOM:

  • BigData
  • UX
  • IoT
  • Cloud

BigData. STRUCTURE DATA. http://events.gigaom.com/structuredata-2014/ From smarter cars to savvier healthcare, today’s data strategies are driving business in compelling new directions.

User Experience. ROADMAP. http://events.gigaom.com/roadmap-2013/ As data and connectivity shape our world, experience design is now as important as the technology itself. It covers (and will cover) ubiquitous UI, wearables and HCI with all those new smarter machines (3D printed & DIY & embedded programming).

Internet of Things. MOBILIZE. http://event.gigaom.com/mobilize/ Five years ago, Mobilize was the first conference of its kind to outline the future of mobility after Apple’s iPhone exploded onto the scene. We continue to track the hottest early adopters, the bold visionaries and those about to disrupt the ecosystem. We hope that you will join us at Mobilize and be the first in line to ride this next wave of innovation. This year we’ll cover: The internet of things and industrial internet; Mobile big data and new product alchemy; Wearable devices; BYOD and mobile security.

Cloud. STRUCTURE. http://event.gigaom.com/structure/ Structure 2013 focused on how real-time business needs are shaping IT architectures, hyper-distributed infrastructure and creating a cloud that will look completely different from everything that’s come before. Questions we answered at Structure 2013 included: Which architects are choosing open source solutions, and what are the advantages? Will to-the-minute cloud availability be an advantage for Azure? What are the lessons learned in building a customized enterprise PaaS? Where is there still space to innovate for next-generation leaders?


To be a strong programmer today, you have to be able to design and code for smartphones and tablets, just as your father and mother did 20 years ago for PCs and workstations. Mobile programming is shaped by the trends described in Mobile Trends for 2014.

To be a strong programmer tomorrow, you have to tame the philosophy, technologies and tools of Big Data (despite Gartner’s prediction of inflated expectations), Cloud, Embedded and the Internet of Things. There will be much less Objective-C, but probably still plenty of Java; the future seems better suited to Android developers. IoT is positioned last in the list because its adoption rate is significantly lower than it was for cell phones (after the 2000 dotcom burst).

Tagged , , , , , ,

Transformation of Consumption

Some time ago I posted on the Six Graphs of Big Data and mentioned the Consumption Graph there. Then I presented the Five Sources of Big Data at a data-aware conference, mentioned how retailers track people (time, movement, sex, age, goods etc.) and felt keen interest from the audience in the Consumption data source. Since that time I have thought a lot about consumption ‘as is’. Recently I paid attention to glimpses of the impact on the old model from micro-entrepreneurs who 3D-print at home and sell on Etsy. Today I want to reveal more about all of that: consumption and its transformation. It will be much less about Big Data and much more about the mid-term future of economics.

The Experience Economy

The experience economy was described 15 years ago. It was identified as the next economy following the agrarian economy, the industrial economy, and the most recent service economy. Here is a link to the 1998 Harvard Business Review article, “Welcome to the Experience Economy”. The authors did an excellent job of predicting the progression of economic value. Experience is a real thing, like hard goods. Recall your feelings when you return to your favorite restaurant, where you order without looking at the menu. You go there to repeat the experience. Hence modern consumption is staged as experience, built from services and goods. Personal experience is even better. Services and goods without staging are getting weaker… Below is a diagram of the progression of economic value.


It would be useful to compare the transformation by multiple parameters such as model, function, offering, supply, seller, buyer and demand. The credit goes to HBR; I have improved the readability of the table in comparison to theirs. There is a clear trend towards experience and personalization. Pay attention to the rightmost column, because it will be addressed in more detail later in this post. To make it more familiar and friendly, I’ll appeal to your memories again: recall your visits to Starbucks or McDonald’s. What is the driving force behind your choice? How have you built up that internal feeling over past visits? Multiple other examples are available, especially from the leisure and hospitality industry. Pioneers of the new economics are there already; others are joining the league. And yes… people are moving towards those fat guys from the WALL-E movie…


The Invisible Economy

Staging experience is not enough. Starbucks provides multiple coffee blends, Apple provides multiple gadgets and even colors. But it is not enough. I am an example. I need a curved phone (suitable for my butt shape, because I keep it in the back pocket). Furthermore, I need a bendable phone, friendly for sitting when it’s in the pocket. While the majority of manufacturers are ignoring this, LG is planning something. Let’s see what it will be; there is evidence of a curved and flexible one. But I am not alone with my personal [strange?] wishes. Others are dreaming of other things. The big guys may not be nimble enough to catch the pace of transforming and accelerating demand. It’s cool to be able to select colors for New Balance 993 or 574, but it’s not enough. My foot is different from yours; I need more exclusivity (towards usability and sustainability) than just colors. Why not use some kind of digitizer to scan my foot and then deliver my personal shoes?

“The holy place is never empty” is my loose translation of a Ukrainian proverb. It means that an opportunity overlooked by the current players is fulfilled by others, by newcomers. There is a rising army of craftsmen and artists producing at home (manually or on 3D printers) and selling on Etsy. Fast Company has a great insight on that: “… Micro-entrepreneurs are doing something so nontraditional we don’t even know how to measure it…” There are bigger communities, like Ponoko. It is a new breed of doers, called fabricators. And Ponoko is a new breed of environment, where they meet, design, make, sell, buy and interact. The conclusion here is straightforward: our demand is being fulfilled by new players, and in a different way than we are used to. You can preview a 3D model or simulation from a thousand miles away and have your thing delivered to your door. You can design your own thing. They can design for you, and so on. And this economy is growing. Hey, big guys, it’s a threat to you!

The most exciting part of the economic transformation is the foreseeable death of banks. Sometimes banks are good, but in the majority of modern cases they are bad. We don’t need Wells Fargo and similar dinosaurs. Amazon, Google, Apple, PayPal could perform the same functions more efficiently and do less evil to people. There are emerging alternatives [to banks] for funding initiatives and exchanging funds with each other. Kickstarter and JumpStartFund are on the rise, even for very serious projects like Hyperloop. Those things are still small (that’s why the section is called Invisible), but they are gaining momentum and will hit the overall economy soon and hard, in less than five years.

3D Printing

Here we are, taking digital goods and printing them into hard goods. Still an early stage, but very promising and accelerating. The MakerBot Replicator costs $2,199, which is affordable for personal use. There is a model priced at $2,799, which still qualifies for personal use. What does it mean for consumption? The world is being digitized. We are creating a digital copy of our world; everything is digitized and virtualized. Then the digital can be realized in the physical (a hard good) on a 3D printer. There are very serious 3D printers by Solid Concepts, capable of printing a metal gun that survives a 500-round torture test. As soon as the internal structure is recreated at the molecular level and we achieve identical material characteristics, the only question left is cost reduction for the technology. As soon as 3D printing is cheap, we are there, in the new exciting economy.

Let’s review other, more useful applications of the technology than guns. We eat to live, entertain ourselves to live well, and we cure diseases (which sometimes happen because of lifestyle and food). So, food first. 3D-printed meat is already a reality. Meat is printed on a bioprinter. Guess who funded the research? Sergey Brin, the googler. Modern Meadow creates leather and meat without slaughtering animals. Next is health. The problem of waiting lists for organ transplants is ending: your organs will be 3D-printed. It is better than a transplant because there are no immune risks anymore. And finally, drugs. Recall pandemic situations with flu. Why should you have to wait a week for a vaccine? You could 3D-print your drugs instantly, as soon as you download the digital model over the Internet. Downloaded and printed drugs are an additional argument for Personalized Medicine in my recent post on the Next Five Years of Healthcare. I assume that showing essential applications of the technology to the basic aspects of life, such as food, lifestyle and healthcare, is sufficient to start taking it [the technology] seriously. You can guess at other, less life-critical applications yourself.

4D Printing

3D printing is on the rise, but there is an even more powerful technology, called 4D printing. The fourth dimension is delayed in time: the printed object responds to common environmental triggers such as temperature or water, or to more specific ones such as a chemical. When the external impact is applied, the 3D-printed strand folds into a new structure; hence it uses its 4th dimension. It is very similar to protein folding. There are tools for designing 4D things. One of them is cadnano, for three-dimensional DNA origami nanostructures. It gives certainty about the stability of the designed structures. Another tool is Cyborg by Autodesk. It’s a set of tools for modeling, simulation and multi-objective design optimization. Cyborg allows the creation of specialized design platforms specific to domains from nanoparticle design to tissue engineering to self-assembling human-scale manufacturing. Check out this excellent introduction to self-assembly and 4D printing by Skylar Tibbits from MIT Media Lab:

Forecast [on Consumption]

We will complete the digitization of everything. This should be obvious to you at this stage. If not, then check out a slightly different view on what Kevin Kelly called The One. No bits will live outside of the one distributed self-healing digital environment. Actually it will be us, a digital copy of us. Data-wise it will be All Data together. The second reference goes to James Burke, who predicted the rise of PCs, in-vitro fertilization and cheap air travel back in 1973. Recently Burke admitted: “…The hardest factor to incorporate into my prediction, however, is that the future is no longer what it has always been: more of the same, but faster. This time: faster, yes, but unrecognisably different…” And I see it the same way: we are facing a different future than we are used to. It’s a bit scary, but on the other hand it is very exciting. In 30 years we will have nano-fabricators, which manipulate matter at the level of atoms and molecules to produce anything you want from dirt, air, water and cheap carbon-rich acetylene gas. As you may already sense, those ingredients are virtually free; hence production of goods by a fabricator is almost free. Probably food will be a bit more expensive, but also cheap. By the way, each fabber will be able to copy itself… from the same cheap ingredients. We will not need plenty of wood, coal, oil or gas for nanofabrication. This is good for ecology. But I think we will invent other ways to spoil the Earth.

The value will shift from equipment to the digital models of the goods. Advanced 3D (and 4D) models will not be free; the rest will be crowdsourced and available for free. Autodesk (not a new company, but one of the serious ones) is a pioneer there with the 123D apps platform. They are moving together with MakerBot. You can buy a MakerBot Replicator on the Autodesk site, and vice versa, you will get Autodesk software with a MakerBot you bought elsewhere. That’s how it all starts. In a few years it will take off at large scale. Then we will get a different economy, with much more personal, sustainable and sensational consumption.

It would be interesting to draw parallels with the creation of Artificial Intelligence, because by 2030 we should have the human brain simulated on a non-biological carrier. Or maybe we will be able to 4D- or 5D-print brains more powerful than human ones on a biological, but non-human, carrier? Stay tuned.

Tagged , , , , , , , , , , , , , ,

Next Five Years of Healthcare

This insight is relevant to all of you, your children and relatives. It is about health and healthcare. I feel confident envisioning the progress for five years, but cautious about guessing longer. Even the next five years seem pretty exciting and revolutionary. I hope you will enjoy the pathway.

We have problems today

I will not bind this to any country, hence American readers will not find Obamacare, ACO or HIE here. I will go globally as I like to do.

The old industry of healthcare still sucks. It sucks everywhere in the world. The problem is the uncertainty of our [human] nature. It’s a paradox: medicine is one of the oldest practices and sciences, but nowadays it is one of the least mature. We still don’t know for sure why and how our bodies and souls operate. The reverse engineering should continue until we gain complete knowledge.

I believe there were civilisations tens of thousands of years ago… but let’s concentrate on ours. It took many years to start studying ourselves in depth. Leonardo da Vinci made a breakthrough in anatomy in the early 1500s. The accuracy of his anatomical sketches is amazing. Why didn’t others draw at the same level of perfection? The first heart transplant was performed only in 1967, in Cape Town, by Christiaan Barnard. Today we are still weak at brain surgery, and even in our knowledge of how the brain works and what it is. Paul Allen significantly contributed to the mapping of the brain. The ambitious Human Genome Project was completed only in the early 2000s, with 92% of the genome sampled at 99.99% accuracy. Today there is no clear vision or understanding of what the majority of DNA is for. I personally do not believe in Junk DNA, and the ENCODE project confirmed it might be related to protein regulation. Hence there is still plenty of work to complete…

But even with the current medical knowledge, healthcare could be better. Very often the patient is admitted from scratch as a new one. Almost always the patient is discharged without proper monitoring of medication, nutrition, behaviour and lifestyle. There are no mechanisms, practices or regulations to make it possible. For sure there are some post-discharge recommendations and assignments to aftercare professionals, but it is immature and very inaccurate in comparison to what it could be. There are glimpses of telemedicine, but it is still very immature.

And finally, the healthcare industry, in comparison to other industries such as retail, media, leisure and tourism, is far behind in terms of consumer orientation. Even the automotive industry is more consumer-oriented than healthcare today. Economically speaking, there must be a transformation to the consumer-centric model. It is the same winning pattern across industries, and it [consumerism] should emerge in healthcare too. Enough about the current problems; let’s switch to the positive things, the technology available!

There could be Care Anywhere

We need care anywhere: underground in a diamond mine, in the ocean on board the Queen Mary 2, in a medical center or at home, in secluded places, or in a car, bus, train or plane.

There is a wireless network (from cell providers), there are wearable medical devices, and there is the smartphone as a man-in-the-middle to connect with the back-end. It is obvious that diagnostics and prevention could be improved, especially for chronic diseases and emergency cases (first aid, paramedics).

care anywhere

I have personally experienced two emergency landings, once while on board a six-hour flight, the second time while driving to another airport to pick up a colleague. The impact is significant. Imagine that 300+ people landed in Canada; then, according to Canadian law, all luggage was unloaded, moved to X-ray, then loaded again. We all lost a few hours because of somebody’s heart attack.

It could be prevented if the passenger had a heart monitor, a blood pressure monitor or other devices, and they triggered an alarm to take a pill, or to ask the crew for one, in time. In the best case all wearable devices are linked to the smartphone [it is often allowed to turn on Bluetooth or Wi-Fi in airplane mode]. Then the app would ring and display recommendations to the passenger.
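The alerting idea can be sketched as a simple threshold check on incoming readings. To be clear, everything below is a toy illustration: the vital names, safe ranges and messages are my own assumptions, not real medical limits or any vendor's API.

```python
# Toy sketch of wearable-to-smartphone alerting (hypothetical thresholds,
# not medical advice): each reading is checked against a safe range and
# the app raises a recommendation before a situation becomes an emergency.

# Hypothetical safe ranges per vital sign: (low, high)
SAFE_RANGES = {
    "heart_rate_bpm": (50, 120),
    "systolic_mmHg": (90, 150),
}

def check_vitals(readings):
    """Return a list of alert messages for out-of-range readings."""
    alerts = []
    for vital, value in readings.items():
        low, high = SAFE_RANGES[vital]
        if value < low:
            alerts.append(f"{vital} too low ({value}): notify the crew")
        elif value > high:
            alerts.append(f"{vital} too high ({value}): take medication now")
    return alerts

print(check_vitals({"heart_rate_bpm": 135, "systolic_mmHg": 140}))
```

In a real app the readings would stream in over Bluetooth and the thresholds would be personalized per patient; the logic, however, stays this simple at its core.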

4P aka Four P’s

The medicine should go Personal, Predictive, Preventive and Participatory. It will become so in five years.

Personal is already partially explained above. Besides consumerism, which is a social and economic aspect, there is a truly biological personal aspect. We all differ by millions of genetic variants. That biological difference does matter. It defines carrier status for illnesses, it is related to the risks of illnesses, it is related to individual drug response, and it uncovers other health-related traits [such as Lactose Intolerance or Alcohol Addiction].

Personal medicine is equivalent to Mobile Health, because you are in motion and you are unique. The single sufficiently smart device you carry with you everywhere is the smartphone. Other wearable devices are still not connected [directly to the Internet of Things]; hence you have to use them all with the smartphone in the middle.

The shift is from volume to value: from pay-for-procedures to pay-for-performance. The model becomes outcome-based. The challenge is how to measure performance: good treatment vs. poor bedside manner, poor treatment vs. good bedside manner, and so on.

Predictive is the pathway to healthcare transformation. As healthcare experts say, “the providers are flying blind”. There is no good integration and interoperability between providers, or even within a single provider. The only rational way to “open the eyes” is analytics: descriptive analytics to get a snapshot of what is going on, predictive analytics to foresee the near future and make the right decisions, and prescriptive analytics to understand even better the reasoning behind those future outcomes.

Why is there still no good interoperability? Why is there no wide HL7 adoption? How many years have gone by since those initiatives and standards? My personal opinion is that the current [and former] interoperability efforts are a dead end. The rationale is simple: if it were worth doing, it would already have been done. There might be something in the middle: providers will implement interoperability within themselves, but not at the scale of a state or country, or globally.

There are two reasons for “dead interop”. The first is business-related: why should I share my stuff with others? I spent on expensive labs and scans; I don’t want others to benefit from my investment in this patient’s treatment. The second is the breakthrough in genomics and proteomics. Only 20 minutes are needed to purify DNA from body fluids with the Zymo Research DNA Kit. A genome in 15 minutes for under $100 has been planned by Pacific Biosciences for this year. Intel invested 100 million dollars into Pacific Biosciences in 2008. Besides gene mechanisms, there are others, not related to DNA change, that are also useful for analysis, prediction and decision making per individual patient. [Read about epigenetics for more details.] And there is a third reason: Artificial Intelligence. We already classify with AI; very soon we will put much more responsibility onto AI.

Preventive is a very interesting transformation, because it is blurring the borders between treatment and behaviour/lifestyle/wellness, and between drugs and nutrition. It is directly related to chronic diseases and to post-discharge aftercare, even self-aftercare. To prevent readmission, the patient should take proper medication, adjust her behaviour and lifestyle, and consume special nutrition. E.g. diabetes patients should eat special sugar-free meals. There is a question: where do drugs end and where does nutrition start? What is Diet Coca-Cola, a first step towards drugs?

Pharmacogenomics is on the rise, enabling proactive steps into the future based on a known individual response to drugs. It is both predictive and preventive. It will become normal for mass universal drugs to start disappearing, while narrowly targeted drugs are designed. Personal drugs are the next step, when the patient is the foundation for an almost exclusive treatment.

Participatory is interesting in the way that non-healthcare organisations become related to healthcare. P&G produces sunscreens designed by skin type [at the molecular level], for older people and for children. Nestle produces dietary food. And recall that there are Johnson & Johnson, Unilever and even Coca-Cola. I strongly recommend investigating the PwC Health practice for insights and analysis.

Personal Starts from Wearable

The most important driver for the adoption of wearable medical devices is the ageing population. The average age of the population increases, while its mobility decreases. People need access to healthcare from everywhere, and at lower cost [for those who are retired]. Chronic diseases are related to the ageing population too. Chronic diseases require constant control, with physician intervention in case of high or low measurements. Such control is possible via multiple medical devices. Many of them are smartphone-enabled, with a corresponding application that runs and “decides” what to tell the user.

The glucose meter is much smaller now; here is a slick one from iBGStar. Heart rate monitors are available in plenty of choices. Fitness trackers and dietary apps make up the vast majority of [mobile health] apps in the stores. Wrist bands are becoming an element of lifestyle, especially the fashionably designed Jawbone Up. The BodyMedia triceps band is good for calorie tracking. Add a wireless weight scale… I’ve described gadgets and principles in the previous posts Wearable Technology and Wearable Technology, Part II. Here I’d like to single out the Scanadu Scout, measuring vitals like temperature, heart rate, oximetry [saturation of your hemoglobin], ECG, HRV, PWTT, UA [urine analysis] and mood/stress. Just put the appropriate gadgets onto your body, gather data, analyse it and apply predictive analytics to react or to prevent.


Personal is a Future of Medicine

If you think of all those personal gadgets and brick mobile phones as a sub-niche within medicine, then you are deeply mistaken, because medicine itself will become personal as a whole. It is a five-year transition from what we have to what should be [and will be]. The computer disappears, into the pocket and into the cloud. All pocket-sized and wearable gadgets will miniaturise, while cloud server farms will grow and run much smarter AI.

Each of us will become a “thing” within the Internet of Things. IoT is not a Facebook [that is too primitive]; it is the quantified and connected you, linked to the intelligent health cloud, and sometimes to physicians and other people [patients like you]. This will happen within the next 5-10 years, I think sooner rather than later. The technology changes within a few years. There were no tablets 3.5 years ago; now we have plenty of them and even new bendable prototypes. Today we are seeing the first wearable breakthroughs; imagine how they will advance within the next 3 years. Remember, we are accelerating; the technology is accelerating. Much more is to come, and it will change our lives. I hope it will transform healthcare dramatically. Many current problems will become obsolete via new emerging alternatives.

Predictive & Preventive is AI

Both are AI. Period. Providers must employ strong mathematicians, physicists and other scientists to create smarter AI. Google works on duplicating the human brain on a non-biological carrier. Qualcomm designs neuro chips. IBM demonstrated brainlike computing; their new computing architecture is called TrueNorth.

Other participatory healthcare providers [technology companies, ISVs, food and beverage companies, consumer goods companies, pharma and life sciences] must adopt a strong AI discipline, because all future solutions will deal with extreme data [even All Data], which is impossible to tame with the usual tools. Forget the simple business logic of if/else/loop. Get ready for massive grid computing by AI engines. You might need to recall all the math you were taught and multiply the effort 100x. [In case of a poor math background, get ready for 1000x effort.]

Education is a Pathway

Both patients and providers must learn genetics, epigenetics, genomics, proteomics and pharmacogenomics. Right now we don’t have enough physicians to translate your voluntarily made DNA analysis [by 23andme] into personal treatment. There are advanced genetic labs that take your genealogy and markers to calculate the risks of diseases. It should be simpler in the future. And it will go through education.

Five years is the time frame for a new student to become a new physician. Actually slightly more is needed [for residency and fellowship], but we could see the first observable changes in five years from today. You should start learning it all for your own needs right now, because you also must be educated to bring better healthcare to yourself!


Tagged , , , , , , , , , , , , , , , ,

Five Sources of Big Data

Some time ago I described how to think when you build solutions from Big Data in the post Six Graphs of Big Data. Today I am going to look in the opposite direction: where does Big Data come from? I see five distinctive sources of data: Transactional, Crowdsourced, Social, Search and Machine. All the details are below.

Transactional Data

This is good old data, most familiar and usual for geeks and managers. It’s the plenty of RDBMSes, running or archived, on premise and in the cloud. The majority of transactional data belongs to corporations, because the data was authored/created mainly by businesses. It was a golden era for Oracle and SQL Server (and some others). At some point the RDBMS technology appeared incapable of handling more transactional data, thus we got Teradata (and others) to fix the problem. But there was no significant shift in the way we work with those data sources. Data warehouses and analytic cubes are trending, but they have been used for years already. Financial systems/modules of enterprise architectures will continue to rely on transactional data solutions from Oracle or IBM.

Crowdsourced Data

This data source has emerged from an activity rather than from a type of technology. The phenomenon of Wikipedia confirmed that crowdsourcing really works. Much time has passed since Wikipedia’s adoption by the masses… We got other fine data sources built by the crowds, for example OpenStreetMap, Flickr, Picasa, Instagram.

Interesting things are happening with the rise of personal genetic testing (verifying DNA for millions of known markers via 23andme). This leads to public crowdsourced databases. More samples are available, e.g. in amateur astronomy. Volunteers do author useful data. The size of crowdsourced data is increasing.

What differentiates it from transactional/enterprise data? The price. Usually crowdsourced data is free to use, under one of the Creative Commons licenses. Often the motivation for creating such a data set is the digitization of our world, or making a free alternative to paid content. With the rise of nanofactories, we will see the growth of 3D models of every physical product. By using crowdsourced models we will print the goods at home (or elsewhere).

Social Data

With the rise of Friendster–>MySpace–>Facebook and then others (LinkedIn, Twitter etc.) we got a new type of data: Social. It should not be confused with Crowdsourced data, because its nature is completely different. Social data is the digitization of ourselves as persons and of our behavior. Social data complements Crowdsourced data very well. Eventually there will be a digital representation of everyone… So far social profiles are good enough for meaningful use. Social data is dynamic, and it is possible to analyze it in real time, e.g. by putting Tweets or Facebook posts through the Google Prediction API to grab emotions. I’m sure everybody intuitively understands this type of data source.
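The real-time emotion-grabbing idea can be shown with a toy stand-in. A production pipeline would call a trained model (such as the hosted prediction service mentioned above); here I fake it with a keyword-based tagger whose word lists are purely illustrative assumptions.

```python
# Toy stand-in for real-time emotion tagging of a stream of social posts.
# A real pipeline would call a trained classification model; the keyword
# lists below are illustrative assumptions only.

POSITIVE = {"love", "great", "awesome", "happy"}
NEGATIVE = {"hate", "awful", "terrible", "sad"}

def tag_emotion(post):
    """Classify a post as 'positive', 'negative' or 'neutral' by keywords."""
    words = set(post.lower().split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

stream = ["I love this phone", "terrible battery", "arrived today"]
print([tag_emotion(p) for p in stream])
```

The point is architectural rather than algorithmic: social posts arrive as a stream, and each one can be scored the moment it arrives.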

Search Data

This is my favourite. It is not obvious to many of you, but it is a really strong data source. Just recall how much you search on Amazon or eBay. How you search on wikis (not to be confused with Wikipedia). Quora gets plenty of search requests. StackOverflow is a good source of search data within Information Technology. There are intranet searches within Confluence and SharePoint. If those search logs are analyzed properly, the potential usefulness and business applications become clear. E.g. the Intention Graph and Interest Graph are related to search data.
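A minimal sketch of what "analyzed properly" could mean: aggregate query terms per user into a profile, which is a crude first approximation of an Interest or Intention Graph entry. The log format and all names here are made up for illustration.

```python
# Minimal sketch of mining search logs for interest signals: count query
# terms per user to approximate an Interest/Intention Graph entry.
# The log, field layout and user names are hypothetical.

from collections import Counter, defaultdict

search_log = [  # (user, query) pairs
    ("alice", "redis tutorial"),
    ("alice", "redis cluster setup"),
    ("bob", "3d printer filament"),
]

def build_interest_profiles(log):
    """Aggregate search terms into a per-user term frequency profile."""
    profiles = defaultdict(Counter)
    for user, query in log:
        profiles[user].update(query.lower().split())
    return profiles

profiles = build_interest_profiles(search_log)
print(profiles["alice"].most_common(1))  # 'redis' dominates alice's profile
```

Real systems would add stemming, entity matching and time decay, but even raw term counts already reveal intent, which is exactly why search data is kept behind the walls.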

There is a problem of “walled gardens” for search data… This problem is big, bigger than for social data, because public profiles are fully or partially available, while searches are kept behind the walls.

Machine Data

This is also my favourite. In the Internet of Things every physical thing will be connected. New things are designed to be connectable. Old things get connected via M2M. Consumers have adopted wearable technology. I’ve posted about it earlier; go to Wearable Technology and Wearable Technology, Part II.

The cost of data gathering is decreasing. The cost of wireless data transfer is decreasing. The bandwidth of wireless transfer is increasing dramatically. Fraunhofer and KIT completed a 100 Gbps transmission; it’s fourteen times faster than the most robust 802.11ac. The moral is: measure everything, just gather data until it becomes Big Data, then analyze it properly and operate proactively. Machine data is probably the most important data source for Big Data in the coming years. We will digitize the world and ourselves via devices. OpenStreetMap has got competitors; the fleet of eBees described the Matterhorn with millions of spatial points. More to expect from machines.

Tagged , , , , , , , , , , , , , , , , , , , , ,

Six Graphs of Big Data

This post is about Big Data. We will talk about the value and economic benefits of Big Data, not the atoms that constitute it [Big Data]. For the atoms you can refer to Wearable Technology or Getting Ready for the Internet of Things by Alex Sukholeyster, or just log the click stream… and you will get plenty of data, but it will be low-level, atom-level, and not very useful.

The value starts at the higher levels, when we use people’s social connections, understand their interests and consumption, know their movements, predict their intentions, and link it all together semantically. In other words, we are talking about six graphs: Social, Interest, Consumption, Intention, Mobile and Knowledge. Forbes mentions five of them in the Strategic Big Data insight. Gartner provided the report “The Competitive Dynamics of the Consumer Web: Five Graphs Deliver a Sustainable Advantage”; unfortunately it is a paid resource. It would be nice to look inside, but we can move forward with our own vision, then compare it to Gartner’s and analyze the commonality and variability. I foresee that our vision is wider and more consistent!

Social Graph

This is the most analyzed and discussed graph. It is about connections between people, and there is fundamental research about it, like Six Degrees of Separation. Since LiveJournal times (1999), the Social Graph concept has been widely adopted and implemented: Facebook and its predecessors for non-professionals, LinkedIn mainly for professionals, and then others such as Twitter and Pinterest. There is a good overview of Social Graph Concepts and Issues on ReadWrite, and a good practical review by one of the pioneers, Brad Fitzpatrick, called Thoughts on the Social Graph. His main point is the absence of a single graph that is both comprehensive and decentralized. That absence is a pain for integrations because of all the heterogeneous authentication schemes and “walled garden” issues.

Regarding implementation of the Social Graph, there is advice from successful implementers such as Pinterest: the official Pinterest engineering blog revealed how to Build a Follower Model from scratch. We can also look at the Social Graph from a totally different perspective: technology. Redis features a tutorial on how to build a Twitter clone in PHP and (of course) Redis. So the situation with the Social Graph is more or less established. Many build it, but nobody has solved the problem of a single, consistent, independent graph (probably built from the other graphs).
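To make the follower model concrete, here is a minimal in-memory sketch in Python. It is illustrative only, not Pinterest’s or Redis’s actual implementation (real systems keep these sets in a store like Redis); the class and method names are mine.

```python
from collections import defaultdict

class FollowerGraph:
    """A toy follower model: two directed indexes kept in sync."""

    def __init__(self):
        self.following = defaultdict(set)  # user -> users they follow
        self.followers = defaultdict(set)  # user -> users who follow them

    def follow(self, follower, followee):
        self.following[follower].add(followee)
        self.followers[followee].add(follower)

    def unfollow(self, follower, followee):
        self.following[follower].discard(followee)
        self.followers[followee].discard(follower)

    def mutual(self, a, b):
        # "friends" in the symmetric sense: each follows the other
        return b in self.following[a] and a in self.following[b]

g = FollowerGraph()
g.follow("alice", "bob")
g.follow("bob", "alice")
g.follow("carol", "alice")
print(g.mutual("alice", "bob"))        # True
print(sorted(g.followers["alice"]))    # ['bob', 'carol']
```

The design point is the same one the Pinterest post makes: you maintain both directions of the edge explicitly, because “who do I follow?” and “who follows me?” are both hot queries.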

Interest Graph

This is a representation of the specific things in which an individual is interested; read more about the Interest Graph on Wikipedia. It is the next hot graph after the Social one, and indeed it complements it: Social Commerce sees the Interest and Social Graphs together. People provide the raw data on their public and private profiles. Crawling and parsing that data, plus special analysis, can build an Interest Graph for each of you. Gravity Labs created a special technology for this, which they call the Interest Graph Builder; there is an overview (follow the previous link) and a demo, with ontologies, entities, entity matching etc. An interesting insight about the future of the Interest Graph comes from Pinterest’s head of engineering: the idea is to improve on Amazon’s recommendation engine using classifiers (via pins). Pinterest knows the reasoning, “why” users pinned something, while Amazon doesn’t. We are approaching the Intention Graph.
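A crude sketch of the idea behind an Interest Graph builder: turn pinned items (already tagged with topics) into weighted user-to-topic edges. The pin data and topic labels below are invented for illustration; real builders like Gravity’s do entity extraction and ontology matching first.

```python
from collections import Counter, defaultdict

# Hypothetical pin data: (user, pin id, topics extracted from the pin).
pins = [
    ("ann", "pin1", ["cycling", "travel"]),
    ("ann", "pin2", ["cycling"]),
    ("ben", "pin3", ["travel", "photography"]),
    ("ann", "pin4", ["cycling", "travel"]),
]

# user -> Counter of topic weights: the edges of a tiny Interest Graph.
interest_graph = defaultdict(Counter)
for user, _pin, topics in pins:
    for topic in topics:
        interest_graph[user][topic] += 1

def top_interests(user, n=2):
    """The user's strongest interests, by edge weight."""
    return [topic for topic, _count in interest_graph[user].most_common(n)]

print(top_interests("ann"))  # ['cycling', 'travel']
```

This is also where the “why” advantage shows up: the edge carries the act of pinning, a deliberate signal of interest, not just a purchase record.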

Intention Graph

Not much can be said about intentions yet. This graph is about what we do and why we do it. Social and Interest are static in comparison to Intention. It is related to prescriptive analytics, because it deals with reasoning and motivation: “why” something happens or will happen. It seems the other graphs taken together could reveal much more about intentions than trying to figure them out separately.

The Intention Graph is tightly bound to personal experience, or personal UX. This was foreseen back in 1999 by Harvard Business Review as the Experience Economy. Many years have passed, but not much has been implemented towards personal UX: we still don’t stage a personal, ad hoc experience from goods and services exclusively for each user. I predict that the Social + Interest + Consumption + Mobile graphs will let us build a useful Intention Graph and gain the capability to build and deliver individual experiences. Once the individual is within the service, we are ready to predict some intentions, but only if the Service Design was done properly.

Consumption Graph

This is one of the most important graphs of Big Data. Some call it the Payment Graph, but Consumption is a better name, because we can consume without paying. The Consumption Graph is relatively easy for e-commerce giants like Amazon and eBay, but tricky for third parties like you. What if you want to know what a user consumes? There are no sources of such information: both Amazon and eBay are “walled gardens”. Each tracks what you do (browse, buy, put into a wish list etc.), how you do it (when you log in, how long you stay, the sequence of your activities etc.), sends you notifications and suggestions and measures how you react, plus many other tricks of descriptive, predictive and prescriptive analytics. But what if the user buys from other e-stores? It is the same problem as with the Social Graph. IMHO there should be a mechanism to assemble a user’s Consumption Graph from sub-graphs (if the user identifies herself).

But there is still a big portion of consumption happening in retail. How do they build your Consumption Graph there? Very easily: via loyalty cards. You think about discounts when using those cards, while retailers think about your Consumption Graph and predict what to do with all users together and even with each one individually. There is the same problem of disconnected Consumption Graphs as in e-commerce, because each store has its own card. There are aggregators like Key Ring: theoretically they simplify the consumer’s life by shielding her from all those cards, but in reality the back-end logic is able to build a bigger Consumption Graph for retail consumption! Another aspect: consumption of goods vs. consumption of services and experiences, is there a difference? And what is the difference between hard goods and digital goods? There are other cool things in retail, like tracking clients and detecting their sex and age. It all flows into the Consumption Graph. Think about that yourself:)
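The aggregator idea above can be sketched in a few lines: each store holds a disconnected sub-graph keyed by its own loyalty card, and an identity mapping is what stitches them into one Consumption Graph. All the data and the identity table here are invented; real aggregators like Key Ring do not publish how their back-end links cards.

```python
from collections import defaultdict

# Hypothetical per-store purchase logs; each store sees only its own slice.
store_a = {"card_123": [("milk", 2), ("bread", 1)]}
store_b = {"card_777": [("milk", 1), ("coffee", 3)]}

# The aggregator's identity mapping: loyalty card -> one shared user id.
identity = {"card_123": "maria", "card_777": "maria"}

def merge(stores):
    """Stitch per-store sub-graphs into user -> product -> quantity."""
    graph = defaultdict(lambda: defaultdict(int))
    for store in stores:
        for card, purchases in store.items():
            user = identity.get(card)
            if user is None:
                continue  # can't link this sub-graph without identification
            for product, qty in purchases:
                graph[user][product] += qty
    return graph

merged = merge([store_a, store_b])
print(dict(merged["maria"]))  # {'milk': 3, 'bread': 1, 'coffee': 3}
```

Note the condition baked into the code mirrors the one in the text: without the user identifying herself (the `identity` table), the sub-graphs stay walled off.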

Anyway, the Consumption Graph is very interesting, because we are digitizing this world. We are printing digital goods on 3D printers. So far the shape and look & feel are identical to the cloned product (e.g. a cup), but the internals are different. As soon as 3D printers are able to reconstruct the crystal structure, it will be a brand new way of consumption. It is a thrilling and wide topic, hence I am going to discuss it separately. Stay tuned so you don’t miss it.

Mobile Graph

This graph is built from mobile data, which does not mean the data comes from mobile phones. Today the majority of data may still be generated by smartphones, but tomorrow that will no longer be true; check out Wearable Technology to see why. The second important point concerns how the Mobile Graph is understood. The marketing-based view described on Floatpoint is indeed about smartphone usage: the Mobile Graph is considered a map of interactions (with the contexts in which people interact), such as Web, social apps/bookmarks/sharing, native apps, GPS and location/check-ins, NFC, digital wallets, media authoring, pull/push notifications. I would rather view the Mobile Graph as the user-in-motion: where the user resides at each moment (home, office, on the way, school, hospital, store etc.), how the user relocates (fast by car, slowly by bike, very slowly on foot; uniformly or not, e.g. via public transport), how the user behaves at each location (static, dynamic, mixed), what other users’ motions take place around (who else traveled the same route, or who also resided at the same location during that time slot) and so on. I look at the Mobile Graph more as a Mesh Network.
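The user-in-motion view can be sketched as a co-location graph: group check-ins by (location, time slot) and connect every pair of users who shared a cell. The check-in data below is invented for illustration; a real Mobile Graph would work over GPS traces, routes and dwell times, not just discrete check-ins.

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical check-ins: (user, location, hour-of-day time slot).
checkins = [
    ("ann", "office", 9), ("ben", "office", 9),
    ("ann", "cafe", 13), ("eva", "cafe", 13),
    ("ben", "gym", 19),
]

# Group users by (location, slot) cell...
cells = defaultdict(set)
for user, loc, slot in checkins:
    cells[(loc, slot)].add(user)

# ...then connect everyone who shared a cell: the mesh of encounters.
edges = set()
for users in cells.values():
    for a, b in combinations(sorted(users), 2):
        edges.add((a, b))

print(sorted(edges))  # [('ann', 'ben'), ('ann', 'eva')]
```

This also shows why the Mobile Graph “is more efficient for many than for one”: the edges only exist between users, so the graph is empty until motions are overlaid.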

Why does the dynamic networking view make more sense? Consider users as both people and machines. Recall IoT and M2M, and the initiatives by Ford and Nokia to resolve gridlock problems in real time. The Mobile Graph relates better to motion and mobility, i.e. to the essence of the word “mobile”. If we take the motion point of view and extend it with the marketing point of view, we get a pretty useful model of the user and society. The Mobile Graph is not for oneself: at the least, it is more efficient for many than for one.

Knowledge Graph

This is a monster one. It is about the semantics between all digital and physical things. Why does Google still rock? Because they built the Knowledge Graph. You can see it in action here, and check out interesting tips & tricks here. Google’s Knowledge Graph is a tool for finding the ungoogleable. There is a post on Blumenthals arguing that Google’s Local Graph is much better than the Knowledge Graph, but that gap will probably be eliminated with time; IMHO their Knowledge Graph is being taught iteratively.
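At its core, a knowledge graph is a set of (subject, predicate, object) triples plus pattern queries over them. Here is a toy sketch using the Santa Cruz example quoted below; the triples and predicate names are my own invention, and Google’s actual Knowledge Graph is of course vastly larger and richer.

```python
# A toy triple store: the Knowledge Graph reduced to its atoms.
triples = [
    ("Santa Cruz", "is_a", "place"),
    ("Santa Cruz", "located_in", "California"),
    ("California", "is_a", "place"),
    ("Monterey", "related_to", "Santa Cruz"),
]

def query(subject=None, predicate=None, obj=None):
    """Match triples against a pattern; None acts as a wildcard."""
    return [
        (s, p, o) for (s, p, o) in triples
        if (subject is None or s == subject)
        and (predicate is None or p == predicate)
        and (obj is None or o == obj)
    ]

# "Santa Cruz is a place":
print(query(subject="Santa Cruz", predicate="is_a"))
# ...and which places are related to it:
print(query(predicate="related_to", obj="Santa Cruz"))
```

The power comes not from the store but from the semantics: because predicates are typed, the graph can answer questions about things, not strings.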

As Larry Page has said many times, Google is not a search engine or an ads engine, but a company that is building Artificial Intelligence. Ray Kurzweil joined Google to simulate the human brain and recreate a kind of intelligence. Here is a nice article on how Larry Page and the Knowledge Graph helped seduce Ray Kurzweil into joining Google: “The Knowledge Graph knows that Santa Cruz is a place, and that this list of places are related to Santa Cruz”.

We can look at all these graphs together, with Social in the middle, because we (people) like to be at the center of the Universe:) The Knowledge Graph could be considered a meta-graph penetrating all the other graphs, or a super-graph including multiple parts of them. Even now, the Knowledge Graph is capable of handling dynamics (e.g. flight status).

Other Graphs

There are other graphs in the world of Big Data, and technology ecosystems are emerging around them. A boost is expected from Biotech: there is plenty of gene data, but a lack of structured information on top of it. Brand new models (graphs) will emerge to ease the understanding of those terabytes of data. Circos was invented in the field of genomic data to simplify understanding through visualization; more experiments can be found on the Visual Complexity web site. We are living in a different world than a decade ago, and it is exciting. Just plan your strategies accordingly. Consider Big Data strategically.


[20 Facts about SoftServe]

SoftServe set up its Research and Development Department in 2008…
