
Consumerism via IoT @ IT2I in Munich

We buy things we don’t need
with money we don’t have
to impress people we don’t like

Tyler Durden
Fight Club


Consumption sucks

That sucks, and it has to change. Fight Club changed it the violent way… thank God only in a book and a movie. We are changing it a different way, peacefully, via consumerism. We are powerful consumers in a new economy – the Experience Economy. Consumers don't need goods alone; they need experiences, staged from goods, services and something more.


Staging an experience is difficult. Staging a personal experience is the challenge of this decade. We have to gather data about, compute for and predict the behavior of literally every customer. The situation gets more complicated with the growing Do It Yourself attitude among consumers. They want to make it, not just buy it…

If you do not have many customers, the staging of the experience can be done by people, e.g. Vitsoe. They write on letter cards exclusively for you, establishing a realistic human-human interface from the very beginning. You, as a consumer, do make it, by shooting pictures of your rooms and describing the concept of your shelving system. Sneaker maker New Balance directly provides a "Make" button, not a buy button, for a number of custom models. You are involved in the making process; it takes two days, you are informed about the facilities [in the USA] and so on. Though you are just changing the colors of the sneaker pieces – not a big deal for one man, but a big deal for all consumers.

There are big role models to look to in the Experience Economy: Starbucks, Walt Disney. Hey, old-school guys: to increase revenue and profit, let the price go up, and the cost too; think of staging great experiences instead of cutting costs.


Webization of life

Computers disrupted our lives, lifestyle and work. Computers changed the world and continue to change it. The Internet transformed our lives tremendously. First it was about connected machines, then connected people, then connected everything. The user used to sit in front of a computer. Today the user sits within a big computer [smart house, smart ambient environment, ICU room] and wears tiny computers [wristbands, clasps, pills]. Let's recall the six orders of magnitude of human-machine interaction, as Bill Joy named them – the Six Webs – Near, Hear, Far, Weird, B2B, D2D. http://video.mit.edu/embed/9110/

Nowadays we see a boost for Hear, Weird and D2D. A reminder of what they are: Hear is your smartphone, smartwatch [strange phablets too] and wearables; Weird is the voice interface [automotive infotainment, Amazon Echo]; D2D is device-to-device, or machine-to-machine [aka M2M]. Wearables do well as anatomical digital gadgets, while pseudo-anatomical ones like Google Glass are questionable. Mobile-first strategies prevail. Voice interop is available on all new smartphones and cars. M2M is rolling out, connecting "dumb" machines via small agents, which are connected to the cloud and its intelligent services.

At the end of 2007 we passed the first 5,000 days of the Web. Check out what Kevin Kelly predicts for the next 5,000 days [actually fewer than 3,000 days from now]. There will be only One machine, its OS is the web, it encodes trillions of things, all screens look into the One, no bits live outside it, to share is to gain, the One reads it all, the One is us…


Webization of things

Well, the next 3,000 days are still to come, but where are we today? At two slightly overlapping stages: Identification of Everything, and Miniaturization & Connection of Everything. Identification difficulties delay the connectivity of more things. Visual identification is especially difficult. Deep neural networks have not solved the problem, reaching about 80% accuracy. That is better than the old perceptrons, but not sufficient for wide generic application. Combining them with other approaches, such as random forests, brings hope of higher accuracy in visual recognition.

A huge problem with neural networks is training, while a breakthrough is needed for ad hoc recognition via a creepy web camera. Intel released the computer vision software library OpenCV to engage the community to innovate. The most useful features are then observed, improved and transferred from the software library into hardware chips by Intel. Sooner or later they will ship small chips [for smartphones, for sure] with ready-made object recognition processing, so that users can identify objects via a small phone camera in disconnected mode with accuracy better than 85-90%, which is more or less applicable to business cases.


As soon as those two IoT stages [Identification and Miniaturization] are passed, we will have ubiquitous identification of everything and everyone, and everything and everyone will be digitized and connected – in other words, we will create a digital copy of ourselves and our world. It should be complete somewhere around 2020-2025.

Then we will augment ourselves and our world. How it will unfold from there, I don't know… My personal vision is that humanity is a foundation for other, more intelligent and capable species to complete the old human dream of reverse engineering this world. It is interesting what will start to evolve after 2030-2040. You could think of it as the Singularity. A phase shift.


Hot industries in IoT era

Well, back to today. Today we are still comfortable on the Earth, doing business and looking for lucrative industries. Which industries are ready to pilot and roll out IoT opportunities right away? Here is a list by Morgan Stanley from April 2014:

Utilities (smart metering and distribution)
Insurance (behavior tracking and biometrics)
Capital Goods (factory automation, autonomous mining)
Agriculture (yield improvement)
Pharma (clinical trial monitoring)
Healthcare (leveraging human capital and clinical trials)
Medtech (patient monitoring)
Automotive (safety, autonomous driving)


What is IoT, indeed?

Time to draw a baseline. Everybody is sure they truly understand IoT, but usually people hold biased models… Let's figure out what IoT really is. IoT is a synergistic phenomenon. It emerged at the intersection of semiconductors, telecoms and software. There was tremendous acceleration in chips and their computing power: Moore's Law still has not reached its limit [neither molecular, atomic nor economic]. There was huge synergy from widespread connectivity: Metcalfe's Law, still in place, initially for people and now for machines too. Software scaled globally [to the entire planet, all 7 billion people], got Big Data and reached the Law of Large Numbers.
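
To make Metcalfe's Law tangible, here is a tiny sketch of my own [purely illustrative, not from the laws' authors] showing how linear growth in connected devices yields quadratic growth in potential connections:

```python
def metcalfe_value(n: int) -> int:
    """Potential pairwise connections in a network of n nodes: n*(n-1)/2."""
    return n * (n - 1) // 2

# Linear growth of connected devices produces quadratic growth of links.
for devices in [10, 100, 1000]:
    print(devices, metcalfe_value(devices))
```

Each tenfold increase in devices multiplies the number of possible connections by roughly a hundred – the synergy that now applies to machines as much as to people.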


As a result of the accelerated evolution of those three domains, we gained the capability to go even further – to create the Internet of Things at their intersection, and to try to benefit from it.


Reference architecture for IoT

If that global, economic description is too high-level for you, here you go – the 7 levels of IoT – named the IoT Reference Architecture by Cisco, Intel and IBM in October 2014 at the IoT World Forum. The canonical model sounds like this: devices send/receive data, interacting with the network, where the data is transmitted, normalized and filtered using edge computing before landing in databases/data storage, accessible by applications and services, which process the data and provide it to people, who act and collaborate.



Who is IoT?

You could ask which companies are IoT companies. This is a very useful question, because your next question could be about the criteria – a classifier for IoT vs. non-IoT. Let me ask you first: is Uber IoT or not?

Today Uber is not, but as soon as its cars are self-driving, Uber will be. The only missing piece is a direct connection to the car. Check out a recent essay by Tim O'Reilly. Another important aspect is to include society, as a whole and as individuals; so it is not the Internet of Things but the Internet of Things & Humans. Check out these ruminations: http://radar.oreilly.com/2014/04/ioth-the-internet-of-things-and-humans.html

Humans are consumers, just a reminder. Humans are an integral part of IoT; we are creating IoT ourselves, especially via networks, from broad social ones to niche professional ones.


Software is eating the world

Chips and networks are good, but let's look at booming software, because technological progress now depends vastly on software, and it's accelerating. Every industry opens more and more software engineering jobs. It started with office automation, then all those classical enterprise suites: PLM, ERP, SCADA, CRM, SCM etc. Then everyone built a web site, then added a customer portal, a web store, mobile apps. Then integrated with others, business app to business app, aka B2B. Then logged huge clickstreams and other logs, such as search and mobile data. Now everybody is massaging the data to distill more information on how to meet business goals, including consumerism-shaped goals.

  1. Several examples to confirm that the digitization of the world is real.
    Starting with the easiest example to understand: newspapers, music, books, photography and movies went digital. Some of you have never seen films and film cameras, but google them – they were non-digital not so long ago. The last example in this category is the Tesla car: it is electric and carries plenty of chips with software & firmware on them.
  2. The next example is more advanced: intellectual property shifts to digital models of goods. The 3D model with all related details is what matters, while the implementation of that model as a hard good is trivial. You pay for the digital thing, then 3D print it at home or in a store. As fabrication technology gets cheaper, the shift towards digital property will become complete. Follow Formula One: their technologies transfer into our simpler lives – digital modeling and simulations, parts 3D printed in carbon, connected cars producing tons of telemetry data. As soon as the consumer can't distinguish a 3D-printed hard good from one produced by today's traditional methods, and as soon as the technology is cheap enough, it becomes possible to produce as late as possible and as adequately as possible for each individual customer.
  3. All set with hard goods – what about the rest? Food is also 3D printed. The first 3D-printed burger, from Modern Meadow, was printed more than a year ago, BTW funded by Googler Sergey Brin. The price was high, about $300K, exactly the amount of his investment. Whether food is printed or produced via biotech goo, the control and modeling will be software. Recipes and processes are digital; they are applied to produce real food.
  4. Drugs and vaccines: similar to food and hard goods, with a great bonus – quick access to brand-new medications is unlocked. A vaccine could be designed in Australia and transferred as a digital model to your [or a nearby] 3D printer or synthesizer; your instance would be composed from the right solutions and molecules exclusively, and in time.

So whatever your industry is, think about more software coding and data massage. Plenty of data, global scale, 7 billion people and 30 billion internet devices. Think of traditional and novel data; augmented reality and augmented virtuality are also digitizers of our lives, moving towards real virtuality.


How to design for IoT?

If you know how, don't read further; just go ahead with your vision, and I will learn from you. For everyone else, my advice is to design for personal experience. Just continue to ride the wave of the ever-growing software share in every industry, and handle the new software problems to deliver personal experience to consumers.

First of all, start recognizing novel data sources, such as Search, Social, Crowdsourced and Machine data. They are different from traditional CRM and ERP data. Record data from them, filter noise, recognize motifs, find the origins of intelligence, build data intelligence, and bind it to existing business intelligence models to improve them. Check out Five Sources of Big Data.
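
As a minimal illustration of the "record data, filter noise" step [my own sketch with made-up readings, not a reference implementation], even a simple moving average separates a stable signal from spikes:

```python
from collections import deque

def smooth(stream, window=5):
    """Filter noise from a numeric data stream with a trailing moving average."""
    buf = deque(maxlen=window)
    for value in stream:
        buf.append(value)
        yield sum(buf) / len(buf)

# Hypothetical sensor readings with one noisy spike at 50.
raw = [10, 11, 10, 50, 10, 11, 10]
print([round(v, 1) for v in smooth(raw, window=3)])  # the spike is damped
```

Real pipelines would go further – motif recognition, outlier models – but the principle is the same: tame the raw stream before binding it to business intelligence.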

Second, build information graphs, such as Interest, Intention, Consumption, Mobile, Social and Knowledge. A consumer has her interests; why not count on them? Despite her interests, a consumer's intentions could differ; why not count on them? Despite her intentions, a consumer's actual consumption could differ; why not count on that? And so on. Build a mobility graph, a communication graph and other graphs specific to your industry. Try to build a knowledge graph around every individual, then use it to meet that individual's expectations or to bring individualized, unexpected innovations to her. Check out Six Graphs of Big Data.

As soon as you grasp this, your next problem will be handling multi-modality. Make sure you get mathematicians into your software engineering teams, because the problem is not trivial – exactly the opposite. The good news is that in each industry some graph may prevail, so everything else can be converted into attributes attached to that primary graph.
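
Here is a toy sketch of that idea – a primary consumption graph whose edges absorb attributes from secondary interest and mobility graphs. All names and values are invented for illustration:

```python
# Primary graph: consumption (person -> good), as edges with attributes.
consumption_graph = {
    ("alice", "sneakers"): {"count": 2},
    ("alice", "coffee"): {"count": 30},
}

# Secondary graphs, flattened into attributes instead of being
# overlaid as separate graph modalities.
interest = {"alice": ["running", "design"]}
mobility = {"alice": "commutes_daily"}

def enrich(edge_attrs, person):
    """Attach attributes from the other graphs to a primary-graph edge."""
    attrs = dict(edge_attrs)
    attrs["interests"] = interest.get(person, [])
    attrs["mobility"] = mobility.get(person)
    return attrs

enriched = {edge: enrich(attrs, edge[0])
            for edge, attrs in consumption_graph.items()}
print(enriched[("alice", "sneakers")])
```

The single-modality skeleton stays a graph; everything else becomes plain features a model can consume.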



PLM in IoT era

Taking simplified PLM as BEFORE –> DURING –> AFTER…

Design of the product should start as early as possible, and it should not be isolated; instead, foster co-creation and co-invention with your customers. There is no secret number for how much of your IP to share publicly, but the criterion is simple: if you share too little, you will not reach the critical mass needed to trigger consumer interest; if you share too much, your competitors could take it all. The rule of thumb is about technological innovativeness. If you are very innovative, say a leader, then you can share less; examples of technologically innovative businesses are Google and Apple. If you are not so innovative technologically, you might need to share more.

Production or assembly should be as optimal as possible. It is all about transaction optimization via new ways of doing the same things. Here you could think about Coase's Law upside down – outsource to external partners, don't try to do everything in-house. Shrink until the internal transaction cost equals the external one. Specialization of work brings [external] costs down. Your organizational structure should shrink while your network of partners grows. On the modern Internet the cost of external transactions can be significantly lower than the cost of the same internal transactions, while the quality remains high, up to the standards. It is the known phenomenon of outsourcing – just Coase upside down, as Eric Schmidt mentioned recently.

Think about individual customization. There can be mass customization too, by segments of consumers… but it is not as exciting as the individual kind, even if it is just the simple selection of colors for your phone, sneakers, furniture or car trim. It should take place as late as possible, because it is difficult to forecast far ahead with high confidence. So try to squeeze useful information from your data graphs as close to the production/assembly/customization moment as possible, to be sure your decisions are as adequate as they could be at that time. Optimize inventory and supply chains to have the right parts for customized products.

Then try to keep the customer within the experience you created. Customers will return to you to repeat the experience. You should not sit and wait until the customer comes back; instead, you need to evolve the experience – think about the ecosystem. Invent more; costs may rise, but the price will rise even more, so don't push for cost reduction, push for innovativeness towards better personal experiences. We all live within experiences [BTW, ever more digitized products, services and experiences]. The more a consumer stays within the ecosystem, the more she pays. It is the experience economy now, and it is powered by the Internet of Things. Maybe it will rock… and we will avoid Fight Club.





Big Data Graphs Revisited

Some time ago I outlined the Six Graphs of Big Data as a pathway to individual user experience. Then I did the same for the Five Sources of Big Data. But what lies between them remained untold. Today I am going to give my vision of how different data sources allow us to build different data graphs. To make this less dependent on those older posts, let's start from a real-life situation and business needs, then bind them to data streams and data graphs.


Context is King

The same data has different value in different contexts. When you are late for a flight and you get a message that your flight is delayed, the message is valuable – in comparison to receiving the same message two days ahead, when you are not late at all. Such a message might be useless if you are not traveling, yet the airline has your contacts and sends it about a flight you don't care about. There was only one dimension here – time to flight. That was a friendly description of context, to warm you up.
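
That single-dimension rule is simple enough to sketch in code. The thresholds below are invented for illustration only:

```python
def message_value(hours_to_flight: float, is_traveling: bool) -> float:
    """Score the value of a 'flight delayed' message by its context.
    Illustrative thresholds, not real airline logic."""
    if not is_traveling:
        return 0.0                # not your flight: useless
    if hours_to_flight <= 3:
        return 1.0                # about to leave or already late: critical
    if hours_to_flight <= 48:
        return 0.5                # useful, but not urgent
    return 0.1                    # too far ahead to matter much

print(message_value(1, True))     # rushing to the airport
print(message_value(72, True))    # days ahead
print(message_value(5, False))    # not traveling at all
```

Same data, three different values – that is the whole point of context.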

Some professional contexts are difficult for the unprepared to grasp. Let's take a situation from the office of some corporation. A department manager intensified his email communication with the CFO, started to use the phone more frequently (also calling the CFO and other department managers), went to the CFO's office multiple times, skipped a few lunches, and stayed at work till 10 PM on several days. Here we have multiple dimensions (five), which can be analyzed together to define the context. Most probably that department manager and the CFO were doing some budgeting: planning or analysis/reporting. Knowing that, it is possible to build and deliver individual prescriptive analytics to the department manager, focused on helping him handle the budget – even if his department has other escalated issues, such as a release schedule. The severity of the budgeting is much higher right now, hence the context belongs to budgeting for now.

By having data streams for each dimension, we are capable of building a run-time individual/personal context. The data streams for that department manager were a kind of time series: events with attributes. Email is a dimension we track; peers, timestamps, type of letter, size of letter, types and number of attachments are attributes. Phone is a dimension; names, times, durations, number of people etc. are attributes. Location is a dimension; own office, CFO's office, lunch place, timestamps, durations and sequence are attributes. And so on. We defined potentially useful data streams; it is possible to build an exclusive context out of them, from their dynamics and patterns. That was a more complicated description of context.
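
The dimensions-with-attributes structure is easy to model. Below is a toy sketch of the department manager's streams with invented events and a deliberately crude context rule – a real system would use learned models, not a threshold:

```python
from dataclasses import dataclass, field

@dataclass
class Event:
    """One event in a dimension's data stream: a timestamp plus attributes."""
    dimension: str
    timestamp: str
    attributes: dict = field(default_factory=dict)

# Hypothetical events for the department manager.
stream = [
    Event("email", "2014-11-03T09:15", {"peer": "CFO", "attachments": 2}),
    Event("phone", "2014-11-03T10:40", {"peer": "CFO", "duration_min": 12}),
    Event("location", "2014-11-03T11:00", {"place": "CFO office", "duration_min": 45}),
]

# Crude rule: many CFO interactions across dimensions -> budgeting context.
cfo_touches = sum(1 for e in stream
                  if e.attributes.get("peer") == "CFO"
                  or e.attributes.get("place") == "CFO office")
context = "budgeting" if cfo_touches >= 3 else "unknown"
print(context)
```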


Interpreting Context

Well, well, but how do we interpret those data streams, how do we interpret the context? What we have: multiple data streams. What we need: to identify the run-time context. The pipeline is straightforward.

First, we have to log the data from each dimension of interest. It can be done via software or hardware sensors. Software sensors are usually plugins, but they can be more sophisticated, such as object recognition from surveillance cameras. Hardware sensors are GPS, Wi-Fi, turnstiles. There can be combinations, like a check-in somewhere. So a lot can be done with software sensors alone. For the department manager case, it is a plugin for Exchange Server or Outlook to listen to emails, a plugin for the phone exchange [PBX] to listen to the phone calls, and so on.

Second, it is time for low-level analysis of the data: statistics, then data science. Brute force to establish what is credible, then a look for emerging patterns. The bottleneck of data science is the human factor: somebody has to look at the patterns to decrease the false positives and false negatives. This step is about discovery and probing, preparing the foundation for the more intelligent next step. More or less everything is clear with this step. Businesses have already started to bring up their data science teams, but they still don't have enough data for the science :)

Third, it is Data Intelligence. As Microsoft said some time ago, "Data Intelligence is creating the path from data to information to knowledge". This should be described in more detail, to avoid ambiguity. From Techopedia: "Data intelligence is the analysis of various forms of data in such a way that it can be used by companies to expand their services or investments. Data intelligence can also refer to companies' use of internal data to analyze their own operations or workforce to make better decisions in the future. Business performance, data mining, online analytics, and event processing are all types of data that companies gather and use for data intelligence purposes." Some data models need to be designed, calibrated and used at this level. Those models should work almost in real time.

Fourth is Business Intelligence – probably the first step familiar to the reader :) But we look further here: past data and real-time data meet together. Past data is individual to the business entity; real-time data is individual to the person. Of course, there could be something in the middle. Go find a comparison between statistics, data science and business intelligence.

Fifth, finally, it is Analytics. Here we are within the individual context of the person. There should be a snapshot of 'AS-IS' and recommendations of 'TO-DO'; if the individual wants, there should be reasoning on 'WHY' and 'HOW'. I have described this in detail in previous posts. The final destination is the individual context. I described it in the series of Advanced Analytics posts; here is a link to Part I.

Data Streams

Data streams come from data sources. The same source can produce multiple streams. Some ideas are below; the list is unordered. Remember that special data intelligence must be put on top of the data from these streams.

Indoor positioning via Wi-Fi hotspots, contributing to the mobile/mobility/motion data stream: where the person spent most time (at the working place, in meeting rooms, in the kitchen, in the smoking room), when the person changed location frequently, directions, durations, sequences etc.

Corporate communication via email, phone, chat, meeting rooms, peer-to-peer, source control, process tools, productivity tools. It all makes sense for analysis – e.g., at release time there should be no creation of new user stories. Or consider the volume and frequency of check-ins to source control…

Biometric wearable gadgets like BodyMedia to log the intensity of mental (or physical) work. If there is a low calorie burn during long, bad meetings, that can be revealed. If there is not enough physical workload, then for the sake of better emotional productivity it could be suggested to take a walk.


Data Graphs from Data Streams

OK, but how do we build something tangible from all those data streams? The relation between data graphs and data streams is many-to-many. Look: it is possible to build the Mobile Graph from very different data sources, such as face recognition from a camera, authentication at an access point, IP address, GPS, Wi-Fi, Bluetooth, a check-in, a post etc. Hence, when designing the data streams for some graph, you should think about one-to-many relations: one graph can use multiple data streams from the corresponding data sources.

To bring more clarity into the relations between graphs and streams, here is another example: the Intention Graph. How could we build it? Somebody's intentions could be totally different in different contexts. Is it a weekday or the weekend? Is the person sitting in the office or driving a car? Who are the peers the person has communicated with a lot recently? What is the type of communication? What is the time of day? What are the person's interests? What were the previous intentions? As you see, data could be logged from machines, devices, comms, people, profiles etc. As a result we will build the Intention Graph and will be able to predict or prescribe what to do next.
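
Those questions map naturally onto features collected from several streams. Here is a toy sketch with invented signals and hand-written rules – in practice the rules would be learned from the logged data:

```python
# Hypothetical signals: each data stream contributes features,
# and one intention is inferred from their combination.
signals = {
    "calendar": {"is_weekend": False},
    "mobility": {"driving": False, "in_office": True},
    "comms":    {"top_peer": "CFO", "channel": "email"},
    "profile":  {"interests": ["finance", "planning"]},
}

def infer_intention(s: dict) -> str:
    """Toy rule set; a real system would learn these rules from data."""
    if s["mobility"]["in_office"] and s["comms"]["top_peer"] == "CFO":
        return "prepare_budget"
    if s["calendar"]["is_weekend"]:
        return "leisure"
    return "routine_work"

print(infer_intention(signals))
```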


Context from Data Graphs

Finally, having multiple data graphs, we can work on the individual context, the personal UX. Technically, it is hardly possible to deal with all those graphs at once: you cannot simply overlay two graphs. This is called modality (as one PhD taught me). Hence you must split the problem and work with a single modality. Select the graph most important for your needs and use it as the skeleton. Convert the relations from the other graphs into attributes you can apply to the primary graph. Build an intelligence model for the single-modality graph with plenty of attributes from the other graphs. Obtain the personal/individual UX at the end.


Advanced Analytics, Part V

This post covers the details of visualizing information for executives and operational managers on the mobile front end: what is descriptive, what is predictive, what is prescriptive, how it looks, and why. The scope of this post is the cap of the information pyramid. Even when I start on something detailed, I still remain at the very top, at the level of the most important information, without details on the underlying data. Previous posts contain the introduction (Part I) and the pathway (Part II) of the information to the user, especially executives.

Perception pipeline

The user's perception pipeline is: RECOGNITION –> QUALIFICATION –> QUANTIFICATION –> OPTIMIZATION. During recognition the user just grasps the entire thing and starts to take it in as a whole; ideally we should deliver a personal experience, so the information will be valuable, though probably delivered slightly differently than in the previous context. More on personal experience in the next chapter below. As soon as the user grasps/recognizes, she is able to classify, or qualify, by commonality. The user operates with categories and scores within those categories. The scores are qualitative and very friendly for understanding, such as poor, so-so, good, great. Then the user is ready to reduce subjectivity and turn to numeric measurement/scoring. That is quantification: converting good & great into numbers (absolute and relative). As soon as the user is all set with numeric measurements, she is able to improve/optimize the business or process or whatever the subject is.
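
The qualification and quantification steps can be sketched in a few lines. The thresholds and score values below are invented for illustration:

```python
# Qualification: map raw KPI readings (0..1) to friendly categories.
def qualify(score: float) -> str:
    if score < 0.25:
        return "poor"
    if score < 0.5:
        return "so-so"
    if score < 0.75:
        return "good"
    return "great"

# Quantification: map the categories back to numbers for optimization.
QUANTIFY = {"poor": 1, "so-so": 2, "good": 3, "great": 4}

readings = [0.2, 0.6, 0.9]
labels = [qualify(r) for r in readings]   # qualification
numbers = [QUANTIFY[l] for l in labels]   # quantification
print(labels, numbers)                    # ['poor', 'good', 'great'] [1, 3, 4]
```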

Default screen

What should be rendered on the default screen? I bet it is a combination of the descriptive, predictive and prescriptive, with a large portion of space dedicated to the descriptive. Why is the descriptive so important? Because until we build AI, the trust in and confidence about computer-generated suggestions are not at the required level. That is why we have to show the 'AS IS' picture, to convey how everything works and what happens without any decoration or translation. If we deliver such a snapshot of the business/process/unit/etc., the issue of trust between human and machine might be resolved. We are used to believing that machines are pretty good at tracking tons of measurements, so let them track and visualize it.

There must also be an attempt by the machines to advise the human user. It could be done in the form of a personalized sentence on the same screen, along with the descriptive analytics. Putting some TO-DOs there is absolutely OK; believing that the user will trust and follow them is naive. The user will definitely dig into the details of why such a prescription is proposed. It is normal for the user to be curious about the root-cause chain. Hence, be ready to provide almost the same information with additional details on the reasons/roots, trends/predictions, classifications & pattern recognition within KPI control charts, and additional details on the prescriptions. If we visualize [on top of the inverted pyramid] with a text message and a stack of vital signs, then we have to prepare an additional screen to answer that list of details. We will still remain at the top of the pyramid.



Next screen

If we got 'AS IS', there must be 'TO BE', at least for symmetry :) The user starts on the default screen (recognition and qualification) and continues to the next screen (qualification and quantification). The next screen should have more details. What kind of information would be logically relevant for the user who got the default screen and looks for more? Or, better to say, looks for 'why'? Maybe it is time to list the items as bullets, for clarity:

  • dynamic pattern recognition (with a highlight on the corresponding chart or charts) of what is going on; it could be one of the seven performance signals, or one of the three essential signals
  • highlighting the area of the significant event [dynamic pattern/signal] on the other charts, to juxtapose what is going on there and foster thinking about potential relations; it is still the human who thinks, while the machine assists
  • parameters & decorations for the same control charts, such as min/max/avg values, identification of quarters or months or sprints or weeks
  • the normal range (also applicable to the default screen), or even ranges, because they could differ between quarters or years
  • a trend line, using the most applicable method for approximation/prediction of future values; e.g. a release forecast
  • clickable parts for digging from relative values/charts into absolute values/charts for even more detailed analysis; from the qualitative to the quantitative
  • your ideas here
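
The trend-line bullet can be sketched with plain least squares over equally spaced observations. The velocity numbers below are invented:

```python
def linear_trend(values):
    """Ordinary least-squares line over equally spaced points.
    Returns (slope, intercept); forecast = slope * t + intercept."""
    n = len(values)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(values) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, values))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Hypothetical sprint velocity; forecast the next sprint for the release.
velocity = [20, 22, 21, 24, 25]
slope, intercept = linear_trend(velocity)
print(round(slope * 5 + intercept, 1))  # forecast for sprint index 5 -> 26.0
```

A straight line is the crudest choice; any "most applicable method" (exponential smoothing, seasonal models) slots into the same role.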


Recognition of signals as dynamic patterns is the identification of the roots/reasons for something. Predictions and prescriptions could be driven by those signals. Prescriptions could be generic, but it is better to make them personalized. Explanations could be designed for personal needs/perception/experience.


Personal experience

We consume information in various contexts. If it is the release of a project or product, the context is different from the start of sprint zero. If it is a merger & acquisition, the expected information is different from a quarterly review. It all depends on the user (from the CEO to CxOs to VPs to middle management to team management and leadership), on the activity, and on the device (Moto 360 or iPhone or iPad or car or TV or laptop). It matters where the user is physically; location does matter. Empathy does matter. But how to achieve it?

We could build the user's interests from social networks and from interactions with other services. Interests are relatively static in time. It is also possible to figure out intentions. Intentions are dynamic and useful only while they are hot. Business intentions are observable from business comms. We could sense the intensity of communication between the user and the CFO and classify it as a context related to budgeting or a budget review. If we use sensors on the corporate mail system (or mail clients), combined with GPS or Wi-Fi location sensors/services, or with a manual check-in somewhere, we could figure out that the user has indeed intensified comms with the CFO and worked with him face-to-face. Having such a dynamic context, we are capable of delivering information in that context.

The concept of personal experience (or personal UX) is similar to a braid (the hairstyle). Each data graph helps to filter relevant data; together those graphs allow us to locate the real-time context. Having such a personal context, we can build and deliver the most valuable information to the user. More details on how to handle the interest graph, intention graph, mobile graph and social graph, and on which sensors can bring in the novel data, are available in my older posts. So far I propose to personalize the text message for the default screen and the next screen, because it is easier than the vital signs, and it fits into wrist-sized gadgets like the Moto 360.


Transformation of Consumption

Some time ago I posted on the Six Graphs of Big Data and mentioned the Consumption Graph there. Then I presented the Five Sources of Big Data at a data-aware conference, mentioned how retailers track people (time, movement, sex, age, goods etc.) and felt the keen interest of the audience in the Consumption data source. Since then I have thought a lot about consumption 'as is'. Recently I noticed the first glimpses of the impact on the old model from micro-entrepreneurs, who 3D print at home and sell on Etsy. Today I want to reveal more about all of that as consumption and its transformation. It will be much less about Big Data and much more about the mid-term future of economics.

The Experience Economy

It was identified 15 years ago: the experience economy as the next economy following the agrarian economy, the industrial economy and the most recent service economy. Here is a link to the 1998 Harvard Business Review article, “Welcome to the Experience Economy”. The authors did an excellent job of predicting the progression of economic value. Experience is a real offering, like hard goods. Recall your feelings when you return to a favorite restaurant where you order without looking at the menu: you go there to repeat the experience. Hence modern consumption is staged as experience, built from services and goods. Personal experience is even better. Services and goods without staging are getting weaker… Below is a diagram of the progression of economic value.


It is useful to compare the transformation across multiple parameters such as model, function, offering, supply, seller, buyer and demand. The credit goes to HBR; I have only improved the readability of their table. There is a clear trend towards experience and personalization. Pay attention to the rightmost column, because it will be addressed in more detail later in this post. To make it more familiar, I’ll appeal to your memories again: recall your visits to Starbucks or McDonald’s. What is the driving force behind your desire to return? How did you acquire that internal feeling over past visits? Plenty of other examples are available, especially from the leisure and hospitality industry. The pioneers of the new economics are there already; others are joining the league. And yes… people are moving towards those fat guys from the WALL-E movie…


The Invisible Economy

Staging experience is not enough. Starbucks provides multiple coffee blends; Apple provides multiple gadgets and even colors. But it is not enough. I am an example: I need a curved phone (suitable for my butt shape, because I keep it in my back pocket). Furthermore, I need a bendable phone, friendly for sitting while it’s in the pocket. While the majority of manufacturers are ignoring this, LG is planning something. Let’s see what it will be; there is evidence of a curved and flexible one. But I am not alone with my personal [strange?] wishes. Others are dreaming of other things. The big guys may not be nimble enough to catch the pace of transforming and accelerating demand. It’s cool to be able to select colors for New Balance 993 or 574, but it’s not enough. My foot is different from yours; I need more exclusivity (towards usability and sustainability) than just colors. Why not use some kind of digitizer to scan my foot and then deliver my personal shoes?

“The holy place is never empty” is my loose translation of a Ukrainian proverb. It means that an opportunity overlooked by the incumbents is fulfilled by others, by newcomers. There is a rising army of craftsmen and artists producing at home (manually or on 3D printers) and selling on Etsy. Fast Company has a great insight on that: “… Micro-entrepreneurs are doing something so nontraditional we don’t even know how to measure it…” There are bigger communities, like Ponoko. They are a new breed of doers, called fabricators, and Ponoko is a new breed of environment where they meet, design, make, sell, buy and interact. The conclusion here is straightforward: our demand is being fulfilled by new guys, and in a different way than we are used to. You can preview a 3D model or simulation from a thousand miles away and have your thing delivered to your door. You can design your own thing; they can design it for you; and so on. And this economy is growing. Hey, big guys, it’s a threat to you!

The most exciting part of the economic transformation is the foreseeable death of banks. Sometimes banks are good, but in the majority of modern cases they are bad. We don’t need Wells Fargo and similar dinosaurs. Amazon, Google, Apple and PayPal could perform the same functions more efficiently and do less evil to people. There are emerging alternatives [to banks] for funding initiatives and exchanging funds with each other. Kickstarter and JumpStartFund are on the rise, even for very serious projects like the Hyperloop. These things are still small (that’s why this section is called Invisible), but they are gaining momentum and will hit the overall economy soon and hard, in less than five years.

3D Printing

Here we are, taking digital goods and printing them as hard goods. It is still an early stage, but very promising and accelerating. The MakerBot Replicator costs $2,199, which is affordable for personal use; there is a model priced at $2,799, which still qualifies for personal use. What does this mean for consumption? The world is being digitized. We are creating a digital copy of our world; everything is digitized and virtualized. Then the digital can be materialized as a physical hard good on a 3D printer. There are very serious 3D printers by Solid Concepts, capable of printing a metal gun that survived a 500-round torture test. As soon as the internal structure is recreated at the molecular level and we achieve identical material characteristics, the remaining question is cost reduction for the technology. As soon as 3D printing is cheap, we are there, in the new exciting economy.
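To make “digital goods become hard goods” concrete, here is a minimal Python sketch that serializes a digital model (a list of triangles) into the ASCII STL format that 3D-printer slicers commonly accept. The `ascii_stl` helper and its simplifications (zero normals, no validation) are my own illustration, not any vendor’s API.

```python
def ascii_stl(name, triangles):
    """Serialize triangles (each a tuple of three (x, y, z) vertices) into
    the ASCII STL format. Normals are left at (0, 0, 0); slicers typically
    recompute them from the vertex winding order."""
    lines = [f"solid {name}"]
    for tri in triangles:
        lines.append("  facet normal 0 0 0")
        lines.append("    outer loop")
        for x, y, z in tri:
            lines.append(f"      vertex {x} {y} {z}")
        lines.append("    endloop")
        lines.append("  endfacet")
    lines.append(f"endsolid {name}")
    return "\n".join(lines)
```

Write the returned string to a `.stl` file and any mainstream slicer can turn it into printer instructions; that file is exactly the “digital copy” whose value the post talks about.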

Let’s review other, more useful applications of the technology than guns. We eat to live, entertain ourselves to live well, and cure diseases (which sometimes happen because of lifestyle and food). So, food first. 3D-printed meat is already a reality; meat is printed on a bioprinter. Guess who funded the research? Sergey Brin, the googler. Modern Meadow creates leather and meat without slaughtering animals. Next is health. The problem of organ-transplant waiting lists is ending: your organs will be 3D-printed. It is better than a donor transplant because there are no immune risks anymore. And finally, drugs. Recall pandemic flu situations: why do you have to wait a week for a vaccine? You could 3D-print your drugs instantly, as soon as you download the digital model over the Internet. Downloaded and printed drugs are an additional argument for Personalized Medicine in my recent post on the Next Five Years of Healthcare. I assume that covering the application of the technology to basic aspects of life such as food, lifestyle and healthcare is sufficient to start taking it [the technology] seriously. You can guess the other, less life-critical applications yourself.

4D Printing

3D printing is on the rise, but there is an even more powerful technology, called 4D printing. The fourth dimension is delayed in time and is triggered by common environmental characteristics such as temperature or water, or by something more specific, like a chemical. When the external impact is applied, the 3D-printed strand folds into a new structure; hence it uses its fourth dimension. It is very similar to protein folding. There are tools for designing 4D things. One of them is cadnano, for three-dimensional DNA origami nanostructures; it gives certainty about the stability of the designed structures. Another is Project Cyborg by Autodesk, a set of tools for modeling, simulation and multi-objective design optimization. Cyborg allows the creation of specialized design platforms for particular domains, from nanoparticle design to tissue engineering to self-assembling human-scale manufacturing. Check out this excellent introduction to self-assembly and 4D printing by Skylar Tibbits from the MIT Media Lab:

Forecast [on Consumption]

We will complete the digitization of everything. This should be obvious to you at this stage; if not, check out a slightly different view on what Kevin Kelly called The One. No bits will live outside of the one distributed, self-healing digital environment. Actually it will be us, a digital copy of us. Data-wise it will be All Data together. The second reference is to James Burke, who predicted the rise of PCs, in-vitro fertilization and cheap air travel back in 1973. Recently Burke admitted: “…The hardest factor to incorporate into my prediction, however, is that the future is no longer what it has always been: more of the same, but faster. This time: faster, yes, but unrecognisably different…” And I see it the same way: we are facing a different future than we are used to. It’s a bit scary, but on the other hand it is very exciting. In 30 years we will have nano-fabricators that manipulate matter at the level of atoms and molecules to produce anything you want from dirt, air, water and cheap carbon-rich acetylene gas. As you may already feel, those ingredients are virtually free, hence production of goods by fabricator is almost free. Probably food will be a bit more expensive, but still cheap. By the way, each fabber will be able to copy itself… from the same cheap ingredients. We will not need plenty of wood, coal, oil or gas for nanofabrication. This is good for the ecology; but I think we will invent other ways to spoil the Earth.

The value will shift from the equipment to the digital models of the goods. Advanced 3D (and 4D) models will not be free; the rest will be crowdsourced and available for free. Autodesk, not a new company but one of the serious ones, is a pioneer there with the 123D apps platform. They are moving together with MakerBot: you can buy a MakerBot Replicator on the Autodesk site, and vice versa, you get Autodesk software together with a MakerBot bought elsewhere. That is how it all starts. In a few years it will take off at large scale. Then we will get a different economy, with much more personal, sustainable and sensational consumption.

It would be interesting to draw parallels with the creation of Artificial Intelligence, because by 2030 we should have a human brain simulated on a non-biological substrate. Or maybe we will be able to 4D- or 5D-print brains more powerful than human ones on a biological, but non-human, substrate? Stay tuned.


Next Five Years of Healthcare

This insight is relevant to all of you, your children and your relatives. It is about health and healthcare. I feel confident envisioning the progress for five years, but cautious about guessing further. Even the next five years look pretty exciting and revolutionary. I hope you will enjoy the pathway.

We have problems today

I will not bind this to any particular country, so American readers will not find Obamacare, ACOs or HIEs here. I will go global, as I like to do.

The old industry of healthcare still sucks. It sucks everywhere in the world. The problem is the uncertainty of our [human] nature. It’s a paradox: medicine is one of the oldest practices and sciences, but nowadays it is one of the least mature. We still don’t know for sure how our bodies and souls operate. The reverse engineering should continue until we gain complete knowledge.

I believe there were civilisations tens of thousands of years ago… but let’s concentrate on ours. It took many years to start studying ourselves in depth. Leonardo da Vinci made a breakthrough in anatomy in the early 1500s; the accuracy of his anatomical sketches is amazing. Why didn’t others draw at the same level of perfection? The first heart transplant was performed only in 1967, in Cape Town, by Christiaan Barnard. Today we are still weak at brain surgery, even at knowing how the brain works and what it is. Paul Allen has contributed significantly to the mapping of the brain. The ambitious Human Genome Project was completed only in the early 2000s, covering 92% of the genome at 99.99% accuracy. Today there is still no clear understanding of what the majority of our DNA is for. I personally do not believe in junk DNA, and the ENCODE project confirmed it might be related to protein regulation. Hence there is still plenty of work to do…

But even with current medical knowledge, healthcare could be better. Very often the patient is admitted from scratch as a new one. Almost always the patient is discharged without proper monitoring of medication, nutrition, behaviour and lifestyle. There are no mechanisms, practices or regulations to make this possible. For sure there are some post-discharge recommendations and assignments to aftercare professionals, but it is immature and very inaccurate in comparison to what it could be. There are glimpses of telemedicine, but it is still very immature.

And finally, the healthcare industry lags far behind other industries, such as retail, media, leisure and tourism, in terms of consumer orientation. Even the automotive industry is more consumer-oriented than healthcare today. Economically speaking, there must be a transformation to a consumer-centric model. It is the same winning pattern across industries, and it [consumerism] should emerge in healthcare too. Enough about the current problems; let’s switch to the positive things: the technology available!

There could be Care Anywhere

We need care anywhere: whether it is underground in a diamond mine, in the ocean on board the Queen Mary 2, in a medical center, at home, in secluded places, or in a car, bus, train or plane.

There are wireless networks (from cell providers), there are wearable medical devices, and there is the smartphone as a man-in-the-middle to connect to the back-end. It is obvious that diagnostics and prevention could be improved, especially for chronic diseases and emergency cases (first aid, paramedics).

care anywhere

I have personally experienced two emergency landings: once on board a six-hour flight, and a second time while driving to another airport to pick up a colleague. The impact is significant. Imagine that 300+ people landed in Canada; then, according to Canadian law, all luggage was unloaded, moved through X-ray, and loaded again. We all lost a few hours because of somebody’s heart attack.

It could have been prevented if the passenger had had a heart monitor, a blood pressure monitor and other devices, and if they had triggered an alarm in time to take a pill or ask the crew for one. The best case is when all wearable devices are linked to the smartphone [it is often allowed to turn on Bluetooth or Wi-Fi in airplane mode]; then the app would ring and display recommendations to the passenger.
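The core of such an app is a simple rule check over incoming readings. A minimal sketch follows; the `check_vitals` helper and its constant thresholds are purely illustrative. Real thresholds must come from a physician and the patient’s own baseline, not from numbers like these.

```python
def check_vitals(heart_rate_bpm, systolic_mmhg, diastolic_mmhg):
    """Return advisory messages when readings leave crude, illustrative
    safe ranges. A production app would personalize these limits and
    escalate (ring, notify crew, page a physician) by severity."""
    alerts = []
    if heart_rate_bpm > 120 or heart_rate_bpm < 40:
        alerts.append("Heart rate abnormal: take prescribed medication or notify crew")
    if systolic_mmhg > 180 or diastolic_mmhg > 110:
        alerts.append("Blood pressure critical: seek assistance")
    return alerts
```

The smartphone would run this loop continuously on Bluetooth readings from the wearables and surface any returned alerts to the passenger.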

4P aka Four P’s

Medicine should go Personal, Predictive, Preventive and Participatory. It will become so in five years.

Personal is already partially explained above. Besides consumerism, which is a social and economic aspect, there is a genuinely biological personal aspect. We all differ by roughly six million genetic variants. That biological difference matters: it defines carrier status for illnesses, it relates to disease risks, it shapes individual drug response, and it uncovers other health-related traits [such as lactose intolerance or alcohol addiction].

Personal medicine is equivalent to mobile health, because you are in motion and you are unique. The single sufficiently smart device you carry everywhere is the smartphone. Other wearable devices are still not connected [directly to the Internet of Things], hence you have to use them all with the smartphone in the middle.

The shift is from volume to value: from pay-for-procedures to pay-for-performance. The model becomes outcome-based. The challenge is how to measure performance: good treatment vs. poor bedside manner, poor treatment vs. good bedside manner, and so on.

Predictive is a pathway to the healthcare transformation. As healthcare experts say, “the providers are flying blind”. There is no good integration and interoperability between providers, or even within a single provider. The only rational way to “open the eyes” is analytics: descriptive analytics to get a snapshot of what is going on, predictive analytics to foresee the near future and make the right decisions, and prescriptive analytics to understand even better the reasoning behind future actions.
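As a toy example of the predictive piece: a logistic model that turns patient features into a probability, say of readmission. The feature names, weights and the `readmission_risk` helper are entirely illustrative; in practice the weights would be learned from historical discharge records.

```python
import math

def readmission_risk(features, weights, bias):
    """Logistic model: map patient features to a probability in (0, 1).
    `features` and `weights` are dicts keyed by feature name; the
    numbers below are illustrative stand-ins for learned parameters."""
    z = bias + sum(weights[k] * v for k, v in features.items())
    return 1 / (1 + math.exp(-z))
```

A provider could score every discharged patient this way and route the highest-risk ones to proactive aftercare, which is exactly the descriptive-to-predictive-to-prescriptive ladder described above.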

Why is there still no good interoperability? Why is there no wide HL7 adoption? How many years have passed since those initiatives and standards appeared? My personal opinion is that the current [and former] interoperability efforts are a dead end. The rationale is simple: if it were worth doing, it would already have been done. There might be something in the middle: providers will implement interoperability within themselves, but not at the scale of a state, a country, or globally.

There are two reasons for the “dead interop”. The first is business-related: why should I share my stuff with others? I spent money on expensive labs and scans; I don’t want others to benefit from my investment in this patient’s treatment. The second is the breakthrough in genomics and proteomics. Only 20 minutes are needed to purify DNA from body fluids with a Zymo Research DNA kit. A genome in 15 minutes for under $100 has been planned by Pacific Biosciences for this year; Intel invested 100 million dollars in Pacific Biosciences in 2008. Besides gene mechanisms, there are others, not related to DNA changes, that are also useful for analysis, prediction and decision-making per individual patient. [Read about epigenetics for more details.] And there is a third reason: Artificial Intelligence. We already classify with AI; very soon we will put much more responsibility onto AI.

Preventive is a very interesting transformation, because it blurs the borders between treatment and behaviour/lifestyle/wellness, and between drugs and nutrition. It is directly related to chronic diseases and to post-discharge aftercare, even self-administered aftercare. To prevent readmission, the patient should take proper medication, adjust her behaviour and lifestyle, and consume special nutrition. E.g. diabetes patients should eat special sugar-free meals. There is a question of where drugs end and nutrition starts. What is Diet Coke? A first step towards drugs?

Pharmacogenomics is on the rise, enabling proactive steps into the future based on a known individual response to drugs. It is both predictive and preventive. It will be normal for mass universal drugs to start disappearing while narrowly targeted drugs are designed. Personal drugs are the next step, when the individual patient is the foundation for an almost exclusive treatment.

Participatory is interesting in that non-healthcare organisations are becoming related to healthcare. P&G produces sunscreens designed by skin type [at the molecular level], for older people and for children. Nestle produces dietary food. And recall that there are Johnson & Johnson, Unilever and even Coca-Cola. I strongly recommend investigating the PwC Health practice for insights and analysis.

Personal Starts from Wearable

The most important driver for the adoption of wearable medical devices is the ageing population. The average age of the population increases, while its mobility decreases. People need access to healthcare from everywhere, and at lower cost [especially those who have retired]. Chronic diseases are related to the ageing population too; they require constant control and physician intervention in case of high or low measurements. Such control is possible via multiple medical devices, many of which are smartphone-enabled, with a corresponding application that runs and “decides” what to tell the user.

The glucose meter is much smaller now; here is a slick one, the iBGStar. Heart rate monitors are available in plenty of choices. Fitness trackers and dietary apps make up the vast majority of [mobile health] apps in the stores. Wristbands are becoming an element of lifestyle, especially the fashionably designed Jawbone Up. The BodyMedia triceps band is good for calorie tracking. Add wireless weight scales… I have described the gadgets and principles in my previous posts Wearable Technology and Wearable Technology, Part II. Here I’d like to single out the Scanadu Scout, measuring vitals such as temperature, heart rate, oximetry [the oxygen saturation of your hemoglobin], ECG, HRV, PWTT, UA [urine analysis] and mood/stress. Just put the appropriate gadgets onto your body, gather data, analyse it, and apply predictive analytics to react or to prevent.


Personal is a Future of Medicine

If you think of all those personal gadgets and mobile phones as a sub-niche within medicine, then you are deeply mistaken, because medicine itself will become personal as a whole. It is a five-year transition from what we have to what should be [and will be]. The computer disappears, into the pocket and into the cloud. All pocket-sized and wearable gadgets will miniaturise, while cloud farms of servers will grow and run much smarter AI.

Each of us will become a “thing” within the Internet of Things. The IoT is not a Facebook [that’s too primitive]; it is the quantified and connected you, linked to the intelligent health cloud, and sometimes to physicians and other people [patients like you]. This will happen within the next 5-10 years, I think rather sooner than later. The technology changes within a few years: there were no tablets 3.5 years ago, and now we have plenty of them and even new bendable prototypes. Today we are experiencing the first wearable breakthroughs; imagine how they will advance within the next 3 years. Remember, we are accelerating; the technology is accelerating. Much more is to come, and it will change our lives. I hope it will transform healthcare dramatically. Many current problems will become obsolete via new emerging alternatives.

Predictive & Preventive is AI

Both are AI. Period. Providers must employ strong mathematicians, physicists and other scientists to create smarter AI. Google works on replicating the human brain on a non-biological substrate. Qualcomm designs neuro chips. IBM demonstrated brain-like computing; their new computing architecture is called TrueNorth.

Other participatory healthcare providers [technology companies, ISVs, food and beverage companies, consumer goods companies, pharma and life sciences] must adopt a strong AI discipline, because all future solutions will deal with extreme data [even All Data], which is impossible to tame with the usual tools. Forget the simple business logic of if/else/loop. Get ready for massive grid computing by AI engines. You might need to recall all the math you were taught and multiply the effort 100x. [In case of a poor math background, get ready for 1000x effort.]

Education is a Pathway

Both patients and providers must learn genetics, epigenetics, genomics, proteomics and pharmacogenomics. Right now we don’t have enough physicians to translate your voluntarily made DNA analysis [by 23andMe] into personal treatment. There are advanced genetic labs that take your genealogy and markers to calculate your disease risks. It should become simpler in the future, and it will happen through education.

Five years is the time frame for a new student to become a new physician. Actually slightly more is needed [for residency and fellowship], but we can expect the first observable changes in five years from today. You should start learning all this for your own needs right now, because you too must be educated to bring better healthcare to yourself!

