Advanced Analytics, Part V

This post covers the details of visualizing information for executives and operational managers on the mobile front-end: what is descriptive, what is predictive, what is prescriptive, what it looks like, and why. The scope of this post is the cap of the information pyramid. Even when I touch on something detailed, I stay at the very top, at the level of the most important information, without details on the underlying data. Previous posts contain the introduction (Part I) and the pathway (Part II) of information to the user, especially executives.

Perception pipeline

The user’s perception pipeline is: RECOGNITION –> QUALIFICATION –> QUANTIFICATION –> OPTIMIZATION. During recognition the user just grasps the entire thing, starts to take it in as a whole; ideally we should deliver a personal experience, so the information will be valuable, though probably delivered slightly differently from the previous context. More on personal experience in the chapter below. As soon as the user grasps/recognizes, she is able to classify, or qualify by commonality. The user operates with categories and scores within those categories. The scores are qualitative and very friendly for understanding, such as poor, so-so, good, great. Then the user is ready to reduce subjectivity and turn to numeric measurements/scoring. That is quantification: converting good & great into numbers (absolute and relative). As soon as the user is all set with numeric measurements, she is able to improve/optimize the business, the process, or whatever the subject is.
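To make the quantification step concrete, here is a minimal sketch, assuming a simple ordinal mapping from the qualitative scale above to numbers; the categories and weights are illustrative, not prescribed:

```python
# Hypothetical mapping from qualitative scores to numbers (quantification).
QUAL_TO_NUM = {"poor": 1, "so-so": 2, "good": 3, "great": 4}

def quantify(scores_by_category: dict[str, str]) -> dict[str, float]:
    """Convert qualitative category scores into relative numeric scores."""
    best = max(QUAL_TO_NUM.values())
    return {cat: QUAL_TO_NUM[s] / best for cat, s in scores_by_category.items()}

print(quantify({"delivery": "good", "quality": "great", "budget": "so-so"}))
# {'delivery': 0.75, 'quality': 1.0, 'budget': 0.5}
```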

Default screen

What should be rendered on the default screen? I bet it is a combination of the descriptive, the predictive, and the prescriptive, with a large portion of space dedicated to the descriptive. Why is the descriptive so important? Because until we build AI, trust and confidence in computer-generated suggestions are not at the required level. That’s why we have to show the ‘AS IS’ picture, to deliver how everything works and what happens, without any decorations or translations. If we deliver such a snapshot of the business/process/unit/etc., the issue of trust between human and machine might be resolved. We commonly believe that machines are pretty good at tracking tons of measurements, so let them track it and visualize it.

There must be an attempt from the machines to advise the human user. It could be done in the form of a personalized sentence, on the same screen, along with the descriptive analytics. Putting some TODOs there is absolutely OK; believing that the user will trust and follow them is naive. The user will definitely dig into the details of why such a prescription is proposed. It’s normal that the user is curious about the root-cause chain. Hence be ready to provide almost the same information with additional details on the reasons/roots, trends/predictions, classification & pattern recognition within KPI control charts, and additional details on the prescriptions. If we visualize [the top of the inverted pyramid] with a text message and a stack of vital signs, then we have to prepare an additional screen to answer that list of details. We will still remain at the top of the pyramid.

[Figure: default screen]

Next screen

If we got ‘AS IS’, then there must be ‘TO BE’, at least for the symmetry:) The user starts on the default screen (recognition and qualification) and continues to the next screen (qualification and quantification). The next screen should have more details. What kind of information would be logically relevant for the user who got the default screen and looks for more? Or, better to say, looks for ‘why’? Maybe it’s time to list them as bullets for more clarity (a code sketch of the chart decorations follows the list):

  • dynamic pattern recognition (with a highlight on the corresponding chart or charts) of what is going on; it could be one of the seven performance signals, and at least the three essential signals should be covered
  • highlighting the area of the significant event [dynamic pattern/signal] on the other charts, to juxtapose what is going on there and to foster thinking on potential relations; it’s still the human who thinks, while the machine assists
  • parameters & decorations for the same control charts, such as min/max/avg values and identification of quarters, months, sprints, or weeks
  • the normal range (also applicable to the default screen), or even ranges, because they could differ between quarters or years
  • a trend line, using the most applicable method for approximation/prediction of future values; e.g. a release forecast
  • clickable parts, for digging from relative values/charts into absolute values/charts for even more detailed analysis; from qualitative to quantitative
  • your ideas here
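Here is a minimal sketch of the chart decorations and trend line from the list above, assuming plain least squares for the trend and a mean ± 2σ rule for the normal range; the sprint data is illustrative:

```python
import statistics

def chart_decorations(values):
    """Min/max/avg and an illustrative 'normal range' for a KPI control chart."""
    mean = statistics.fmean(values)
    sd = statistics.stdev(values)
    return {
        "min": min(values),
        "max": max(values),
        "avg": round(mean, 2),
        # Illustrative normal range: mean +/- 2 standard deviations.
        "normal_range": (round(mean - 2 * sd, 2), round(mean + 2 * sd, 2)),
    }

def trend(values, horizon=3):
    """Least-squares trend line extended `horizon` points into the future."""
    n = len(values)
    x_mean = (n - 1) / 2
    y_mean = statistics.fmean(values)
    slope = (
        sum((x - x_mean) * (y - y_mean) for x, y in enumerate(values))
        / sum((x - x_mean) ** 2 for x in range(n))
    )
    intercept = y_mean - slope * x_mean
    return [intercept + slope * x for x in range(n + horizon)]

velocity = [21, 24, 23, 27, 26, 30]                 # points per sprint
print(chart_decorations(velocity))
print([round(v, 1) for v in trend(velocity)[-3:]])  # release forecast, 3 sprints
```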

[Figure: performance signal]

Recognition of signals as dynamic patterns is identification of the roots/reasons for something. Predictions and prescriptions could be driven by those signals. Prescriptions could be generic, but it’s better to make them personalized. Explanations could be designed for personal needs/perception/experience.

Personal experience

We consume information in various contexts. If it is the release of a project or product, then the context is different from the start of the zero sprint. If it’s a merger & acquisition, then the expected information differs from a quarterly review. It all depends on the user (from the CEO to CxOs to VPs to middle management to team management and leadership), on the activity, and on the device (Moto 360 or iPhone or iPad or car or TV or laptop). It matters where the user is physically; location does matter. Empathy does matter. But how to achieve it?

We could build the user’s interests from social networks and from interactions with other services. Interests are relatively static in time. It is also possible to figure out intentions. Intentions are dynamic and useful only while they are hot. Business intentions are observable from business comms. We could sense the intensity of communication between the user and the CFO and classify it as a context related to budgeting or budget review. If we put sensors on the corporate mail system (or mail clients) and combine them with GPS or Wi-Fi location sensors/services, or with a manual check-in somewhere, we could figure out that the user has indeed intensified comms with the CFO and they have worked together face-to-face. Having such a dynamic context, we are capable of delivering information in that context.
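A minimal sketch of such context sensing, assuming hypothetical mail-log and check-in records; the event shapes, the threshold, and the “budgeting” label are all illustrative:

```python
from collections import Counter
from datetime import date, timedelta

# Hypothetical sensor feeds: mail events and location check-ins.
mail_log = [(date(2014, 3, d), "CFO") for d in (3, 4, 4, 5, 6, 6, 7)]
checkins = [
    (date(2014, 3, 6), "user", "HQ, floor 4"),
    (date(2014, 3, 6), "CFO",  "HQ, floor 4"),
]

def co_located(checkins, a, b):
    """True if `a` and `b` checked in at the same place on the same day."""
    seen = {}
    for day, person, place in checkins:
        seen.setdefault((day, place), set()).add(person)
    return any({a, b} <= people for people in seen.values())

def detect_context(mail_log, checkins, window=timedelta(days=7), threshold=5):
    """Intensified comms with the CFO plus face-to-face work => budgeting context."""
    latest = max(day for day, _ in mail_log)
    recent = Counter(who for day, who in mail_log if latest - day <= window)
    if recent["CFO"] >= threshold and co_located(checkins, "user", "CFO"):
        return "budgeting / budget review"
    return "default"

print(detect_context(mail_log, checkins))   # budgeting / budget review
```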

The concept of personal experience (or personal UX) is similar to a braid (the hairstyle). Each graph of data helps to filter relevant data. Together, those graphs allow us to locate the real-time context. Having such a personal context, we could build and deliver the most valuable information to the user. More details on how to handle the interest graph, intention graph, mobile graph, and social graph, and on which sensors could bring the new data, are available in my older posts. So far I propose to personalize the text message for the default screen and the next screen, because it’s easier than vital signs, and it fits into wrist-sized gadgets like the Moto 360.


Advanced Analytics, Part II

This post is about the delivery and consumption of information, about the front-end. The big-picture introduction was given in the previous post, Advanced Analytics, Part I.

The Ideal

It would be neither the UI of current gadgets nor new gadgets. The ideal would be HCI leveled up to human-human communication, with a visually consistent live look, speech, motion, and other aspects of real-life comms. AI will finally be built, probably by 2030. Whenever it happens, the machines will try to mimic humans, and humans will be able to communicate with machines in a really natural way. The machines will deliver information. Imagine a boss asking his/her assistant how things are going, and she says: “Perfectly!” and then adds a portion of summaries and exceptions in a few sentences. If the answers are empathic and proactive enough, then there may be no next question like “So what?”

The first such humanized comms will be asynchronous messaging and semi-synchronous chats. If the peer on the other end is indistinguishable (human vs. machine), and the value and quality of the information are high, delivered onto mobile & wearable gadgets in real time, then it’s the first good implementation of the front-end for advanced analytics. The interaction interface is writing & reading. The second leveling up is speech. It’s technically more complicated to switch from writing-reading to listening-talking, but as soon as equally valuable information is delivered that way, advanced analytics will have made a phase shift. Such speaking humanized assistants will be everywhere around us, in business and in life. The third leveling up is visual. As soon as we can see and perceive the peer as a human, with look, speech, and motion, we are almost there. Further leveling up is related to touch, smell, and other aspects that mimic real life. That’s the Turing test, with a shift towards information delivery for business performance and decision making.

What to communicate?

As highlighted in books on dashboard design and taught by renowned professionals, most important is a personalized short message, supported with summaries and exceptions. Today we are able to deliver such information in text, chart, table, map, animation, audio, and video form onto mobile phones, wristband gadgets, glasses, car infotainment units, TV panels, and a number of other non-humanized devices. With present technologies it’s possible to cover the first and partially the second of the levels described in “The Ideal” section above. The third, visual, is still premature, but there are interesting and promising experiments with 3D holograms. As the technology gets cheaper, we will be able to project whatever look of business assistant we need.

Most challenging is personalization of the ad-hoc real-time answer to an inquiry. Empathy is important to tune to biological specifics. Context and continuity with previous comms are important to add value on top of previously delivered information. Interests, current intentions, recent connections, and real-time motion could help to shape the context properly. That data could be abstracted into data and knowledge graphs for further processing. Some details on those graphs are present in Six Graphs of Big Data.

Summary is the art of fitting the big picture into a single pager. Some still don’t understand why the single pager matters (even the UX Magazine guys). Here is a tip: anthropologically we’ve got a body and two arms, and the length of the arms, the distance between the arms, the distance between the eyes, and what we can hold in our arms are predefined. There is simply no way to change those anthropological restrictions. Hence a single page (A4 or Letter size) is the most ergonomic and proven size of artifact to be used by the hands. Remember, we are talking about the summaries now, hence some space is needed to represent them [summaries]. Summaries should be structured into an Inverted Pyramid information architecture, to optimize the process of information consumption by the decision maker.

Exceptions are important to communicate proactively, because they mean we’ve got an issue with predictability and expectations. There could be positive exceptions for sure, but if they were not expected, they must be addressed descriptively and explanatorily (reason, root cause, consequences, repeatability, and further expectations). Both summaries and exceptions shall fit into a single pager or even smaller space.
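A minimal sketch of such an inverted-pyramid message, assuming a plain record with the headline on top, then summaries, then exceptions; the field names and sample values are illustrative:

```python
from dataclasses import dataclass, field

@dataclass
class ExceptionNote:
    what: str
    reason: str            # root cause
    consequences: str
    expected_again: bool   # repeatability / further expectations

@dataclass
class InvertedPyramid:
    headline: str          # the single most important, personalized sentence
    summaries: list[str] = field(default_factory=list)
    exceptions: list[ExceptionNote] = field(default_factory=list)

msg = InvertedPyramid(
    headline="Release is on track; burn rate slightly above plan.",
    summaries=["Velocity stable at ~27 pts/sprint", "Cash flow positive for Q2"],
    exceptions=[ExceptionNote(
        what="Cloud spend +18% vs. plan",
        reason="Load tests left running over the weekend",
        consequences="Budget review needed if repeated",
        expected_again=False,
    )],
)
print(msg.headline)
```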

What exactly to communicate?

On one hand, the main message, summaries, and exceptions are too generic, too high-level as guidelines. On the other hand, the prescriptive/predictive/descriptive classification is too technical. Let’s add some soul. For software projects we could introduce more understandable categories of classification. “Projects exist only in two states: either too-early-to-tell or too-late-to-change.” That was said by Edward Tufte during a discussion of executive dashboards. Other, more detailed recommendations on information organization are listed below; they are based on the experience and vision of Edward Tufte and Peter Drucker, reused from Tufte’s forum.

  • The point of information displays is to assist thinking; therefore, ask first of all: What are the thinking tasks that the displays are supposed to help with?
  • Build systematic checks of data quality into the display and analysis system. For example, good checks of the data on revenue recognition must be made, given the strong incentives for premature recognition. Beware, in management data, of what statisticians call “sampling to please”.
  • Avoid heavy-breathing metaphors such as the mission control center, the strategic air command, the cockpit, the dashboard, or Star Trek. As Peter Drucker once said, good management is boring. If you want excitement, don’t go to a good management information system. Simple designs showing high-resolution, well-labelled information in tables and graphics will do just fine. One model might be the medical interface in Visual Explanations (pages 110-111) and the articles by Seth Powsner and me cited there. You could check out research with those medical summaries for iPad and iPhone in my previous posts: Mobile EMR Parts I, II, III, IV, and V.
  • Watch the actual data collection involved in describing the process. Watch the observations being made and recorded; chances are you will learn a lot about the meaning and quality of the numbers and about the actual process itself. Talk to the people making the actual measurements.
  • Measurement itself (and the apparent review of the numbers) can govern a process. No jargon about an Executive Decision Protocol Monitoring Support Dashboard System is needed. In fact, such jargon would be an impediment to thinking.
  • Too many resources were devoted to collecting data. It is worth thinking about why employees are filling out forms for management busybody bureaucrats rather than doing something real, useful, productive…

Closer to the executive information

Everything is clear with the single-sentence personalized real-time message. The Interest Graph, Intention Graph, Mobile Graph, and Social Graph might help to compile such a message.

Summaries could be presented as Vital Signs. Just as we measure a medical patient’s temperature, blood pressure, heart rate, and other parameters, we could measure the vital signs of a business the same way: cash flow, liquidity projections, sales, receivables, ratios.

Other indicators of business performance could be productivity, innovation in the core competency, ABC, the human factor, and value and value-add. Productivity should go together with predictability. There is an excellent blog post by Neil Fox, named The Two Agile Programming Metrics that Matter. Activity-based costing (aka ABC) could show where there is fat that could be cut out. Very often ABC is bound to the human factor. Another interesting relation exists between productivity and the human factor too, which is called emotional intelligence or engagement. Hence we’ve got an interdependent graph of measurements. The core competency defines the future of the particular business, hence innovations shall take place within the core competency. It’s possible to track and measure the innovation rate, but it’s vital to do it for the right competency, not for multiple ones. And finally, value and value-add. In the transforming economy we are switching from wealth orientation towards the happiness of users/consumers. In the Experience Economy we must measure and project the delivery of happiness to every individual. More details are available in my older post Transformation of Consumption.
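That interdependent graph of measurements can be sketched minimally as an adjacency map; the metrics and edges below merely restate the relations from this paragraph, and the traversal is a plain DFS:

```python
# Interdependent measurements as a directed graph: an edge A -> B means
# "A influences B" (relations taken from the paragraph above).
MEASUREMENT_GRAPH = {
    "human factor":  ["productivity", "activity-based costing"],
    "engagement":    ["productivity"],
    "productivity":  ["predictability"],
    "activity-based costing": ["value-add"],
    "innovation rate (core competency)": ["value"],
}

def influences(metric, graph=MEASUREMENT_GRAPH):
    """All metrics transitively influenced by `metric` (simple DFS)."""
    seen, stack = set(), [metric]
    while stack:
        for nxt in graph.get(stack.pop(), []):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

print(influences("human factor"))
# {'productivity', 'activity-based costing', 'predictability', 'value-add'}
```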

Finally for this post, we have to distinguish between executive and operational information. They should be designed/architected differently. More in the next posts. It’s recommended to read Peter Drucker’s work “The Essential Drucker” to unlock the wisdom of what executives really need, what is absent from the market, and how to design it for the modern perfect storm of technologies and growing business intelligence needs.


Advanced Analytics, Part I

I’m writing this post in very difficult times. Russia declared war on Ukraine and annexed Crimea, a significant part of Ukrainian territory. God bless Ukraine!

This post will melt together multiple things, partially described in my previous posts. There will be references to the details, by distinguished topic.

From Data to Information

We need information. Information is “knowledge communicated or received concerning a particular fact or circumstance”; the citation is from Wikipedia. The modern technique for useful information [in business] is called analytics. There is descriptive analytics, the AS-IS snapshot of things. There is predictive analytics, the WHAT-WILL. Predictive analytics is already on Gartner’s Plateau of Productivity. Prescriptive analytics goes further, answering WHAT & WHY. Decision makers [in business and life] need information in the Inverted Pyramid manner: the most important information on top, then major facts & reasons, and so on downstairs…

But at the beginning we have data: tons of data bits generated by a wide variety of data sources. There are big volumes of classical enterprise data in ERPs, CRMs, and legacy apps. That data is primarily relational, SQL-friendly. There are big volumes of relatively new social data: public and private user profiles, user timelines in social networks, mixed content of text, imagery, video, locations, emotions, relations, statuses, and so forth. There are growing volumes of machine data, from access control systems with turnstiles in the office or parking lot to M2M sensors on transport fleets or quantified-self individuals. Social and machine data is not necessarily SQL-friendly. Check out Five Sources of Big Data for more details.

Processing Pipeline

Everything starts from proper abstraction & design. Old-school methods still work, but modern methods unlock even more potential for creating information out of the raw data. Abstraction [of the business models or life models] leads to the design of data models, which are often some kind of graph. It is absolutely normal to have multiple graphs within a solution/product. E.g. people’s relations are straightforwardly abstracted into a Social Graph, while machine data might be represented as Network Graphs or a Mobile Graph. There are other common abstractions, such as a Logistics Graph, a Recommendations Graph, and so on. More details can be found in Six Graphs of Big Data.
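A minimal sketch of several graphs coexisting within one solution, assuming a plain adjacency representation; the graph names and nodes are illustrative:

```python
from collections import defaultdict

class TypedGraph:
    """One solution, multiple graphs: edges are namespaced by graph name."""
    def __init__(self):
        self.edges = defaultdict(set)   # (graph, node) -> set of neighbors

    def add(self, graph, a, b):
        self.edges[(graph, a)].add(b)
        self.edges[(graph, b)].add(a)

    def neighbors(self, graph, node):
        return self.edges[(graph, node)]

g = TypedGraph()
g.add("social", "alice", "bob")          # people relations -> Social Graph
g.add("mobile", "alice", "HQ-parking")   # machine/location data -> Mobile Graph
print(g.neighbors("social", "alice"))    # {'bob'}
```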

The key concept of the processing could be abstracted as a funnel. On the left you’ve got raw data; you feed it into the funnel and get some kind of information on the right. This is depicted at a high level in the diagram.

[Figure: data-to-information funnel]
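A minimal sketch of that funnel as a processing pipeline, assuming simple cleanse/extract/aggregate stages; the records and stages are illustrative:

```python
from collections import Counter

# Raw data bits from heterogeneous sources (illustrative).
raw = [
    {"source": "turnstile", "user": "alice", "ok": True},
    {"source": "crm",       "user": "bob",   "ok": False},
    {"source": "turnstile", "user": "alice", "ok": True},
]

def cleanse(records):
    """Drop broken records."""
    return [r for r in records if r["ok"]]

def extract(records):
    """Keep only the fields that matter for the question at hand."""
    return [r["user"] for r in records]

def aggregate(users):
    """Turn atoms into information: who, how often."""
    return Counter(users)

def funnel(data, stages):
    for stage in stages:
        data = stage(data)
    return data

print(funnel(raw, [cleanse, extract, aggregate]))   # Counter({'alice': 2})
```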

What makes it Advanced?

An interesting question… Modern does not always mean advanced. What makes it advanced is another technology, related to the user experience: mobiles and wearables. As soon as predictive and prescriptive analytics is delivered in real time at your fingertips, it can be considered advanced.

There are several technological limitations and challenges. Let’s start with the mobiles and wearables. The biggest issue is screen size. The entire visualization must be designed from scratch; reuse of big-screen designs does not work, despite our browsing of full-blown web sites on those small screens… The issue with wearables is their newness: nobody yet knows how to design for them. The paradigms will emerge as soon as the adoption rate starts to decelerate; right now we are observing the boom of wearables. There are insights on wearables in Wearable Technology and Wearable Technology, Part II. A lot will change there!

The requirement of real-time or near-real-time information delivery assumes high-performance computing at the backend: some data massage and pre-processing must be done in advance, and then the bits must be served out of memory. It is a client-cloud architecture, where the client is a mobile or wearable gadget and the cloud is a backend with plenty of RAM holding ready-made bits. This is depicted in the diagram; read it from left to right.

[Figure: client-cloud pipeline]
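A minimal sketch of the serve-from-memory side, assuming aggregates are precomputed ahead of requests; the cache shape and keys are illustrative:

```python
import time

# Ready-made bits kept in RAM at the backend.
CACHE: dict[str, dict] = {}

def precompute():
    """Data massage done in advance, on a schedule, not per request."""
    CACHE["vital_signs"] = {
        "cash_flow": 1.2e6, "sales": 3.4e6, "receivables": 0.8e6,
        "computed_at": time.time(),
    }

def serve(key):
    """Real-time path: no computation, just a memory lookup for the gadget."""
    return CACHE[key]

precompute()
print(serve("vital_signs"))
```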

So what?

This is a new era of tools and technologies that enable and accelerate the processing pipeline from data to information in your pocket/hand/eyes. There is a lack of tools and frameworks to melt old and modern data together. Hadoop is doing well there, but things are not as smooth as install & run. There is a lack of data platform tools. There is a lack of integration and aggregation tools. Visualization is totally absent; there are still no good executive dashboards even for PC screens, not to mention smartphones. I will address those opportunities in more detail in the next posts.


Next Five Years of Healthcare

This insight is related to all of you, your children, and your relatives. It is about health and healthcare. I feel confident envisioning the progress for five years, but cautious about guessing for longer. Even the next five years seem pretty exciting and revolutionary. Hope you will enjoy the pathway.

We have problems today

I will not bind this to any country, hence American readers will not find Obamacare, ACO, or HIE here. I will go global, as I like to do.

The old industry of healthcare still sucks. It sucks everywhere in the world. The problem is in the uncertainty of our [human] nature. It’s a paradox: medicine is one of the oldest practices and sciences, but nowadays it is one of the least mature. We still don’t know for sure why and how our bodies and souls operate. The reverse engineering should continue until we gain complete knowledge.

I believe there were civilisations tens of thousands of years ago… but let’s concentrate on ours. It took many years to start studying ourselves in depth. Leonardo da Vinci made a breakthrough into anatomy in the early 1500s; the accuracy of his anatomical sketches is amazing. Why didn’t others draw at the same level of perfection? The first heart transplant was performed only in 1967, in Cape Town, by Christiaan Barnard. Today we are still weak at brain surgery, even at the knowledge of how the brain works and what it is. Paul Allen significantly contributed to the mapping of the brain. The ambitious Human Genome Project was completed only in the early 2000s, with 92% of sampling at 99.99% accuracy. Today, there is no clear vision or understanding of what the majority of DNA is for. I personally do not believe in Junk DNA, and the ENCODE project confirmed it might be related to protein regulation. Hence there is still plenty of work to complete…

But even with the current medical knowledge, healthcare could be better. Very often the patient is admitted from scratch, as a new one. Almost always the patient is discharged without proper monitoring of medication, nutrition, behaviour, and lifestyle. There are no mechanisms, practices, or regulations to make it possible. For sure there are some post-discharge recommendations and assignments to aftercare professionals, but it is immature and very inaccurate in comparison to what it could be. There are glimpses of telemedicine, but it is still very immature.

And finally, the healthcare industry, in comparison to other industries such as retail, media, leisure, and tourism, is far behind in terms of consumer orientation. Even the automotive industry is more consumer-oriented than healthcare today. Economically speaking, there must be a transformation to the consumer-centric model. It is the same winning pattern across industries, and it [consumerism] should emerge in healthcare too. Enough about the current problems; let’s switch to the positive things: the technology available!

There could be Care Anywhere

We need care anywhere: whether it is underground in a diamond mine, in the ocean on board the Queen Mary 2, in a medical center or at home, in secluded places, or in a car, bus, train, or plane.

There are wireless networks (from cell providers), there are wearable medical devices, and there is the smartphone as a man-in-the-middle to connect them with the back-end. It is obvious that diagnostics and prevention could be improved, especially for chronic diseases and emergency cases (first aid, paramedics).

[Figure: care anywhere]

I personally experienced two emergency landings: once by being on board a six-hour flight, the second time by driving to another airport for a colleague. The impact is significant. Imagine that 300+ people landed in Canada; then, according to Canadian law, all luggage was unloaded, moved through X-ray, then loaded again. We all lost a few hours because of somebody’s heart attack.

It could be prevented if the passenger had a heart monitor, a blood pressure monitor, and other devices, and they would trigger an alarm to take the pill, or to ask the crew for the pill, in time. In the best case all wearable devices are linked to the smartphone [it is often allowed to turn on Bluetooth or Wi-Fi in airplane mode]. Then the app would ring and display recommendations to the passenger.
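A minimal sketch of that on-device alarm, assuming readings pushed from paired wearables; the thresholds are illustrative only, not medical advice:

```python
# Illustrative thresholds only -- not medical advice.
LIMITS = {"heart_rate": (45, 120), "systolic_bp": (90, 160)}

def check_vitals(readings):
    """Return alarm messages for readings outside their normal range."""
    alarms = []
    for vital, value in readings.items():
        lo, hi = LIMITS.get(vital, (float("-inf"), float("inf")))
        if not lo <= value <= hi:
            alarms.append(f"{vital} = {value}: take the pill or ask the crew")
    return alarms

# Readings as they might arrive from Bluetooth-paired devices in airplane mode.
print(check_vitals({"heart_rate": 134, "systolic_bp": 150}))
```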

4P aka Four P’s

Medicine should go Personal, Predictive, Preventive, and Participatory. It will become so in five years.

Personal is already partially explained above. Besides consumerism, which is a social or economic aspect, there is a truly biological personal aspect. We all differ by roughly six million genetic variants. That biological difference does matter. It defines carrier status for illnesses, it is related to the risks of illnesses, it is related to individual drug response, and it uncovers other health-related traits [such as lactose intolerance or alcohol addiction].

Personal medicine is equivalent to Mobile Health, because you are in motion and you are unique. The single sufficiently smart device you carry with you everywhere is the smartphone. Other wearable devices are still not connected [directly to the Internet of Things], hence you have to use them all with the smartphone in the middle.

The shift is from volume to value: from pay-for-procedures to pay-for-performance. The model becomes outcome-based. The challenge is how to measure performance: good treatment vs. poor bedside manner, poor treatment vs. good bedside manner, and so on.

Predictive is a pathway to the healthcare transformation. As healthcare experts say, “the providers are flying blind”. There is no good integration and interoperability between providers, or even within a single provider. The only rational way to “open the eyes” is analytics: descriptive analytics to get a snapshot of what is going on, predictive analytics to foresee the near future and make the right decisions, and prescriptive analytics to know even better the reasoning behind the future things.

Why is there still no good interoperability? Why is there no wide HL7 adoption? How many years have passed since those initiatives and standards? My personal opinion is that the current [and former] interoperability efforts are a dead end. The rationale is simple: if it were worth doing, it would already have been done. There might be something in the middle: providers will implement interoperability within themselves, but not at the scale of a state or country, or globally.

There are two reasons for the “dead interop”. The first is business-related: why should I share my stuff with others? I spent on expensive labs and scans; I don’t want others to benefit from my investments into this patient’s treatment. The second is the breakthrough in genomics and proteomics. Only 20 minutes are needed to purify DNA from body fluids with the Zymo Research DNA kit. A genome in 15 minutes under $100 has been planned by Pacific Biosciences for this year. Intel invested 100 million dollars in Pacific Biosciences in 2008. Besides gene mechanisms, there are others, not related to DNA change; they are also useful for analysis, prediction, and decision making per individual patient. [Read about epigenetics for more details.] And there is a third reason: Artificial Intelligence. We already classify with AI; very soon we will put much more responsibility onto AI.

Preventive is a very interesting transformation, because it blurs the borders between treatment and behaviour/lifestyle/wellness, and between drugs and nutrition. It is directly related to chronic diseases and to post-discharge aftercare, even self-aftercare. To prevent readmission, the patient should take proper medication, adjust her behaviour and lifestyle, and consume special nutrition. E.g. diabetes patients should eat special sugar-free meals. There is a question of where a drug ends and where nutrition starts. What is Coca-Cola Diet: a first step towards the drugs?

Pharmacogenomics is on the rise, enabling proactive steps into the future with known individual responses to drugs. It is both predictive and preventive. It will be normal for mass universal drugs to start disappearing, while narrowly targeted drugs are designed. Personal drugs are the next step, when the patient is the foundation for almost exclusive treatment.

Participatory is interesting in the way that non-healthcare organisations become related to healthcare. P&G produces sun screens designed by skin type [at the molecular level], for older people and for children. Nestle produces dietary food. And recall that there are Johnson & Johnson, Unilever, and even Coca-Cola. I strongly recommend investigating the PwC Health practice for insights and analysis.

Personal Starts from Wearable

The most important driver for the adoption of wearable medical devices is the ageing population. The average age of the population increases, while the mobility of the population decreases. People need access to healthcare from everywhere, and at lower cost [for those who have retired]. Chronic diseases are related to the ageing population too. Chronic diseases require constant control, with interventions of a physician in case of high or low measurements. Such control is possible via multiple medical devices. Many of them are smartphone-enabled, with a corresponding application that runs and “decides” what to tell the user.

The glucose meter is much smaller now; here is a slick one from iBGStar. Heart rate monitors are available in plenty of choices. Fitness trackers and dietary apps make up the vast majority of [mobile health] apps in the stores. Wrist bands are becoming an element of lifestyle, especially with the fashionably designed Jawbone Up. The triceps band BodyMedia is good for calorie tracking. Add here wireless weight scales… I’ve described gadgets and principles in the previous posts Wearable Technology and Wearable Technology, Part II. Here I’d like to single out the Scanadu Scout, measuring vitals like temperature, heart rate, oximetry [saturation of your hemoglobin], ECG, HRV, PWTT, UA [urine analysis], and mood/stress. Just put appropriate gadgets onto your body, gather the data, analyse it, and apply predictive analytics to react or to prevent.

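A minimal sketch of the gather-analyse-react loop, assuming a rolling per-vital baseline; the anomaly rule and the feed are illustrative:

```python
from collections import deque
from statistics import fmean, stdev

class VitalStream:
    """Rolling baseline per vital; flags readings that drift off it."""
    def __init__(self, window=20, z_limit=3.0):
        self.window = deque(maxlen=window)
        self.z_limit = z_limit

    def push(self, value):
        """Add a reading; return True if it is anomalous vs. the baseline."""
        anomalous = False
        if len(self.window) >= 5:
            mu, sigma = fmean(self.window), stdev(self.window)
            anomalous = sigma > 0 and abs(value - mu) / sigma > self.z_limit
        self.window.append(value)
        return anomalous

hr = VitalStream()
for beat in [72, 70, 74, 71, 73, 72, 75, 118]:   # illustrative heart-rate feed
    if hr.push(beat):
        print(f"react/prevent: heart rate {beat} is off the baseline")
```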

Personal is the Future of Medicine

If you think about all those personal gadgets and brick mobile phones as a sub-niche within medicine, then you are deeply mistaken, because medicine itself will become personal as a whole. It is a five-year transition from what we have to what should be [and will be]. The computer disappears, into the pocket and into the cloud. All pocket-sized and wearable gadgets will miniaturise, while cloud farms of servers will grow and run much smarter AI.

Every one of us will become a “thing” within the Internet of Things. IoT is not a Facebook [that is too primitive]; it is the quantified and connected you, linked to the intelligent health cloud, and sometimes to physicians and other people [patients like you]. This will happen within the next 5-10 years, I think sooner rather than later. The technology changes within a few years. There were no tablets 3.5 years ago; now we have plenty of them, and even new bendable prototypes. Today we experience the first wearable breakthroughs; imagine how they will advance within the next 3 years. Remember, we are accelerating; the technology is accelerating. Much more is to come, and it will change our lives. I hope it will transform healthcare dramatically. Many current problems will become obsolete via new emerging alternatives.

Predictive & Preventive is AI

Both are AI. Period. Providers must employ strong mathematicians, physicists, and other scientists to create smarter AI. Google works on duplicating the human brain on a non-biological carrier. Qualcomm designs neuro chips. IBM demonstrated brain-like computing; their new computing architecture is called TrueNorth.

Other participatory healthcare players [technology companies, ISVs, food and beverage companies, consumer goods companies, pharma and life sciences] must adopt a strong AI discipline, because all future solutions will deal with extreme data [even All Data], which is impossible to tame with the usual tools. Forget the simple business logic of if/else/loop. Get ready for massive grid computing by AI engines. You might need to recall all the math you were taught and multiply the effort 100x. [In case of a poor math background, get ready for 1000x efforts.]

Education is a Pathway

Both patients and providers must learn genetics, epigenetics, genomics, proteomics, and pharmacogenomics. Right now we don’t have enough physicians to translate your voluntarily made DNA analysis [by 23andMe] into personal treatment. There are advanced genetic labs that take your genealogy and markers to calculate the risks of diseases. It should be simpler in the future. And it will go through education.

Five years is the time frame for a new student to become a new physician. Actually, slightly more is needed [for residency and fellowship], but we could consider the first observable changes in five years from today. You should start learning it all for your own needs right now, because we all must be educated to bring better healthcare to ourselves!


Six Graphs of Big Data

This post is about Big Data. We will talk about the value and economic benefits of Big Data, not the atoms that constitute it [Big Data]. For the atoms you can refer to Wearable Technology or Getting Ready for the Internet of Things by Alex Sukholeyster, or just log the click stream… and you will get plenty of data, but it will be low-level, atom-level, not very useful.

The value starts at the higher levels, when we use people’s social connections, understand their interests and consumption, know their movements, predict their intentions, and link it all together semantically. In other words, we are talking about six graphs: Social, Interest, Consumption, Intention, Mobile, and Knowledge. Forbes mentions five of them in its Strategic Big Data insight. Gartner provided the report “The Competitive Dynamics of the Consumer Web: Five Graphs Deliver a Sustainable Advantage”; it is a paid resource, unfortunately. It would be nice to look inside, but we can move forward with our own vision, then compare it to Gartner’s and analyze the commonality and variability. I foresee that our vision is wider and more consistent!

Social Graph

This is the most analyzed and discussed graph. It is about connections between people. There is fundamental research about it, like Six Degrees of Separation. Since LiveJournal times (since 1999), the Social Graph concept has been widely adopted and implemented: Facebook and its predecessors for non-professionals, LinkedIn mainly for professionals, and then others such as Twitter and Pinterest. There is a good overview of Social Graph concepts and issues on ReadWrite. There is a good practical review of the social graph by one of its pioneers, Brad Fitzpatrick, called Thoughts on the Social Graph. Mainly he reports the problem of the absence of a single graph that is comprehensive and decentralized. It is a pain for integrations, because of all those heterogeneous authentications and “walled garden” issues.

Regarding implementation of the Social Graph, there is advice from successful implementers, such as Pinterest. The official Pinterest engineering blog revealed how to build a follower model from scratch. We can look at the same thing [the Social Graph] from a totally different perspective: technology. The modern technology provider Redis features a tutorial on how to build a Twitter clone in PHP and (of course) Redis. So the situation with the Social Graph is more or less established. Many build it, but nobody has solved the problem of having a single, consistent, independent graph (probably built from the other graphs).
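A minimal sketch of a follower model, assuming plain in-memory sets rather than Redis; the API shape is illustrative, not Pinterest’s or Redis’s actual one:

```python
from collections import defaultdict

# followers["alice"] = who follows alice; following["bob"] = whom bob follows.
followers = defaultdict(set)
following = defaultdict(set)

def follow(who, whom):
    following[who].add(whom)
    followers[whom].add(who)

def unfollow(who, whom):
    following[who].discard(whom)
    followers[whom].discard(who)

def timeline_audience(author):
    """Whose timelines a new post by `author` should fan out to."""
    return followers[author]

follow("bob", "alice")
follow("carol", "alice")
print(timeline_audience("alice"))   # {'bob', 'carol'}
```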

Interest Graph

It is a representation of the specific things in which an individual is interested. Read more about the Interest Graph on Wikipedia. This is the next hot graph after the social one. Indeed, the Interest Graph complements the Social one; Social Commerce sees the Interest + Social Graphs together. People provide the raw data on their public and private profiles. Crawling and parsing of that data, plus special analysis, is capable of building the Interest Graph for each of you. Gravity Labs created a special technology for building the Interest Graph; they call it the Interest Graph Builder. There is an overview (follow the previous link) and a demo. There are ontologies, entities, entity matching, etc. An interesting insight about the future of the Interest Graph is authored by Pinterest’s head of engineering. The idea is to improve Amazon’s recommendation engine, based on classifiers (via pins): Pinterest knows the reasoning, “why” users pinned something, while Amazon doesn’t. We are approaching the Intention Graph.
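A minimal sketch of building an Interest Graph from crawled profile text, assuming a hypothetical mini-ontology; real entity matching is far more involved:

```python
# Hypothetical mini-ontology: surface keyword -> canonical interest entity.
ONTOLOGY = {
    "marathon": "running", "5k": "running",
    "espresso": "coffee", "latte": "coffee",
    "dslr": "photography",
}

def interest_graph(profiles):
    """user -> set of matched interest entities, from crawled profile text."""
    graph = {}
    for user, text in profiles.items():
        words = (w.strip(".,!?") for w in text.lower().split())
        graph[user] = {ONTOLOGY[w] for w in words if w in ONTOLOGY}
    return graph

profiles = {"alice": "Training for a marathon, fueled by espresso."}
print(interest_graph(profiles))   # {'alice': {'running', 'coffee'}}
```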

Intention Graph

Not much can be said about intentions. It is about what we do and why we do it. Social and Interests are static in comparison to Intentions. This is related to prescriptive analytics, because it deals with the reasoning and motivation, the “why” of what happens or will happen. It seems that the other graphs together could reveal much more about intentions than trying to figure them [Intentions] out separately.

The Intention Graph is tightly bound to personal experience, or personal UX. It was foreseen as far back as 1999, by the Harvard Business Review, as the Experience Economy. Many years have passed, but not much has been implemented towards personal UX. We still don’t stage a personal ad hoc experience from goods and services exclusively for each user. I predict that the Social + Interest + Consumption + Mobile graphs will allow us to build a useful Intention Graph and achieve the capability to build/deliver individual experiences. When the individual is within the service, we are ready to predict some intentions, but only if the Service Design was done properly.

Consumption Graph

One of the most important graphs of Big Data. Some call it the Payment Graph, but Consumption is a better name, because we can consume without payment. The Consumption Graph is relatively easy for e-commerce giants, like Amazon and eBay, but tricky for 3rd parties, like you. What if you want to know what a user consumes? There are no sources of such information. Both Amazon and eBay are “walled gardens”. Each tracks what you do (browse, buy, put into a wish list, etc.) and how you do it (when you log in, how long you stay, the sequence of your activities, etc.); they send you notifications/suggestions and measure how you react, plus many other tricks to handle descriptive, predictive, and prescriptive analytics. But what if the user buys from other e-stores? It’s the same problem as with the Social Graph. IMHO there should be a mechanism to assemble a user’s Consumption Graph from sub-graphs (if the user identifies herself).

Well, there is still a big portion of retail consumption. How do they build your Consumption Graph? Very easily: via loyalty cards. You think about discounts when using those cards, while retailers think about your Consumption Graph and predict what to do with all users/clients together, and even individually. There is the same problem of disconnected Consumption Graphs as in e-commerce, because each store has its own card. There are aggregators like Key Ring. Theoretically, they simplify the life of the consumer by shielding her from all those cards. But in reality, the back-end logic is able to build a bigger Consumption Graph for retail consumption! Another aspect: consumption of goods vs. consumption of services and experiences, is there a difference? What is the difference between hard goods and digital goods? There are other cool things about retail, like tracking clients and detecting their sex and age. It all becomes the Consumption Graph. Think about that yourself:)
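A minimal sketch of assembling one Consumption Graph from disconnected sub-graphs, assuming the user has identified herself to each store; the record shapes and IDs are illustrative:

```python
from collections import defaultdict

# Per-store sub-graphs: lists of (store-local user ID, item).
amazon  = [("alice@mail", "book"), ("alice@mail", "kindle")]
grocery = [("card-7712", "coffee"), ("card-7712", "milk")]

# The user identifies herself: store-local IDs map to one person.
IDENTITY = {"alice@mail": "alice", "card-7712": "alice"}

def merge(*subgraphs):
    """One Consumption Graph assembled from per-store sub-graphs."""
    graph = defaultdict(list)
    for sub in subgraphs:
        for local_id, item in sub:
            graph[IDENTITY[local_id]].append(item)
    return dict(graph)

print(merge(amazon, grocery))
# {'alice': ['book', 'kindle', 'coffee', 'milk']}
```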

Anyway, the Consumption Graph is very interesting, because we are digitizing this world. We are printing digital goods on 3D printers. So far the shape and look & feel are identical to the cloned product (e.g. a cup), but the internals are different. As soon as 3D printers are able to reconstruct the crystal structure, it will be a brand new way of consumption. It is a thrilling and wide topic, hence I am going to discuss it separately. Keep in touch so as not to miss it.

Mobile Graph

This graph is built from mobile data, which does not mean the data comes from mobile phones. Today maybe the majority of data is still generated by smartphones, but tomorrow that will not be true; check out Wearable Technology to figure out why. A second important notion concerns the views on understanding the Mobile Graph. The marketing-based view described on Floatpoint is indeed about smartphone usage. It considers the Mobile Graph to be a map of interactions (with contexts of how people interact), such as Web, social apps/bookmarks/sharing, native apps, GPS and location/check-ins, NFC, digital wallets, media authoring, and pull/push notifications. I would view the Mobile Graph as a user-in-motion: where the user resides at each moment (home, office, on the way, school, hospital, store, etc.), how the user relocates (fast by car, slow by bike, very slow on foot; uniformly or not, e.g. via public transport), how the user behaves at each location (static, dynamic, mixed), what other users’ motions take place around (who else traveled the same route, or who also resided at the same location for that time slot), and so on. I look at the Motion Graph more as a Mesh Network.

Why does the dynamic networking view make more sense? Consider users as both people and machines. Recall IoT and M2M. Recall the initiatives by Ford and Nokia for resolving gridlock problems in real time. The Mobile Graph is better related to motion, mobility, i.e. to the essence of the word “mobile”. If we consider it from the motion point of view and extend it with the marketing point of view, we will get a pretty useful model of the user and society. The Mobile Graph is not for oneself; at least, it is more efficient for many than for one.
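A minimal sketch of the user-in-motion view, assuming (user, place, time-slot) records; co-location and shared routes fall out of a simple grouping, and the records are illustrative:

```python
from collections import defaultdict

# (user, place, time_slot) -- the atoms of a user-in-motion graph.
sightings = [
    ("alice", "metro-A", "08:00"), ("bob", "metro-A", "08:00"),
    ("alice", "office",  "09:00"), ("bob", "office",  "09:00"),
    ("carol", "school",  "09:00"),
]

def co_located(sightings):
    """(place, slot) -> users there at the same time (only shared spots)."""
    groups = defaultdict(set)
    for user, place, slot in sightings:
        groups[(place, slot)].add(user)
    return {k: v for k, v in groups.items() if len(v) > 1}

print(co_located(sightings))
# e.g. {('metro-A', '08:00'): {'alice', 'bob'}, ('office', '09:00'): {'alice', 'bob'}}
```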

Knowledge Graph

This is a monster one. It is about the semantics between all digital and physical things. Why does Google still rock? Because they built the Knowledge Graph. You can see it in action here. Check out interesting tips & tricks here. Google’s Knowledge Graph is a tool to find the unGoogleable. There is a post on Blumenthals arguing that Google’s Local Graph is much better than the Knowledge Graph, but this will probably be eliminated with time. IMHO their Knowledge Graph is being taught iteratively.

As Larry Page has said many times, Google is not a search engine or an ads engine, but a company that is building Artificial Intelligence. Ray Kurzweil joined Google to simulate the human brain and recreate a kind of intelligence. Here is a nice article on how Larry Page and the Knowledge Graph helped to seduce Ray Kurzweil into joining Google: “The Knowledge Graph knows that Santa Cruz is a place, and that this list of places is related to Santa Cruz”.

We can look at those graphs together. Social will be in the middle, because we (people) like to be in the center of the Universe:) The Knowledge Graph could be considered a meta-graph, penetrating all the other graphs, or a super-graph, including multiple parts of the other graphs. Even now, the Knowledge Graph is capable of handling dynamics (e.g. flight status).

Other Graphs

There are other graphs in the world of Big Data, and technology ecosystems are emerging around them. A boost is expected from biotech: there is plenty of gene data, but a lack of structured information on top of it. Brand new models (graphs) will emerge, easing the understanding of those terabytes of data. Circos was invented in the field of genomic data to simplify understanding of data via visualization. More experiments can be found on the Visual Complexity web site. We are living in a different world than a decade ago. And it is exciting. Just plan your strategies correspondingly. Consider Big Data strategically.
