
Advanced Analytics, Part V

This post covers the details of visualizing information for executives and operational managers on the mobile front-end: what is descriptive, what is predictive, what is prescriptive, how it looks, and why. The scope of this post is the cap of the information pyramid. Even when I touch on something detailed, I remain at the very top, at the level of the most important information, without details on the underlying data. Previous posts contain the introduction (Part I) and the pathway (Part II) of the information to the user, especially executives.

Perception pipeline

The user’s perception pipeline is: RECOGNITION –> QUALIFICATION –> QUANTIFICATION –> OPTIMIZATION. During recognition the user simply grasps the entire thing and starts to take it in as a whole. Ideally we should deliver a personal experience, so the information will be valuable but probably presented slightly differently from the previous context (more on personal experience in the chapter below). As soon as the user grasps/recognizes, she is able to classify, or qualify, by commonality. The user operates with categories and scores within those categories. The scores are qualitative and very friendly to understand, such as poor, so-so, good, great. Then the user is ready to reduce subjectivity and turn to numeric measurement/scoring. That is quantification: converting good & great into numbers (absolute and relative). As soon as the user is all set with numeric measurements, she is able to improve/optimize the business, the process, or whatever the subject is.
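The qualification and quantification steps can be sketched in code: qualitative categories on one side, absolute and relative numbers on the other. This is a minimal illustration; the score scale, thresholds and function names are my assumptions, not part of any product.

```python
# Sketch of the qualification <-> quantification steps.
# The 0-100 scale and the bucket thresholds are illustrative assumptions.

def qualify(score: float) -> str:
    """Map a numeric score into a friendly qualitative category."""
    if score < 40:
        return "poor"
    if score < 60:
        return "so-so"
    if score < 80:
        return "good"
    return "great"

def quantify(values: list[float], target: float) -> dict:
    """Turn raw measurements into absolute and relative numbers."""
    latest = values[-1]
    return {
        "absolute": latest,
        "relative_to_target": round(latest / target, 2),
        "category": qualify(latest),
    }

print(quantify([55.0, 62.0, 71.0], target=90.0))
```

The same data thus serves both screens: the category feeds recognition and qualification, the absolute/relative pair feeds quantification.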

Default screen

What should be rendered on the default screen? I bet it is a combination of the descriptive, predictive and prescriptive, with a large portion of the space dedicated to descriptive. Why is descriptive so important? Because until we build AI, trust and confidence in computer-generated suggestions are not at the required level. That’s why we have to show the ‘AS IS’ picture, to convey how everything works and what happens, without any decorations or translations. If we deliver such a snapshot of the business/process/unit/etc., the issue of trust between human and machine might be resolved. We tend to believe that machines are pretty good at tracking tons of measurements, so let them track it and visualize it.

There must be an attempt by the machine to advise the human user. It could be done in the form of a personalized sentence, on the same screen, along with the descriptive analytics. Putting some TODOs there is absolutely OK, while believing that the user will trust and follow them is naive. The user will definitely dig into the details of why such a prescription is proposed; it is normal for the user to be curious about the root-cause chain. Hence be ready to provide almost the same information with additional details on the reasons/roots, trends/predictions, classification & pattern recognition within KPI control charts, and additional details on the prescriptions. If we visualize [the top of the inverted pyramid] with a text message and a stack of vital signs, then we have to prepare an additional screen to answer that list of mentioned details. We will still remain at the top of the pyramid.



Next screen

If we’ve got ‘AS IS’ then there must be ‘TO BE’, at least for the symmetry:) The user starts on the default screen (recognition and qualification) and continues to the next screen (qualification and quantification). The next screen should have more details. What kind of information would be logically relevant for the user who got the default screen and looks for more? Or, better said, looks for ‘why’? Maybe it’s time to list it as bullets for more clarity:

  • dynamic pattern recognition (with a highlight on the corresponding chart or charts) of what is going on; it could be one of the seven performance signals, or one of the three essential signals
  • highlight the area of the significant event [dynamic pattern/signal] on the other charts to juxtapose what is going on there, to foster thinking about potential relations; it’s still the human who thinks, while the machine assists
  • parameters & decorations for the same control charts, such as min/max/avg values, identification of quarters or months or sprints or weeks and so on
  • the normal range (also applicable to the default screen), or even ranges, because they could differ between quarters or years
  • a trend line, using the most applicable method for approximation/prediction of future values; e.g. a release forecast
  • its parts should be clickable for digging from relative values/charts into absolute values/charts for even more detailed analysis; from qualitative to quantitative
  • your ideas here
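The juxtaposition bullet above can be sketched: once a significant event window is found on one chart, the same window is cut out of the other charts so the human can look for relations. The detection rule here (a point beyond two sigma) is a simplifying assumption, and all series names are made up.

```python
# Sketch: highlight one chart's event window across the other charts.
# The "significant event" rule (beyond 2 sigma) is an illustrative assumption.

from statistics import mean, stdev

def event_window(series: list[float], half_width: int = 2):
    """Find the first point beyond 2 sigma; return an index window around it."""
    m, s = mean(series), stdev(series)
    for i, v in enumerate(series):
        if abs(v - m) > 2 * s:
            return max(0, i - half_width), min(len(series), i + half_width + 1)
    return None

def juxtapose(charts: dict[str, list[float]], window) -> dict[str, list[float]]:
    """Slice the same window out of every chart for side-by-side review."""
    lo, hi = window
    return {name: series[lo:hi] for name, series in charts.items()}

velocity = [20, 21, 19, 22, 48, 21, 20]        # spike at index 4
others = {"defects": [3, 2, 4, 3, 9, 4, 3], "mood": [7, 7, 6, 7, 4, 6, 7]}
w = event_window(velocity)
print(w, juxtapose(others, w))
```

The machine only assists here: it finds and aligns the windows, while the conclusion about potential relations stays with the user.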


Recognition of signals as dynamic patterns is identification of the roots/reasons for something. Predictions and prescriptions could be driven by those signals. Prescriptions could be generic, but it’s better to make them personalized. Explanations could be designed for personal needs/perception/experience.


Personal experience

We consume information in various contexts. If it is the release of a project or product, then the context is different from the start of the zero sprint. If it’s a merger & acquisition, then the expected information differs from a quarterly review. It all depends on the user (from CEO to CxOs to VPs to middle management to team management and leadership), on the activity, and on the device (Moto 360 or iPhone or iPad or car or TV or laptop). It matters where the user is physically; location does matter. Empathy does matter. But how to achieve it?

We could build the user’s interests from social networks and from interactions with other services. Interests are relatively static in time. It is also possible to figure out intentions. Intentions are dynamic and useful only while they are hot. Business intentions are observable from business comms. We could sense the intensity of communication between the user and the CFO and classify it as a context related to budgeting or budget review. If we put sensors on the corporate mail system (or mail clients) and combine them with GPS or Wi-Fi location sensors/services, or with a manual check-in somewhere, we could figure out that the user has indeed intensified comms with the CFO and worked together face-to-face. Having such a dynamic context, we are able to deliver the information in that context.
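The CFO example can be sketched as a toy context classifier: comms intensity against a baseline, combined with a co-location flag. Everything here is an assumption for illustration only: the thresholds, the signal names, and the idea that mail counts and location are already available as inputs.

```python
# Illustrative sketch only: infer a "budget review" context from comms
# intensity plus co-location. Thresholds and labels are assumptions.

def comms_intensity(mails_per_day: list[int], baseline: float) -> float:
    """Ratio of recent (last 3 days) mail volume with a peer vs. the usual baseline."""
    recent = sum(mails_per_day[-3:]) / 3
    return recent / baseline

def infer_context(mails_with_cfo: list[int], baseline: float,
                  same_location_as_cfo: bool) -> str:
    intensity = comms_intensity(mails_with_cfo, baseline)
    if intensity > 2.0 and same_location_as_cfo:
        return "budget-review"      # hot intention: face-to-face work with the CFO
    if intensity > 2.0:
        return "budget-related"
    return "default"

print(infer_context([2, 3, 2, 9, 11, 10], baseline=2.5, same_location_as_cfo=True))
```

With such a dynamic context label in hand, the front-end can choose which vital signs and which personalized message to render first.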

The concept of personal experience (or personal UX) is similar to a braid (the hairstyle). Each graph of data helps to filter relevant data. Together those graphs allow us to locate the real-time context. Having such a personal context, we could build and deliver the most valuable information to the user. More details on how to handle the interest graph, intention graph, mobile graph and social graph, and which sensors could bring that new data, are available in my older posts. So far I propose to personalize the text message for the default screen and the next screen, because it’s easier than vital signs, and it fits into wrist-sized gadgets like the Moto 360.


Advanced Analytics, Part IV

This post is the next one in the series on Advanced Analytics. Check out the previous introduction, ruminations on conveying information, and modern concepts of information and data for executives.

Dashboard or not Dashboard?

There is nothing wrong with a dashboard, except that it puts a stereotype on your consciousness, and that defines your further expectations. In general dashboards are good; check out this definition from Wikipedia: “An easy to read, often single page, real-time user interface, showing a graphical presentation of the current status (snapshot) and historical trends of an organization’s key performance indicators (KPIs) to enable instantaneous and informed decisions to be made at a glance.”

Easy to read sounds exciting; at last executive or operating information will be friendly and easy to read! Single page resonates with my description of A4 or Letter size [What to communicate?], due to the anthropological sizes of our bodies and body parts, and the context of information consumption and usability. A real-time user interface could be improved even further with real-time or near-real-time data delivery to the user. Visualization as a graphical representation of the current status – snapshot – and the dynamics/trend of the indicator resonates with ‘All Data’. Enablement of instantaneous and informed decisions resonates with Vital Signs.

So the conclusion is to dashboard. The open question is how to dashboard. Is there a standard for dashboards? What are the best practices? What are the known pitfalls? What will modern dashboards look like? Let’s start with the known problems, so that we know what to overcome to make dashboarding more usable and valuable.

Gauges suck!

Dashboard gauges and dials do suck. They are the wrong way to visualize information. What is a gauge? From Wikipedia: “In engineering, a gauge is a device used to make measurements or in order to display certain information, like time.” The primary task of a gauge is to perform a measurement; the secondary task is to display it. Furthermore, a gauge must use a specific principle of measurement, because the same thing could be measured via multiple different principles (e.g. temperature could be measured with a mercury thermometer, infrared radiation, a resistance thermometer and other ways). What do “dashboard gauges” do? They do not measure anything at all, while they display almost everything. That’s the root of the problem. To be specific, the problem lies in the ill application of the principle of analogy [skeuomorphism].

What are other problems of the dials/gauges?

  • They “say little and do so poorly”. By Stephen Few.
  • They might look cute, but like empty calories they give little information for the amount of space they consume. By Aeternus Consulting.
  • Retro design is crippling innovation. By Wired. Skeuomorphs aren’t always bad; the Kindle is easy to use precisely because it behaves so much like a traditional print book.
  • Do you know how much research went into determining that idiot lights and gauges that look just like those in our cars are the best way to display information on a dashboard for monitoring your organization’s performance? The answer is zilch; none whatsoever. Back in the beginning when we started calling computer-based monitoring displays dashboards, someone had the bright idea of making display widgets that looked like those in cars. This is an example of taking a metaphor too literally. By Stephen Few, Perceptual Edge. Hence don’t be fooled by the illusion of control instead of real control.
  • And several more arguments of why dashboard dials and gauges are useless for KPIs. I will devote entire next section to those details. Keep reading.

Gauges are bad for KPIs

This section extends and continues the previous one, with more focus on the visualization of KPIs. What are KPIs? From Wikipedia: “A key performance indicator (aka KPI) is a type of performance measurement. An organization may use KPIs to evaluate its success, or to evaluate the success of a particular activity in which it is engaged. Sometimes success is defined in terms of making progress toward strategic goals, but often success is simply the repeated, periodic achievement of some level of operational goal (e.g. zero defects, 10/10 customer satisfaction, etc.)” Just read it aloud and listen to the words – activity, performance, progress, repeated, periodic. All those words imply duration in time. But what do we have on a gauge? Nothing. The gauge clips and ignores everything except the current value. That’s poor. This and other problems are listed below; they are partially reused for your convenience from the Stacey Barr blog:

  • The purpose of performance measures is to track change toward a target through time. Performance doesn’t improve immediately – you need to allow time to change your processes so they become capable of operating at that targeted level. Performance measurement involves monitoring change over time, and looking for signals about whether it’s moving close enough and fast enough toward the target.
  • Dials and gauges don’t show change over time at all. You are flying blind. You need this [dynamic] context in your performance measures to help you prioritize. Because dials and gauges don’t use this context, they are also incapable of showing you true signals in your measures.
  • Dials and gauges highlight false signals. Dials and gauges have you knee-jerk reacting to routine variation. Check out Stacey Barr post on routine variation and other stats tips for KPIs.
  • There is a better way to show performance measures on dashboards than dials or gauges: we can provide historical context and valid rules for signals of change. Check out smartlines. You will be surprised to see the names of Tufte and Few, and sparklines, there. Besides that, there are other ideas.
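The “valid rules for signals of change” mentioned above usually come from control charts. A minimal XmR-style sketch: natural process limits derived from the average moving range, so that only points outside the limits count as true signals rather than routine variation. This is textbook XmR arithmetic (the 2.66 constant), not Stacey Barr’s or anyone’s specific product.

```python
# Minimal XmR-chart sketch: natural process limits from moving ranges.
# Points outside the limits are true signals; everything inside is
# routine variation and should not trigger knee-jerk reactions.

from statistics import mean

def xmr_limits(series: list[float]) -> tuple[float, float, float]:
    """Centre line and natural process limits: mean +/- 2.66 * average moving range."""
    centre = mean(series)
    moving_ranges = [abs(b - a) for a, b in zip(series, series[1:])]
    amr = mean(moving_ranges)
    return centre, centre - 2.66 * amr, centre + 2.66 * amr

def signals(series: list[float]) -> list[int]:
    """Indices of points outside the natural process limits."""
    _, lo, hi = xmr_limits(series)
    return [i for i, v in enumerate(series) if v < lo or v > hi]

kpi = [12, 13, 11, 12, 14, 13, 25, 12, 13]
print(xmr_limits(kpi), signals(kpi))
```

On the dashboard this means drawing the series with its two limit lines: the eye immediately sees the history, the routine variation band, and the one point that genuinely deserves attention.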

Criteria for KPI visualization

Here is a list of criteria for proper tracking, analysis and visualization of KPIs. Once you understand them, it becomes obvious why gauges and dials should be archived as a weak use of skeuomorphism. A proper approach would be capable of conveying both the detection and the on-screen representation of each item on this list:

  • Chaos in performance. First as deviation from predictability, then true chaos.
  • Worsening performance. E.g. degrading productivity or quality or value.
  • Flat plateau. Everything is stable and not changing, while change towards growing revenue or growing happiness is expected.
  • Wrong pace. We are improving but not fast enough. The target remains out of reach.
  • Right pace. We will reach the strategic target in time.
  • We are there. We have reached the target already.
  • We exceeded expectations. The target is exceeded.

Check out “The 7 Performance Signals to Look For in Your KPIs” by Stacey Barr for more comments and details.
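The pace-related criteria in the list (wrong pace, right pace, we are there, exceeded) can be sketched as a classification against a dated target. This is a deliberately simplified illustration using linear extrapolation; it is not Stacey Barr’s actual method, and the labels are paraphrases of the criteria above.

```python
# Simplified illustration of the pace-related performance signals.
# Linear extrapolation of the average change per period is an assumption.

def pace_signal(series: list[float], target: float, periods_left: int) -> str:
    latest = series[-1]
    if latest > target:
        return "exceeded expectations"
    if latest == target:
        return "we are there"
    slope = (series[-1] - series[0]) / (len(series) - 1)  # average change per period
    if slope <= 0:
        return "worsening or flat"
    projected = latest + slope * periods_left
    return "right pace" if projected >= target else "wrong pace"

print(pace_signal([50, 54, 58, 62], target=80, periods_left=5))  # improving fast enough
print(pace_signal([50, 52, 54, 56], target=80, periods_left=5))  # improving too slowly
```

A real implementation would detect chaos and plateaus from the control-chart limits first, and only then classify the pace; this sketch covers just the pace branch.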


The concept of a dashboard is up to date, powerful and suitable for modernization. The previous posts confirm that with a majority of the arguments. But the use of dials or gauges is not the right design solution for visualization on a dashboard. Line charts, control charts, ‘All Data’ charts, smartlines, sparklines, logarithmic charts and other types of graphical representation are still elegant and powerful enough to conform to the seven criteria for KPI visualization. On the other hand, they conform to high-level, executive-friendly information (see the section What exactly to communicate?).


Advanced Analytics, Part III

This post is also about the front-end part, as a conduit for information delivery to the decision maker. Two previous posts are available; it’s recommended to check out the Introduction into the Big Picture and the Ruminations on Conveying, Organization and Segmentation of Information for Executives as users.

Big Data? All Data!

It’s time to pay attention to all the data available. Personally I see no reason to limit ourselves to big data. All data matters; the most recent data matters more, the oldest data matters less. It is possible to visualize plenty of data in a relatively small space, which is convenient for delivery onto smartphones and wrist-sized gadgets. The rationale is to depict firm details for the most recent/relevant data, where relevancy is determined by the adopted processes. In SDLC it could be a sprint or iteration; in healthcare it could be the period since the current admission. The latest measured value matters a lot, hence it must be clearly distinguished on top of the other values within the period. The dynamics during the period also matter, hence they should be visualized.

Previous periods/cycles do matter, especially for comparison and benchmarking, to enable better strategic planning. The firm details of the dynamics during past cycles are not so valuable, while deviations in both positive and negative directions are very informative. The decision maker knows how to classify the current cycle’s exceptions: whether something brand new happened, or whether the business experienced even more severe deviations in the past, and can recall how.

Inspired some time ago by the medical patient summaries by Tufte and Powsner, I’ve tried to generalize the concept to be applicable to other, non-healthcare industries. So far it fits perfectly and allows customization and flexibility, especially for the optimization of processes where people usually use control charts on dashboards. Below is a generalized version of the ‘All Data’ chart as a concept.


Inverted Pyramid

The principle of the Inverted Pyramid is partially present here, with the pyramid rotated by 90 degrees. The most important information is within the biggest part of the chart, in the center and on the right. It is information rather than data, because it conveys the latest value, the dynamics during the recent cycle, benchmarking against the normal range, and an indication of deviations (in a qualitative way, using only two categories: somewhat and significant). It’s rational to stay in the range of 10 measurements, so that they are relatively easy to remember.

The next, narrower part, to the left of the sparkline, is partially information and partially data. It’s used for comparison and benchmarking, analysis of exceptions, and retrospective analysis. It is absolutely logical to fit 10 times more data there, so that if there is a lack of information in the biggest part, the user is able to dig deeper and obtain significantly more facts and reasons, as measurements of the same thing. Hence a phase shift means at least 10x growth. With the medical patient summaries the ratio was similar: one to two months between admission and discharge vs. one previous year. But 10x is not a hard ratio; it is more an indication that we need a kind of phase shift to different data, a different level of abstraction.

The leftmost narrow part is actually all of the oldest data. It is an additional phase shift relative to the middle part, hence imagine an additional 10x increase and digging into a different level of abstraction again. Only exceptions marked as min/max are comparable between all parts. Everything else constitutes the inverted pyramid of making information out of raw data.
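The phase shifts between the three parts can be sketched as three levels of aggregation: raw points for the current cycle, per-cycle summaries for recent history, and only global min/max exceptions for everything older. This is a sketch of the data layout behind the chart, not of its rendering; the function and field names are mine.

```python
# Sketch of the 'All Data' parts: raw recent data, summarized middle part,
# min/max exceptions only for the oldest part.

def all_data_parts(history: list[list[float]]) -> dict:
    """history is a list of cycles, oldest first; each cycle holds raw measurements."""
    current = history[-1]                               # biggest part: full detail
    middle = [
        {"min": min(c), "max": max(c), "last": c[-1]}   # ~10x more data, summarized
        for c in history[-11:-1]                        # up to 10 previous cycles
    ]
    everything = [v for cycle in history for v in cycle]
    oldest = {"min": min(everything), "max": max(everything)}  # exceptions only
    return {"current": current, "middle": middle, "oldest": oldest}

cycles = [[5, 9, 7], [6, 4, 8], [7, 7, 12]]
print(all_data_parts(cycles))
```

Note how only min/max survive into the leftmost part, matching the rule that only exceptions are comparable across all three parts.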

Cap of the pyramid: Vital Signs

I think the cap of the information pyramid requires special conceptualization. ‘All Data’ is an attractive tool for delivering project/process vital signs to executives and other managers (decision makers), and they could be compressed even more. Furthermore, the top five to seven measurements could be stacked and consumed all together. That increases the value of the information synergistically, because some indicators are naturally perceived together, as a juxtaposition of what is going on.

Specific vital signs for business performance and SDLC process optimization were listed in detail in my previous post Advanced Analytics, Part II. Here I will only mention them for your convenience: productivity, predictability, value and value-add, innovation in the core competency, the human factor and emotional intelligence/engagement. Those are ‘the must’ for executives. They could be stacked as vital signs and consumed as an integral big picture.


Of course we can introduce a normal range there, ticks for time tracking, highlight min/max… The drawing represents the idea of stacking and consuming executive information on SDLC project/process performance in a modern manner. You could criticize or improve it; I’ll be thankful for feedback.

There are two dozen lower-level operational indicators and measurements. Some of them could be naturally conveyed via the ‘All Data’ concept, others require other concepts. I am going to address them in the next posts. Stay tuned.


Advanced Analytics, Part II

This post is about the delivery and consumption of information, about the front-end. The big-picture introduction was given in the previous post, Advanced Analytics, Part I.

The Ideal

It would be neither the UI of current gadgets nor new gadgets. The ideal would be HCI leveled up to human-human communication, with a visually consistent live look, speech, motion and other aspects of real-life comms. AI will finally be built, probably by 2030. Whenever it happens, machines will try to mimic humans, and humans will be able to communicate with machines in a really natural way. The machines will deliver information. Imagine a boss asking his/her assistant how things are going, and she says: “Perfectly!” and then adds a portion of summaries and exceptions in a few sentences. If the answers are empathic and proactive enough, then there may be no next questions like “So what?”

The first such humanized comms will be asynchronous messaging and semi-synchronous chats. If the peer on the other end is indistinguishable (human vs. machine), and the value and quality of the information is high, delivered onto mobile & wearable gadgets in real time, then it’s the first good implementation of the front-end for advanced analytics. The interaction interface is writing & reading. The second leveling up is speech. It’s technically more complicated to switch from writing-reading to listening-talking, but as soon as the same valuable information is delivered that way, it will mean advanced analytics got a phase shift. Such speaking humanized assistants will be everywhere around us, in business and life. The third leveling up is visual. As soon as we can see and perceive the peer as a human, with look, speech and motion, we are almost there. Further leveling up is related to touch, smell and other aspects that mimic real life. That’s the Turing test, with a shift towards information delivery for business performance and decision making.

What to communicate?

As highlighted in books on dashboard design and taught by renowned professionals, most important is a personalized short message, supported with summaries and exceptions. Today we are able to deliver such information in text, chart, table, map, animation, audio and video form, onto a mobile phone, wristband gadget, glasses, car infotainment unit, TV panel and a number of other non-humanized devices. With present technologies it’s possible to cover the first and partially the second level described in “The Ideal” section earlier. The third – visual – is still premature, but there are interesting and promising experiments with 3D holograms. As it gets cheaper, we will be able to project whatever look of business assistant we need.

Most challenging is the personalization of an ad-hoc real-time answer to an inquiry. Empathy is important, to tune to biological specifics. Context and continuity with previous comms are important to add value on top of previously delivered information. Interests, current intentions, recent connections and real-time motion could help to shape the context properly. That data could be abstracted into data and knowledge graphs for further processing. Some details on those graphs are present in Six Graphs of Big Data.

Summary is the art of fitting a big picture into a single pager. Some still don’t understand why a single pager matters (even the UX Magazine guys). Here is a tip: anthropologically we’ve got a body and two arms, and the length of the arms, the distance between the arms, the distance between the eyes and what we hold in the arms are predefined. There is simply no way to change those anthropological restrictions. Hence a single page (A4 or Letter size) is the most ergonomic and proven size of artifact to be used with the hands. Remember, we are talking about the summaries now, hence some space assets are needed to represent them [summaries]. Summaries should be structured into an Inverted Pyramid information architecture, to optimize the process of information consumption by the decision maker.

Exceptions are important to communicate proactively, because they mean we’ve got an issue with predictability and expectations. There could be positive exceptions for sure, but if they were not expected, they must be addressed descriptively and explanatorily (reason, root cause, consequences, repeatability and further expectations). Both summaries and exceptions shall fit into a single pager or even smaller space.

What exactly to communicate?

On one hand, main message, summaries and exceptions are too generic and high-level as guidelines. On the other hand, prescriptive, predictive and descriptive analytics is too technical a classification. Let’s add some soul. For software projects we could introduce more understandable categories of classification. “Projects exist only in two states: either too-early-to-tell or too-late-to-change.” That was said by Edward Tufte during a discussion of executive dashboards. Other, more detailed recommendations on information organization are listed below; they are based on Edward Tufte’s and Peter Drucker’s experience and vision, reused from Tufte’s forum.

  • The point of information displays is to assist thinking; therefore, ask first of all: What are the thinking tasks that the displays are supposed to help with?
  • Build in systematic checks of data quality into the display and analysis system. For example, good checks of the data on revenue recognition must be made, given the strong incentives for premature recognition. Beware, in management data, of what statisticians call “sampling to please”.
  • Avoid heavy-breathing metaphors such as the mission control center, the strategic air command, the cockpit, the dashboard, or Star Trek. As Peter Drucker once said, good management is boring. If you want excitement, don’t go to a good management information system. Simple designs showing high-resolution data, well-labelled information in tables and graphics will do just fine. One model might be the medical interface in Visual Explanations (pages 110-111) and the articles by Seth Powsner and me cited there. You could check out research with those medical summaries for iPad and iPhone in my previous posts. Mobile EMR Part I, Part II, Part III, Part IV, Part V.
  • Watch the actual data collection involved in describing the process. Watch the observations being made and recorded; chances are you will learn a lot about the meaning and quality of the numbers and about the actual process itself. Talk to the people making the actual measurements.
  • Measurement itself (and the apparent review of the numbers) can govern a process. No jargon about an Executive Decision Protocol Monitoring Support Dashboard System is needed. In fact, such jargon would be an impediment to thinking.
  • Too many resources were devoted to collecting data. It is worth thinking about why employees are filling out forms for management busybody bureaucrats rather than doing something real, useful, productive…

Closer to the executive information

Everything is clear with the single-sentence personalized real-time message. The Interest Graph, Intention Graph, Mobile Graph and Social Graph might help to compile such a message.

Summaries could be presented as Vital Signs. Just as we measure a medical patient’s temperature, blood pressure, heart rate and other parameters, in a similar way we could measure the vital signs of a business: cash flow, liquidity projections, sales, receivables, ratios.

Other indicators of business performance could be productivity, innovation in the core competency, ABC, the human factor, and value and value-add. Productivity should go together with predictability. There is an excellent blog post by Neil Fox, named The Two Agile Programming Metrics that Matter. Activity-based costing (aka ABC) could show where there is fat that could be cut out. Very often ABC is bound to the human factor. Another interesting relation exists between productivity and the human factor, which is called emotional intelligence, or engagement. Hence we’ve got an interdependent graph of measurements. The core competency defines the future of a particular business, hence innovation shall take place within the core competency. It’s possible to track and measure the innovation rate, but it’s vital to do it for the right competency, not for multiple ones. And finally, value and value-add: in the transforming economy we are switching from wealth orientation towards the happiness of users/consumers. In the Experience Economy we must measure and project the delivery of happiness to every individual. More details are available in my older post Transformation of Consumption.

Finally for this post: we have to distinguish between executive and operational information. They should be designed/architected differently. More in the next posts. It’s recommended to read Peter Drucker’s “The Essential Drucker” to unlock the wisdom of what executives really need, what is absent from the market, and how to design it for the modern perfect storm of technologies and growing business intelligence needs.


Advanced Analytics, Part I

I’m writing this post in very difficult times. Russia declared war on Ukraine and annexed Crimea, a significant part of Ukrainian territory. God bless Ukraine!

This post will melt together multiple things partially described in my previous posts. References for the details are given per topic.

From Data to Information

We need information. Information is “knowledge communicated or received concerning a particular fact or circumstance”; the citation is from Wikipedia. The modern technique of making information useful [in business] is called analytics. There is descriptive analytics, the AS-IS snapshot of things. There is predictive analytics, the WHAT-WILL. Predictive analytics is already on Gartner’s Plateau of Productivity. Prescriptive analytics goes further, answering WHAT & WHY. Decision makers [in business and life] need information in the Inverted Pyramid manner: the most important information on top, then major facts & reasons, and so on downstairs…

But at the beginning we have data: tons of data bits generated by a wide variety of data sources. There are big volumes of classical enterprise data in ERPs, CRMs and legacy apps; that data is primarily relational, SQL-friendly. There are big volumes of relatively new social data: public and private user profiles, user timelines in social networks, mixed content of text, imagery, video, locations, emotions, relations, statuses and so forth. There are growing volumes of machine data, from access control systems with turnstiles in the office or parking lot to M2M sensors on a transport fleet or quantified-self individuals. Social and machine data is not necessarily SQL-friendly. Check out Five Sources of Big Data for more details.

Processing Pipeline

Everything starts from proper abstraction & design. Old-school methods still work, but modern methods unlock even more potential towards the creation of information out of raw data. Abstraction [of the business models or life models] leads to the design of data models, which are often some kind of graph. It is absolutely normal to have multiple graphs within a solution/product. E.g. people’s relations are straightforwardly abstracted into a Social Graph, while machine data might be represented in Network Graphs or a Mobile Graph. There are other common abstractions, such as a Logistics Graph, a Recommendations Graph and so on. More details can be found in Six Graphs of Big Data.

The key concept of the processing could be abstracted as a funnel. On the left you’ve got raw data; you feed it into the funnel and get some kind of information on the right. This is depicted at a high level in the diagram.
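The funnel can be sketched as a chain of stages, each narrowing the raw stream toward information. The specific stages (filter, focus, aggregate) and the sample records are illustrative assumptions, not a prescribed pipeline.

```python
# Illustrative data-to-information funnel: each stage narrows the stream.

raw = [
    {"source": "erp", "metric": "revenue", "value": 120},
    {"source": "social", "metric": "mentions", "value": 3400},
    {"source": "erp", "metric": "revenue", "value": 135},
    {"source": "m2m", "metric": "uptime", "value": 99.2},
]

def funnel(records: list[dict], metric: str) -> dict:
    cleaned = [r for r in records if "value" in r]                      # filter bad bits
    selected = [r["value"] for r in cleaned if r["metric"] == metric]   # focus on one metric
    return {"metric": metric,                                           # aggregate into information
            "latest": selected[-1],
            "trend": "up" if selected[-1] > selected[0] else "down"}

print(funnel(raw, "revenue"))
```

The output on the right is already information in the post’s sense: a latest value and a direction, ready for the top of the inverted pyramid.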


What makes it Advanced?

An interesting question… Modern does not always mean advanced. What makes it advanced is another technology, related to user experience: mobiles and wearables. As soon as predictive and prescriptive analytics is delivered in real time at your fingertips, it can be considered advanced.

There are several technological limitations and challenges. Let’s start with the mobiles and wearables. The biggest issue is screen size. The entire visualization must be designed from scratch. Reuse of big-screen designs does not work, despite our browsing of full-blown web sites on those small screens… The issue with wearables is related to their emergence: nobody is yet aware enough of how to design for them. The paradigms will emerge as soon as the adoption rate starts to decelerate. Right now we are observing the boom of wearables. There is some insight on wearables in Wearable Technology and Wearable Technology, Part II. A lot will change there!

The requirement of real-time or near-real-time information delivery assumes high-performance computing at the backend: some data massaging and pre-processing must be done in advance, and then the bits must be served out of memory. It is a client-cloud architecture, where the client is a mobile or wearable gadget and the cloud is a backend with plenty of RAM holding ready-made bits. This is depicted on the diagram; read it from left to right.
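A minimal sketch of the serving side, assuming pre-computed analytics are pushed into RAM with a time-to-live (the key scheme and values below are hypothetical):

```python
import time

class HotCache:
    """Pre-computed analytics kept in memory for near-real-time serving."""

    def __init__(self, ttl_seconds=300):
        self.ttl = ttl_seconds
        self.store = {}  # key -> (expires_at, value)

    def put(self, key, value):
        self.store[key] = (time.time() + self.ttl, value)

    def get(self, key):
        entry = self.store.get(key)
        if entry and entry[0] > time.time():
            return entry[1]
        return None  # missing or stale: the backend must recompute

cache = HotCache()
cache.put("unit42:kpi", {"revenue": 1.2e6, "trend": "+3%"})
print(cache.get("unit42:kpi"))  # {'revenue': 1200000.0, 'trend': '+3%'}
```

The mobile client only ever reads such ready-made values; heavy computation stays in the cloud.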


So what?

This is a new era of tools and technologies that enable and accelerate the processing pipeline from data to information into your pocket/hand/eyes. There is a lack of tools and frameworks to melt old and modern data together; Hadoop is doing well there, but things are not as smooth as install & run. There is a lack of data platform tools, and a lack of integration and aggregation tools. Visualization is nearly absent: there are still no good executive dashboards even for PC screens, not to mention smartphones. I will address those opportunities in more detail in upcoming posts.


Six Graphs of Big Data

This post is about Big Data. We will talk about the value and economic benefits of Big Data, not the atoms that constitute it. For the atoms you can refer to Wearable Technology or Getting Ready for the Internet of Things by Alex Sukholeyster, or just log the click stream… You will get plenty of data, but it will be low-level, atom-level, and not very useful.

The value starts at the higher levels, when we use people's social connections, understand their interests and consumption, know their movements, predict their intentions, and link it all together semantically. In other words, we are talking about six graphs: Social, Interest, Consumption, Intention, Mobile and Knowledge. Forbes mentions five of them in its Strategic Big Data insight. Gartner provided the report "The Competitive Dynamics of the Consumer Web: Five Graphs Deliver a Sustainable Advantage"; unfortunately it is a paid resource. It would be nice to look inside, but we can move forward with our own vision, then compare it to Gartner's and analyze the commonality and variability. I foresee that our vision is wider and more consistent!

Social Graph

This is the most analyzed and discussed graph. It is about connections between people. There is fundamental research behind it, such as the Six degrees of separation studies. Since LiveJournal times (1999), the Social Graph concept has been widely adopted and implemented: Facebook and its predecessors for non-professionals, LinkedIn mainly for professionals, and then others such as Twitter and Pinterest. There is a good overview of Social Graph concepts and issues on ReadWrite, and a good practical review by one of the pioneers, Brad Fitzpatrick, called Thoughts on the Social Graph. Mainly he reports the absence of a single graph that is comprehensive and decentralized. That is a pain for integrations, because of all those heterogeneous authentications and "walled garden" issues.

Regarding implementation of the Social Graph, there is advice from successful implementers such as Pinterest: the official Pinterest engineering blog revealed how to build a follower model from scratch. We can also look at the same thing from a totally different perspective: technology. The modern technology provider Redis features a tutorial on how to build a Twitter clone in PHP and (of course) Redis. So the situation with the Social Graph is more or less established. Many build it, but nobody has solved the problem of having a single, consistent, independent graph (probably built from the other graphs).
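The follower model from the Pinterest and Redis examples boils down to two mirrored sets per user; a minimal sketch (user names are hypothetical):

```python
from collections import defaultdict

class FollowerModel:
    """Directed follow relations kept as two mirrored sets per user."""

    def __init__(self):
        self.following = defaultdict(set)  # who -> accounts they follow
        self.followers = defaultdict(set)  # whom -> accounts following them

    def follow(self, who, whom):
        self.following[who].add(whom)
        self.followers[whom].add(who)

    def unfollow(self, who, whom):
        self.following[who].discard(whom)
        self.followers[whom].discard(who)

    def mutual(self, a, b):
        """True when both users follow each other."""
        return b in self.following[a] and a in self.following[b]

fm = FollowerModel()
fm.follow("ann", "bob")
fm.follow("bob", "ann")
fm.follow("ann", "cid")
print(fm.mutual("ann", "bob"), fm.mutual("ann", "cid"))  # True False
```

In Redis the same idea maps to two sets per user (e.g. `SADD following:ann bob`), which is essentially what the Twitter-clone tutorial does.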

Interest Graph

It is a representation of the specific things in which an individual is interested; read more about the Interest Graph on Wikipedia. This is the next hot graph after Social. Indeed, the Interest Graph complements the Social one, and social commerce sees the Interest and Social Graphs together. People provide the raw data on their public and private profiles. Crawling and parsing that data, plus special analysis, is capable of building the Interest Graph for each of you. Gravity Labs created a special technology for building the Interest Graph, called Interest Graph Builder; there is an overview (follow the previous link) and a demo. There are ontologies, entities, entity matching, etc. An interesting insight about the future of the Interest Graph was authored by Pinterest's head of engineering. The idea is to improve Amazon's recommendation engine based on classifiers (via pins): Pinterest knows the reasoning, "why" users pinned something, while Amazon doesn't. We are approaching the Intention Graph.
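In its simplest form, building an interest profile is keyword matching of profile and post text against an ontology; a naive sketch (the mini-ontology below is purely hypothetical, real systems use entity matching and disambiguation):

```python
INTERESTS = {  # hypothetical mini-ontology: keyword -> interest category
    "guitar": "music", "piano": "music",
    "marathon": "sports", "yoga": "sports",
    "sushi": "food",
}

def interest_profile(posts):
    """Score interest categories by keyword occurrences in user posts."""
    scores = {}
    for post in posts:
        for word in post.lower().split():
            category = INTERESTS.get(word.strip(".,!?"))
            if category:
                scores[category] = scores.get(category, 0) + 1
    return scores

print(interest_profile(["Bought a new guitar!", "Yoga then sushi."]))
# {'music': 1, 'sports': 1, 'food': 1}
```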

Intention Graph

Not much can be said about intentions. It is about what we do and why we do it. Social and Interest are static in comparison to Intention. This is related to prescriptive analytics, because it deals with the reasoning and motivation, "why" something happens or will happen. It seems that the other graphs together could reveal much more about intentions than trying to figure them out separately.

The Intention Graph is tightly bound to personal experience, or personal UX. It was foreseen back in 1999 by Harvard Business Review as the Experience Economy. Many years have passed, but not much has been implemented toward personal UX. We still don't stage a personal, ad hoc experience from goods and services exclusively for each user. I predict that the Social + Interest + Consumption + Mobile graphs will allow us to build a useful Intention Graph and achieve the capability to build and deliver individual experiences. When the individual is within the service, we are ready to predict some intentions, but only if the Service Design was done properly.

Consumption Graph

One of the most important graphs of Big Data. Some call it the Payment Graph, but Consumption is a better name, because we can consume without paying. The Consumption Graph is relatively easy for e-commerce giants like Amazon and eBay, but tricky for third parties, like you. What if you want to know what a user consumes? There are no sources of such information: both Amazon and eBay are "walled gardens". Each tracks what you do (browse, buy, put into a wish list, etc.) and how you do it (when you log in, how long you stay, the sequence of your activities, etc.); they send you notifications/suggestions and measure how you react, among many other tricks to handle descriptive, predictive and prescriptive analytics. But what if the user buys from other e-stores? It is the same problem as with the Social Graph. IMHO there should be a mechanism to assemble a user's Consumption Graph from sub-graphs (if the user identifies herself).
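Such a mechanism could look like a merge of per-store purchase histories keyed by a shared user identity; a hypothetical sketch (the store data structures below are invented for illustration):

```python
def merge_consumption(user_id, *store_graphs):
    """Merge per-store purchase records into one consumption view,
    assuming the user identified herself to each store."""
    merged = []
    for graph in store_graphs:
        merged.extend(graph.get(user_id, []))
    return sorted(merged, key=lambda p: p["date"])  # single timeline

# Hypothetical sub-graphs from two walled gardens:
store_a = {"u1": [{"date": "2013-01-05", "item": "book"}]}
store_b = {"u1": [{"date": "2013-01-02", "item": "lamp"}]}
timeline = merge_consumption("u1", store_a, store_b)
print([p["item"] for p in timeline])  # ['lamp', 'book']
```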

But there is still a big portion of retail consumption. How do they build your Consumption Graph? Very easily, via loyalty cards. You think about discounts when using those cards, while retailers think about your Consumption Graph and predict what to do with all users/clients together and even individually. There is the same problem of disconnected Consumption Graphs as in e-commerce, because each store has its own card. There are aggregators like Key Ring. Theoretically, they simplify the consumer's life by shielding her from all those cards; but in reality, the back-end logic is able to build a bigger Consumption Graph for retail consumption! Another aspect: consumption of goods vs. consumption of services and experiences. Is there a difference? What is the difference between hard goods and digital goods? There are other cool things in retail, like tracking clients and detecting their sex and age. It is all becoming the Consumption Graph. Think about it yourself:)

Anyway, the Consumption Graph is very interesting, because we are digitizing this world. We are printing digital goods on 3D printers. So far the shape and look & feel are identical to the cloned product (e.g. a cup), but the internals are different. As soon as 3D printers are able to reconstruct the crystal structure, it will be a brand new way of consumption. It is a thrilling and wide topic, hence I am going to discuss it separately. Keep in touch so you don't miss it.

Mobile Graph

This graph is built from mobile data, which does not mean the data comes from mobile phones. Today the majority of such data may still be generated by smartphones, but tomorrow that will no longer be true; check out Wearable Technology to find out why. The second important notion concerns different views of the Mobile Graph. The marketing-based view described on Floatpoint is indeed about smartphone usage: the Mobile Graph is considered a map of interactions (with the contexts in which people interact) such as Web, social apps/bookmarks/sharing, native apps, GPS and location/check-ins, NFC, digital wallets, media authoring, and pull/push notifications. I would rather view the Mobile Graph as a user-in-motion: where the user resides at each moment (home, office, on the way, school, hospital, store, etc.), how the user relocates (fast by car, slower by bike, slowly on foot; uniformly or not, e.g. via public transport), how the user behaves at each location (static, dynamic, mixed), what other users' motions take place around her (who else traveled the same route, or who also resided at the same location during that time slot), and so on. I look at the Mobile Graph more as a Mesh Network.
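The user-in-motion view can be sketched as co-location queries over time-slotted location tracks (the data structure and values below are hypothetical):

```python
def co_located(tracks, slot):
    """Find users who were at the same place during the same time slot.
    tracks: {user: {time_slot: location}} -- a hypothetical structure."""
    by_location = {}
    for user, slots in tracks.items():
        loc = slots.get(slot)
        if loc is not None:
            by_location.setdefault(loc, set()).add(user)
    # Keep only locations shared by more than one user.
    return {loc: users for loc, users in by_location.items() if len(users) > 1}

tracks = {
    "ann": {"09:00": "office", "18:00": "gym"},
    "bob": {"09:00": "office", "18:00": "home"},
}
print(sorted(co_located(tracks, "09:00")["office"]))  # ['ann', 'bob']
```

The same query works when some "users" are machines (IoT/M2M), which is exactly the mesh-network reading of the graph.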

Why does the dynamic networking view make more sense? Consider users as both people and machines; recall IoT and M2M, and the initiatives by Ford and Nokia for resolving gridlock problems in real time. The Mobile Graph is better related to motion and mobility, i.e. to the essence of the word "mobile". If we consider it from the motion point of view and extend it with the marketing point of view, we get a pretty useful model of the user and society. The Mobile Graph is not for oneself; at the least, it is more efficient for many than for one.

Knowledge Graph

This is a monster one. It is about the semantics between all digital and physical things. Why does Google still rock? Because they built the Knowledge Graph. You can see it in action here, and check out interesting tips & tricks here. Google's Knowledge Graph is a tool to find the UnGoogleable. There is a post on Blumenthals arguing that Google's Local Graph is much better than Knowledge, but this gap will probably be eliminated with time; IMHO their Knowledge Graph is being taught iteratively.

As Larry Page has said many times, Google is not a search engine or an ads engine, but a company that is building Artificial Intelligence. Ray Kurzweil joined Google to simulate the human brain and recreate a kind of intelligence. Here is a nice article on how Larry Page and the Knowledge Graph helped seduce Ray Kurzweil into joining Google: "The Knowledge Graph knows that Santa Cruz is a place, and that this list of places are related to Santa Cruz".

We can look at those graphs together, with Social in the middle, because we (people) like to be at the center of the Universe:) The Knowledge Graph could be considered a meta-graph penetrating all other graphs, or a super-graph including multiple parts of the other graphs. Even now, the Knowledge Graph is capable of handling dynamics (e.g. flight status).

Other Graphs

There are other graphs in the world of Big Data, and technology ecosystems are emerging around them. A boost is expected from Biotech: there is plenty of gene data, but a lack of structured information on top of it. Brand new models (graphs) will emerge to ease understanding of those terabytes of data. Circos was invented in the field of genomic data to simplify understanding through visualization. More experiments can be found on the Visual Complexity web site. We are living in a different world than a decade ago, and it is exciting. Just plan your strategies accordingly. Consider Big Data strategically.


Mobile EMR, Part V

Some time ago I described the ideation of a mobile EMR/EHR for medical professionals. We came up with a tablet concept first: the EMR/EHR is rendered on iPad and Android tablets, and the look & feel is identical (the iPad feels better than the Samsung Galaxy). Read about the tablet EMR in four previous posts; BTW, one of them contains feedback from Edward Tufte:) Mobile EMR Part I, Part II, Part III, Part IV.

We have moved further and designed a concept for a hand-sized version of the EMR/EHR, rendered on iPhone and Android phones. This post is dedicated to the phone version. As you will see, the overall UI organization is significantly different from the tablet, while reuse of smaller components between tablets and phones is feasible. The phone version is entirely SoftServe's design, hence we carry responsibility for the design decisions made there. We tried to keep both the tablet and phone concepts consistent in style and feel; you can judge how well we accomplished that by comparing them yourself:)


The lack of screen space forces us to introduce a list of patients. The list scrolls vertically, and tapping a patient takes you to the patient details screen. It is possible to show some basic info for each patient on the patient list screen, but not much: cases with long patient names simply leave no space for more. I think admission date, age and sex labels must be present on the patient list in any case; we will add them in the next version. A red circular notification signals the availability of new information for the patient, e.g. new labs are ready or an important significant event has been reported. The interaction design assumes the medical professional will tap a patient marked with notifications. The list of patients is also ordered per user; the MD can reorder the list via drag'n'drop.

Patient list


The MD can scan the wristband to identify the patient.

Wristband scanning


Patient details

The MD gets to the patient details by tapping the patient in the list. That screen is called the Patient Profile. It is a long screen. There is a stack of Vital Signs right at the top; the Vital Signs widget is fully reused from the tablets and fits the phone screen width perfectly. Then there is the Meds section. The last section is the Clinical Visits & Hospitalization chart, which is interactive (zoomable) like on the iPad. Within a single patient the MD gets multiple options. The first is to scroll down to see all the information and the entry points for more. Notice the menu bar at the bottom of the screen: the MD may prefer going directly to Labs, Charts, Imagery or Events. The interaction is organized via tabs, and the default tab is the patient Profile.

Patient profile


Patient profile, continued


Patient profile, continued



There is not much space for tables. Furthermore, lab results are clickable, hence the row height should be proportional to the size of a finger tap. The most recent lab numbers are highlighted in bold, and deviations from the normal range are highlighted in red. The most recent lab numbers can be placed on the left or on the right of the table; it's configurable. The red circular notification on the Labs menu/tab shows how many new results have become available since this patient was last viewed.
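The red-highlighting logic is straightforward; a sketch with hypothetical reference ranges (real EMR ranges depend on the lab, units and patient):

```python
NORMAL_RANGES = {  # hypothetical reference ranges for illustration
    "WBC": (4.0, 11.0),          # 10^9/L
    "Hemoglobin": (12.0, 17.5),  # g/dL
}

def flag_labs(results):
    """Mark each result 'red' when outside its normal range,
    as on the Labs screen; otherwise 'normal'."""
    flagged = {}
    for test, value in results.items():
        low, high = NORMAL_RANGES[test]
        status = "red" if not (low <= value <= high) else "normal"
        flagged[test] = (value, status)
    return flagged

print(flag_labs({"WBC": 13.2, "Hemoglobin": 14.0}))
# {'WBC': (13.2, 'red'), 'Hemoglobin': (14.0, 'normal')}
```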




Here we reuse the 'All Data' charts smoothly; they fit the phone screen perfectly. The layout is two columns with vertical scrolling. Charts with notifications about new data are highlighted. The MD can reorder charts as she prefers, and can manage the list of charts by switching them on and off in the app settings. There could be grouping of charts based on the diagnosis; we are considering this for future versions. A reminder about the chart structure: the rightmost, biggest part of the chart renders the most recent data, since admission, with dynamics. Min/max are depicted with blue dots, and the latest value with a red dot. The chart title also shows the numeric value in red, to be logically linked with the dot on the chart. The left, thin part of the chart consists of two sections: the previous year's data, and older data from before last year (if available). Only deviations and anomalies are meaningful for those periods. Extreme measurements are comparable across the entire timeline, while precise dynamics are shown for the current period only. More information about the 'All Data' concept can be found in Mobile EMR, Part I.
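The per-chart summary described above (min/max as blue dots, the latest value as the red dot) reduces to a small computation; a sketch with invented sample data:

```python
def chart_summary(series):
    """Summarize a measurement series the way the 'All Data' chart does:
    min/max over the whole timeline (blue dots), latest value (red dot)."""
    values = [v for _, v in series]
    latest_time, latest_value = series[-1]
    return {
        "min": min(values),
        "max": max(values),
        "latest": latest_value,
        "latest_at": latest_time,
    }

# Hypothetical systolic blood pressure readings since admission:
bp = [("day1", 120), ("day2", 135), ("day3", 128)]
print(chart_summary(bp))
# {'min': 120, 'max': 135, 'latest': 128, 'latest_at': 'day3'}
```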

Measurements in 'All Data' charts


Tapping on a chart brings up a detailed chart.

Measurement details



Designing the entry point into the imagery was no big deal: just a two-column scroll-down layout, like for the Measurements. Tapping an image brings up a separate screen completely dedicated to that image preview. For huge scans (4GB or so) we reused our BigImage solution to achieve smooth zooming in and out, like Google Maps, but for medical imagery.
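BigImage-style smooth zoom relies on map-like tiling: only the tiles covering the current viewport are fetched at each zoom level. A sketch of the tile-index computation (the tile size and zoom convention are assumptions borrowed from slippy-map tiling, not BigImage internals):

```python
TILE = 256  # pixels per tile edge, as in slippy-map tiling

def visible_tiles(viewport, zoom):
    """Tile (x, y) indices covering a viewport at a given zoom level.
    viewport: (left, top, width, height) in level-0 image pixels;
    each zoom level doubles the resolution."""
    left, top, width, height = viewport
    scale = 2 ** zoom
    x0 = int(left * scale) // TILE
    y0 = int(top * scale) // TILE
    x1 = int((left + width) * scale - 1) // TILE
    y1 = int((top + height) * scale - 1) // TILE
    return [(x, y) for y in range(y0, y1 + 1) for x in range(x0, x1 + 1)]

print(visible_tiles((0, 0, 512, 512), zoom=0))
# [(0, 0), (1, 0), (0, 1), (1, 1)]
```

The client requests only these few tiles instead of the whole multi-gigabyte scan, which is what makes the zoom feel instant.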



Tissue scan, zoom in


Significant events & notes

Just a separate screen for them…

Significant events


Conclusion: it's a BI framework

The entire back-end logic is reused between the tablet and phone versions of the EMR. Vital Signs and 'All Data' charts are reusable as-is. The Clinical Visits & Hospitalization chart is cut to a shorter width, but reused easily too. Security components for data encryption and compression are reused. Caching is reused. Push notifications are reused. Wristband scanning is reused. Labs are partially reused. Measurements are reused. BigImage is reused.

Reusability is both physical and logical. For the medical professional, all this stuff is technology-agnostic: the MD sees Vital Signs on iPad, Android tablet, iPhone and Android phone as the same component. For geeks, it is obvious that reusability happens within each platform, iOS and Android: all widgets are reusable between iPad and iPhone, and between the Samsung Galaxy Tab and Samsung Galaxy phone. Cloud/SaaS stuff such as BigImage is reusable on all platforms, because it is Web-based and rendered in Web containers, which are already present on each technology platform.

The most important conclusion is that the mEMR is proof of a BI Framework suitable for any other industry. Any professional can consume near-real-time analytics from her smartphone. Our concept demonstrated how to deliver highly condensed, related data series with dynamics and synergy for proper analysis and decision making by a professional, plus a solution for huge imagery delivery on any front-end. Text delivery is simple:) We will continue the concept research along the waves of technology (BI, Mobility, UX, Cloud) and within digitizing industries (Health Care, Biotech, Pharma, Education, Manufacturing). Stay tuned to hear about the Electronic Batch Record (EBR).


WHERE 2.0, Monday, Part III

Real-time and real-world GeoAnalytics from a neurologist…

Here is his TEDx talk: http://www.youtube.com/watch?v=ki24i6NPic0
Bradley Voytek started with the statement that we are all in 2D space, while neurologists and neurosurgeons are in 3D.

Complicated stuff, a lot of strange words; the guy is a PhD. Some more resources were mentioned that are worth examining: Google Lightning Talks http://9to5google.com/2012/03/11/google-developers-sxsw-lightning-talks/

The goal of neuroscience is to discover the relationships between brain, behavior, and disease. Using the Brain Systems, Connections, Associations, and Network Relationships (brainSCANr) engine, you can explore the relationships between neuroscience terms in peer-reviewed publications.
brainSCANr http://www.brainscanr.com
binarized associations http://www.brainscanr.com/Paper

Data mashups rock: take whatever data you have and look for correlations.
