Tag Archives: mobile

Advanced Analytics, Part V

This post covers the details of visualizing information for executives and operational managers on the mobile front-end: what is descriptive, what is predictive, what is prescriptive, what it looks like, and why. The scope of this post is the cap of the information pyramid. Even when I start with something detailed, I stay at the very top, at the level of the most important information, without details on the underlying data. Previous posts contain an introduction (Part I) and the pathway (Part II) of information to the user, especially executives.

Perception pipeline

The user’s perception pipeline is: RECOGNITION –> QUALIFICATION –> QUANTIFICATION –> OPTIMIZATION. During recognition the user just grasps the entire thing and starts to take it as a whole. Ideally we should deliver a personal experience, hence the information will be valuable but probably delivered slightly differently from the previous context. More on personal experience in the chapter below. As soon as the user grasps/recognizes, she is able to classify, or qualify, by commonality. The user operates with categories and scores within those categories. The scores are qualitative and very friendly for understanding, such as poor, so-so, good, great. Then the user is ready to reduce subjectivity and turn to numeric measurements/scoring. That is quantification: converting "good" and "great" into numbers (absolute and relative). Once the user is all set with numeric measurements, she is able to improve/optimize the business, process, or whatever the subject is.
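The qualification-to-quantification step can be sketched as a two-way mapping between qualitative labels and numeric scores; the four labels come from the text, while the numeric scale is an illustrative assumption:

```python
# Quantification: qualitative labels mapped onto a [0, 1] scale.
# The thresholds are illustrative assumptions, not a prescribed scale.
QUAL_TO_NUM = {"poor": 0.25, "so-so": 0.5, "good": 0.75, "great": 1.0}

def qualify(score: float) -> str:
    """Qualification: reduce a numeric score in [0, 1] back to a friendly label."""
    for label, ceiling in QUAL_TO_NUM.items():
        if score <= ceiling:
            return label
    return "great"

print(qualify(0.6))   # 'good'
```

Going label → number and back like this is what lets the same KPI be shown as "great" on a watch-sized screen and as 0.93 on the drill-down screen.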

Default screen

What should be rendered on the default screen? I bet it is a combination of the descriptive, predictive and prescriptive, with a large portion of space dedicated to the descriptive. Why is descriptive so important? Because until we build real AI, trust and confidence in computer-generated suggestions is not at the required level. That's why we have to show the 'AS IS' picture: how everything works and what happens, without any decoration or translation. If we deliver such a snapshot of the business/process/unit/etc., the issue of trust between human and machine might be resolved. We tend to believe that machines are pretty good at tracking tons of measurements, so let them track it and visualize it.

There must be an attempt by the machine to advise the human user. It could be done in the form of a personalized sentence on the same screen, along with the descriptive analytics. So putting some TODOs there is absolutely OK, while believing that the user will trust and follow them is naive. The user will definitely dig into the details of why such a prescription is proposed. It's normal that the user is curious about the root-cause chain. Hence be ready to provide almost the same information with additional details on the reasons/roots, trends/predictions, classification and pattern recognition within KPI control charts, and additional details on the prescriptions. If we visualize [the top of the inverted pyramid] with a text message and a stack of vital signs, then we have to prepare an additional screen to answer that list of details. We will still remain at the top of the pyramid.

default_screen

 

Next screen

If we have 'AS IS', then there must be 'TO BE', at least for symmetry:) The user starts on the default screen (recognition and qualification) and continues to the next screen (qualification and quantification). The next screen should have more details. What kind of information would be logically relevant for the user who got the default screen and looks for more? Or, better to say, looks for 'why'? Maybe it's time to list it as bullets for more clarity:

  • dynamic pattern recognition (with a highlight on the corresponding chart or charts) of what is going on; it could be any of the seven performance signals, but at least the three essential signals should be covered
  • highlight the area of the significant event [dynamic pattern/signal] on the other charts to juxtapose what is going on there, to foster thinking on potential relations; it's still the human who thinks, while the machine assists
  • parameters & decorations for the same control charts, such as min/max/avg values, and identification of quarters, months, sprints, weeks and so on
  • normal range (also applicable to the default screen), or even ranges, because they could differ between quarters or years
  • trend line, using the most applicable method for approximation/prediction of future values; e.g. a release forecast
  • its parts should be clickable, for digging from relative values/charts into absolute values/charts for even more detailed analysis; from qualitative to quantitative
  • your ideas here
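The dynamic pattern recognition from the first bullet maps naturally onto classic control-chart run rules; whether these correspond to the "seven performance signals" mentioned above is my assumption, so treat the two rules below as an illustrative sketch:

```python
from statistics import mean, stdev

def signals(values):
    """Flag two classic control-chart signals:
    - a point outside the 3-sigma band around the mean;
    - a run of 8 consecutive points on one side of the mean."""
    m, s = mean(values), stdev(values)
    outliers = [i for i, v in enumerate(values) if abs(v - m) > 3 * s]
    side = [1 if v > m else -1 for v in values]
    runs_start = [i for i in range(len(side) - 7)
                  if len(set(side[i:i + 8])) == 1]
    return {"outliers": outliers, "runs_start": runs_start}
```

Detected signal positions are exactly what the UI would highlight on the corresponding chart and juxtapose onto the neighboring charts.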

signal

Recognition of signals as dynamic patterns is identification of the roots/reasons for something. Predictions and prescriptions could be driven by those signals. Prescriptions could be generic, but it's better to make them personalized. Explanations could be designed for personal needs/perception/experience.

 

Personal experience

We consume information in various contexts. The release of a project or product is a different context from the start of the zero sprint. A merger & acquisition calls for different information than a quarterly review. It all depends on the user (from CEO to CxOs to VPs to middle management to team management and leadership), on the activity, and on the device (Moto 360, iPhone, iPad, car, TV, laptop). Where the user is physically matters; location does matter. Empathy does matter. But how to achieve it?

We could build a user's interests from social networks and from interaction with other services. Interests are relatively static in time. It is also possible to figure out intentions. Intentions are dynamic and useful only while they are hot. Business intentions are observable from business comms. We could sense the intensity of communication between the user and the CFO and classify it as a context related to budgeting or budget review. If we use sensors on the corporate mail system (or mail clients), combined with GPS or Wi-Fi location sensors/services, or with a manual check-in somewhere, we could figure out that the user has indeed intensified comms with the CFO and worked with him face-to-face. Having such a dynamic context, we are able to deliver the information in that context.
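A toy sketch of that sensing logic, assuming we already have a per-day message count with the CFO, a baseline, and a co-location flag derived from GPS/Wi-Fi; all names, thresholds and labels here are illustrative assumptions:

```python
def infer_context(msgs_with_cfo_per_day: float,
                  baseline_per_day: float,
                  co_located: bool) -> str:
    """Classify a hot business context from comms intensity + location.
    'Intensified' = more than twice the usual message rate (assumed)."""
    intensified = msgs_with_cfo_per_day > 2 * baseline_per_day
    if intensified and co_located:
        return "budgeting session (face-to-face)"
    if intensified:
        return "budget review (remote)"
    return "no special context"

print(infer_context(12, 3, True))   # budgeting session (face-to-face)
```

The point is not the thresholds but the fusion: neither signal alone is enough, while together they locate the dynamic context.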

The concept of personal experience (or personal UX) is similar to a braid (the hairstyle). Each graph of data helps to filter relevant data. Together those graphs allow us to locate the real-time context. Having such a personal context, we can build and deliver the most valuable information to the user. More details on how to handle the interest graph, intention graph, mobile graph and social graph, and on which sensors could bring the new data, are available in my older posts. So far I propose to personalize the text message for the default screen and the next screen, because it's easier than vital signs, and it fits wrist-sized gadgets like the Moto 360.


Wearable Technology. Part II

This story is a logical continuation of the previously published Wearable Technology.

Calories and Workouts

Here I will show how two different wearable gadgets complement each other for the Quantified Self. To begin, we need two devices: one wearable on yourself, the second wearable by your bike.

The first device is called BodyMedia, the world's most precise calorie meter. It takes 5,000 data snapshots per minute from galvanic skin response, heat flux, skin temperature and a 3-axis accelerometer. You can read more about BodyMedia's sensors online. BodyMedia uses extensive machine learning to classify your activity as cycling, then measures calories burned according to the cycling Big Data set used during learning. Check out the paper Machine Learning and Sensor Fusion for Estimating Continuous Energy Expenditure for an excellent description of how the AI works.

The second device is the Garmin Edge 500, a simple and convenient bike computer. It has GPS, a barometric altimeter, a thermometer, motion detection and more features for workouts. You can read more about the Garmin Edge 500 spec online. My gadgets are pictured herein.

04_gadgets

On the Route

The route was proposed by Mykola Hlibovych, a distinguished bike addict. So I put my gadgets on and measured it all. Below is info about the route. Summary info such as distance, time, speed, pace, temperature and elevation is provided by Garmin. It tries to guess the calories too, but it is really poor at that. You should know there is no "silver bullet" and understand what to use for what. Garmin is one of the best GPS trackers, so don't try to measure calories with it.

The juxtaposition of elevation vs. speed and temperature vs. elevation is interesting for comparison. Both charts are plotted by distance (rather than time). The 2D route on the map is a pretty standard thing. Garmin uses Bing Maps.

02_map_elev_speed_temp_dist

Burning Calories

Let's look at BodyMedia and redraw the Garmin charts of speed, elevation and temperature along time (instead of distance), stacking them together for comparison/analysis. All three charts are aligned along the horizontal time line. The upper chart is real-time calorie burn, also measured in METs. The vertical axis reflects Calories per Minute. Several times I burned at a rate of 11 cal/min, which was really hot. The big downtime between 1PM and 2:30PM was lunch.
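The cal/min and METs scales on that upper chart are related by the standard conversion (1 MET ≈ 3.5 ml O2 per kg of body weight per minute, ~5 kcal per litre of O2); the 80 kg rider weight below is my assumption, not a figure from the post:

```python
def kcal_per_min(mets: float, weight_kg: float) -> float:
    """Standard MET conversion: kcal/min = METs * 3.5 * weight_kg / 200."""
    return mets * 3.5 * weight_kg / 200

# The 11 cal/min peaks, assuming an 80 kg rider, correspond to roughly:
mets = 11 * 200 / (3.5 * 80)   # ~7.9 METs, i.e. vigorous cycling
```

This is why one device should own the calorie axis: the conversion depends on body parameters that the bike computer simply does not measure.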

An interesting fact is observable on the temperature chart: the Garmin itself was warm and was cooling down to the ambient temperature. After that it started to record the temperature correctly. Another moment is the small spike in speed during the downtime window. That was Zhenia Novytskyy trying my bike to compare with his.

01_calories_elev_speed_temp_time

Thorough Analysis

For detailed analysis of performance on the route there is an animated playback, published on Garmin Cloud. You just need to have Flash Player. Click this link if WordPress does not render the embedded route player from Garmin Cloud. There is an iframe instruction below. You may experience some ads from them, I think (because the service is free)…

The Mud

Wearable technology works in different conditions:)

03_mad

 

 

 


Six Graphs of Big Data

This post is about Big Data. We will talk about the value and economic benefits of Big Data, not the atoms that constitute it. For the atoms you can refer to Wearable Technology or Getting Ready for the Internet of Things by Alex Sukholeyster, or just log the click stream… you will get plenty of data, but it will be low-level, atom-level, and not very useful.

The value starts at the higher levels, when we use people's social connections, understand their interests and consumption, know their movements, predict their intentions, and link it all together semantically. In other words, we are talking about six graphs: Social, Interest, Consumption, Intention, Mobile and Knowledge. Forbes mentions five of them in its Strategic Big Data insight. Gartner published the report "The Competitive Dynamics of the Consumer Web: Five Graphs Deliver a Sustainable Advantage"; unfortunately it is a paid resource. It would be nice to look inside, but we can move forward with our own vision, then compare it to Gartner's and analyze the commonality and variability. I foresee that our vision is wider and more consistent!

Social Graph

This is the most analyzed and discussed graph. It is about connections between people. There is fundamental research about it, like Six Degrees of Separation. Since LiveJournal times (1999), the Social Graph concept has been widely adopted and implemented: Facebook and its predecessors for non-professionals, LinkedIn mainly for professionals, and then others such as Twitter and Pinterest. There is a good overview of Social Graph concepts and issues on ReadWrite. There is a good practical review of the social graph by one of its pioneers, Brad Fitzpatrick, called Thoughts on the Social Graph. Mainly he reports the absence of a single graph that is comprehensive and decentralized. That is a pain for integrations, because of all those heterogeneous authentications and "walled garden" issues.

Regarding implementation of the Social Graph, there is advice from successful implementers, such as Pinterest. The official Pinterest engineering blog revealed how to build a follower model from scratch. We can look at the same thing from a totally different perspective: technology. The modern technology provider Redis features a tutorial on how to build a Twitter clone in PHP and (of course) Redis. So the situation with the Social Graph is more or less established. Many build it, but nobody has solved the problem of a single consistent independent graph (probably built from the other graphs).
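The follower model in those tutorials boils down to two sets per user, followers and following (the Redis tutorial keeps them as Redis sets); a minimal in-memory sketch of the same idea:

```python
from collections import defaultdict

# Two sets per user, mirroring the Redis Twitter-clone approach
# (followers:<id> and following:<id> sets).
followers = defaultdict(set)
following = defaultdict(set)

def follow(who: str, whom: str) -> None:
    following[who].add(whom)
    followers[whom].add(who)

def unfollow(who: str, whom: str) -> None:
    following[who].discard(whom)
    followers[whom].discard(who)

follow("alice", "bob")
follow("carol", "bob")
print(sorted(followers["bob"]))   # ['alice', 'carol']
```

Set semantics make follow/unfollow idempotent, which is exactly why the Redis tutorial uses sets rather than lists.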

Interest Graph

It is a representation of the specific things in which an individual is interested. Read more about the Interest Graph on Wikipedia. This is the next hot graph after the social one. Indeed, the Interest Graph complements the Social one. Social Commerce sees the Interest + Social Graphs together. People provide the raw data on their public and private profiles. Crawling and parsing that data, plus special analysis, makes it possible to build an Interest Graph for each of you. Gravity Labs created a special technology for building the Interest Graph, called Interest Graph Builder. There is an overview (follow the previous link) and a demo. There are ontologies, entities, entity matching etc. An interesting insight about the future of the Interest Graph is authored by Pinterest's head of engineering. The idea is to improve Amazon's recommendation engine based on classifiers (via pins). Pinterest knows the reasoning, "why" users pinned something, while Amazon doesn't. We are approaching the Intention Graph.

Intention Graph

Not much can be said about intentions. It is about what we do and why we do it. Social and Interest are static in comparison to Intentions. This is related to prescriptive analytics, because it deals with the reasoning and motivation, "why" it happens or will happen. It seems that the other graphs together could reveal much more about intentions than trying to figure them out separately.

The Intention Graph is tightly bound to personal experience, or personal UX. It was foreseen back in 1999 by Harvard Business Review as the Experience Economy. Many years have passed, but not much has been implemented toward personal UX. We still don't stage a personal ad hoc experience from goods and services exclusively for each user. I predict that Social + Interest + Consumption + Mobile graphs will allow us to build a useful Intention Graph and achieve the capability to build/deliver individual experiences. When the individual is within the service, we are ready to predict some intentions, but only if the Service Design was done properly.

Consumption Graph

One of the most important graphs of Big Data. Some call it the Payment Graph, but Consumption is a better name, because we can consume without payment. The Consumption Graph is relatively easy for e-commerce giants like Amazon and eBay, but tricky for 3rd parties, like you. What if you want to know what a user consumes? There are no sources of such information. Both Amazon and eBay are "walled gardens". Each tracks what you do (browse, buy, put into a wish list etc.) and how you do it (when you log in, how long you stay, the sequence of your activities etc.); they send you notifications/suggestions and measure how you react, plus many other tricks for handling descriptive, predictive and prescriptive analytics. But what if the user buys from other e-stores? There is the same problem as with the Social Graph. IMHO there should be a mechanism to assemble a user's Consumption Graph from sub-graphs (if the user identifies herself).

Well, there is still a big portion of retail consumption. How do they build your Consumption Graph? Very easy: via loyalty cards. You think about discounts when using those cards, while retailers think about your Consumption Graph and predict what to do with all users/clients together and even individually. There is the same problem of disconnected Consumption Graphs as in e-commerce, because each store has its own card. There are aggregators like Key Ring. Theoretically, they simplify the consumer's life by shielding her from all those cards. But in reality, the back-end logic is able to build a bigger Consumption Graph for retail consumption! Another aspect: consumption of goods vs. consumption of services and experiences; is there a difference? What is the difference between hard goods and digital goods? There are other cool things in retail, like tracking clients and detecting their sex and age. It is all becoming the Consumption Graph. Think about that yourself:)

Anyway, the Consumption Graph is very interesting, because we are digitizing this world. We are printing digital goods on 3D printers. So far the shape and look & feel are identical to the cloned product (e.g. a cup), but the internals are different. As soon as a 3D printer is able to reconstruct the crystal structure, it will be a brand new way of consumption. It is a thrilling and wide topic, hence I am going to discuss it separately. Keep in touch so you don't miss it.

Mobile Graph

This graph is built from mobile data. That does not mean the data comes from mobile phones. Today the majority of data may still be generated by smartphones, but tomorrow that will no longer be true. Check out Wearable Technology to figure out why. The second important notion is about the different views of the Mobile Graph. The marketing-based view described on Floatpoint is indeed about smartphone usage: the Mobile Graph is considered a map of interactions (with contexts of how people interact) such as Web, social apps/bookmarks/sharing, native apps, GPS and location/check-ins, NFC, digital wallets, media authoring, and pull/push notifications. I would view the Mobile Graph as a user-in-motion: where the user resides at each moment (home, office, on the way, school, hospital, store etc.), how the user relocates (fast by car, slow by bike, very slow on foot; uniformly or not, e.g. via public transport), how the user behaves at each location (static, dynamic, mixed), and what other users' motions take place around her (who else traveled the same route, or who also resided at the same location in that time slot) and so on. I look at the Motion Graph more as a Mesh Network.
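The "how the user relocates" part of this user-in-motion view can be sketched as a naive speed classifier; the thresholds below are purely illustrative assumptions:

```python
def motion_mode(speed_kmh: float) -> str:
    """Bucket relocation speed into the modes named in the text:
    very slow on foot, slow by bike, fast by car/transit.
    Thresholds are illustrative assumptions only."""
    if speed_kmh < 7:
        return "on foot"
    if speed_kmh < 25:
        return "bike"
    return "car/transit"

print([motion_mode(s) for s in (4, 18, 60)])
# ['on foot', 'bike', 'car/transit']
```

A real classifier would also use acceleration uniformity (public transport stops and starts) rather than raw speed alone, as the text hints.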

Why does the dynamic networking view make more sense? Consider users as both people and machines. Recall IoT and M2M. Recall the initiatives by Ford and Nokia to resolve gridlock problems in real time. The Mobile Graph is better related to motion, mobility, i.e. to the essence of the word "mobile". If we consider it from the motion point of view and extend it with the marketing point of view, we will get a pretty useful model of the user and society. The Mobile Graph is not for oneself; at least, it is more efficient for many than for one.

Knowledge Graph

This is a monster one. It is about the semantics between all digital and physical things. Why does Google still rock? Because they built the Knowledge Graph. You can see it in action here. Check out interesting tips & tricks here. Google's Knowledge Graph is a tool to find the UnGoogleable. There is a post on Blumenthals that Google's Local Graph is much better than Knowledge, but this will probably be eliminated with time. IMHO their Knowledge Graph is being taught iteratively.

As Larry Page has said many times, Google is not a search engine or an ads engine, but a company that is building Artificial Intelligence. Ray Kurzweil joined Google to simulate the human brain and recreate a kind of intelligence. Here is a nice article on how Larry Page and the Knowledge Graph helped seduce Ray Kurzweil to join Google: "The Knowledge Graph knows that Santa Cruz is a place, and that this list of places is related to Santa Cruz".

We can look at those graphs together. Social will be in the middle, because we (people) like to be at the center of the Universe:) The Knowledge Graph could be considered a meta-graph, penetrating all the other graphs, or a super-graph, including multiple parts of the other graphs. Even now, the Knowledge Graph is capable of handling dynamics (e.g. flight status).

Other Graphs

There are other graphs in the world of Big Data, and technology ecosystems are emerging around them. A boost is expected from Biotech. There is plenty of gene data, but a lack of structured information on top of it. Brand new models (graphs) will emerge to ease the understanding of those terabytes of data. Circos was invented in the field of genomic data to simplify the understanding of data via visualization. More experiments can be found on the Visual Complexity web site. We are living in a different world than a decade ago. And it is exciting. Just plan your strategies accordingly. Consider Big Data strategically.


Mobile EMR, Part V

Some time ago I described the ideation of a mobile EMR/EHR for medical professionals. We came up with the tablet concept first. The EMR/EHR is rendered on iPad and Android tablets. The look & feel is identical; iPad feels better than Samsung Galaxy. Read about the tablet EMR in the four previous posts; BTW one of them contains feedback from Edward Tufte:) Mobile EMR Part I, Part II, Part III, Part IV.

We moved further and designed a concept for a hand-sized version of the EMR/EHR, rendered on iPhone and Android phones. This post is dedicated to the phone version. As you will see, the overall UI organization is significantly different from the tablet, while reuse of smaller components between tablets and phones is feasible. The phone version is entirely SoftServe's design, hence we carry responsibility for the design decisions made there. For sure, we tried to keep both tablet and phone concepts consistent in style and feel. You can judge how well we accomplished it by comparing them yourself:)

Patients

The lack of screen space forces us to introduce a list of patients. The list scrolls vertically. A tap on a patient takes you to the patient details screen. It is possible to add very basic info for each patient on the patient list screen, but not much: cases with long patient names simply leave no space for more info. I think that admission date, age and sex labels must be present on the patient list in any case; we will add them in the next version. A red circular notification signals the availability of new information for the patient, e.g. new labs are ready or an important significant event has been reported. The interaction design concept supposes that the medical professional will tap the patient marked with notifications. On the other hand, the list of patients is ordered per user: the MD can reorder the list via drag'n'drop.

Patient list

The MD can scan the wristband to identify the patient.

Wristband scanning

Patient details

The MD goes to patient details by tapping the patient in the list. That screen is called the Patient Profile. It is a long screen. There is a stack of Vital Signs right at the top. The Vital Signs widget is totally reused from the tablets; it fits the phone screen width perfectly. Then there is the Meds section. The last section is the Clinical Visits & Hospitalization chart. It is interactive (zoomable), like on iPad. Within a single patient, the MD gets multiple options. The first option is to scroll down to see all the information and the entry points to more info available there. Notice the menu bar at the bottom of the screen: the MD may prefer to go directly to Labs, Charts, Imagery or Events. The interaction is organized via tabs. The default tab is the patient Profile.

Patient profile

Patient profile, continued

Patient profile, continued

Labs

There is not much space for tables. Furthermore, lab results are clickable, hence the size of the rows should be relative to the size of a finger tap. The most recent lab numbers are highlighted in bold. Deviation from the normal range is highlighted in red. It is possible to have the most recent lab numbers on the left or on the right of the table; it's configurable. The red circular notification on the Labs menu/tab shows how many new results have become available since the last view of this patient.
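The highlighting rules just described can be sketched as a tiny style function; the lab names and reference ranges below are illustrative placeholders, not the real intervals used in the app:

```python
# Bold for the most recent value, red for values outside the normal range.
# NORMAL_RANGES holds assumed, illustrative reference intervals.
NORMAL_RANGES = {"WBC": (4.0, 11.0), "K": (3.5, 5.1)}

def lab_style(name: str, value: float, is_most_recent: bool) -> dict:
    lo, hi = NORMAL_RANGES[name]
    return {
        "bold": is_most_recent,
        "red": not (lo <= value <= hi),
    }

print(lab_style("WBC", 13.2, True))   # {'bold': True, 'red': True}
```

Keeping the styling rule separate from the data is what makes the "newest on the left or on the right" layout configurable.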

Labs

Measurements

Here we reuse the 'All Data' charts smoothly. They fit the phone screen perfectly. The layout is two-column, with scrolling down. Charts with notifications about new data are highlighted. The MD can reorder the charts as she prefers, and can manage the list of charts by switching them on and off in the app settings. There could be grouping of charts based on the diagnosis; we are considering this for the next versions. A reminder about the chart structure: the rightmost, biggest part of the chart renders the most recent data, since admission, with dynamics. Min/max are depicted with blue dots; the latest value is depicted with a red dot. The chart title also shows the numeric value in red, to be logically linked with the dot on the chart. The left, thin part of the chart consists of two sections: previous-year data, and old data prior to last year (if such data is available). Only deviations and anomalies are meaningful from those periods. Extreme measurements are comparable through the entire timeline, while precise dynamics are shown for the current period only. More information about the 'All Data' concept can be found in Mobile EMR, Part I.

Measurements in 'All Data' charts

Tapping on the chart brings up a detailed chart.

Measurement details

Imagery

Designing an entry point into the imagery was not a big deal: just a two-column, scroll-down layout, like for the Measurements. A tap on an image brings a separate screen, completely dedicated to that image preview. For huge scans (4GB or so) we reused our BigImage solution, to achieve smooth zoom in and zoom out, like Google Maps, but for medical imagery.
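BigImage's internals are not described here, but a Google-Maps-style viewer is typically built on a tile pyramid; a sketch of the zoom-level math, assuming 256 px tiles and resolution halving per level (both assumptions, not confirmed BigImage parameters):

```python
import math

def pyramid_levels(width_px: int, height_px: int, tile: int = 256) -> int:
    """Number of zoom levels needed so the coarsest level fits one tile,
    halving resolution at each step up the pyramid."""
    return max(0, math.ceil(math.log2(max(width_px, height_px) / tile))) + 1

# A 4 GB-scale scan, e.g. 65536 x 65536 px at one byte per pixel:
print(pyramid_levels(65536, 65536))  # 9
```

Only the tiles for the visible region at the current level are fetched, which is what keeps a multi-gigabyte scan smooth on a phone.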

Imagery

Tissue scan, zoom in

Significant events & notes

Just a separate screen for them…

Significant events

Conclusion: it's a BI framework

The entire back-end logic is reused between the tablet and phone versions of the EMR. Vital Signs and 'All Data' charts are reusable as-is. The Clinical Visits & Hospitalization chart is cut to a shorter width, but easily reused too. Security components for data encryption and compression are reused. Caching is reused. Push notifications are reused. Wristband scanning is reused. Labs are partially reused. Measurements are reused. BigImage is reused.

Reusability is physical and logical. For the medical professional, all this stuff is technology agnostic: the MD sees Vital Signs on iPad, Android tablet, iPhone and Android phone as the same component. For geeks, it is obvious that reusability happens within each platform, iOS and Android. All widgets are reusable between iPad and iPhone, and between Samsung Galaxy tablet and phone. Cloud/SaaS stuff, such as BigImage, is reusable on all platforms, because it is Web-based and rendered in Web containers, which are already present on each technology platform.

The most important conclusion is the fact that mEMR is a proof of a BI Framework suitable for any other industry. Any professional can consume almost real-time analytics from her smartphone. Our concept demonstrated how to deliver highly condensed, related data series with dynamics and synergy for proper analysis and decision making by a professional, plus a solution for huge imagery delivery on any front-end. Text delivery is simple:) We will continue the concept research along the waves of technology: BI, Mobility, UX, Cloud; and within digitizing industries: Health Care, Biotech, Pharma, Education, Manufacturing. Stay tuned to hear about the Electronic Batch Record (EBR).


Android First

A very short post about an obvious turn. I wanted to publish it a week ago, but couldn't catch up with my thoughts until today. So here it is.

Mobile First?

Interesting times, when a niche company made a revolution but is now returning to the niche. I mean Apple, of course, with its revolutionary smartphone of 2007. They challenged SaaS. Indeed, Apple boosted S+S (Software + Services) with the Web of Apps. iPhone apps became so popular that it became a market strategy to implement the mobile app first, then the "main" web site. What is "main", finally? It seems that mobile has become the main. Right now I am not going to talk about which mobile is right: native or web. You can read my post 'Native Apps beat Mobile Web'. Be sure that the new wrist-sized gadgets will be better programmed natively than via mobile web. Hence, what have we got? Many startups and enterprises went mobile first. There is a good insight by Luke Wroblewski about 'Mobile First'.

Who is first?

OK, Apple was first and best. Then they were first but not best. Now they are not even the best. It is ridiculous to wait five years to switch from the big connector hole to the smaller, mini-USB-sized Lightning hole… The battery drain bugs are Microsoft- and Adobe-like. It is not the Apple of 2007 anymore. So what? Those flaws allowed competitors to catch up.

What is Apple's main advantage over competitors? Design. Still no one can beat the emotions from Apple gadgets. There was another advantage: first-mover advantage. What is the main competitors' advantage? Openness. Open standards. Android & Java & Linux is a great advantage. Openness now beats aesthetics. Read below why.

Android First!

iPhone & iOS & iTunes is a pretty closed ecosystem. Unless you are fanatic enough to expect some openness in the future, you can only wait and hope. But business goes on and bypasses inconveniences. The openness of Android allowed Facebook to design a brand new user experience, called Facebook Home. Such creativity is impossible on iOS. I am not telling you whether Facebook Home rocks or sucks; I am insisting on the opportunity for creativity. Android is simply a platform for creativity: for programming geeks, for design geeks, for other geeks. Be sure others will follow with concepts similar to Facebook Home. And it will happen on Android, because tomorrow Android is first. Align your business strategy to be in sync with the technology pace!

Who worries about Apple? Don't worry, they are returning to the niche. Sure, there will be some new cool things like wrist-sized gadgets. But others are working on them as well. And others are open. New gadgets will run Android. Android, whose UX is poor (to my mind), but which enabled creativity for others who are capable of doing better UX. They have got the idea of Android First.


Mobile EMR, Part IV

This is continuation of Mobile EMR, Part III.

It turned out to be possible to fit more information onto the single pager! We extended the EKG slightly, reworked the lab results, reworked the measurements (charts) and inserted a genogram. The genogram probably brings the majority of the new information in comparison to the other updates.

v4 of mEMR concept

Right now the concept of the mobile EMR looks this way…

Mobile EMR v4

New ‘All Data’ charts

Initially the charts of measured values were built from dots. Recent analysis and reviews tended toward connecting the dots, but things are not so straightforward… There could be a kind of sparkline for the current period (7-10 days). The applicability of the sparkline technique to data from the entire last year is suspicious. Furthermore, if more data is available from the past, it will be a mess rather than a visualization, because such narrow space is allocated for old data. Sure, that section of the chart could be wider, but is it worth it?

What is most informative from the past periods? Anomalies, such as low and high values, especially in comparison with current values. Hence we've kept old data as dots, previous-year data as dots, and rendered the current short period as a line chart. We've added min/max points to ease the analysis of the data for the MD.
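To make the rule concrete, here is a small Python sketch of how the chart data could be split into the three zones and how the historical min/max markers could be found. The 10-day window, the (date, value) tuples and the function name are my assumptions for illustration, not code from the actual app:

```python
from datetime import date, timedelta

def split_all_data(points, today):
    """Split (date, value) pairs into the three 'All Data' chart zones:
    old history (dots), previous year (dots), current period (line)."""
    current_start = today - timedelta(days=10)   # assumed 10-day window
    year_start = today - timedelta(days=365)
    old     = [p for p in points if p[0] < year_start]
    year    = [p for p in points if year_start <= p[0] < current_start]
    current = [p for p in points if p[0] >= current_start]
    # min/max over everything before the current period, to flag anomalies
    history = old + year
    lo = min(history, key=lambda p: p[1]) if history else None
    hi = max(history, key=lambda p: p[1]) if history else None
    return old, year, current, lo, hi
```

The renderer would then plot `old` and `year` as dots, `current` as a line, and annotate `lo` and `hi`.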

Genogram

Having a genogram on the default screen seems very useful. User testing is needed to validate the concept on real genograms and to check the sizes of genograms used most frequently. Anyhow, it is always possible to show part of the genogram as an expanded diagram while keeping other parts collapsed. The genogram could be interactive. When the MD taps on it, she gets to a new screen totally devoted to the genogram, with all detailed attributes present. Editing could be possible too. The default screen, meanwhile, should present the view of the genogram that relates to the patient's current or potential diagnosis.

In the future the space allocated to the genogram could be increased, depending on the speed of evolution of genetics-based treatments. Maybe visualization of personal genotyping will be put onto the home screen very soon. There are companies providing such a service and keeping such data (e.g. 23andMe). Eventually all electronic data will be integrated, hence MDs will be able to see patients' genotyping data from the EMR app on the tablet.

DNA Sequence

This is the mid-term future. DNA sequencing is still a long process today. But we've got the technology to deliver DNA sequence information onto the tablet. The technology is similar to BigImage(tm). Predefined levels of information delivery could be defined, such as genes, exons, and finally the entire genotype. For sure, additional overlay layers will be needed to simplify visual perception and navigation through the genetic information. So the technology should be advanced in that respect.
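For the curious, the level math behind such a BigImage-like pyramid over a sequence can be sketched in a few lines of Python. The viewport of 1000 bases and the 2x-per-level zoom factor are assumptions for illustration, not parameters of the real technology:

```python
import math

def pyramid_levels(sequence_length, viewport=1000):
    """Number of zoom levels needed so the deepest level shows raw bases
    and each level up covers ~2x more sequence per screen."""
    if sequence_length <= viewport:
        return 1
    return 1 + math.ceil(math.log2(sequence_length / viewport))

def bases_per_pixel(sequence_length, level, viewport=1000):
    """Resolution at a given level (0 = most zoomed out)."""
    top = pyramid_levels(sequence_length, viewport) - 1
    return 2 ** (top - level)
```

For a ~3 billion base genome this gives on the order of two dozen levels, which is the same ballpark as deep-zoom image pyramids.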


Mobile EMR, Part II

On the 27th of August I published Mobile EMR, Part I. This post is a continuation.

The main output from the initial implementation was feedback from users. They just needed even more information. We initially considered mEMR in terms of All Information vs. Big Data. But it happened that some important information was missing from the concept based on the Powsner/Tufte research. Hence we have added more information and are now ready to show the results of our research.

First of all, the data is still slightly out of sync, so please be tolerant. It is a mechanical piece of work and will be resolved as soon as we integrate with the hospital's backend. The charts on the default screen will show the data most suitable for each diagnosis. This will be covered in Part III when we are ready with the data.

Second, a quick introduction to the redesign of the initial concept. Vital Signs had to go together, because they deliver synergistically more information when seen relative to each other. Vital Signs are required for all diagnoses. Hence we have designed a special kind of chart for vital signs and placed it at the top of the iPad screen. Medications happened to be extremely important, so that the physician instantly sees what meds are in use right now, the reaction of the vital signs, the diagnosis and allergies, and significant events. All other charts are specific to the diagnosis, and the physician should be able to drag'n'drop them as she needs. It is obvious that diabetes is treated differently than Alzheimer's. Only one chart has a dedicated place there – the EKG. The EKG is partially connected to the vital signs, but historically (and technically too) the EKG chart is completely different and should be rendered separately. Below is a snapshot of the new default screen:

Default Screen (with Notes)

The most important notes are filtered as Significant Events and can be viewed exclusively. Actually, the default screen can start with Significant Events. We just don't have much data for today's demo. Below is a screenshot with Significant Events for the same patient.

Default Screen (with Significant Events)

Charts are configurable like apps on the iPad. You tap and hold one, move it to the desired place, and release it. All other charts are reordered automatically around it. This is very useful, letting the physician work as she prefers. It's a good opportunity to configure the sets according to diagnosis. Actually, we embedded presets, because it is obvious that hypertension is treated differently than a cut wound. The screenshot below shows some basic charts, but we are still working on usability. More about that in Part III some time.

Charts Configuration
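The tap-hold-drag reordering described above boils down to a simple list operation; here is a minimal Python sketch of it (the chart names and function name are made up for illustration):

```python
def reorder_charts(charts, chart, new_index):
    """Tap-hold-drag reorder: remove the chart and reinsert it at the
    drop position; the rest reflow automatically, iPad-style."""
    rest = [c for c in charts if c != chart]
    return rest[:new_index] + [chart] + rest[new_index:]
```

A diagnosis preset is then just a saved ordering of chart names that can be applied wholesale.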

According to the Inverted Pyramid, the default screen is the cap of the information mountain. While many are hyping Big Data, we move forward with All Information. Data is low-level atoms; users need information from the data. Our mEMR default screen delivers much information. It can deliver all information. It is up to the MD to configure the charts that are most informative in her context. The MD can dig for additional information on demand. Labs are available on a separate view, grouped into panels. Images (x-rays) are available on a separate view too. The MD can tap the IMAGERY tab and switch to the view with image thumbnails, which correspond to MRIs, radiology/x-rays and other types of medical imagery. Tapping any thumbnail leads to the image zoomed to the entire iPad screen estate. The image becomes zoomable and draggable. We use our BigImage(tm) IP to empower image delivery of any size to any front end. The interaction with the image follows the Apple HIG standard.

Imagery (empowered by BigImage)

I don't include a snapshot of the scan here, because it looks like a standard full-screen picture. An additional description and demo of the BigImage(tm) technology is available at the SoftServe site http://bigimage.softserveinc.com. If new labs or new PACS images are available, they are pushed to the home screen as red notifications on the tab label (like on the MEASUREMENTS tab above) so that the physician can notice them and tap to see them. It is a common scenario when some complicated lab is required, e.g. tissue research for cancer.

Labs are shown in tabular form. This was confirmed by user testing. We have grouped the labs into their corresponding panels (logical sets of measurements). It is possible to order labs by date in ascending (chronological) or descending (most recent result first) order. The snapshot below shows labs in chronological order. The physician can swipe the table to the left (and then right) to see older results.

Labs
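The two column orderings are a one-liner; this Python sketch assumes each column is a dict with an ISO date plus its values (my naming, not the app's):

```python
def order_lab_columns(results, chronological=True):
    """Order lab result columns by date: ascending (chronological)
    or descending (most recent result first)."""
    return sorted(results, key=lambda r: r["date"], reverse=not chronological)
```

Swiping the table then just pans over the already-sorted column list.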

Editing is possible via a long tap on a widget, which puts the widget into edit mode. A quick single tap returns the widget to preview mode. The MD can edit medications (change existing, delete existing, and assign new ones) and enter significant events and notes. Auditing is automatic, according to HIPAA: time and identity are captured and stored together with the edited data.
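The audit capture can be as simple as wrapping every edit; below is a hedged Python sketch of the idea (the record shape, field names and function name are illustrative, not our actual implementation):

```python
from datetime import datetime, timezone

def audited_edit(record, field, new_value, user_id):
    """Apply an edit and return a HIPAA-style audit entry capturing
    who changed what, the old and new values, and when (UTC)."""
    entry = {
        "user": user_id,
        "field": field,
        "old": record.get(field),
        "new": new_value,
        "at": datetime.now(timezone.utc).isoformat(),
    }
    record[field] = new_value
    return entry
```

The returned entries would be appended to an immutable audit log on the backend.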

Continued in Mobile EMR, Part III.


Mobile EMR, Part I

At SoftServe we do research, and I'm in charge of it. This time I'd like to describe one of our research projects. It is called Mobile EMR. EMR stands for Electronic Medical Record. This research is intended to find out what hospitals, agencies, and practitioners will use tomorrow. Mobile EMR, aka mEMR, is a hot intersection of the booming Mobility, Big Data, Touch Interface, and Cloud trends. Today we research it, tomorrow they will use it. SoftServe works with multiple ISVs that build solutions for hospitals and agencies, hence we have influence on the evolution of medical solutions.

Part I is about how we started and what we have gotten so far, with conclusions for the next steps, which will be described later in Part II.

Beginning

It all started some time ago during UX research on medical data visualization. Then the famous government initiative was launched to transfer the healthcare industry to EMR. Mixing that soup, we figured out that it was possible to propose a really new EMR concept for physicians. We found the research by Seth M. Powsner and Edward R. Tufte, "Graphical Summary of Patient Status", published back in 1994. They worked at Yale University; one of them is an MD, the other a PhD in statistics and a guru of presenting data. It was oriented toward the All Data paradigm that Mr. Tufte loves. I love it too. I love the idea of having All Data on a single pager. As soon as Apple released the iPad, we understood it was the perfect one-pager to put the EMR onto. In 2011 I attended E. Tufte's one-day course and found a moment to speak with him about healthcare data visualization. Mr. Tufte confirmed there was still nothing done in the industry! He pointed to the printed copy of the mentioned research and proposed to implement it. After that we had one more face-to-face contact with Mr. Tufte on that research, and we got some additional clarifications and recommendations (mainly related to low-level details such as sparklines). Below is a snapshot of the visualization proposed by Powsner/Tufte:

Seth M. Powsner and Edward R. Tufte, “Graphical Summary of Patient Status”, The Lancet 344 (August 6, 1994), 386-389.

All Data has to be handled by a special kind of chart that shows three periods of data. The rightmost, biggest part shows a week or 10 days, the middle narrow part shows the previous year, and the leftmost part shows all available data prior to last year. With such a data presentation we are capable of displaying all anomalies and trends for the whole period for which data exists.

'All Data' chart

“All Data” chart. Seth M. Powsner and Edward R. Tufte, “Graphical Summary of Patient Status”, The Lancet 344 (August 6, 1994), 386-389.


We were aware there was a web implementation. It was a 100 percent copy of the research (the charts part of it). Below is a screenshot from the browser:

Web EMR by KavaChart

First Version

We took our Mobile IP and SDK (reusable blocks such as authentication, cache, cryptography, etc.; BigImage(tm); SaaS SDK) and built the first app for the iPad. Obviously we lacked deep domain knowledge, hence the first version is not perfect. But the idea was to prove technology feasibility rather than deliver a ready-made solution (because we work with healthcare ISVs who hold the deep domain expertise). There were a few cycles for visualization and layout of the charts and other UI elements. As a result, we got this "first version":

mEMR Default Screen

All remarks and proposed improvements will be listed herein a bit later! Right now I'd like to show a few more screenshots of what we have got. Physicians identify patients by MRN, or by name if the patient has been staying at the hospital for some time. Hence, we introduced two-way patient identification: via the My Patients list, and via a wristband/MRN scan. Below are the screenshots. The first one is My Patients:

mEMR My Patients

This one is Wristband/MRN Scan:

mEMR Wristband Scan

User Testing and Recommendations

The "first version" was shown to an MD from a large New York hospital. Impressions were mixed :-O
In general, the idea of such an mEMR is good. But using it with its current data is not so valuable. I've got recommendations for data presentation improvements. They are listed in no particular order below. The format is the following: a subheader represents an issue or a proposal for improvement; the text after it describes the reasoning and clarifies details.

User Info

In the top left corner, next to the photo, we show basic patient information. It turns out that both DOB and age are required simultaneously. The birth year serves as additional identification among multiple John Smiths. The age information should look like 56 y.o. F, where F stands for female and M stands for male. Sex must follow the age and must be encoded as a single letter, F or M. It should be next to the patient name. DOB should be on the second line. Look below:

Photo    John Smith  62 y.o. M

    DOB 05/24/1950

In the top right corner we showed Discharged. It turned out that nobody cares about discharge, while everybody needs Admitted. Hence, right below the MRN we have to add a new line. The MRN and Admitted records must be kept together. Look below:

MRN: 221881-5

Admitted: 08/24/2012

Those requirements are related to the integrity of the data: what is kept with what. There are no requirements on the UI layout; the current layout was confirmed as good.
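The header rules above (age + sex on the name line, DOB on the second, MRN kept together with Admitted) are easy to pin down in code. A Python sketch, with the function name and return shape being my own assumptions:

```python
from datetime import date

def patient_header(name, dob, sex, mrn, admitted, today):
    """Format the header per the MD feedback: age and single-letter sex
    on the name line, DOB on the second; MRN and Admitted kept together."""
    # Age in completed years as of 'today'
    age = today.year - dob.year - ((today.month, today.day) < (dob.month, dob.day))
    left = [f"{name}  {age} y.o. {sex}", f"DOB {dob:%m/%d/%Y}"]
    right = [f"MRN: {mrn}", f"Admitted: {admitted:%m/%d/%Y}"]
    return left, right
```

Keeping the pairs in two small lists makes the "what is kept with what" constraint explicit in the data, not just in the layout.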

Diagnosis

There are widely used abbreviations for diagnoses. It is OK to use something like HTN and MDs will understand it. Writing Depression in full is also fine, but often the names are long and an abbreviation works better. Important information that is missing in our "first version" is Allergy. It is "the must" that complements the diagnosis. We have to allocate some space below Diagnosis and specify Allergy there. If an allergy is present, it must be highly contrasting, maybe highlighted in red. Look below:

HTN, Depression

Allergy: Aspirin (rash)

Medication

The patient can currently be on some medications. They must be listed at the top of our single pager. Current Medication is as important as Diagnosis. We can specify it as a simple list of what the patient takes. Look below:

Metoprolol 50 daily [sorry, I can’t put units here right now]

Norvasc 5 mg daily

Paroxetine 20 mg daily

Medication can be labeled as Meds. It should be in the upper area of the screen, along with the improved User Info and Diagnosis. Other info, such as hospitalizations and clinic visits, should also be kept within the upper area of the screen. Later sections describe the middle area of the screen.

I will provide typical medications for these 4 diagnoses: HTN, CAD/HLD, CHF, Asthma. Please restrict patient data to those 4 diagnoses only. This is our restriction for demo purposes. Medications will take a maximum of 4 lines of text, each line up to 25-30 chars.

Vital Signs

We showed them as separate charts: a chart for temperature, a chart for pressure, a chart for heart rate. It sucks. The reason is that it does not represent information. What do MDs see as information in vital signs? The relationship between all the signs together! Hence we must pack all 5 vital signs into a single chart so that all peaks or drops are visible relative to each other. The labeling for vital signs is the following: t for temperature, HR for heart rate, BP for blood pressure, RR for respiratory rate, SpO2 for oxygen saturation. The order of vital signs should be as listed, with t on top and SpO2 at the bottom. The chart should be built from sparklines, not from dots. All mins and maxes should be visualized properly. Now I understand what Mr. Tufte explained to our UX designer about the use of sparklines. We thought it was for all charts, but it turned out to be only for vital signs. Look here for the visualization spec. The chart with vital signs must be on top. Let's keep it always in the top left corner. If there is insufficient space, stretch it to the width of two charts.
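To stack five differently scaled series in one chart so peaks and drops line up visually, each series can be normalized to its own [0, 1] range, with min/max positions kept for annotation. A Python sketch of that preprocessing step (names and data shape assumed for illustration):

```python
def stack_vitals(vitals):
    """Normalize each vital sign series to [0, 1] so the five sparklines
    can be stacked in one chart and compared visually; also return the
    index of each series' min and max for the min/max markers."""
    out = {}
    for name, series in vitals.items():   # keeps t, HR, BP, RR, SpO2 order
        lo, hi = min(series), max(series)
        span = (hi - lo) or 1.0           # avoid div-by-zero on flat series
        out[name] = {
            "norm": [(v - lo) / span for v in series],
            "min_at": series.index(lo),
            "max_at": series.index(hi),
        }
    return out
```

Each normalized series then gets its own horizontal band in the combined sparkline chart, top to bottom in the listed order.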

Notes

They suck. Doctors usually put a lot of secondary info there, hence there is no place for such irrelevant notes on the Home Screen. All doctors need is an entry point into the Notes. Doctors read them on demand, as a secondary screen opened intentionally. Implement it if possible.

Significant Events

Those are important! A Significant Event is an event related to the life of the patient, e.g. the patient fell and got a head injury. In other words, events that are atypical for the diagnosis. If vomiting is common for that diagnosis, then vomiting is not a significant event, hence not worth much attention.
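The filter itself is trivial once each diagnosis carries its set of typical events; here is a minimal Python sketch (event shape and names assumed for illustration):

```python
def significant_events(events, typical_for_diagnosis):
    """Keep only events that are NOT typical for the patient's diagnosis;
    e.g. vomiting is dropped when it is expected for the condition."""
    return [e for e in events if e["type"] not in typical_for_diagnosis]
```

The interesting work is in curating the typical-event sets per diagnosis, not in the filter.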

Probably we can redesign the right pane to show Medication on top, then Significant Events in the middle, and have Notes as an entry point at the bottom. If Medication takes too much space, we can group it with Vital Signs (a horizontally stretched chart) and keep them together on top as a separate section. All other charts and Significant Events would go below them. Personally I like the first alternative better – using the current right pane for both Medications and Significant Events.

EKG

Electrocardiography is "the must" on the home screen. Add it as a separate chart, as in Tufte's research and in the web demo (see links above). It is OK to keep the EKG in the same place, so keep it near Vital Signs, for example at the same horizontal level.

Imaging (aka Radiology)

This is a section for all scans. MDs call them, interchangeably, Imaging and Radiology. They treat it as secondary information. Hence we should probably put 1 or 2 images on the Home Screen, but implement a separate screen with thumbnails of all imagery available for that patient. For our demo we can keep a few images on the Home Page; then we will see. But the images should be organized in a group labelled Imaging.

Labs

Our "first version" doesn't have Labs data at all. What do Labs look like? A list of text and numbers. Furthermore, MDs are used to a particular reading order for the labs (BTW, I confirm the same for vital signs). E.g. the panel called Hepatic is shown as a sequence of AST, ALT, Alkaline Phosph.; the panel called Coagulation is shown as PT, PTT, INR. It is common for a Labs list to contain 5+ sets of results (called Panels here). Superscripts are used to represent K+, Na+, Cl-, CO3-, Ca2+ and so on.

We can allocate the space at the bottom of the Home Screen to visualize Labs and Imaging. Labs should grab more space than Imaging. As I've said, having 2 thumbnails as an entry point to BigImage is sufficient. The physician then taps either the Imaging title or the "…" next to the title to get to the separate screen with all the other images. Labs could be shown in tabular format. Look below:

           date1   date2   date3
Erythr.      4.5     4.3     2.8
HGB          140     138     100
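Since the panel reading order matters to MDs, it can be encoded once and applied when rendering; a Python sketch with two of the panels mentioned above (canonical orders per the MD's feedback, everything else assumed):

```python
# Canonical reading order per panel, as MDs expect
PANEL_ORDER = {
    "Hepatic": ["AST", "ALT", "Alkaline Phosph."],
    "Coagulation": ["PT", "PTT", "INR"],
}

def order_panel(panel, results):
    """Return (analyte, value) rows in the order MDs read that panel."""
    return [(a, results[a]) for a in PANEL_ORDER[panel] if a in results]
```

Rendering then walks the panels top to bottom and each panel's analytes in its canonical order, regardless of the order the backend returns them in.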

Disease Profiles

It could be useful to have layout presets for chronic diseases like Diabetes etc.

Continued on Mobile EMR, Part II.


Native Apps Beat Mobile Web

Mobile Web was a good thing on feature phones, but nowadays, and for the near future, Native Apps rock! Interested in why? Stay with me.

No More the Phone

iPhone and Android devices are no longer phones; they are new personal computers, just smaller than the PC of 20 years ago. The irony is the recent issue with the iPhone 4 antenna. When you close the metal circuit with your finger during a call, the antenna may stop working properly; as a result you are not able to hold a phone conversation – the phone does not work as a phone. It is ridiculous, but it clearly represents the trend: those devices are much more than phones, and the "phone function" is not even #1 (as with the iPhone 4 antenna). Besides being a "phone", those devices can do many other things, such as take pictures, play music, play videos, and run applications. Those apps seem more important than phone calls. That was the introduction; I hope you got the point that devices are evolving towards non-phone ones.

Can Apps be SaaS?

Ordinary apps can be SaaS; the best apps can't be SaaS. The big difference between the best apps and the rest is efficient usage of the device hardware, its sensors. Historically, that is possible via the OS API. There were always "silver bullets" that promised "write once, run on many", but all of them proved to be limited and restricted in comparison to native apps. There is only one way to use a new iPhone most efficiently: via iOS programming, with direct access to the entire API and the underlying sensors. It is the same as it was in desktop PC programming. You could use some library (e.g. from Adobe or Trolltech) and succeed with certain things, but end up restricted or inflexible with others. The situation with multiple platforms led to even more limitations. Ok, but what is that special thing that we want to access and control with great flexibility? Sensors! New sensors. It's obvious now that HTML-powered SaaS can't leverage the power of new sensors, because it doesn't know about them! Hybrids carry a smaller handicap than pure HTML, but they are behind schedule. The best apps are native apps, and they set the pace.

New Sensors

You have probably heard about Apple's patent for an infrared camera that could be used for DRM or as a substitute for QR codes. The figure below is a reminder of it.

The infrared sensor is a beginning. The Motorola Xoom introduced a barometer; now it is available on the Galaxy Nexus too. A barometer is a relatively simple sensor, hence its support in HTML and various mobile middleware is not a problem; it just comes with some delay compared to the OS API. Another potential sensor could be humidity. An altimeter could come to the iPhone soon too. All of them will be supported by HTML with some delay relative to native support… But more advanced things like motion sensors and 3D GUI are obviously too tricky for HTML (see figure below). It is a kind of Kinect embedded into the iPhone. There are Apple patents for hover sensing; more details on 3D hovering here.

Another example concerns good old WiFi: measuring WiFi signal strength is a problem for non-native apps. If somebody wonders why we might need such a WiFi strength measurement, the answer is: for indoor positioning, where GPS doesn't work. Going into advanced territory requires working directly at the OS level. Native apps! Now I'd like to switch to healthcare and the sensors that could be useful there. You will get even more arguments for native apps.
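To show what native-level signal strength buys you: a common indoor-positioning trick is to turn RSSI into a rough distance via the log-distance path-loss model. A hedged Python sketch (the reference power and path-loss exponent are assumed values, and real indoor readings are far noisier than this):

```python
def rssi_to_distance(rssi_dbm, tx_power_dbm=-40.0, path_loss_exp=2.5):
    """Log-distance path-loss model: estimate distance (m) to an access
    point from its signal strength. tx_power_dbm is the RSSI measured at
    1 m; path_loss_exp is ~2 in free space, higher indoors."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exp))
```

With distances to three or more known access points, trilateration gives a position; none of that is possible if the platform won't hand you the raw RSSI.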

Health Sensors

A thermometer could be embedded into the phone; we have seen simple electronic thermometers in the stores. An infrared touchless thermometer might be bigger, though as sensors both are simple. What about a UV exposure sensor? You expose your phone to the sun, and the phone app tells you the UV level and makes suggestions based on the result. A UV sensor could be more sophisticated than a thermometer; some calibration will probably be required. This is an argument that its use could only be efficient from the OS level. Next, similar to UV, could be a radiation sensor. The question is about miniaturization: when will it fit into the phone body? Non-trivial programming is expected, hence no chance for HTML or hybrid at the beginning.

Sounds like sci-fi? Well, switching to the human body. Heart monitoring is pretty straightforward; that sensor could quickly become available to HTML and JavaScript. What about a breath analyzer? There could be an alcohol sensor in the phone. A similar sensor could be a smoke analyzer for the home, embedded into the phone. What about a perspiration sensor and analyzer? It is easy and safe to use outside a hospital. A perspiration sensor sounds advanced. A glucose measurement sensor could become small enough to fit into the phone. It is possible to implement a mood sensor that measures excitement or frustration levels, and it could fit into the device you continue to call "the phone".

Conclusion

Hardware is booming right now. It is obvious. Plenty of manufacturing now takes place in the States, not in Asia. History taught us during the PC era: there are no silver bullets, and you need to be as close to the hardware as possible if you need all of its features and want to use them in a flexible and reliable way. Are hybrids the solution, to wrap all new sensors and allow SaaS within? I doubt it, because those men-in-the-middle always introduce limitations compared to what you get by using the OS API directly. Want to build the best apps? Build native apps and you will not regret it. For the next several years, guaranteed. Mobile web will catch up when all the sensors are designed and adopted, HTML is standardized, and so on. It will take several years.


Web 3.0

What is the future of the Web?

Is it the Semantic Web, as somebody called it a long time ago? Spend a few minutes reading the diverse definitions of Web 3.0 on the wiki and come back here. Nobody argues with all those predictions; all of them will happen at some point in the future. My favorite prediction is the one Kevin Kelly made public at the end of 2007, called "Next 5000 Days of the Web".

All those devices and sensors that will suck data into the web are related to our mobile devices. From Mobile World Congress 2012 I brought back a figure, announced by Eric Schmidt, that soon we will have 50,000,000,000 connected devices. Just imagine that number: almost ten devices per person. It is really huge!

But what do we have today?

Today we see the boom of mobile apps. It is similar to the boom of apps for PCs 20-25 years ago. History repeats itself, just at a slightly different level. Now we have an app boom for devices smaller than the PC. Years ago we had a premium vendor of the app platform – Apple – and a commoditizer – Microsoft. Today we have the same: a premium vendor – Apple – and a new commoditizer – Google/Android. But the big picture is similar: the apps are booming, and there is a brand new community of developers and users. New business models are emerging for monetizing this new boom.

How is this related to the Web at all? The web is in place, it is inevitable, and we are all in the web, but there are nuances ;) Surfing the web with Mobile Web is not the same as using a Native App. For business applications Mobile Web is the logical choice; it smoothly replaces awkward MEAP solutions. It is no surprise that Gartner did not identify any MEAP vendors as Leaders in its Magic Quadrant. There are niche players and visionaries, but there are no leaders. It was not easy, hence many walls were broken by Mobile Web. Enterprises love Mobile Web; it has emerged and is gaining popularity. Is it Web 3.0? What is the difference between a web app for desktop, tablet, and phone? There is almost no difference. Just a few additional features like geolocation available from the browser, the camera and so on. But the delivery model is the same, the SaaS-like one familiar from PC times. Hence it is not a revolution to be named Web 3.0.

Revolution happened.

The revolution seems to be this application boom on modern phones and tablets. It smells like a revolution. This is observable in apps like Instagr.am. Believe me or not, Instagram was a threat to Facebook! Initially people published photos on Flickr or Picasa and sent links to friends and colleagues to share them. With Facebook's photo sharing feature, it got simpler: you just upload photos and they get shared automatically within your network. No need for Flickr or Picasa anymore? Then came Instagram, with the opportunity to take pictures with the phone, apply some cool effect, and instantly share, without connecting the device to a PC and without that annoying bulk upload. Instagram has a backend synthesized from Facebook and Twitter, which is cool for the user. You don't need Facebook anymore to share your pictures! Bingo!

Ok, Instagram is cool; Facebook even bought it to kill it as a competitor… But where is the web in this? It is called Web Services. There is a very rich and powerful web, full of clouds and web services. As Jeff Bezos once said, the future of the web was in Amazon Web Services. It is. We have got the very popular S+S model, with a native app on the phone/tablet and a back end on AWS or similar. There is a good report by Vision Mobile, "Apps is a New Web", dated 2010. We have got new ways of discovering useful things, a brand new UX, and new monetization models. Enough arguments to call it the New Web. Maybe not Web 3.0, but it is definitely no longer Web 2.0.

To HTML5 Believers.

Those who pin their hopes on HTML5 as a standard, and a return to the good old SaaS approach, can be pleased that for enterprises this works even today and will work tomorrow. But for non-enterprise users it is not the case. First of all, every standard needs a few years (up to 5) to mature; only after that does wide adoption happen. Second, hardware will evolve too. Web technologies will not keep up with the pace of hardware evolution. Have you heard about the new sensors planned for the new iPhone? E.g. the infrared camera patent filed by Apple recently. It will serve for DRM, like preventing recording at a live show. It will serve to identify objects by infrared tags instead of ugly QR tags. Infrared tags are invisible to people, which makes them better, because they do not spoil the look of the object. OK, back to the infrared sensor: do you think web tools like HTML will have support for an infrared camera tomorrow? I think not. I would even bet they will not. The pace of hardware is fast, and web technologies will be a few steps behind.


We have entered Web 3.0

New sensors like the infrared camera will be added to phones and tablets in the future. Other devices will emerge too. Recall the 50,000,000,000 connected devices. There is no easy way to apply SaaS to all of them. There is a strong M2M trend, observed in recent years. It is not Web 2.0 anymore. We started from user apps; now we are descending to machine apps too… It is really something brand new. I propose to call this new era Web 3.0. For the Semantic Web we can choose another name when it comes. So far we are inside something new, and instead of calling it the New Web, let's call it Web 3.0.
