
Consumerism via IoT @ IT2I in Munich

We buy things we don’t need
with money we don’t have
to impress people we don’t like

Tyler Durden
Fight Club

 

Consumption sucks

That sucks, and it has to change. Fight Club changed it the violent way… Thank God that stayed in the book and the movie. We are changing it a different way, peacefully, via consumerism. We are powerful consumers in a new economy – the Experience Economy. Consumers don’t need goods alone, they need experiences, staged from goods, services and something more.


Staging an experience is difficult. Staging a personal experience is the challenge of this decade. We have to gather, calculate and predict for literally each customer. The situation gets more complicated with the growing Do It Yourself attitude among consumers. They want to make it, not just to buy it…

If you have not too many customers, then the staging of the experience can be done by people, e.g. Vitsoe. They write on letter cards exclusively for you, to establish a genuine human-human interface from the very beginning. You, as a consumer, do make it, by shooting pictures of your rooms and describing the concept of your shelving system. The sneaker maker New Balance directly provides a “Make button”, not a buy button, for a number of custom models. You are involved in the making process: it takes 2 days, you are informed about the facilities [in the USA] and so on; though you are just changing the colors of the sneaker pieces, not a big deal for one man, but a big deal for all consumers.

There are big benchmarks in the Experience Economy to look at: Starbucks, Walt Disney. Hey, old-school guys: to increase revenue and profit, let the price go up, and the cost too; think of staging great experiences instead of cutting costs.

 

Webization of life

Computers disrupted our lives, lifestyle and work. Computers changed the world and continue to change it. The Internet transformed our lives tremendously. First it was about connected machines, then connected people, then connected everything. The user used to sit in front of a computer. Today the user sits within a big computer [smart house, smart ambient environment, ICU room] and wears tiny computers [wristbands, clasps, pills]. Let’s recall the six orders of magnitude of human-machine interaction that Bill Joy named the Six Webs – Near, Hear, Far, Weird, B2B, D2D. http://video.mit.edu/embed/9110/

Nowadays we see a boost for Hear, Weird and D2D. A reminder of what they are: Hear is your smartphone, smartwatch [strange phablets too] and wearables; Weird is the voice interface [automotive infotainment, Amazon Echo]; D2D is device-to-device, or machine-to-machine [aka M2M]. Wearables work well as anatomical digital gadgets, while pseudo-anatomical ones like Google Glass are questionable. Mobile-first strategies prevail. Voice interop is available in all new smartphones and cars. M2M is rolling out, connecting “dumb” machines via small agents, which are connected to the cloud with some intelligent services there.

At the end of 2007 we passed the first 5000 days of the Web. Check out what Kevin Kelly predicts for the next 5000 days [actually fewer than 3000 days from now]: there will be only One machine, its OS is the web, it encodes trillions of things, all screens look into the One, no bits live outside it, to share is to gain, the One reads it all, the One is us…

 

Webization of things

Well, the next 3000 days are still to come, but where are we today? At two slightly overlapping stages: Identification of Everything, and Miniaturization & Connection of Everything. Identification difficulties delay the connecting of more things. Visual identification is especially difficult. Deep Neural Networks have not solved the problem, reaching about 80% accuracy. That is better than old perceptrons but not sufficient for wide generic application. Combinations with other approaches, such as Random Forests, bring hope of higher accuracy in visual recognition.

A huge problem with neural networks is training, while a breakthrough is needed for ad hoc recognition via a cheap web camera. Intel released the computer vision software library OpenCV to engage the community to innovate. The most useful features are then observed, improved and transferred from the software library into hardware chips by Intel. Sooner or later they will ship small chips [for smartphones, for sure] with ready-made special object-recognition processing, so that users can identify objects via a small phone camera in disconnected mode with accuracy better than 85-90%, which is more or less applicable to business cases.
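To make the combination idea concrete, here is a minimal sketch in plain Python of blending a neural-network confidence with random-forest votes. The scores, the weight and the five-tree forest are invented for illustration; real recognition would of course run both models on actual image features.

```python
# Hypothetical per-object scores from two recognizers. Blending weak
# recognizers is one way to push accuracy past what either model reaches
# alone; the 0.6/0.4 weighting here is illustrative only.
def combined_score(nn_score, rf_votes, nn_weight=0.6):
    """Weighted blend of a neural-net confidence and a random-forest vote share."""
    rf_score = sum(rf_votes) / len(rf_votes)  # fraction of trees voting "match"
    return nn_weight * nn_score + (1 - nn_weight) * rf_score

score = combined_score(0.80, [1, 1, 0, 1, 1])  # NN says 0.80, 4 of 5 trees agree
print(round(score, 2))
```

The accept/reject threshold on the blended score would then be tuned per business case.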


As soon as those two IoT stages [Identification and Miniaturization] are passed, we will have ubiquitous identification of everything and everyone, and everything and everyone will be digitized and connected – in other words, we will create a digital copy of ourselves and our world. It should be complete somewhere around 2020-2025.

Then we will augment ourselves and our world. Beyond that I don’t know how it will unfold… My personal vision is that humanity is a foundation for other, more intelligent and capable species to complete the old human dream of reverse engineering this world. It is interesting what will start to evolve after 2030-2040. You could think of it as the Singularity. A phase shift.

 

Hot industries in IoT era

Well, back to today. Today we are still comfortable on the Earth, doing business and looking for lucrative industries. Which industries are ready to pilot and roll out IoT opportunities right away? Here is a list by Morgan Stanley from April 2014:

Utilities (smart metering and distribution)
Insurance (behavior tracking and biometrics)
Capital Goods (factory automation, autonomous mining)
Agriculture (yield improvement)
Pharma (clinical trial monitoring)
Healthcare (leveraging human capital and clinical trials)
Medtech (patient monitoring)
Automotive (safety, autonomous driving)

 

What is IoT, indeed?

Time to draw a baseline. Everybody is sure they truly understand IoT, but usually people have biased models… Let’s figure out what IoT really is. IoT is a synergistic phenomenon. It emerged at the intersection of Semiconductors, Telecoms and Software. There was tremendous acceleration in chips and their computing power: Moore’s Law still has not reached its limit [neither at the molecular or atomic level, nor the economic one]. There was huge synergy from widespread connectivity: it’s Metcalfe’s Law, and it is still in place, initially for people, now for machines too. Software scaled globally [to the entire planet, to all 7 billion people], got Big Data and reached the Law of Large Numbers.
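Metcalfe’s Law can be illustrated in a couple of lines: the number of possible pairwise connections (a common proxy for network value) grows quadratically, while the node count, and roughly the cost, grows only linearly.

```python
# Metcalfe's Law: the value of a network grows roughly as n^2 (pairwise
# links), while the cost grows roughly linearly with the number of nodes.
def metcalfe_links(n):
    """Number of distinct connections among n nodes: n*(n-1)/2."""
    return n * (n - 1) // 2

for n in (10, 100, 1000):
    print(n, metcalfe_links(n))  # 10x more nodes -> ~100x more links
```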


As a result of the accelerated evolution of those three domains, we created the capability to go even further – to create the Internet of Things at their intersection, and to try to benefit from it.

 

Reference architecture for IoT

If a global economic description is too high-level for you, then here you go – the 7 levels of IoT – called the IoT Reference Architecture, presented by Cisco, Intel and IBM in October 2014 at the IoT World Forum. The canonical model sounds like this: devices send/receive data, interacting with the network, where the data is transmitted, normalized and filtered using edge computing before landing in databases/data storage, accessible by applications and services, which process the data and provide it to people, who act and collaborate.
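That canonical flow can be sketched as a toy pipeline. The stage names, values and thresholds below are illustrative only, not the official level titles from the Reference Architecture.

```python
# A toy rendering of the data path described above:
# devices -> network -> edge (normalize/filter) -> storage -> applications -> people.
def edge_filter(readings, lo=0.0, hi=100.0):
    """Edge computing: drop out-of-range samples before they hit storage."""
    return [r for r in readings if lo <= r <= hi]

def normalize(readings, scale=100.0):
    """Normalize raw sensor values to a 0..1 range."""
    return [r / scale for r in readings]

raw = [12.0, -5.0, 55.0, 140.0, 90.0]  # device/network layers deliver raw data
clean = edge_filter(raw)               # edge computing filters the noise
stored = normalize(clean)              # normalized records land in storage
print(stored)                          # applications/services read from here
```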


 

Who is IoT?

You could ask which company is an IoT one. This is a very useful question, because your next question could be about the criteria, a classifier for IoT and non-IoT. Let me ask you first: is Uber IoT or not?

Today Uber is not, but as soon as its cars are self-driving, Uber will be. The only missing piece is a direct connection to the car. Check out the recent essay by Tim O’Reilly. Another important aspect is to mention society, as a whole and each individual, so it is not the Internet of Things but the Internet of Things & Humans. Check out those ruminations: http://radar.oreilly.com/2014/04/ioth-the-internet-of-things-and-humans.html

Humans are consumers, just a reminder. Humans are an integral part of IoT; we are creating IoT ourselves, especially via networks, from wide social ones to niche professional ones.

 

Software is eating the world

Chips and networks are good, but let’s look at booming software, because technological progress now depends vastly on software, and it’s accelerating. Each industry opens more and more software engineering jobs. It started with office automation, then all those classical enterprise suites: PLM, ERP, SCADA, CRM, SCM etc. Then everyone built a web site, then added a customer portal, web store, mobile apps. Then integrated with others, business app to business app, aka B2B. Then logged huge clickstreams and other logs such as search and mobile data. Now everybody is massaging the data to distill more information on how to meet business goals, including consumerism-shaped goals.

  1. Several examples to confirm that the digitization of the world is real.
    Starting from the easiest examples to understand – newspapers, music, books, photography and movies went digital. Some of you have never seen films and film cameras, but google it: they were non-digital not so long ago. The last example in this category is the Tesla car. It is electric and has plenty of chips with software & firmware on them.
  2. The next example is more advanced – intellectual property shifts to digital models of goods. The 3D model with all related details is what matters, while the implementation of that model in a hard good is trivial. You pay for the digital thing, then 3D print it at home or in a store. As fabrication technology gets cheaper, the shift towards digital property will become complete. Follow Formula One; their technologies are transferred to our simpler lives. There is digital modeling and simulation, 3D printing into carbon, and connected cars producing tons of telemetry data. As soon as the consumer can’t distinguish 3D-printed hard goods from those produced by today’s traditional methods, and as soon as the technology is cheap enough, it becomes possible to produce as late as possible and as adequately as possible for each individual customer.
  3. All set with hard goods. What about the rest? Food is also 3D printed. The first 3D-printed burger from Modern Meadow was printed more than a year ago, BTW funded by googler Sergey Brin. The price was high, about $300K, exactly the amount of his investment. Whether food is printed or produced via biotech goo, the control and modeling will be software. You know recipes and processes; they are digital. They are applied to produce real food.
  4. Drugs and vaccines: similar to food and hard goods. A great opportunity for quick access to brand-new medications is unlocked. A vaccine could be designed in Australia and transferred as a digital model to your [or a nearby] 3D printer or synthesizer; your instance will be composed from the solutions and molecules exclusively for you, and timely.

So whatever your industry is, think about more software coding and data massage. Plenty of data, global scale, 7 billion people and 30 billion internet devices. Think of traditional and novel data; augmented reality and augmented virtuality are also digitizers of our lives on the way towards real virtuality.

 

How to design for IoT?

If you know how, then don’t read further; just go ahead with your vision, and I will learn from you. For the others, my advice is to design for personal experience. Just continue to ride the wave of the ever-growing software share in the industries, and handle the new software problems to deliver personal experience to consumers.

First of all, start recognizing novel data sources, such as Search, Social, Crowdsourced and Machine data. They are different from traditional CRM and ERP data. Record data from them, filter noise, recognize motifs, find the origins of intelligence, build data intelligence, and bind it to existing business intelligence models to improve them. Check out Five Sources of Big Data.

Second, build information graphs, such as Interest, Intention, Consumption, Mobile, Social and Knowledge. A consumer has her interests; why not count on them? Despite the interests, a consumer’s intentions could be different; why not count on them? Despite the intentions, a consumer’s consumption could be different; why not count on that? And so on. Build a mobility graph, a communication graph and other graphs specific to your industry. Try to build a knowledge graph around every individual. Then use it to meet that individual’s expectations, or to bring individualized, unexpected innovations to her. Check out Six Graphs of Big Data.

As soon as you grasp this, your next problem will be handling multi-modality. Make sure you get mathematicians into your software engineering teams, because the problem is not trivial – exactly the opposite. The good news is that for each industry some graph may prevail, so everything else can be converted into attributes attached to the primary graph.
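A minimal sketch of that trick, with invented names and values: keep one primary graph (here, interests) as the skeleton, and fold the other graphs into node attributes.

```python
# Single-modality trick from the text: one primary graph (interests) plus
# attributes folded in from other graphs (mobility, consumption).
# All people, places and purchases below are made up for illustration.
interest_graph = {"alice": {"running", "coffee"}, "bob": {"coffee"}}
mobility = {"alice": "downtown", "bob": "suburbs"}       # from the mobile graph
consumption = {"alice": ["espresso"], "bob": ["latte"]}  # from the consumption graph

primary = {
    person: {
        "interests": sorted(interests),
        "location": mobility.get(person),          # attribute from mobility graph
        "purchases": consumption.get(person, []),  # attribute from consumption graph
    }
    for person, interests in interest_graph.items()
}
print(primary["alice"]["location"])
```

An intelligence model then runs over this single annotated graph instead of trying to overlay several graphs at once.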


 

PLM in IoT era

Taking simplified PLM as BEFORE –> DURING –> AFTER…

BEFORE.
Design of the product should start as early as possible, and not in isolation; instead, foster co-creation and co-invention with your customers. There is no secret number for how much of your IP to share publicly, but the criteria are simple: if you share insufficiently, you will not reach the critical mass needed to trigger consumer interest; if you share too much, your competitors could take it all. The rule of thumb is about technological innovativeness. If you are very innovative, let’s say a leader, then you can share less. Examples of technologically innovative businesses are Google and Apple. If you are not so technologically innovative, you might need to share more.

DURING.
The production or assembly should be as optimal as possible. It’s all about transaction optimization via new ways of doing the same things. Here you could think about the Coase Law upside down – outsource to external partners, don’t try to do everything in-house. Shrink until the internal transaction cost equals the external one. Specialization of work brings [external] costs down. Your organization structure should shrink while your network of partners grows. On the modern Internet the cost of external transactions can be significantly lower than the cost of the same internal transactions, while quality remains high, up to the standards. It’s the known phenomenon of outsourcing. Just Coase upside down, as Eric Schmidt mentioned recently.

Think about individual customization. There could be mass customization too, by segments of consumers… but it’s not as exciting as individual. Even if it is as simple as selecting the colors of your phone or sneakers or furniture or car trim, it should take place as late as possible, because it’s difficult to forecast far ahead with high confidence. So try to squeeze useful information from your data graphs as close to the production/assembly/customization moment as possible, to be sure you make decisions as adequate as they could be at that time. Optimize inventory and supply chains to have the right parts for customized products.

AFTER.
Then try to keep the customer within the experience you created. Customers will return to you to repeat the experience. You should not sit and wait until the customer comes back. Instead you need to evolve the experience; think about an ecosystem. Invent more: costs may rise, but the price will rise even more, so don’t push for cost reduction; push for innovativeness towards better personal experiences. We all live within experiences [BTW, ever more digitized products, services and experiences]. The more a consumer stays within the ecosystem, the more she pays. It’s the experience economy now, and it’s powered by the Internet of Things. Maybe it will rock… and we will avoid Fight Club.

 



Big Data Graphs Revisited

Some time ago I outlined the Six Graphs of Big Data as a pathway to individual user experience. Then I did the same for the Five Sources of Big Data. But what lies between them remained untold. Today I am going to give my vision of how different data sources allow us to build different data graphs. To make it less dependent on those older posts, let’s start from a real-life situation and business needs, then bind them to data streams and data graphs.

 

Context is King

The same data has different value in different contexts. When you are late for your flight and you get a message that your flight is delayed, it is valuable – in comparison to receiving the same message two days ahead, when you are not late at all. Such a message might be useless if you are not traveling at all, but the airline has your contacts and sends it for a flight you don’t care about. There was only one dimension here – time to flight. That was a friendly description of context, to warm you up.

Some professional contexts are difficult for the unprepared to grasp. Let’s take a situation from the office of some corporation. A department manager intensified his email communication with the CFO, started to use the phone more frequently (also calling the CFO and other department managers), went to the CFO’s office multiple times, skipped a few lunches, and remained at work till 10PM for several days. Here we have multiple dimensions (five) which can be analyzed together to define the context. Most probably that department manager and the CFO were doing some budgeting: planning or analysis/reporting. Knowing that, it is possible to build and deliver individual prescriptive analytics to the department manager, focused on helping him handle the budget. Even if that department has other escalated issues, such as a release schedule, the severity of the budgeting is much higher right now, hence the context belongs to budgeting for now.

By having data streams for each dimension we are capable of building a run-time individual/personal context. The data streams for that department manager were kinds of time series: events with attributes. Email is a dimension we are tracking; peers, timestamps, type of the letter, size of the letter, types and number of attachments are attributes. Phone is a dimension; names, times, durations, number of people etc. are attributes. Location is a dimension; own office, CFO’s office, lunch place, timestamps, durations, sequence are attributes. And so on. We defined potentially useful data streams. It is possible to build an exclusive context out of them, from their dynamics and patterns. That was a more complicated description of context.
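The department manager example can be written down as such event streams. The records and the scoring rule below are a toy, but they show how several dimensions vote for the “budgeting” context.

```python
# The office example above, as event streams: each dimension produces
# timestamped events with attributes. The scoring rule is invented.
events = [
    {"dim": "email",    "peer": "CFO", "hour": 11},
    {"dim": "phone",    "peer": "CFO", "hour": 14},
    {"dim": "location", "place": "CFO office", "hour": 15},
    {"dim": "location", "place": "own office", "hour": 21},  # late at work
]

def budgeting_signal(events):
    """Count how many tracked dimensions show CFO contact or late hours."""
    hits = set()
    for e in events:
        if e.get("peer") == "CFO" or e.get("place") == "CFO office" or e["hour"] >= 21:
            hits.add(e["dim"])
    return len(hits)

print(budgeting_signal(events))  # all three dimensions fire
```

When enough dimensions fire together, the run-time context switches to budgeting and the analytics delivered to the manager change accordingly.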

 

Interpreting Context

Well, well, but how do we interpret those data streams, how do we interpret the context? What we have: multiple data streams. What we need: to identify the run-time context. So the pipeline is straightforward.

First, we have to log the Data, from each dimension of interest. It could be done via software or hardware sensors. Software sensors are usually plugins, but they could be more sophisticated, such as object recognition from surveillance cameras. Hardware sensors are GPS, Wi-Fi, turnstiles. There could be combinations, like a check-in somewhere. So a lot can be done with software sensors alone. For the department manager case, it’s a plugin to Exchange Server or Outlook to listen to emails, a plugin to the ATS to listen to the phone calls and so on.

Second, it’s time for low-level analysis of the data: Statistics, then Data Science. Brute force to establish what is credible and what is not, then looking for the emerging patterns. The bottleneck of Data Science is the human factor. Somebody has to look at the patterns to decrease false positives and false negatives. This step is more about discovery, probing and trying to prepare the foundation for the more intelligent next step. More or less everything is clear with this step. Businesses have already started to bring up their data science teams, but they still don’t have enough data for the science:)

Third, it’s Data Intelligence. As MS said some time ago, “Data Intelligence is creating the path from data to information to knowledge”. This should be described in more detail to avoid ambiguity. From Techopedia: “Data intelligence is the analysis of various forms of data in such a way that it can be used by companies to expand their services or investments. Data intelligence can also refer to companies’ use of internal data to analyze their own operations or workforce to make better decisions in the future. Business performance, data mining, online analytics, and event processing are all types of data that companies gather and use for data intelligence purposes.” Some data models need to be designed, calibrated and used at this level. Those models should work almost in real time.

Fourth is Business Intelligence. Probably the first step familiar to the reader:) But we look further here: past data and real-time data meet together. Past data is individual to the business entity. Real-time data is individual to the person. Of course there could be something in the middle. Go find a comparison between statistics, data science and business intelligence.

Fifth, finally, it is Analytics. Here we are within the individual context for the person. There should be a snapshot of ‘AS-IS’ and recommendations of ‘TODO’; if the individual wants, there should be reasoning on ‘WHY’ and ‘HOW’. I have described it in detail in previous posts. The final destination is the individual context. I’ve described it in the series of Advanced Analytics posts; here is the link to Part I.

Data Streams

Data streams come from data sources. The same source could produce multiple streams. Some ideas below; the list is unordered. Remember that special Data Intelligence must be put on top of the data from those streams.

Indoor positioning via Wi-Fi hotspots contributes to the mobile/mobility/motion data stream: where the person spent most time (at the working place, in meeting rooms, in the kitchen, in the smoking room), when the person changed location frequently, directions, durations, sequence etc.

Corporate communication via email, phone, chat, meeting rooms, peer-to-peer, source control, process tools, productivity tools. It all makes sense for analysis, e.g. because at the time of a release there should be no creation of new user stories. Or the volumes and frequency of check-ins to source control…

Biometric wearable gadgets like BodyMedia can log the intensity of mental (or physical) work. If there is low calorie burn during long bad meetings, that could be revealed. If there is not enough physical workload, then for the sake of better emotional productivity it could be suggested to take a walk.

 

Data Graphs from Data Streams

OK, but how do we build something tangible from all those data streams? The relation between Data Graphs and Data Streams is many-to-many. Look: it is possible to build the Mobile Graph from very different data sources, such as face recognition from a camera, authentication at an access point, IP address, GPS, Wi-Fi, Bluetooth, a check-in, a post etc. Hence when designing the data streams for some graph, you should think about one-to-many relations. One graph can use multiple data streams from the corresponding data sources.

To bring more clarity into the relations between graphs and streams, here is another example: the Intention Graph. How could we build it? Somebody’s intentions could be totally different in different contexts. Is it a weekday or the weekend? Is the person static in the office or driving a car? Who are the peers the person has communicated with a lot recently? What is the type of communication? What is the time of day? What are the person’s interests? What were the previous intentions? As you see, data could be logged from machines, devices, comms, people, profiles etc. As a result we will build the Intention Graph and be able to predict or prescribe what to do next.
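As a toy illustration of how several context dimensions might collapse into a predicted intention, here is a hand-written rule; a real system would learn such a model from the logged streams, and the dimension names here are invented.

```python
# Intention Graph sketch: fuse several stream-derived context dimensions,
# then apply a trivial hand-written rule. Real systems would learn this.
def intention(context):
    """Guess the current hot intention from a few context dimensions (toy rule)."""
    if context["weekday"] and context["driving"]:
        return "commute"
    if not context["weekday"] and "sports" in context["interests"]:
        return "leisure"
    return "unknown"

print(intention({"weekday": True, "driving": True, "interests": ["sports"]}))
```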

 

Context from Data Graphs

Finally, having multiple data graphs, we can work on the individual context, the personal UX. Technically, it is hardly possible to deal with all those graphs at once: it is not possible to overlay two graphs. That is called modality (as one PhD taught me). Hence you must split and work with a single modality. Select the graph most important for your needs and use it as a skeleton. Convert relations from the other graphs into attributes you can apply to the primary graph. Build an intelligence model for the single-modality graph with plenty of attributes from the other graphs. Obtain the personal/individual UX at the end.


Advanced Analytics, Part V

This post is about the details of visualizing information for executives and operational managers on the mobile front-end: what is descriptive, what is predictive, what is prescriptive, how it looks, and why. The scope of this post is the cap of the information pyramid. Even if I start on some detail, I still remain at the very top, at the level of the most important information, without details on the underlying data. The previous posts contain the introduction (Part I) and the pathway (Part II) of the information to the user, especially executives.

Perception pipeline

The user’s perception pipeline is: RECOGNITION –> QUALIFICATION –> QUANTIFICATION –> OPTIMIZATION. During recognition the user just grasps the entire thing, starts to take it as a whole; ideally we should deliver personal experience here, so the information will be valuable, though probably delivered slightly differently than in the previous context. More on personal experience in the next chapter below. As soon as the user grasps/recognizes, she is capable of classifying, or qualifying by commonality. The user operates with categories and scores within those categories. The scores are qualitative and very friendly for understanding, such as poor, so-so, good, great. Then the user is ready to reduce subjectivity and turn to numeric measurement/scoring. That is quantification: converting good & great into numbers (absolute and relative). As soon as the user is all set with numeric measurements, she is capable of improving/optimizing the business or process or whatever the subject is.
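The quantification step can be shown in a few lines: mapping the qualitative scores to numbers, absolute and relative. The scale and category names are illustrative.

```python
# Quantification: turning qualitative category scores into numbers the
# user can compare. The 0.25-step scale is an invented example.
QUAL_TO_NUM = {"poor": 0.25, "so-so": 0.5, "good": 0.75, "great": 1.0}

def quantify(scores):
    """Map qualitative scores to absolute numbers and numbers relative to the best."""
    absolute = {k: QUAL_TO_NUM[v] for k, v in scores.items()}
    best = max(absolute.values())
    relative = {k: v / best for k, v in absolute.items()}
    return absolute, relative

absolute, relative = quantify({"delivery": "good", "quality": "great"})
print(absolute["delivery"], relative["delivery"])
```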

Default screen

What should be rendered on the default screen? I bet it is a combination of the descriptive, predictive and prescriptive, with a large portion of space dedicated to the descriptive. Why is descriptive so important? Because until we build AI, the trust and confidence in those computer-generated suggestions is not at the needed level. That’s why we have to show the ‘AS IS’ picture, to deliver how everything works and what happens, without any decorations or translations. If we deliver such a snapshot of the business/process/unit/etc., the issue of trust between human and machine might be resolved. We used to believe that machines are pretty good at tracking tons of measurements, so let them track it and visualize it.

There must also be an attempt from the machines to advise the human user. It could be done in the form of a personalized sentence on the same screen, along with the descriptive analytics. So putting some TODOs there is absolutely OK, while believing that the user will trust and follow them is naive. The user will definitely dig into the details of why such a prescription is proposed. It’s normal that the user is curious about the root-cause chain. Hence be ready to provide almost the same information with additional details on the reasons/roots, trends/predictions, classification & pattern recognition within KPI control charts, and additional details on the prescriptions. If we visualize [on top of the inverted pyramid] with a text message and a stack of vital signs, then we have to prepare an additional screen to answer that list of details. We will still remain at the top of the pyramid.


 

Next screen

If we got ‘AS IS’, then there must be ‘TO BE’, at least for the symmetry:) The user starts on the default screen (recognition and qualification) and continues to the next screen (qualification and quantification). The next screen should have more details. What kind of information would be logically relevant for the user who got the default screen and looks for more? Or, better to say, looks for ‘why’? Maybe it’s time to list them as bullets for more clarity:

  • dynamic pattern recognition (with a highlight on the corresponding chart or charts) of what is going on; it could be one of the seven performance signals, or one of the three essential signals
  • highlighting the area of the significant event [dynamic pattern/signal] on the other charts, to juxtapose what is going on there and foster thinking on potential relations; it’s still the human who thinks, while the machine assists
  • parameters & decorations for the same control charts, such as min/max/avg values, identification of quarters or months or sprints or weeks and so on
  • the normal range (also applicable to the default screen), or even several ranges, because they could differ between quarters or years
  • a trend line, using the most applicable method of approximation/prediction of future values; e.g. a release forecast
  • its parts should be clickable, for digging from relative values/charts into absolute values/charts for even more detailed analysis; from qualitative to quantitative
  • your ideas here
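As a sketch of recognizing one such performance signal in a KPI control chart: flag points outside mean ± 3 sigma, with the baseline estimated from the first samples. The data and the baseline length are invented; real control charts use domain-specific rules.

```python
# A control-chart rule as a dynamic pattern detector: flag KPI points
# beyond 3 sigma of a baseline estimated from the first samples.
from statistics import mean, stdev

def out_of_control(series, baseline_n=5):
    """Indices of points beyond 3 sigma of the baseline mean."""
    base = series[:baseline_n]
    m, s = mean(base), stdev(base)
    return [i for i, x in enumerate(series) if abs(x - m) > 3 * s]

kpi = [10, 11, 10, 12, 11, 10, 11, 30]  # last point is an anomaly
print(out_of_control(kpi))  # [7]
```

On the next screen such a flagged index would be highlighted on its chart and juxtaposed against the other charts for the same period.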


Recognition of signals as dynamic patterns is identification of the roots/reasons for something. Predictions and prescriptions could be driven by those signals. Prescriptions could be generic, but it’s better to make them personalized. Explanations could be designed for the personal needs/perception/experience.

 

Personal experience

We consume information in various contexts. If it is the release of a project or product, the context is different from the start of the zero sprint. If it’s a merger & acquisition, the expected information is different from a quarterly review. It all depends on the user (from the CEO to CxOs to VPs to middle management to team management and leadership), on the activity, and on the device (Moto 360 or iPhone or iPad or car or TV or laptop). It matters where the user is physically; location does matter. Empathy does matter. But how to reach it?

We could build the user’s interests from social networks and from interactions with other services. Interests are relatively static in time. It is possible to figure out intentions. Intentions are dynamic and useful only while they are hot. Business intentions are observable from business comms. We could sense the intensity of communication between the user and the CFO and classify it as a context related to budgeting or budget review. If we use sensors on the corporate mail system (or mail clients), combined with GPS or Wi-Fi location sensors/services, or with a manual check-in somewhere, we could figure out that the user indeed intensified comms with the CFO and they worked together face-to-face. Having such a dynamic context, we are capable of delivering the information in that context.

The concept of personal experience (or personal UX) is similar to a braid (the hairstyle). Each graph of data helps to filter relevant data. Together those graphs allow us to locate the real-time context. Having such a personal context, we could build and deliver the most valuable information to the user. More details on how to handle the interest graph, intention graph, mobile graph and social graph, and on which sensors could bring the modern new data, are available in my older posts. So far I propose to personalize the text message for the default screen and the next screen, because it’s easier than vital signs, and it fits into wrist-sized gadgets like the Moto 360.


Mobile Programming is Commodity

This post is about why programming for smartphones and tablets is a commodity.

Mobiles are no longer a novelty.

Mobiles are substituting for PCs. As we programmed in VB and Delphi 15 years ago, the same way we program in Objective-C and Java today. The adoption rate of the cell phone as a technology (in the USA) was the fastest among technologies, and the scale of adoption surpassed 80% in 2005. Smartphones are being adopted at the same pace, surpassing 35% in 2011, just several years after the iPhone revolution of 2007. Go check the evidence from the New York Times since 2008 for cell phones, the evidence from Technology Review since 2010 for smartphones, and more details by Harvard Business Review on accelerated technology adoption.

Visionaries look further. O’Reilly.

The list of hottest conferences by direction from visionary O’Reilly:

  • BigData
  • New Web
  • SW+HW
  • DevOps

BigData still matters, sitting at Gartner's "peak of inflated expectations". Strata, Strata Rx (the healthcare flavor), Strata Hadoop. http://strataconf.com/strata2014 Tap into the collective intelligence of the leading minds in data—decision makers using the power of big data to drive business strategy, and practitioners who collect, analyze, and manipulate data. Strata gives you the skills, tools, and technologies you need to make data work today—and the insights and visionary thinking O’Reilly is known for.

JavaScript got out of the web browser and penetrated all domains of programming. There are high expectations of, and real progress toward, HTML5. The Web 2.0 conference was abandoned; Fluent was created instead, covering emerging technologies for the new Web platform and the new SaaS. http://fluentconf.com/fluent2014 O’Reilly’s Fluent Conference was created to give developers working with JavaScript a place to gather and learn from each other. As JavaScript has become a significant tool for all kinds of development, there’s a lot of new information to wrap your head around. And the best way to learn is to spend time with people who are actually working with JavaScript and related technologies, inventing ways to apply its power, scalability, and platform independence to new products and services.

“The barriers between software and physical worlds are falling”. “Hardware startups are looking like the software startups of the previous digital age”. The Internet of Things has a longer cycle (according to Gartner's hype cycle), but it is coming indeed, with connected machines, machine-to-machine, smart machines, embedded programming, 3D printing and DIY to assemble those machines. Solid. http://solidcon.com/solid2014 The programmable world is creating disruptive innovation as profound as the Internet itself. As barriers blur between software and the manufacture of physical things, industries and individuals are scrambling to turn this disruption into opportunity.

DevOps & Performance is popular. Velocity. Most companies with outward-facing dynamic websites face the same challenges: pages must load quickly, infrastructure must scale efficiently, and sites and services must be reliable, without burning out the team or breaking the budget. Velocity is the best place on the planet for web ops and performance professionals like you to learn from your peers, exchange ideas with experts, and share best practices and lessons learned.

Open Source matters more and more. Open Source is about sharing partial IP for free, according to Wikinomics. OSCON. http://www.oscon.com/oscon2014 OSCON is where all of the pieces come together: developers, innovators, business people, and investors. In the early days, this trailblazing O’Reilly event was focused on changing mainstream business thinking and practices; today OSCON is about how the close partnership between business and the open source community is building the future. That future is everywhere you look.

Digitization of content continues. TOC.

Innovation in leadership and processes. Cultivate.

Visionaries look further. GigaOM.

The list of conferences by direction from GigaOM:

  • BigData
  • UX
  • IoT
  • Cloud

BigData. STRUCTURE DATA. http://events.gigaom.com/structuredata-2014/ From smarter cars to savvier healthcare, today’s data strategies are driving business in compelling new directions.

User Experience. ROADMAP. http://events.gigaom.com/roadmap-2013/ As data and connectivity shape our world, experience design is now as important as the technology itself. It covers (and will cover) ubiquitous UI, wearables and HCI with all those new smarter machines (3D printed & DIY & embedded programming).

Internet of Things. MOBILIZE. http://event.gigaom.com/mobilize/ Five years ago, Mobilize was the first conference of its kind to outline the future of mobility after Apple’s iPhone exploded onto the scene. We continue to track the hottest early adopters, the bold visionaries and those about to disrupt the ecosystem. We hope that you will join us at Mobilize and be the first in line to ride this next wave of innovation. This year we’ll cover: The internet of things and industrial internet; Mobile big data and new product alchemy; Wearable devices; BYOD and mobile security.

Cloud. STRUCTURE. http://event.gigaom.com/structure/ Structure 2013 focused on how real-time business needs are shaping IT architectures, hyper-distributed infrastructure and creating a cloud that will look completely different from everything that’s come before. Questions we answered at Structure 2013 included: Which architects are choosing open source solutions, and what are the advantages? Will to-the-minute cloud availability be an advantage for Azure? What are the lessons learned in building a customized enterprise PaaS? Where is there still space to innovate for next-generation leaders?

Conclusion.

To be a strong programmer today, you have to be able to design and code for smartphones and tablets, just as your father and mother did 20 years ago for PCs and workstations. Mobile programming is shaped by the trends described in Mobile Trends for 2014.

To be a strong programmer tomorrow, you have to tame the philosophy, technologies and tools of BigData (despite Gartner's prediction of inflated expectations), Cloud, Embedded and the Internet of Things. There will be much less Objective-C, but probably still plenty of Java. It seems the future is better suited to Android developers. IoT is positioned last in the list because its adoption rate is significantly lower than the cell phone's was (after the 2000 dotcom burst).


Mobile UX: home screens compared

35K views

Back in 2010 I published my insight on the mobile home screens of four platforms: iOS, Android, Winphone and Symbian. Today I noticed it has passed 35K views :)

What now?

What has changed since then? IMHO the Winphone home page is the best, because it delivers multiple locuses of attention with contextual information within them. But as soon as you go off the home screen, everything else there is poor. iOS and Android remain lists of dumb icons. No context, no info at all. The maximum possible is a small badge with the number of missed calls or text messages. And Symbian has died. RIP Symbian.

So what?

Vendors must improve the UX. Take the informativeness of the Winphone home screen, add the aesthetics of iOS graphics, add the openness & flexibility of Android (read Android First), and finally produce a useful hand-sized gadget.

Winphone's home screen provides multiple locuses of attention as small containers of information, mainly in three sizes. Even the smallest box has enough room to deliver much more contextual information than a count of unread text messages. By rendering an image within the box we can achieve Flipboard-like interaction: you decide from the image whether you are interested or not. How efficiently the box room is used is a second question; my conclusion is that it is used inefficiently. There is still just a number of missed calls or texts, with much room left unused :( I don't know why the concept of small contexts has been left underutilized, but I hope it will improve in the future. Furthermore, it could be improved on Android, for example. The Android ecosystem has great potential for creativity.
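As a rough sketch of how such a box could be filled, here is the decision in code, assuming a hypothetical budget of characters derived from box size (a real tile renderer would measure actual text metrics):

```python
def render_tile(box_w, box_h, unread, snippet):
    """Fill a live-tile box: always show the unread badge, and spend the
    remaining room on a contextual snippet instead of leaving it empty."""
    lines = ["%d unread" % unread] if unread else []
    # assumed capacity: ~10 px per character, one text row per 20 px of height
    rows_left = box_h // 20 - len(lines)
    chars = rows_left * (box_w // 10)
    if chars > 0 and snippet:
        lines.append(snippet[:chars])
    return lines
```

The point is that even the smallest tile size leaves budget for context after the badge, which is exactly the room current home screens waste.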

Maybe I will visualize this when I get some spare time… Keep in touch here or on Slideshare.


Mobile EMR, Part V

Some time ago I described the ideation of a mobile EMR/EHR for medical professionals. We came up with the tablet concept first. The EMR/EHR is rendered on iPad and Android tablets; the look & feel is identical, though the iPad feels better than the Samsung Galaxy. Read about the tablet EMR in the four previous posts. BTW, one of them contains feedback from Edward Tufte :) Mobile EMR Part I, Part II, Part III, Part IV.

We moved further and designed a concept of a hand-sized version of the EMR/EHR, rendered on iPhone and Android phones. This post is dedicated to the phone version. As you will see, the overall UI organization is significantly different from the tablet, while reuse of smaller components between tablets and phones is feasible. The phone version is entirely SoftServe's design, hence we carry responsibility for the design decisions made there. We certainly tried to keep the tablet and phone concepts consistent in style and feel; you can judge how well we accomplished that by comparing them yourself :)

Patients

The lack of screen space forces us to introduce a list of patients. The list scrolls vertically; a tap on a patient takes you to the patient details screen. It is possible to show some basic info for each patient in the list, but not much: long patient names simply leave no space for more. I think admission date, age and sex labels must be present on the patient list in any case; we will add them in the next version. A red circular notification signals the availability of new information for the patient, e.g. new labs are ready or an important significant event has been reported. The interaction design assumes the medical professional will tap the patient marked with notifications. The list of patients is also ordered per user: the MD can reorder it via drag'n'drop.
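The per-user ordering and the notification badge can be sketched in a few lines; the data shapes here are assumptions for illustration, not the actual app model:

```python
def reorder(patients, from_idx, to_idx):
    """Apply a drag'n'drop move to the MD's personal patient ordering."""
    order = list(patients)
    order.insert(to_idx, order.pop(from_idx))
    return order

def badge(patient, seen_counts):
    """Red circular notification: count of new labs/events since the
    MD last viewed this patient (0 means no badge)."""
    new = patient["event_count"] - seen_counts.get(patient["id"], 0)
    return new if new > 0 else 0
```

Keeping the ordering per user, rather than sorting by name or admission date globally, is what lets each MD put her most acute patients on top.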

Patient list

Patient list

MD can scan the wristband to identify the patient.

Wristband scanning

Wristband scanning

Patient details

The MD gets to the patient details by tapping a patient in the list. That screen is called the Patient Profile. It is a long screen. A stack of Vital Signs sits right at the top; the Vital Signs widget is reused from the tablets as-is and fits the phone screen width perfectly. Then there is the Meds section. The last section is the Clinical Visits & Hospitalization chart, which is interactive (zoomable) like on the iPad. Within a single patient the MD has multiple options. The first is to scroll down to see all the information and the entry points to more. Notice the menu bar at the bottom of the screen: the MD can go directly to Labs, Charts, Imagery or Events. The interaction is organized via tabs; the default tab is the patient Profile.

Patient profile

Patient profile

Patient profile, continued

Patient profile, continued

Patient profile, continued

Patient profile, continued

Labs

There is not much space for tables. Furthermore, lab results are clickable, hence the row height should be proportional to the size of a finger tap. The most recent lab numbers are highlighted in bold. Deviation from the normal range is highlighted in red. The most recent lab numbers can sit on the left or on the right of the table; it's configurable. The red circular notification on the Labs menu/tab shows how many new results have arrived since the last view of this patient.
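The highlighting rule can be sketched as a tiny styling function, assuming each lab value comes with a hypothetical normal range:

```python
def style_lab(value, low, high, is_latest):
    """Decide table styling for one lab result: bold for the most
    recent column, red when outside the normal range."""
    styles = []
    if is_latest:
        styles.append("bold")
    if value < low or value > high:
        styles.append("red")
    return styles
```

Bold answers "what is current?", red answers "what is wrong?"; the two are independent, so a stale out-of-range value still reads as red without the bold.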

Labs

Labs

Measurements

Here we smoothly reuse the 'All Data' charts. They fit the phone screen perfectly. The layout is two-column with scrolling down. Charts with notifications about new data are highlighted. The MD can reorder the charts as she prefers, and can manage the list of charts by switching them on and off in the app settings. There could be grouping of charts based on diagnosis; we are considering this for next versions. A reminder about the chart structure: the rightmost, biggest part of the chart renders the most recent data, since admission, with full dynamics. Min/max are depicted with blue dots, the latest value with a red dot. The chart title also carries the numeric value in red, to be logically linked with the dot on the chart. The left, thin part of the chart consists of two sections: previous-year data, and old data prior to the last year (if available). Only deviations and anomalies are meaningful from those periods. Extreme measurements are comparable across the entire timeline, while precise dynamics are shown for the current period only. More about the 'All Data' concept can be found in Mobile EMR, Part I.
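A minimal sketch of that 'All Data' segmentation, assuming a measurement series of (date, value) pairs; the dates, thresholds and field names are illustrative only:

```python
from datetime import date

def all_data_chart(series, admission, year_start):
    """Split a measurement series into the three 'All Data' panes:
    old data and previous-year data keep only extremes (dots),
    the current period since admission keeps full dynamics (line)."""
    old = [(d, v) for d, v in series if d < year_start]
    year = [(d, v) for d, v in series if year_start <= d < admission]
    current = [(d, v) for d, v in series if d >= admission]

    def extremes(points):
        vals = [v for _, v in points]
        return (min(vals), max(vals)) if vals else None

    return {"old_minmax": extremes(old),
            "year_minmax": extremes(year),
            "current": current,
            "latest": current[-1][1] if current else None}
```

Reducing the historic panes to extremes is what makes a years-long record fit a chart a few hundred pixels wide without turning into noise.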

Measurements in 'All Data' charts

Measurements in ‘All Data’ charts

Tapping on the chart brings detailed chart.

Measurement details

Measurement details

Imagery

Designing the entry point into the imagery was no big deal: just a two-column scroll-down layout, like Measurements. A tap on an image brings up a separate screen completely dedicated to that image preview. For huge scans (4GB or so) we reused our BigImage solution to achieve smooth zoom in and out, like Google Maps, but for medical imagery.
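BigImage itself is proprietary, but the underlying Google-Maps-style tile pyramid arithmetic can be sketched as follows; the 256-px tile size and the scan dimensions are assumptions, not BigImage's actual parameters:

```python
import math

def pyramid_levels(width, height, tile=256):
    """Number of zoom levels needed so the whole scan fits a single
    tile at the coarsest level of a Google-Maps-style pyramid."""
    return max(0, math.ceil(math.log2(max(width, height) / tile))) + 1

def tiles_at(level, width, height, tile=256):
    """Tile grid size (cols, rows) at a given zoom level (0 = coarsest)."""
    max_level = pyramid_levels(width, height, tile) - 1
    scale = 2 ** (max_level - level)
    return (math.ceil(width / scale / tile),
            math.ceil(height / scale / tile))
```

The client fetches only the handful of tiles covering the viewport at the current level, which is how a multi-gigabyte tissue scan stays smooth on a phone.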

Imagery

Imagery

Tissue scan, zoom in

Tissue scan, zoom in

Significant events & notes

Just a separate screen for them…

Significant events

Significant events

Conclusion: it’s BI framework

The entire back-end logic is reused between the tablet and phone versions of the EMR. Vital Signs and 'All Data' charts are reused as-is. The Clinical Visits & Hospitalization chart is cut to a shorter width, but easily reused too. Security components for data encryption and compression are reused. Caching is reused. Push notification is reused. Wristband scanning is reused. Labs are partially reused. Measurements are reused. BigImage is reused.

Reusability is both physical and logical. For the medical professional all this stuff is technology agnostic: the MD sees Vital Signs on iPad, Android tablet, iPhone and Android phone as the same component. For geeks, it is obvious that reusability happens within each platform, iOS and Android. All widgets are reusable between iPad and iPhone, and between Samsung Galaxy tab and Samsung Galaxy phone. Cloud/SaaS stuff such as BigImage is reusable on all platforms, because it is Web-based and rendered in Web containers, which are already present on each technology platform.

The most important conclusion is that mEMR is a proof of a BI framework suitable for any other industry. Any professional can consume near-real-time analytics from her smartphone. Our concept demonstrated how to deliver highly condensed, related data series with dynamics and synergy for proper analysis and decision making by a professional, plus a solution for huge imagery delivery on any front end. Text delivery is simple :) We will continue concept research on the waves of technology (BI, Mobility, UX, Cloud) and within digitizing industries: Health Care, Biotech, Pharma, Education, Manufacturing. Stay tuned to hear about the Electronic Batch Record (EBR).


Android First

A very short post about an obvious turn. I wanted to publish it a week ago, but couldn't catch up with my thoughts until today. So here it is.

Mobile First?

Interesting times, when a niche company made a revolution but is now returning to the niche. I mean Apple, of course, with its revolutionary smartphone of 2007. They challenged SaaS. Indeed, Apple boosted S+S (Software + Services) with the Web of Apps. iPhone apps became so popular that it became a market strategy to implement the mobile app first, then the "main" web site. What is main, finally? It seems that mobile has become the main. Right now I am not going to argue which mobile is right, native or web; you can read my post 'Native Apps beat Mobile Web'. Be sure that new wrist-sized gadgets will be better programmed natively than via the mobile web. Hence, what have we got? Many startups and enterprises went mobile first. There is a good insight by Luke Wroblewski on 'Mobile First'.

Who is first?

OK, Apple was first and best. Then they were first but not best. Now they are not even the best. It is ridiculous to wait five years for the switch from the big connector hole to the smaller, mini-USB-sized Lightning hole… The battery-drain bugs are Microsoft- and Adobe-like. It is not the Apple of 2007 anymore. So what? Those flaws allowed competitors to catch up.

What is Apple's main advantage over competitors? Design. Still no one can beat the emotions Apple gadgets evoke. There was another advantage: first-mover advantage. What is the competitors' main advantage? Openness. Open standards. Android & Java & Linux is a great advantage. Openness now beats aesthetics. Read below why.

Android First!

iPhone & iOS & iTunes is a pretty closed ecosystem. If you are fanatic enough to expect some openness in the future, you can wait and hope. But business goes on and bypasses inconveniences. The openness of Android allowed Facebook to design a brand new user experience called Facebook Home. Such creativity is impossible on iOS. I am not telling you whether Facebook Home rocks or sucks; I am insisting on the opportunity for creativity. Android is simply a platform for creativity. For programming geeks. For design geeks. For other geeks. Be sure others will follow with concepts similar to Facebook Home. And it will happen on Android. Because tomorrow Android is first. Align your business strategy to sync with the technology pace!

Who worries about Apple? Don't worry, they are returning to the niche. Sure, there will be some new cool things like wrist-sized gadgets. But others are working on them as well. And the others are open. New gadgets will run Android. Android, whose UX is poor (to my mind), but which enables creativity in others who are capable of doing better UX. They have got the idea of Android First.


Mobile EMR, Part IV

This is continuation of Mobile EMR, Part III.

It turned out to be possible to fit more information onto the single pager! We've extended the EKG slightly, reworked the lab results, reworked the measurements (charts) and inserted a genogram. The genogram probably brings the most new information of all the updates.

v4 of mEMR concept

Right now the concept of mobile EMR looks this way…

Mobile EMR v4

Mobile EMR v4

New ‘All Data’ charts

Initially the charts of measured values were built from dots. Recent analysis and reviews tended toward connecting the dots, but things are not so straightforward… There could be a kind of sparkline for the current period (7-10 days). The applicability of the sparkline technique to data from the entire last year is questionable. Furthermore, if more data is available from the past, it would become a mess rather than a visualization, because such a narrow space is allocated to old data. Sure, that section of the chart could be wider, but is it worth it?

What is most informative from past periods? Anomalies, such as low and high values, especially in comparison with current values. Hence we've left the old data as dots, the previous-year data as dots, and made the current short period a line chart. We've added min/max points to ease the MD's analysis of the data.

Genogram

Having the genogram on the default screen seems very useful. User testing is needed to try the concept on real genograms and to check the sizes of the genograms used most frequently. Anyhow, it is always possible to show part of the genogram as an expanded diagram while keeping other parts collapsed. The genogram could be interactive: when the MD taps it, she gets a new screen totally devoted to the genogram, with all detailed attributes present. Editing could be possible too. The default screen should present the view of the genogram that relates to the patient's current or potential diagnosis.
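The expand/collapse behavior could be sketched over a toy genogram tree; the node names and the dict shape are hypothetical, chosen only to show the traversal:

```python
def visible_nodes(genogram, collapsed):
    """Walk the genogram top-down, skipping the children of any node
    the MD collapsed, so only diagnosis-relevant branches render
    on the default screen."""
    out, stack = [], [genogram]
    while stack:
        node = stack.pop()
        out.append(node["id"])
        if node["id"] not in collapsed:
            stack.extend(node.get("children", []))
    return out
```

A collapsed node still shows (so the MD knows the branch exists), but its descendants stay off-screen until tapped.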

In the future the space allocated to the genogram could grow, depending on how fast genetics-based treatments evolve. Maybe visualization of personal genotyping will reach the home screen very soon. There are companies providing such a service and keeping such data (e.g. 23andme). Eventually all electronic data will be integrated, so MDs will be able to see a patient's genotyped data from the EMR app on the tablet.

DNA Sequence

This is the mid-term future. DNA sequencing is still a long process today. But we've got the technology to deliver DNA sequence information onto the tablet; it is similar to BigImage(tm). Predefined levels of information delivery could be defined, such as genes, exomes and finally the entire genotype. For sure, additional layers and overlays will be needed to simplify visual perception and navigation through the genetic information. So the technology should be advanced in that respect.


Mobile EMR, Part III

This is continuation of previous posts Mobile EMR, Part I and Mobile EMR, Part II

We met with Mr. Tufte and demoed this EMR concept. He played with it for a while and suggested a list of improvements from his point of view.

‘All Data’ charts

Edward Tufte insists that sparklines work better than dots. It is OK that sparklines will be of different sizes; it is natural that each measurement has its own normal range. Initially we tried switching the charts to lines, but then we rolled back. It seems we should make this feature configurable, with sparklines by default. If some MD wants dots, she can switch manually in the app settings.

Our EMR concept has partially switched to sparklines, for the display of Vital Signs. Below is a snapshot:

Vital Signs

One more thing related to the Vital Signs: we did well by separating them into the widget on top and grouping them together. It adds much value, because they are related to each other, and it is important to see what happened to all of them at each moment. Our approach, based on user testing, proved to be a winning one!

Space

The current use of space could be improved even more. The first reason is that the biggest value of that research was keeping 'All Information' on a single screen. The human eye recognizes perfectly which type of information is needed: all space is tessellated into multiple locuses of attention, then the eye locks onto the desired locus and focuses within it. The second reason is iPad resolution. We can squeeze more from the retina resolution without degrading usability (such as the size of labels and numbers). It is possible to scale down to newspaper typography on the iPad, and hence fit more information into the screen estate.

Genogram

This confirms the modern trend toward genetics and genetic engineering. A genogram is a special type of diagram visualizing the patient's family relationships and medical history. In medicine, genograms provide a quick and useful context in which to evaluate an individual's health risks. Many new treatments are tailored to the patient's genotype. E.g., Steve Jobs's cancer was periodically sequenced and brand new proteins were applied to prevent the disease from spreading. All cells are built from proteins, reading other proteins as instructions; this is true for cancer cells too. Thus if they read instructions from fake proteins, they cannot build themselves properly. We liked this idea immediately, because its value is instant and big; its importance is as high as allergy information. Below is a sample genogram, using special markers for genetically influenced diseases.

Sample Genogram

There are other cosmetic observations, which will be addressed shortly. We continue usability testing with medical doctors. More to come. It could be Mobile EMR on iPhone. Stay tuned.

UPDATE: Continued on Mobile EMR, Part IV.


Mobile EMR, Part II

On the 27th of August I published Mobile EMR, Part I. This post is a continuation.

The main output from the initial implementation was feedback from users: they needed even more information. We initially framed mEMR as All Information vs. Big Data. But it turned out that some important information was missing from the concept based on the Powsner/Tufte research. Hence we have added more information and are now ready to show the results of our research.

First of all, the data is still slightly out of sync, so please be tolerant. It is a mechanical piece of work and will be resolved as soon as we integrate with the hospital's backend. The charts on the default screen will show the data most suitable for each diagnosis. This will be covered in Part III, when we are ready with the data.

Second, a quick introduction to the redesign of the initial concept. Vital Signs had to go together, because they deliver synergetically more information when seen relative to each other. Vital Signs are required for every diagnosis, hence we designed a special kind of chart for them and hosted it at the top of the iPad screen. Medications happened to be extremely important, so that the physician instantly sees which meds are in use right now, the reaction of vital signs, diagnosis and allergies, and significant events. All other charts are specific to the diagnosis, and the physician should be able to drag'n'drop them as she needs. It is obvious that diabetes is treated differently than Alzheimer's. Only one chart has a dedicated place there: the EKG. The EKG is partially connected to the vital signs, but historically (and technically too) the EKG chart is completely different and should be rendered separately. Below is a snapshot of the new default screen:

Default Screen (with Notes)

The most important notes are filtered as Significant Events and can be viewed exclusively. Actually, the default screen can start with Significant Events; we just don't have much data for today's demo. Below is a screenshot with Significant Events for the same patient.

Default Screen (with Significant Events)

Charts are configurable like apps on the iPad: you tap and hold one, move it to the desired place and release it. All other charts are reordered automatically around it. This is very useful for the physician to work as she prefers, and a good opportunity to configure chart sets according to diagnosis. Actually, we embedded presets, because it is obvious that hypertension is treated differently than a cut wound. The screenshot below shows some basic charts, but we are still working on their usability. More about that in Part III.
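The presets plus personal reordering could look roughly like this; the preset contents and chart names are made up for illustration, not clinical guidance:

```python
# hypothetical diagnosis -> chart-set presets
PRESETS = {
    "hypertension": ["blood_pressure", "heart_rate", "sodium"],
    "diabetes": ["glucose", "hba1c", "weight"],
}

def chart_layout(diagnosis, user_order=None):
    """Pick the chart set for the patient's diagnosis, then let the
    MD's own drag'n'drop ordering override the preset order."""
    charts = PRESETS.get(diagnosis, ["vital_signs"])
    if user_order:
        charts = ([c for c in user_order if c in charts] +
                  [c for c in charts if c not in user_order])
    return charts
```

The preset decides *which* charts appear; the MD's saved ordering decides *where*, and charts the MD never moved keep their preset positions at the end.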

Charts Configuration

According to the Inverted Pyramid, the default screen is the cap of the information mountain. While many are hyping Big Data, we move forward with All Information. Data is low-level atoms; users need information from the data. Our mEMR default screen delivers much information. It can deliver all information. It is up to the MD to configure the charts that are most informative in her context. The MD can dig for additional information on demand. Labs are available on a separate view, grouped into panels. Images (x-rays) are available on a separate view too. The MD can tap the IMAGERY tab and switch to the view with image thumbnails, which correspond to MRIs, radiology/x-ray and other types of medical imagery. Tapping any thumbnail leads to the image zoomed to the entire iPad screen estate. The image becomes zoomable and draggable. We use our BigImage(tm) IP to empower delivery of images of any size to any front end. The interaction with the image follows the Apple HIG standard.

Imagery (empowered by BigImage)

I don't put a snapshot of the scan here, because it looks like a standard full-screen picture. An additional description and demo of the BigImage(tm) technology is available on the SoftServe site http://bigimage.softserveinc.com. If new labs or new PACS images are available, they are pushed to the home screen as red notifications on the tab label (like on the MEASUREMENTS tab above), so that the physician can notice and click to see them. This is a common scenario when some complicated lab is required, e.g. tissue research for cancer.

Labs are shown in tabular form; this was confirmed by user testing. We have grouped the labs by their corresponding panels (logical sets of measurements). It is possible to order labs by date in ascending (chronological) or descending (most recent first) order. The snapshot below shows labs in chronological order. The physician can swipe the table to the left (and then right) to see older results.

Labs

Editing is possible via a long tap on a widget, until the widget goes into edit mode; a quick single tap returns it to preview mode. The MD can edit medications (edit existing, delete existing and assign new) and enter significant signs and notes. Auditing is automatic, according to HIPAA: time and identity are captured and stored together with the edited data.
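A minimal sketch of such an audit entry, assuming a simple dict-based record; a real HIPAA audit trail would additionally need tamper-evident, append-only storage:

```python
from datetime import datetime, timezone

def audit_edit(record_id, field, old, new, user_id):
    """Capture who changed what and when, alongside the edited data;
    written automatically on every edit, never editable by the user."""
    return {
        "record": record_id,
        "field": field,
        "old": old,
        "new": new,
        "user": user_id,
        "at": datetime.now(timezone.utc).isoformat(),
    }
```

Storing the old value next to the new one makes each entry self-contained, so the trail can be reviewed without replaying the whole edit history.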

Continued in Mobile EMR, Part III.
