
Big Data Graphs Revisited

Some time ago I outlined Six Graphs of Big Data as a pathway to the individual user experience. Then I did the same for Five Sources of Big Data. But what lies between them remained untold. Today I am going to give my vision of how different data sources allow us to build different data graphs. To make this less dependent on those older posts, let's start from a real-life situation and business needs, then bind them to data streams and data graphs.

 

Context is King

The same data has different value in different contexts. When you are late for a flight and you get a message that the flight was delayed, that message is valuable. Compare it to receiving the same message two days ahead, when you are not late at all. The message might even be useless if you are not traveling, but the airline has your contacts and notifies you about a flight you don't care about. There was only one dimension here – time to flight. That was a friendly description of context, to warm you up.

Some professional contexts are difficult for the unprepared to grasp. Let's take a situation from the office of some corporation. A department manager intensified his email communication with the CFO, started to use the phone more frequently (also calling the CFO and other department managers), went to the CFO's office multiple times, skipped a few lunches, and stayed at work till 10PM on several days. Here we have multiple dimensions (five) that can be analyzed together to define the context. Most probably that department manager and the CFO were doing some budgeting: planning or analysis/reporting. Knowing that, it is possible to build and deliver individual prescriptive analytics to the department manager, focused on helping him handle the budget. Even if that department has other escalated issues, such as the release schedule, the urgency of the budgeting is much higher right now, hence the context belongs to budgeting for the moment.

By having data streams for each dimension we are able to build a run-time individual/personal context. The data streams for that department manager were essentially time series: events with attributes. Email is a dimension we track; peers, timestamps, type of the message, size of the message, types and number of attachments are attributes. Phone is a dimension; names, times, durations, number of participants etc. are attributes. Location is a dimension; own office, CFO's office, lunch place, timestamps, durations, sequence are attributes. And so on. We have defined potentially useful data streams. It is possible to build an exclusive context out of them, from their dynamics and patterns. That was a more complicated description of context.
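
To make the event model concrete, here is a minimal sketch in Python; the dimension names, attribute keys and sample values are my own illustrative assumptions, not a fixed schema:

```python
# A data stream as an ordered list of events, each carrying a dimension,
# a timestamp and a free-form bag of attributes (illustrative only).
from dataclasses import dataclass, field
from datetime import datetime
from typing import Any, Dict, List

@dataclass
class Event:
    dimension: str                      # e.g. "email", "phone", "location"
    timestamp: datetime
    attributes: Dict[str, Any] = field(default_factory=dict)

stream: List[Event] = [
    Event("email", datetime(2013, 4, 2, 9, 15),
          {"peer": "CFO", "type": "reply", "size_kb": 48, "attachments": 2}),
    Event("phone", datetime(2013, 4, 2, 11, 40),
          {"peer": "CFO", "duration_min": 12}),
    Event("location", datetime(2013, 4, 2, 14, 5),
          {"place": "CFO office", "duration_min": 35}),
]

# Group events by dimension to get the per-dimension time series mentioned above.
by_dimension: Dict[str, List[Event]] = {}
for e in stream:
    by_dimension.setdefault(e.dimension, []).append(e)

print(sorted(by_dimension))  # ['email', 'location', 'phone']
```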

 

Interpreting Context

Well, well, but how do we interpret those data streams, how do we interpret the context? What we have: multiple data streams. What we need: to identify the run-time context. So, the pipeline is straightforward.

First, we have to log the Data from each dimension of interest. It can be done via software or hardware sensors. Software sensors are usually plugins, but they could be more sophisticated, such as object recognition from surveillance cameras. Hardware sensors are GPS, Wi-Fi, turnstiles. There can be combinations, like a check-in somewhere. So assume that a lot can be done with software sensors. For the department manager case, it's a plugin to Exchange Server or Outlook to listen to emails, a plugin to the ATS to listen to the phone calls, and so on.
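
As an illustration of such a software sensor, here is a hypothetical sketch. EmailSensor and its on_message hook are invented stand-ins for a real Exchange/Outlook plugin, not an actual API:

```python
# A hypothetical software sensor: turns mail-system callbacks into timestamped
# event records and pushes them into a log (any callable sink would do).
from datetime import datetime, timezone

class EmailSensor:
    def __init__(self, sink):
        self.sink = sink  # any callable that accepts an event record (dict)

    def on_message(self, sender, recipient, size_kb, attachments):
        # Assumed to be called by the host mail system for every message it sees.
        self.sink({
            "dimension": "email",
            "timestamp": datetime.now(timezone.utc),
            "from": sender,
            "to": recipient,
            "size_kb": size_kb,
            "attachments": attachments,
        })

log = []
sensor = EmailSensor(log.append)
sensor.on_message("dept.manager@corp.example", "cfo@corp.example", 120, 3)
print(len(log))  # 1
```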

Second, it's time for low-level analysis of the data. That means Statistics, then Data Science. Brute force to establish what is credible and what is not, then a search for emerging patterns. The bottleneck with Data Science is the human factor: somebody has to look at the patterns to decrease false positives and false negatives. This step is more about discovery, probing, and preparing the foundation for the more intelligent next step. More or less everything is clear with this step. Businesses have already started to bring up their data science teams, but they still don't have enough data for the science:)
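
A toy example of what this low-level statistics step might look like, assuming made-up daily counts and a simple z-score threshold:

```python
# Flag a dimension whose recent activity deviates strongly from its baseline.
from statistics import mean, stdev

def is_anomalous(history, recent, threshold=2.0):
    """True if `recent` is more than `threshold` standard deviations above the mean."""
    spread = stdev(history)
    if spread == 0:
        return recent > mean(history)
    return (recent - mean(history)) / spread > threshold

# Historical emails-to-CFO per day (invented baseline) vs. today's spike.
emails_to_cfo_per_day = [2, 1, 3, 2, 2, 1, 2, 3]
print(is_anomalous(emails_to_cfo_per_day, 11))  # True
```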

Third, it's Data Intelligence. As MS said some time ago, "Data Intelligence is creating the path from data to information to knowledge". This should be described in more detail, to avoid ambiguity. From Techopedia: "Data intelligence is the analysis of various forms of data in such a way that it can be used by companies to expand their services or investments. Data intelligence can also refer to companies' use of internal data to analyze their own operations or workforce to make better decisions in the future. Business performance, data mining, online analytics, and event processing are all types of data that companies gather and use for data intelligence purposes." Some data models need to be designed, calibrated and used at this level. Those models should work almost in real time.
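
For illustration only, a near-real-time "data model" at this level could be as simple as a rule set mapping per-dimension features to a named context; the features, thresholds and labels below are invented and would need calibration in practice:

```python
# A toy rule-based context model: per-dimension features in, context label out.
def classify_context(features):
    if (features.get("emails_to_cfo", 0) > 5
            and features.get("visits_to_cfo_office", 0) >= 2
            and features.get("left_office_after", 18) >= 21):
        return "budgeting"
    if features.get("checkins_to_source_control", 0) > 20:
        return "release crunch"
    return "routine"

today = {"emails_to_cfo": 11, "visits_to_cfo_office": 3, "left_office_after": 22}
print(classify_context(today))  # budgeting
```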

Fourth is Business Intelligence. Probably the first step familiar to the reader:) But we look further here: past data and real-time data meet. Past data is individual to the business entity; real-time data is individual to the person. Of course there could be something in the middle. Go find a comparison between statistics, data science and business intelligence.

Fifth, finally, it is Analytics. Here we are within the individual context for the person. There should be a snapshot of 'AS-IS' and recommendations of 'TODO', and, if the individual wants, reasoning 'WHY' and 'HOW'. The final destination is the individual context. I've described it in detail in the series of Advanced Analytics posts, link for Part I.
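
Here is a rough sketch of what such an analytics output could look like for the department manager case; the field names and values are illustrative assumptions, not a prescribed format:

```python
# An 'AS-IS' snapshot plus 'TODO' recommendations with 'WHY'/'HOW' reasoning,
# produced per person (all values invented for illustration).
context_report = {
    "person": "department manager",
    "context": "budget planning",
    "as_is": {
        "emails_to_cfo_today": 11,
        "visits_to_cfo_office": 3,
        "worked_past_22_00": True,
    },
    "todo": [
        "review last quarter's spend before tomorrow's meeting with the CFO",
        "delegate the release-schedule escalation to a deputy for this week",
    ],
    "why": "communication, location and working-hours patterns match the budgeting profile",
    "how": "rule-based context model over the email, phone and location streams",
}
print(context_report["todo"][0])
```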

Data Streams

Data streams come from data sources. The same source can produce multiple streams. Some ideas are below; the list is unordered. Remember that dedicated Data Intelligence must be put on top of the data from those streams.

Indoor positioning via Wi-Fi hotspots, contributing to the mobile/mobility/motion data stream. Where the person spent most time (at the workplace, in meeting rooms, in the kitchen, in the smoking room), when the person changed location frequently, directions, durations, sequence etc.

Corporate communication via email, phone, chat, meeting rooms, peer to peer, source control, process tools, productivity tools. It all makes sense for analysis, e.g. because at release time there should be no creation of new user stories. Or the volume and frequency of check-ins to source control…

Biometric wearable gadgets like BodyMedia to log the intensity of mental (or physical) work. If calorie burn is low during long, bad meetings, that can be revealed. If there is not enough physical workload, then for the sake of better emotional productivity a walk could be suggested.

 

Data Graphs from Data Streams

OK, but how do we build something tangible from all those data streams? The relation between Data Graphs and Data Streams is many-to-many. Look, it is possible to build the Mobile Graph from very different data sources, such as face recognition from a camera, authentication at an access point, IP address, GPS, Wi-Fi, Bluetooth, check-ins, posts etc. Hence, when designing the data streams for some graph, you should think about one-to-many relations: one graph can use multiple data streams from the corresponding data sources.
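
A minimal sketch of that one-to-many idea, with several assumed streams (Wi-Fi, GPS, camera, check-ins) feeding edges into a single Mobile Graph:

```python
# Build one Mobile Graph from observations coming out of different data streams;
# each edge keeps the contributing stream as an attribute. Contents are invented.
from collections import defaultdict

# (source stream, person, observed place)
observations = [
    ("wifi",     "alice", "meeting-room-2"),
    ("gps",      "alice", "hq-building"),
    ("camera",   "alice", "lobby"),
    ("check-in", "alice", "cafe-downstairs"),
    ("wifi",     "bob",   "meeting-room-2"),
]

mobile_graph = defaultdict(list)   # person -> list of place edges
for source, person, place in observations:
    mobile_graph[person].append({"place": place, "source": source})

print(mobile_graph["alice"])
```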

To bring more clarity into the relations between graphs and streams, here is another example: the Intention Graph. How could we build the Intention Graph? Somebody's intentions could be totally different in different contexts. Is it a weekday or the weekend? Is the person static in the office or driving a car? Who are the peers the person has been communicating with a lot recently? What is the type of communication? What is the time of day? What are the person's interests? What were the previous intentions? As you see, data could be logged from machines, devices, comms, people, profiles etc. As a result we will build the Intention Graph and will be able to predict or prescribe what to do next.

 

Context from Data Graphs

Finally, having multiple data graphs, we can work on the individual context, the personal UX. Technically, it is hardly possible to deal with all those graphs at once. It's not possible to overlay two graphs; that issue is called modality (as one PhD taught me). Hence you must split the work and stay within a single modality. Select the graph that is most important for your needs and use it as a skeleton. Convert relations from other graphs into attributes you can apply to the primary graph. Build an intelligence model for the single-modality graph with plenty of attributes from other graphs. Obtain the personal/individual UX at the end.
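
A tiny sketch of that single-modality approach, with the Social Graph assumed as the skeleton and invented attributes folded in from the Mobile and Intention graphs:

```python
# Keep one graph as the skeleton and fold the other graphs in as node attributes
# instead of overlaying them. All graph contents are invented for illustration.
social_graph = {
    "alice": {"peers": ["bob", "cfo"]},
    "bob":   {"peers": ["alice"]},
}

# Relations from other graphs, flattened into per-person attributes.
mobile_attrs    = {"alice": {"top_location": "meeting-room-2"}}
intention_attrs = {"alice": {"likely_intention": "budget planning"}}

for person, node in social_graph.items():
    node.update(mobile_attrs.get(person, {}))
    node.update(intention_attrs.get(person, {}))

print(social_graph["alice"])
```

The primary graph stays a single modality, while the other graphs contribute only attributes that the intelligence model can consume alongside the skeleton.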


Mobile Home Screens


Better Home Screen?

Definitely without iPhone-ish glossy icons,
wasting potentially useful space around them.
Want it or not, glossy must stay in the past.
The near future is flat.

With aligned multi-sized widgets like Winphone.
With a variety of widgets like Android.
But much more aesthetic than they are today!

With more information embedded into the icon-sized area,
like Facebook Home fitting three faces into a small context area.
Ideally the home screen can deliver useful information in five or six different contexts without any clicks at all.

Smaller widgets will remain and prevail,
because they are sized to our finger…


Mobile UX: home screens compared

35K views

Some time in 2010 I published my insight on the mobile home screens of four platforms: iOS, Android, Winphone and Symbian. Today I noticed it has got more than 35K views:)

What now?

What has changed since then? IMHO the Winphone home screen is the best, because it delivers multiple loci of attention with contextual information within them. But as soon as you go off the home screen, everything else there is poor. iOS and Android have remained lists of dumb icons: no context, no info at all. The maximum possible is a small badge with the number of missed calls or text messages. And Symbian has died. RIP Symbian.

So what?

Vendors must improve the UX. Take the informativeness of the Winphone home screen, add the aesthetics of iOS graphics, add the openness & flexibility of Android (read Android First) and finally produce a useful hand-sized gadget.

Winphone's home screen provides multiple loci of attention as small containers of information. They come mainly in three sizes. The smallest box has enough room to deliver much more contextual information than the number of unread text messages. By rendering an image within the box we can achieve a kind of Flipboard interaction: you decide from the image whether you are interested or not. How efficiently the box room is used is a separate question. My conclusion is that it is used inefficiently. There is still a number of missed calls or texts with much of the room left unused:( I don't know why the concept of small contexts has been left underutilized, but I hope it will improve in the future. Furthermore, it could improve on Android, for example. The Android ecosystem has great potential for creativity.

Maybe I will visualize this when I get some spare time… Keep in touch here or on Slideshare.


Android First

A very short post about an obvious turn. I wanted to publish it a week ago, but couldn't catch up with my thoughts till today. So here it is.

Mobile First?

Interesting times, when a niche company made a revolution but is now returning to the niche. I mean Apple, of course, with its revolutionary smartphone since 2007. They challenged SaaS. Indeed Apple boosted S+S (Software + Services) with the Web of Apps. iPhone apps became so popular that it became a market strategy to implement the mobile app first, then the "main" web site. What is main, finally? It seems that mobile has become the main one. Right now I am not going to talk about which mobile is right: native or web. You can read about it in my post 'Native Apps beat Mobile Web'. Be sure that the new wrist-sized gadgets will be programmed better in the native way than as mobile web. Hence, what have we got? Many startups and enterprises went mobile first. There is a good insight by Luke Wroblewski about 'Mobile First'.

Who is first?

OK, Apple was first and best. Then they were first but not best. Now they are not even the best. It is ridiculous to wait five years to switch from a big connector hole to a smaller, mini-USB-sized Lightning hole… The battery-drain bugs are Microsoft- and Adobe-like. It is not the Apple of 2007 anymore. So what? Those flaws allowed competitors to catch up.

What is Apple's main advantage over competitors? It is design. Still no one can beat the emotions from Apple gadgets. There was another advantage – the first-mover advantage. What is the competitors' main advantage? Openness. Open standards. Android & Java & Linux is a great advantage. Openness now beats aesthetics. Read below why.

Android First!

iPhone & iOS & iTunes is a pretty closed ecosystem. If you are a fanatic expecting some openness in the future, you can wait and hope. But business goes on and bypasses inconveniences. The openness of Android allowed Facebook to design a brand new user experience, called Facebook Home. Such creativity is impossible on iOS. I am not saying whether Facebook Home rocks or sucks. I am insisting on the opportunity for creativity. Android is simply a platform for creativity. For programming geeks. For design geeks. For other geeks. Be sure others will follow with concepts similar to Facebook Home. And it will happen on Android. Because tomorrow Android is First. Align your business strategy to be in sync with the technology pace!

Who worries about Apple? Don't worry, they are returning to the niche. Sure, there will be some new cool things like wrist-sized gadgets. But others are working on them as well. And the others are open. New gadgets will run Android. Android, whose UX is poor (to my mind), but which has enabled creativity for others who are capable of doing a better UX. They have got the idea of Android First.


Usability of Google Currents

This post is about digital publishing and its usability. I have used Flipboard for some time and it is good. Not excellent, because of some mess at the lower nesting levels, but it is a baseline for other similar apps and services. Let's look at Google Currents; I will assess it on a Google Nexus S smartphone. So we have a native Google device and a native Google app. What result can we expect? Of course we want excellent, best of breed! What do we have in reality? Read on.

Level 0: Home Screen

We see a sexy image and some icons on the home screen. Navigation is horizontal, i.e. to move between pages we have to flip right, then left or right. A standard thing on touch screens: one big surface that you move with your finger. From the aesthetics point of view, the home screen of Flipboard on iPad is way better. OK, we are at Level 0, the Home Screen, and it has horizontal navigation. Take a look below:

Level 0, Home screen with horizontal navigation

Level 1: Fast Company and TechCrunch

I like to read Fast Company stuff, so I click its icon. By doing that I descend to the next nesting level; let's name it Level 1. What do we see at that level? Fast Company expands its content horizontally. We can navigate it as we did on Level 0, intuitively, everything the same, and that is good. Look at the snapshots below, demonstrating horizontal navigation for Fast Company:

Level 1, Fast Company with horizontal navigation

So far everything is cool and we love Google Currents as an alternative to Flipboard:) But it is time to launch TechCrunch now. I love TechCrunch too; it is on my Home Screen along with Fast Company. So, clicking the TechCrunch icon! Starting to read, done with the first screen, trying to flip right, and here comes an issue:( TechCrunch does not scroll horizontally. Surprise? I would call it a flaw in the Interaction Design of Google Currents. At the same level as Fast Company – Level 1 – TechCrunch scrolls vertically. How could I have known that? I was used to flipping horizontally on the Home Screen and in Fast Company. Look below at the vertical navigation of TechCrunch at Level 1:

Level 1, TechCrunch with vertical navigation

There is a strong inconsistency in content navigation at Level 1. It is difficult to pay special attention to the small widgets that serve as hints on how to scroll. A much better way is to implement the same navigation direction and show no widgets at all. I could finish this post at this point, though we can go further if you like.

Level 2: Fast Company and TechCrunch

Let's dive deeper into the content of those providers and write down what we feel. At Level 2, Fast Company resembles TechCrunch: it has vertical navigation. Below is a snapshot:

Level 2, Fast Company with vertical navigation

What do we have with TechCrunch at Level 2? Of course something different: we have got horizontal navigation. Look below:

Level 2, TechCrunch with horizontal navigation

It seems Fast Company uses more levels to structure its content. There is a whole Level 3…

Level 3: Fast Company alone

There is content at this level, with multiple entry points to it. You can get text content, or rich video content. Navigation is horizontal for both. Look below for the text content:

Level 3, Fast Company with horizontal navigation

Here is video content below:

Level 3, Fast Company with horizontal navigation

Conclusion

Google Currents on the Google Nexus S has severe usability flaws. There should have been some rules for content providers to stick to. As a user I do not want to care about the structure, I want content instantly, as I had it with Flipboard on iPad. With Currents things are not so obvious, which inspired me to draw a joke. IMHO the quality of Google products is degrading. They repeat Microsoft's way with issues, bugs and … market penetration. It's sad.
