This post is about visualizing information for executives and operational managers on a mobile front-end: what is descriptive, what is predictive, what is prescriptive, how it looks, and why. The scope of this post is the cap of the information pyramid. Even when I start with something detailed, I stay at the very top, at the level of the most important information, without details on the underlying data. Previous posts contain the introduction (Part I) and the pathway of the information to the user, especially executives (Part II).
The user’s perception pipeline is: RECOGNITION –> QUALIFICATION –> QUANTIFICATION –> OPTIMIZATION. During recognition the user just grasps the entire thing and starts to take it in as a whole; ideally we should deliver a personal experience, so the information will be valuable but probably delivered slightly differently than in the previous context. More on personal experience in the chapter below. As soon as the user grasps/recognizes, she is able to classify, or qualify, by commonality. The user operates with categories and scores within those categories. The scores are qualitative and very friendly for understanding, such as poor, so-so, good, great. Then the user is ready to reduce subjectivity and turn to numeric measurements/scoring. That is quantification: converting good and great into numbers (absolute and relative). As soon as the user is all set with numeric measurements, she is able to improve/optimize the business, the process, or whatever the subject is.
What should be rendered on the default screen? I bet it is a combination of the descriptive, the predictive, and the prescriptive, with a large portion of space dedicated to the descriptive. Why is descriptive so important? Because until we build real AI, trust and confidence in computer-generated suggestions is not at the required level. That’s why we have to show the ‘AS IS’ picture, to convey how everything works and what happens, without any decoration or translation. If we deliver such a snapshot of the business/process/unit/etc., the issue of trust between human and machine might be resolved. We tend to believe that machines are pretty good at tracking tons of measurements, so let them track it and visualize it.
There must also be an attempt by the machine to advise the human user. It could be done in the form of a personalized sentence, on the same screen, along with the descriptive analytics. Putting some TODOs there is absolutely OK, while believing that the user will trust them and follow them is naive. The user will definitely dig into the details of why such a prescription is proposed. It’s normal for the user to be curious about the root-cause chain. Hence be ready to provide almost the same information with additional details on the reasons/roots, trends/predictions, classification and pattern recognition within KPI control charts, and additional details on the prescriptions. If we visualize [the top of the inverted pyramid] with a text message and a stack of vital signs, then we have to prepare an additional screen to answer that list of details. We will still remain at the top of the pyramid.
If we got ‘AS IS’, then there must be ‘TO BE’, at least for symmetry:) The user starts on the default screen (recognition and qualification) and continues to the next screen (qualification and quantification). The next screen should have more details. What kind of information would be logically relevant for the user who got the default screen and looks for more? Or, better to say, looks for ‘why’? Maybe it’s time to list it as bullets for more clarity:
- Recognition of signals as dynamic patterns is identification of the roots/reasons for something.
- Predictions and prescriptions could be driven by those signals.
- Prescriptions could be generic, but it’s better to make them personalized.
- Explanations could be designed for personal needs/perception/experience.
We consume information in various contexts. The release of a project or product is a different context from the start of sprint zero. A merger & acquisition calls for different information than a quarterly review. It all depends on the user (from the CEO to CxOs to VPs to middle management to team management and leadership), on the activity, and on the device (Moto 360 or iPhone or iPad or car or TV or laptop). It matters where the user physically is; location does matter. Empathy does matter. But how do we reach it?
We could build the user’s interests from social networks and from interaction with other services. Interests are relatively static in time. It is also possible to figure out intentions. Intentions are dynamic and useful only while they are hot. Business intentions are observable from business communications. We could sense the intensity of communication between the user and the CFO and classify it as a context related to budgeting or a budget review. If we put sensors on the corporate mail system (or mail clients) and combine them with GPS or Wi-Fi location sensors/services, or with a manual check-in somewhere, we could figure out that the user has indeed intensified comms with the CFO and worked face-to-face. Having such a dynamic context, we are able to deliver information in that context.
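As a rough illustration of the idea (every name, threshold, and weight below is an invented assumption, not a real sensor API), the mail, calendar, and location signals could be blended into a single context score:

```python
from dataclasses import dataclass

@dataclass
class Signal:
    # Hypothetical sensor readings; field names are illustrative only.
    emails_with_cfo_last_week: int    # from a mail-system sensor
    meetings_with_cfo_last_week: int  # from calendar or manual check-in data
    same_location_hours: float        # overlap from GPS/Wi-Fi location services

def budgeting_context_score(s: Signal) -> float:
    """Crude 0..1 score: how likely the user is currently in a 'budgeting' context."""
    email_part = min(s.emails_with_cfo_last_week / 20.0, 1.0)
    meeting_part = min(s.meetings_with_cfo_last_week / 5.0, 1.0)
    colocation_part = min(s.same_location_hours / 4.0, 1.0)
    # Weighted blend; the weights are guesses to be tuned against real usage data.
    return 0.4 * email_part + 0.3 * meeting_part + 0.3 * colocation_part

signal = Signal(emails_with_cfo_last_week=18,
                meetings_with_cfo_last_week=4,
                same_location_hours=3.0)
score = budgeting_context_score(signal)
if score > 0.5:
    print("Deliver the budgeting-related view first")
```

A real system would of course learn such weights rather than hard-code them; the sketch only shows how independent sensors can converge on one dynamic context.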
The concept of personal experience (or personal UX) is similar to a braid (the hairstyle). Each graph of data helps to filter relevant data. Together those graphs allow us to locate the real-time context. Having such a personal context, we could build and deliver the most valuable information to the user. More details on how to handle the interest graph, intention graph, mobile graph, and social graph, and on which sensors could bring in modern new data, are available in my older posts. So far I propose to personalize the text message for the default screen and the next screen, because it’s easier than vital signs, and it fits into wrist-sized gadgets like the Moto 360.
A very short post about an obvious turn. I wanted to publish it a week ago, but couldn’t catch up with my thoughts till today. So here it is.
Interesting times, when a niche company made a revolution but is now returning to the niche. I mean Apple, of course, with its revolutionary smartphone since 2007. They challenged SaaS. Indeed, Apple boosted S+S (Software + Services) with the Web of Apps. iPhone apps became so popular that it became a market strategy to implement the mobile app first, then the “main” web site. What is main, finally? It seems that mobile has become the main one. Right now I am not going to talk about which mobile is right: native or web. You can read my post ‘Native Apps beat Mobile Web’. Be sure that new wrist-sized gadgets will be better programmed natively than via the mobile web. So what have we got? Many startups and enterprises went mobile first. There is a good insight by Luke Wroblewski on ‘Mobile First’.
OK, Apple was first and best. Then they were first but not best. Now they are not even the best. It is ridiculous to wait five years to switch from the big connector hole to the smaller, mini-USB-sized Lightning hole… The battery-drain bugs are Microsoft- and Adobe-like. It is not the Apple of 2007 anymore. So what? Those flaws allowed competitors to catch up.
What is Apple’s main advantage over competitors? It is design. Still no one can beat the emotions evoked by Apple gadgets. There was another advantage – the first-mover advantage. What is the competitors’ main advantage? Openness. Open standards. Android & Java & Linux is a great advantage. Openness now beats aesthetics. Read below why.
iPhone & iOS & iTunes is a pretty closed ecosystem. If you are a fanatic who expects some openness in the future, you can wait and hope. But business goes on and bypasses inconveniences. The openness of Android allowed Facebook to design a brand-new user experience, called Facebook Home. Such creativity is impossible on iOS. I am not saying whether Facebook Home rocks or sucks. I am insisting on the opportunity for creativity. Android is simply a platform for creativity. For programming geeks. For design geeks. For other geeks. Be sure others will follow with concepts similar to Facebook Home. And it will happen on Android. Because tomorrow Android is First. Align your business strategy to stay in sync with the technology pace!
Who worries about Apple? Don’t worry, they are returning to the niche. Sure, there will be some new cool things like wrist-sized gadgets. But others are working on them as well. And the others are open. New gadgets will run Android. Android, whose UX is poor (to my mind), but which enabled creativity in others, who are capable of doing better UX. They have got the idea of Android First.
Enjoyed the Fermi tasks? They are all around us – plenty of cases to get trained on. Now that we know how to observe the Unknown from many perspectives, convert observations into numbers, classify observations into uniform sets, apply Fermi calculations within the sets, and assemble the estimate for the entire scope, it is time to pay more attention to the accuracy of the numbers. Probabilities really help. But in this Part III we will pay attention to other hidden tools that give us more wisdom. They are described below, unordered, section by section. Within every section I will give a practical application, so that we are all set by the end of this post.
How many alternative estimations did you do?
The principle of least effort postulates that animals, people, even well-designed machines will naturally choose the path of least resistance, or “effort”. There is a whole theory that covers diverse fields from evolutionary biology to webpage design. The direct relation to estimation is that in most cases there is only one alternative. Even if there are a few of them, they are surely variations of the primary one – because that was easier for the people who produced them. It was easier for you too. People are lazy, hence estimations suck. There were not enough independent alternatives to allow comparison at the end, when you judge the final numbers (via various expert-judgement methods). Hence, always force at least 5+ really independent paths of analysis and calculation to ensure your numbers matter at the end. Use managerial power to make people produce more independent alternatives if you are a manager. Force yourself to do so if you work solo. One more trick is to set a different starting point and let people follow their least-resistance path:) Your goal is to get alternative estimations, so apply such tricks to achieve it.
Paths of least resistance.
The magic of One.
The first-digit law states that in lists of numbers from many real-life sources of data, the leading digit is distributed in a specific, non-uniform way. According to this law, the first digit is 1 about 30% of the time, and larger digits occur as the leading digit with lower and lower frequency, to the point where 9 as a first digit occurs less than 5% of the time. It is known as Benford’s law. It is observed pretty often, though not all real-life sources obey it. Partially this is related to dynamics: somewhere, many years ago, it could have worked for certain data, or it will work in the future, when humanity adapts (or augments) the numeric system.
Examining a list of the heights of the 60 tallest structures in the world by category shows that 1 is by far the most common leading digit, irrespective of the unit of measurement. The same could be said about car weight (in the metric system). A person’s height is 1-point-something meters, less than 2m (in the metric system). The ticket price for a transatlantic flight is often 1K and something. So, it is ubiquitous.
Does it emerge in programming? For sure, yes! You see it when you calculate volumes of code, data, classes, methods, the number of people in an organization, etc. But we are interested in estimation. In estimation it is also observed: the first digit “1” prevails, whether in man-days or man-hours or cost expressed in money (e.g. 10,000+, or 1,000,000+ in dollars). Hence, always check what your first digit is, and if it is not “1”, double-check why not. Probably it is OK, but you will know for sure only after validation. On the other hand, if you applied a sufficient number of orthogonal calculations and your numbers confirm Benford’s law, consider it an additional argument that you did it right!
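A self-check of this kind is easy to script. A minimal sketch in Python (the per-subsystem estimate figures are invented purely for illustration):

```python
import math
from collections import Counter

def first_digit(x: float) -> int:
    # Scientific notation puts the leading digit first, e.g. "1.200000e+02".
    return int(f"{abs(x):e}"[0])

def benford_expected(d: int) -> float:
    # Benford's law: P(first digit = d) = log10(1 + 1/d)
    return math.log10(1 + 1 / d)

# Hypothetical per-subsystem estimates in man-hours.
estimates = [120, 1400, 180, 95, 1100, 160, 2300, 130, 1700, 140, 310, 19000]

observed = Counter(first_digit(e) for e in estimates)
share_of_ones = observed[1] / len(estimates)
print(f"Leading 1s: {share_of_ones:.0%} (Benford predicts {benford_expected(1):.0%})")
```

If the observed share of leading 1s lands far from the ~30% Benford predicts, it is not proof of an error, just a prompt to double-check, exactly as the text suggests.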
Distribution of first digits (in %, red bars) in the populations of the 237 countries of the world. Black dots indicate the distribution predicted by Benford’s law. More details and diagrams are on the wiki.
Sounds familiar? A 2x overrun compared to the initial estimation? This pattern has emerged in the computer world, or at least it became popular in the last 25 years. “The first 90 percent of the code accounts for the first 90 percent of the development time. The remaining 10 percent of the code accounts for the other 90 percent of the development time.” – Tom Cargill, Bell Labs. The pattern is called 90/90, or the ninety-ninety rule. It expresses both the rough allocation of time to the easy and hard portions of a programming project and the cause of the lateness of many projects (that is, failure to anticipate the hard parts). In other words, it takes both more time and more coding than expected to make a project work. How does it help you? Just multiply your number by a factor of 1.8. It increases the probability of successful execution within the estimation (effort/cost/budget, schedule). It could be applied at the stage when you have obtained estimations for the subsystems. Especially when you get an estimate with probability p50, use 90/90 to increase the probability to p80 and beyond. The rule could be applied on top of everything as well. It can’t be distilled to exact reasons; it is just observable, so respect it and use it as a tool for fine-tuning your numbers.
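The fine-tuning itself is one line; a tiny sketch, with the 200 man-days figure purely illustrative:

```python
def ninety_ninety_adjust(p50_estimate: float, factor: float = 1.8) -> float:
    """Inflate a p50 estimate toward p80+, per the ninety-ninety heuristic."""
    return p50_estimate * factor

# A hypothetical subsystem estimated at 200 man-days with ~50% confidence:
print(ninety_ninety_adjust(200))  # 360.0 man-days
```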
This is my favorite. As the aircraft designer Antonov said: “A beautiful aircraft flies perfectly.” It is my translation, but the essence is unchanged. Beautiful things work better. If a thing works pretty well but looks ugly, it means there is an opportunity to improve its design. If a beautiful thing works well, then probably that is it – the thing is at its evolutionary end; there is no way to improve its design. Back to estimation and computer systems. If the architecture (components, modules, network, database, deployment, etc.) visually sucks, it means one of two things: the representation sucks, or the subject itself sucks and must be improved. An elegant solution wakes up positive emotions. Your clients and partners will prefer elegant and beautiful solutions, even if they claim they need something quick and dirty. That is bullshit. We are all people, and when we go to the store, we do not buy quick-and-dirty stuff, neither pants nor shoes. We select something better, according to our taste. When we go to lunch, we may eat quickly, but definitely not a dirty meal. You’ve got the idea. The Unknown that you got for estimation should be perfectly beautiful by now, because you dissected it with your analysis, saw the true picture, understood the goals, desiderata, limitations, alternatives, etc., built an elegant solution, and estimated it properly. If not, then your analysis did not succeed; you overlooked something important. Go back and look at the unknown again. Until you see the beauty. After that you can estimate.
It is possible to estimate quickly. Just learn from Mother Nature: respect emergent patterns, recognize them in your computerized business, work, being. We cannot explain those patterns, neither prove them nor refute them. We are able to observe them and agree that they were, they are, and they will be around and within us. As you have probably understood, they are applicable not solely to estimation, but to everything. The binding to estimation was done to ease your estimation exercises.
This is Part II, continued. Part I is available here.
Divide and Conquer
I hope, by this time, it is clear that we are capable of discovering as many facets of the unknown as possible. The next step is to follow Caesar’s principle, as simple as “divide and conquer”, and pay more attention to the uniform sets [it’s Alexander the Great in the picture]. Circles are separated from squares. Within circles you comfortably deal with all circles, because they are all similar. Within squares you are good with analysis of the entire set too. Because the sets are uniform, by knowing one item you can reason about the entire set with a good probability that your conclusions will be correct. In the analysis of a software project or an entire solution (software + hardware + everything else) you will come up with many sets of uniform items. The picture below shows only circles and squares.
In real life you could apply different classifiers for separating objects of one type from objects of another type. This multiple choice of classification is depicted as separation lines. They are schematic, but they speak for themselves: there are options for how to isolate uniform things and group them together. In our industry those items are UI forms, function points, workflows, use cases, files, tables, users, B2B interfaces, coding, management, UX design, etc. Everything that takes place, independent of category. We apply independent, orthogonal views; that’s why the UX design activity is as good a measure as the number of tables or so. Recall how you looked at Saturn: you discovered it from all possible sides and did not overlook anything.
What to do with those sets of uniform items?
First, check what constitutes the scope of the project. Select those sets that correspond to 100 percent of the scope. If you did good analysis before, you will have alternatives for how to build the scope. Some sets will be reused, while others could be interchangeable. E.g. the set of B2B interfaces could be reused, while some core functionality could be hidden within the set of use cases or the set of workflows or the set of forms. And the more options you have, the better you did before! It will pay off.
Now we are ready to count the items within every set, then sum them all together and get the scope. Repeat the exercise for the other alternatives of the scope and come up with other results. Look at them and feel the numbers. If you did a good dissection of the uncertainty, your numbers will look similar. If not, go back and analyze more: you overlooked something important. This check is very easy, but it is extremely important for the trustworthiness of your numbers. At this stage it is important to get the range. E.g. are we dealing with tens, or hundreds, or thousands, or millions, and so on? Select 10^2, 10^3, 10^4, etc. as checkpoints and figure out into which range your initial estimate falls. All refinements come after that.
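The counting and the order-of-magnitude check can be sketched as follows (the set names and counts are invented for illustration):

```python
import math

# Hypothetical uniform sets discovered during analysis of one scope alternative.
scope_alternative = {
    "ui_forms": 42,
    "workflows": 18,
    "b2b_interfaces": 7,
    "db_tables": 65,
    "reports": 23,
}

total_items = sum(scope_alternative.values())
# Which 10^n..10^(n+1) checkpoint range does the total fall into?
order_of_magnitude = math.floor(math.log10(total_items))
print(f"{total_items} items -> range 10^{order_of_magnitude}..10^{order_of_magnitude + 1}")
```

Running the same few lines for every scope alternative and comparing the ranges is the cheap sanity check the text describes: if one alternative lands in hundreds and another in thousands, something important was overlooked.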
Who are you Dr. Fermi?
Is it even possible to calculate firmly?! Yes, it is. Dr. Fermi was a physicist; there is a wiki page on Enrico Fermi. He worked on the first nuclear reactor, nuclear bombs, and other dangerous stuff. Fermi received the Nobel Prize in Physics at the age of 37. I would like to highlight his excellent quick calculation of the strength of a nuclear explosion. Fermi was present as an observer at the Trinity test on July 16, 1945. As the shock wave hit Base Camp, Aeby [an engineer] saw Enrico Fermi with a handful of torn paper. “He was dribbling it in the air. When the shock wave came it moved the confetti. He thought for a moment.” Fermi had just estimated the yield of the first nuclear explosion. It was in the ballpark: Fermi’s strips-of-paper estimate was 10 kilotons of TNT; the actual yield was about 19 kilotons. Almost a 2x difference, but how fast he did it! You could do the same. Think like Fermi on your “paper pieces” within uniform bins and produce a good-enough calculation very quickly. Below are situations that are not as unknown as you might think.
There are many variations. The question about the Tarzan movie is simple, because you are instantly dealing with a uniform set of objects – frames. You simply take the typical duration of a movie, e.g. 1 hour; then the number of frames per second, which is either 25 or 30 (in different TV standards); multiply, and you are all set. With the Boeing it is more complicated, because the full scope does not consist of one set of uniform objects. You will have to calculate people (who are uniform), luggage (which is uniform), fuel (also uniform), the aircraft itself, etc. More complicated than the Tarzan movie, but doable. With the yellow-wheeled blue car the situation is different again. You could apply Google Image Search to find something similar and reverse engineer the car. It could be a Subaru Impreza STI, a crazy rally monster of a car. Remember: look from all possible dimensions. Do not limit yourself to a few of them. This is a methodology; I have just reminded you of it again.
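The Tarzan-movie calculation, spelled out as code (assuming a 1-hour movie at the 25 fps PAL rate, per the text):

```python
# Fermi-style estimate: frames in a feature film, from two uniform assumptions.
duration_seconds = 1 * 60 * 60   # assume a ~1 hour movie
frames_per_second = 25           # PAL standard; NTSC would be ~30

total_frames = duration_seconds * frames_per_second
print(total_frames)  # 90000
```

Swapping in 30 fps or a 1.5-hour duration moves the answer, but not the order of magnitude, which is exactly the point of a ballpark estimate.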
How can I improve the numbers?
Apply the Pareto principle within your sets! Check for the Pareto principle at the 100-percent-scope level. It is Nature at work in everything, because everything is a product of evolution. Computers and software systems are also products of our evolution. The same laws work everywhere, including the law of Normal distribution. If the number of items within your sets is sufficient (say, bigger than 50), then you can apply the Pareto rule and get better results. When you are OK with the calculation of every scope alternative, you can play with your final numbers. Look at the differences and judge why they happened. Include other people in the estimation, give them the same methodology, and compare results. Pay big attention to the biggest deviations and ignore similar numbers.
Other ways of refining the numbers lie in the methods of judgement. You could drop the min and max estimations and average the remaining ones, but this is too simple. I propose to plug probabilities in here. For every alternative estimate you should provide the probability that it is correct. The method for judging the probability is simple: modify the number until you are unsure whether the project is doable or undoable within the given effort. In other words, always find the p50 – the effort at which the project will succeed or fail with 50/50 probability. Then move to a higher probability of success. Stop where you think it is worth stopping. You will have various estimations with different probabilities, like p60, p90. Melt the numbers together again, giving more weight to the more probable numbers.
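Melting the numbers together with probability weights could look like this sketch (the alternative figures are invented; a probability-weighted mean is one simple way to give more weight to more probable numbers):

```python
def weighted_estimate(estimates: list[tuple[float, float]]) -> float:
    """Blend alternative estimates, weighting each by its judged probability.

    `estimates` is a list of (effort, probability) pairs,
    e.g. (400 man-days at p50) -> (400, 0.5).
    """
    total_weight = sum(p for _, p in estimates)
    return sum(effort * p for effort, p in estimates) / total_weight

# Hypothetical alternatives from independent analysis paths:
alternatives = [(400, 0.5), (520, 0.6), (610, 0.9)]
print(round(weighted_estimate(alternatives), 1))
```

The p90 alternative pulls the blended number toward itself, which is the intended behavior: confidence earns weight.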
How can I get even better numbers?
Make sure you have looked for an analogous project done within your organization or in your previous experience. Use those numbers (adjusted) as one additional estimate. Use estimation tools, such as QSM SLIM, if you have a license. Melt those numbers together with the other alternatives you calculated. There are more ways to improve the numbers even further, without expensive tools like COCOMO or SLIM. I will tell you about them in the next post. For now, try to calculate the number of apples in the world:)
I decided to share my research here, first published somewhere at the beginning of 2010. The motivation is to teach others how to do ballpark estimations very quickly. This is extremely important in a consulting job, when you have to calculate really quickly and accurately.
Table of Contents
What is Unknown? What Unknown is?
Some mechanisms in our brains classify things as Unknown. It happens for multiple reasons. The brain classifies as Unknown everything that we call Unclear, Invisible, Complicated, Scary, and so on. Below are visualizations of what I mean.
Unclear. What do you see in this Martian picture: a face or a pyramid? Unclear again. What sign do you see?
Invisible. What do you see below? A nude woman or an elephant?
Human perception works in such a way that we see in detail what is in front of our nose, then almost without detail what is to the side, and very blurrily what is distant from us. This applies to all objects that we look at. The people below see three modules: the structure of the first module in detail, some high-level structure of the second module, and almost nothing of the distant third module. In our case, this is an abstract representation of some real-life objects, e.g. an airport with multiple terminals, an enterprise software system, etc.
Complicated. Are you comfortable with your understanding of the diagram?
Scary. Scary leads to unknown, because it blocks the brain from creative thinking. The unpleasant, non-aesthetic, and negative is processed less efficiently than positive things. I could be wrong with the biological terminology, but the fact that positive things work better is widely researched in the UX (User Experience) field and is confirmed by such gurus as Don Norman. It is Marilyn Manson in the picture below, not Don:) And many people consider him scary. Though the music is good: I played “Rock is Dead” and “Sweet Dreams” with pleasure. Life is strange indeed. Nevertheless, scary things lead to, or nurture, the unknown.
Why Unknown is a Top Problem in Estimation?
There are obvious quick answers; an incomplete list is here. The IT industry evolves very fast. Industry people hardly keep pace with the evolution. Things that have passed by fall into the Unknown automatically. Many youngsters are involved in the work; as in any other industry, all youngsters are inexperienced. What is pretty straightforward for mature wolves happens to be unknown for a newbie. Answers are required for yesterday. Sounds familiar? Or another situation: everything is almost understandable, but there are nuances – it is for a different business domain. That domain’s specifics redefine the whole picture, and known things fall into the unknown category very quickly. The match of skills and problems (or the mismatch of them) leads to more unknowns. Add your own reasons from your experience here! Come on! I would conclude this section by saying that the unknown is relative, but permanent. Relative means that for somebody it is unknown, for somebody else known. Permanent means the persistence of the problem in the IT industry. It was here yesterday, it is present today, and tomorrow we will face it again. C’est la vie.
How to Look at the Unknown?
If we have concluded that the problem is recurring, then we need a way to deal with it. How to tame the unknown? You should look at it as you would look at a diamond. Look at the unknown from all possible facets. Rotate it and look again. Bring it close to your eyes and look. Put it one meter away from you and look. Look in the morning and in the afternoon. Look together. Look, look, and look! This is the solution. Actually, it leads to the solution. Let’s do an exercise, a small workshop for you.
Workshop for You. Saturn!
If you are not an astronomer, this should be a good unknown thing for you. Its name is Saturn. The planet Saturn. To make any decisions about Saturn you have to know at least something about it. What could you tell me about Saturn? Quickly and reliably? Look.
OK, some things you really see. Some things you know from your school memories. It is a sphere and there is a ring around it. Most people would stop here. But we go further! The sphere is not uniform. We should notice this and remember it, because it might affect some decision making in the future. It is always important to classify uniform things and deal with them separately. Do not mix apples and dogs. Divide and conquer: isolate the apples into one bin and deal with them there; isolate the dogs into another bin and deal with them there. Well, back to Saturn. There are many rings, not just one! This, too, is the result of just slightly deeper observation of the same image. Invest slightly more time in analysis and it will pay off. But look differently from how the majority of people look. Dig into all possible details that may (or may not) lead to a better context for decision making afterwards.
Good, we came up with some info. What else? How do we get even more info? The solution is to look. Look again. Look in a different spectrum! Use the infrared spectrum, for example. Perhaps you will see something new?
Wow, the temperature differs across the planet. Almost like on our Earth! Notice this fact. It is a sphere, like Earth. It has a temperature distribution, like Earth. The principle of analogy could be applied here. It is the easiest principle, and it can help you a lot, but there is a big risk if you apply the principle where there is no analogy. Hence, you need more details to be sure there is an opportunity to apply the analogy.
What else? As we have mentioned our mother Earth, let’s benefit from it. Look again, look so that you see Earth and Saturn together. What could you tell now about Saturn?
Mamma mia! Saturn is huge! This observation brings numbers for the diameter, mass, density, etc. Just take the known Earth numbers and calculate the Saturn numbers based on the ratios you observe. The temperature from the infrared spectrum could also be converted into numbers. So, after a few different looks you’ve got plenty of numbers. And numbers are good: we can calculate with them. Exactly what we need during estimations. We need numbers, and ranges of them, to play with. The interim conclusion that should be mentioned here is orthogonality. We have used three orthogonal views of the same thing. Orthogonal means independent. We applied independent looks, grabbed some information from them, and converted it into numbers.
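The ratio trick as a quick calculation (the visual diameter ratio of ~9.5 is my own rough read of such a side-by-side image, an assumption; Earth’s diameter is the known anchor):

```python
# Scale a known Earth number by the ratio observed in the side-by-side image.
earth_diameter_km = 12_742          # known reference value
observed_diameter_ratio = 9.5       # rough visual estimate from the image

saturn_diameter_km = earth_diameter_km * observed_diameter_ratio
print(f"~{saturn_diameter_km:,.0f} km")
```

A purely visual guess lands within a few percent of Saturn’s actual equatorial diameter of roughly 120,500 km, which is exactly the kind of accuracy a ballpark look can buy.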
Good so far. Moving forward. Can you guess what else is possible? A lot of hidden views are still available. Let’s use them to get more information in the same context. Look closer. Zoom in.
It is alive! Saturn is not a dead stone. There is some spiral movement. A kind of cyclone? Something is going on there. For us it means dynamics. We might count on it later during some decision making. Several more looks at Saturn…
Now you should catch the idea. It is really possible to squeeze much more information from the given context. It is possible to See the Unknown. It is possible to convert new information into numbers and use them in your further calculations. You can observe a lot. Then you should divide and conquer the observations. Then work within homogeneous sets. It makes analysis easier. Substitute the word “Saturn” with the word “Project” or “Prospect” and apply this at work. You will immediately see a lot more useful info about your Projects and Prospects. For sure, you will be able to use that info properly. Ain’t it fun?
Who are you, Dr. Fermi?
Look forward to the next post. It will be Part II. See you here soon!
The presenter from Bitly was announced with “she will share pure geo porn”.
Bitly, by Hilary Mason, Chief Scientist
time == f(space), or more precisely, time == f(distance); now, with Twitter, time == f(attention)
2h on Twitter to be clicked. 7h on YouTube to be clicked. Egypt stats showed the Internet down – revolution, clearly. Bitly has clicks. All this data is generated by clicks. They have time too, hence they analyze real-time clicks. Social data and location data are noisy filters. New geographies of social data. Plus time.
Across the WHERE conferences, the trend has moved from ‘how to collect data’ to ‘how to make an impact’. What is the biggest surprise at WHERE this year? Or the biggest unsurprise? People are changing behavior based on geolocation data; 2-3 years out, per Waze. Smart cities. Sensors. M2M between the phone and the building. Cities will have APIs. The interplay of different kinds of sensors is exciting, per Bitly. A voice interface to reduce cognitive load during interaction, per a VC. Greater and smarter context. Time to build new innovation. People assume that the current model will stay (e.g. search). Does the current search paradigm make any sense on my device? We need ‘smartness’. BodyMedia was mentioned in the talk about smart things. Real-time exchange: what will your context be along the way, in real time? The mobile phone is a trusted identity, but losable, stealable. Solve problems for real people, make them come back. Foursquare, Yelp, Facebook, Twitter generate data from what users put on top of them. Apple has a clear biz model of selling devices. Do you own your own data? Facebook is not a geo powerhouse. The next geo powerhouse will be social; Twitter is on the list. eBay is interesting with its recent acquisitions of RedLaser and Milo.
My conclusion: WHERE 2011 was stronger than WHERE 2012.
A 5,000-year-old idea.
Post a request and you will be connected with someone who can fulfill it. Pay only when you get the goods. Tell us what you want to sell, and you will be notified anytime someone asks for it. They are now building an environment for safe buying and selling between people. It is a meta-marketplace. Crazy requests don’t work, but if you ask for something sane, it works. Ask for what you want and how much you can pay, and the work will be done by somebody near you. Some similarity to TaskRabbit + craigslist.
Within a few days, transactions reached $10K per day. The product has changed since then: it became 'on-purpose' instead of being 'the necessity'. It started as a native iOS app. After two weeks, questions such as 'why not HTML5?' were raised! After a year, HTML5 won in terms of deployment, supporting both iPhone and Android.
The first 3 months were about decision making: why did we deploy native first? Why do we do HTML5 now?
Who uses the native iPhone Facebook app today?!
A UX issue exists. The same Tab Bar sits at the top of the screen on both iPhone and Android. For Android that is OK, as it conforms to the Android UI guidelines. But on the iPhone it is a style problem: the Apple HIG specifies the Tab Bar to be at the bottom. Hence, Mobile Web doesn't work if you want to be 100% conformant with every platform. Having two codebases (one for Android, one for iPhone) is a solution, known as Dedicated Web: almost the same code, but different positioning of the Tab Bar (and other UI elements that are religiously different). Or brand your bar, use it always in the same place, and f..k the guidelines.
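The "almost the same code" point above can be sketched in a few lines: one shared web codebase sniffs the platform and moves the Tab Bar accordingly. This is a minimal illustration, not the app's actual code; the function and class names are my own assumptions.

```javascript
// Minimal sketch: choose the Tab Bar position per platform from the user agent.
// Per the guidelines discussed above: iPhone wants the bar at the bottom (Apple HIG),
// Android of that era wants tabs at the top.
function tabBarPosition(userAgent) {
  return /iPhone|iPad|iPod/i.test(userAgent) ? "bottom" : "top";
}

// In the page, the shared codebase would just toggle a (hypothetical) CSS class:
// document.querySelector(".tabbar")
//   .classList.add("tabbar--" + tabBarPosition(navigator.userAgent));

console.log(tabBarPosition("Mozilla/5.0 (iPhone; CPU iPhone OS 5_0 like Mac OS X)"));
console.log(tabBarPosition("Mozilla/5.0 (Linux; Android 4.0.3; Mobile)"));
```

Everything else stays in one codebase; only the platform-sensitive placement differs, which is exactly the Dedicated Web trade-off.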
INTRO highlights from four panelists.
Geomedicine is emerging. Location data is needed "to make medical records more enriched". Environment is a critical player, and environment relates significantly to geography. A new medical framework is needed.
Place information is to be brought into the story. Bring mobility into the story, as location traces. Location over time (physical activity) matters. How do I communicate about a chronic disease? Traces of daily lives; pulling features out of daily traces. Looking for standardized platforms to use, and for mobilized innovations in healthcare.
Intersecting mobile activity data to deconstruct, genome-style, why people do what they do and how they do it. Get data from telecoms, Twitter, Facebook and analyze it. Track diseases from social and location networks. Incorporate the results into doctors' diagnosis tools.
Tools for healthy behavior change. How tools change daily behavior patterns. Places, environment, time, etc. matter. How to utilize technology in healthcare. Location data seems very attractive for unlocking hidden patterns in disease and people data.
APPLICATIONS of new approaches to healthcare.
Eating. If somebody is prohibited from eating fast food, the app will discover that she goes to Taco Bell and warn her not to eat there. Maybe the app will even tell the doctor that the patient is violating the nutrition rules. Therapy tools: tools that keep track and make recommendations. The tool knows you spent 5 hours in a shopping mall and how it affects your insomnia. There are 20,000 toxic places in the States, and 10M pounds of chemicals are produced per day... it is an issue for the environment and for public health. You are informed, you make decisions. Further: how do we bring other specific points of data into the framework?
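The fast-food warning above is essentially a geofence check: does a location fix fall within some radius of a prohibited venue? A hypothetical sketch, where the venue coordinates and the 75-meter radius are illustrative assumptions, not anything from the panel:

```javascript
const EARTH_RADIUS_M = 6371000;

// Great-circle distance between two lat/lon points (haversine formula), in meters.
function haversineMeters(lat1, lon1, lat2, lon2) {
  const toRad = (d) => (d * Math.PI) / 180;
  const dLat = toRad(lat2 - lat1);
  const dLon = toRad(lon2 - lon1);
  const a =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLon / 2) ** 2;
  return 2 * EARTH_RADIUS_M * Math.asin(Math.sqrt(a));
}

// Return the first prohibited venue the fix is inside, or null.
function checkVisit(fix, venues, radiusM = 75) {
  return venues.find(
    (v) => haversineMeters(fix.lat, fix.lon, v.lat, v.lon) <= radiusM
  ) || null;
}

// Illustrative usage with made-up coordinates:
const prohibited = [{ name: "Taco Bell", lat: 37.0, lon: -122.0 }];
const hit = checkVisit({ lat: 37.0, lon: -122.0 }, prohibited);
if (hit) console.log("Warning: you are at " + hit.name);
```

The real app would of course pair this with a venue database and a notification channel to the patient (and possibly the doctor), but the core trigger is this one distance test.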
BARRIERS in geomedicine
Privacy in healthcare is a huge barrier. So is being overwhelmed with information. He-he, BigData comes to healthcare :) Making sense of the data. The profession of data scientist comes to healthcare. An ecosystem is needed.
Location is an opportunity. In the long term: understanding human activity at the level of context, to encourage doing something. Change people's awareness and attitude. Autonomous tracking of behavior patterns (sensors are not really working yet). Where do you go? Whom do you contact? You can understand people and their health-related behavior: e.g. whether you eat in front of the TV, how often you eat at home, how fast you eat, the time of eating, gym facilities. Interventions are bound to the context: what exactly to push to the patient. Indoor location within hospitals and clinics. The geodistance of your day. These things impact medication; analysis of daily and weekly traces gives the answers. Parkinson's disease is location-specific. Self-diagnosis via summarizing information. For medical tools, "one size does not fit all anymore": different personalities, and research has been done on this.
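"The geodistance of your day" has a straightforward reading: sum the great-circle hops between consecutive fixes in a day's location trace. A small sketch of that idea; the trace format (an array of `{lat, lon}` points) is my own assumption for illustration:

```javascript
const EARTH_RADIUS_M = 6371000;

// Great-circle distance between two lat/lon points (haversine formula), in meters.
function haversineMeters(lat1, lon1, lat2, lon2) {
  const toRad = (d) => (d * Math.PI) / 180;
  const dLat = toRad(lat2 - lat1);
  const dLon = toRad(lon2 - lon1);
  const a =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLon / 2) ** 2;
  return 2 * EARTH_RADIUS_M * Math.asin(Math.sqrt(a));
}

// Total distance traveled along a day's trace of {lat, lon} fixes, in meters.
function dayGeodistanceMeters(trace) {
  let total = 0;
  for (let i = 1; i < trace.length; i++) {
    total += haversineMeters(
      trace[i - 1].lat, trace[i - 1].lon,
      trace[i].lat, trace[i].lon
    );
  }
  return total;
}
```

From daily totals like this, weekly summaries fall out by simple aggregation, which is where the "analysis of daily and weekly traces" mentioned above would start.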
Conclusion for mHealth: Public Health, Disease Management, and Mobile EMR were confirmed as the distinguished categories (out of ~10 others) of mHealth solutions.