Monthly Archives: April 2012

Abstraction in Design

The problem.

To design better software (and software-hardware solutions) one needs to properly abstract the business. A good, adequate abstraction of real life leads to good system design. And vice versa: bad design is at the origin of failures to computerize something. Who is guilty? The one who took responsibility for the design, because everyone else follows that first person.

What is most important in software design?

Many bright people fail to answer this question properly. It is the burden of poor education in computer courses. I have surveyed hundreds of people around the world on how they understand the design discipline. 98% understand it too shallowly. Many will talk for hours about inheritance or encapsulation, templates and whatever else, but almost nobody will say a word about abstraction.

Abstraction is the most important thing in design: how we abstract real life, real business. What is a proper model, abstracted from the physical (hard) world? My observations suggest that the best sense for the right abstraction belongs to graduates in physics and chemistry, not in computer science. A paradox, but it is reality. If you have a physicist designing the software system, you are lucky: your project will not fail:) Physicists have an integral vision of the world; they apply better abstractions than others. Sorry guys, but I can confirm this with my deliberate observations in the industry over ~15 years.

So, abstraction is the most important word. Abstraction is the most important skill. Everything starts with abstraction. Abstraction does not depend on your favorite design method, because it is present in Object-Oriented Design, Design by Contract, Test-Driven Design, Model-Driven Design, Structural Design, Procedural, Functional, Entity-Relationship, etc. Just take the list from Wikipedia and confirm that abstraction is the first step of design. You could abstract into a structure, into a function, into a compilation unit, into a namespace, into a family of algorithms, etc. This post is more high-level, so let me switch to the dilemma: what are you dealing with during the abstraction process?

Abstraction levels.

Abstraction leads to different approaches (and decisions). Hence you should be sure you are applying your abstraction at the correct level. Doing that at an inappropriate level may lead to severe design flaws. Let me visualize those abstraction levels for you.

You might notice that many things can be abstracted into some kind of hierarchy that resembles a tree. It is natural: we have trees in nature. We have the tree as an organizational structure. We have hierarchies of classes, and so on. Below is a picture of a tree.


But is a tree always a tree? At the abstraction level we just looked at, the answer was yes. We can confirm trees in real life; we all occupy positions in organizations. There is a tree, no doubt. OK, let's look at the tree closer: zoom in and look again.


It is a mesh! It is not a tree structure anymore. It has changed its structure from tree to mesh, or grid. We are dealing with the same wood, but at a different abstraction level. At this level, a different model will be designed to reflect the specifics of the thing. A mesh has its own patterns and rationales. If your task is better solved at this level, then forget the tree and design it as a mesh. All you need is to double-check that you are really at this abstraction level. I am just opening your eyes to the fact that there is a different level, FYI. It is observed in real life too. Take Enterprise 2.0. It is all about new connectivity within the organization. The static tree of the organizational structure remains, but new relations are established, and they resemble a mesh. Even if they look like horizontal trees, overlapping them with the tree of positions gives you a mesh. Hence if you work on an Enterprise 2.0 abstraction, you have to consider the mesh instead of the tree. Think of other examples yourself; I can confirm there are many of them around you. Well, what about looking even more closely at the same thing? Zoom in again.
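To make the level shift concrete, here is a minimal Python sketch of the same people modeled at two abstraction levels: a strict reporting tree and a collaboration mesh that cuts across it. The names and links are invented for illustration.

```python
# Same people at two abstraction levels (invented names and links).
org_tree = {"dev1": "lead", "dev2": "lead", "lead": "cto"}  # child -> parent

collab_mesh = {  # peer links that appear when you "zoom in"
    "dev1": {"dev2", "lead"},
    "dev2": {"dev1", "cto"},
}

def has_cross_links(tree, mesh):
    """True if the mesh holds an edge that is not a parent-child edge of the tree."""
    tree_edges = set(tree.items())
    return any(
        (a, b) not in tree_edges and (b, a) not in tree_edges
        for a, peers in mesh.items()
        for b in peers
    )
```

Here `has_cross_links(org_tree, collab_mesh)` is `True`: the dev1-dev2 link exists only at the mesh level, so a pure tree model would silently lose it.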


Oops, it is the tree again! What is going on? You just did abstraction at one more level and got a different picture than on the previous level. Yes, there is a tree again. Small, but a tree (sounds like "Sad but True" by Metallica). A sample from real life? Take the same example, the Enterprise 2.0 initiative within an organization. There could be a mesh between departments or groups, but a pure tree within a small group. And it is a common situation. OK, just for fun, to demonstrate even more abstraction levels:) Return to the initial tree and zoom out.


Wow, it looks like a grid again. We switched context from the tree to the grid one more time. We got the same metamorphosis while going in the opposite direction: we zoomed out. Everything is cool, no reason to panic. It is just one more abstraction level. We are still dealing with the wood, but how different it is! What else? Let's look below the ground level!


It looks like a second tree below the ground. A tree, if you work at this abstraction level. We all know that the roots could be a mesh if analyzed closer, and then a tree again, and so forth. I'd like to mention here the relation to another pattern, called symmetry. We are not talking about precise or imprecise symmetry today. We are playing with the trees and the meshes. Some other combinations remain. You can play with them yourself. Enjoy.


Remember that a single abstraction level is insufficient. Always look at the real thing you are going to model from different abstraction levels. The better you dissect the views into different abstraction levels, and the better the design patterns you apply at every level, the better the design you can produce. After understanding this fundamental concept, your designs will definitely get better. Happy designing! And sorry if you are not a physicist.


Web 3.0

What is the future of the Web?

Is it the Semantic Web, as somebody called it a long time ago? Spend a few minutes reading the quite diverse definitions of Web 3.0 on the wiki and then return here. Nobody argues with all those predictions; all of them will happen at some point in the future. My favorite prediction is something like what Kevin Kelly made public at the end of 2007, called "The Next 5,000 Days of the Web".

All those devices and sensors that will suck data into the web are related to our mobile devices. From Mobile World Congress 2012 I brought back information, announced by Eric Schmidt, that soon we will have 50,000,000,000 connected devices. Just imagine that number: almost ten devices per person. It is really huge!

But what do we have today?

Today we see the boom of mobile apps. It is similar to the boom of apps for PCs 20-25 years ago. History repeats itself, just at a slightly different level. Now we have an app boom for devices smaller than the PC. Years ago we had a premium vendor of the app platform, Apple, and a commoditizer, Microsoft. Today we have the same: the premium vendor is Apple, and the new commoditizer is Google/Android. But the big picture is similar: the apps are booming, and there is a brand-new community of their developers and users. New business models are emerging for how to monetize this new boom.

How is this related to the Web at all? The web is in place, it is inevitable and we are all in the web, but there are nuances;) Surfing the web with the Mobile Web is not the same as using a Native App. For business applications the Mobile Web is the logical choice; it smoothly substitutes for awkward MEAP solutions. It is no surprise that Gartner did not identify any MEAP vendors as Leaders in its Magic Quadrant. There are niche players and visionaries, but there are no leaders. It was not easy; many walls were broken by the Mobile Web. Enterprises love the Mobile Web; it has emerged and is gaining popularity. Is it Web 3.0? What is the difference between a web app for desktop, tablet, and phone? There is almost none. Just a few additional features available from the browser, like geolocation, camera and so on. But the delivery model is the same, SaaS-like, familiar from PC times. Hence it is not a revolution worth the name Web 3.0.

Revolution happened.

The revolution seems to be this application boom on modern phones and tablets. It smells like a revolution. It is observable in apps like Instagram. Believe it or not, Instagram was a threat to Facebook! Initially people published photos on Flickr or Picasa and sent the link to friends and colleagues to share them. With Facebook's photo sharing feature this got simplified: you just upload photos and they get shared automatically within your network. No need for Flickr or Picasa anymore? Then came Instagram, with the opportunity to take pictures with the phone, apply some cool effect, and instantly share, without connecting the device to a PC and without that annoying bulk upload. Instagram has a backend synthesized from Facebook and Twitter, which is cool for the user. You don't need Facebook anymore to share your pictures! Bingo!

OK, Instagram is cool; Facebook even bought it to kill it as a competitor… But where is the web there? It is in Web Services. There is a very rich and powerful web, full of clouds and web services. As Jeff Bezos once said, the future of the web was in Amazon Web Services. It is. We have got a very popular S+S model, with a native app on the phone/tablet and a back end on AWS or the like. There is a good report by Vision Mobile, "Apps is a New Web", dated 2010. We have got new ways of discovering useful things, a brand-new UX, new monetization models. Enough arguments to call it the New Web. Maybe not Web 3.0, but it is definitely no longer Web 2.0.

To HTML5 Believers.

Those who hope for HTML5 as a standard, and a return to the good old SaaS approach, can be pleased that for enterprises this works even today and will work tomorrow. But for non-enterprise users it is not the case. First of all, standards need a few years (up to 5) to mature; only after that does wide adoption happen. Second, hardware will evolve too. Web technologies will not keep pace with hardware evolution. Have you heard about the new sensors planned for the new iPhone? E.g. the infrared camera patent filed by Apple recently. It will serve DRM, like preventing recording of a live show. It will identify objects by infrared tags, instead of ugly QR tags. Infrared tags are invisible to people, which makes them better, because they do not spoil the look of the object. OK, back to the infrared sensor: do you think web tools like HTML will support the infrared camera tomorrow? I think not. I would even bet they will not. The pace of hardware is fast, and web technologies will be a few steps behind.


We have entered Web 3.0

New sensors like the infrared camera will be added to phones and tablets in the future. Other devices will emerge too. Recall the 50,000,000,000 connected devices. There is no easy way to apply SaaS to all of them. There is a strong M2M trend, observed in recent years. It is not Web 2.0 anymore. We started from user apps; now we are descending to machine apps too… It is really something brand new. I propose to call this new era Web 3.0. For the semantic web we can choose another name, when it comes. So far we are within something new, and instead of calling it the New Web, let's call it Web 3.0.



Discovered that my quick assessment of modern mobile UX, published in early 2010 on SlideShare, got 33,681 views.
Decided to do two things:

  1. Share it with you [embedded below]
  2. Design and publish up-to-date one [don’t promise to do it tomorrow]

Estimate @ Speed of Thought, Part III

This is Part III, continued. If you landed right here, it is recommended that you read Part I and Part II to grasp the entire concept.

Did you enjoy the Fermi tasks? They are present all around us; there are plenty of cases to train on. Now that we know how to observe the Unknown from many perspectives, convert observations into numbers, classify observations into uniform sets, apply Fermi calculations within the sets, and assemble the estimate for the entire scope, it is time to pay more attention to the accuracy of the numbers. Probabilities really help. But in this Part III we will pay attention to other, hidden tools that give us more wisdom. They are described below, unordered, section by section. Within every section I bring a practical application, so that we are all set by the end of this post.

How many alternative estimations did you do?

The principle of least effort postulates that animals, people, and even well-designed machines will naturally choose the path of least resistance, or "effort". There is a whole theory that covers diverse fields, from evolutionary biology to webpage design. The direct relation to estimation is that in most cases there is only one alternative. Even if there are a few of them, they are for sure variations of the primary one, because that was easier for the people who did it. It was easier for you too. People are lazy, hence estimations suck. There were not enough independent alternatives, sufficient for comparison at the end, when you judge the final numbers (via various expert judgement methods). Hence, always force at least 5+ really independent paths of analysis and calculation to ensure your result matters at the end. Use managerial power to make people produce more independent alternatives if you are a manager. Force yourself to do so if you are working as a solo virtuoso. One more trick is to set different starting points and let people follow their least-resistance paths:) Your goal is to get alternative estimations, so apply such tricks to achieve it.


Paths of least resistance.

The magic of One.

The first-digit law states that in lists of numbers from many real-life sources of data, the leading digit is distributed in a specific, non-uniform way. According to this law, the first digit is 1 about 30% of the time, and larger digits occur as the leading digit with lower and lower frequency, to the point where 9 as a first digit occurs less than 5% of the time. It is known as Benford's law. It is observed pretty often, though not all real-life sources obey it. Partially this is related to dynamics: somewhere, many years ago, it may have worked for certain data, or it will work in the future, when humanity adapts (or augments) the numeric system.

Examining a list of the heights of the 60 tallest structures in the world by category shows that 1 is by far the most common leading digit, irrespective of the unit of measurement. The same can be said about car weight (in the metric system). People's height is 1-point-something meters, less than 2 m (in the metric system). The ticket price for a transatlantic flight is often 1K and something. So, it is ubiquitous.

Does it emerge in programming? For sure, yes! When you calculate volumes of code, data, classes, methods, the number of people in an organization, etc. But we are interested in estimation. In estimation it is also observed: the first digit "1" prevails, whether in man-days or man-hours or cost expressed in money (e.g. 10,000+ or 1,000,000+ in dollars). Hence, always check what your first digit is, and if it is not "1", double-check why not. Probably it is OK, but you will know for sure only after validation. On the other hand, if you applied a sufficient number of orthogonal calculations and your numbers confirm Benford's law, consider it an additional argument that you did it right!


Distribution of first digits (in %, red bars) of the populations of the 237 countries of the world. Black dots indicate the distribution predicted by Benford's law. More details and diagrams on the wiki.
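The Benford check is easy to automate. Below is a minimal Python sketch that extracts leading digits from a list of estimates (the sample numbers are invented) and computes Benford's predicted frequency log10(1 + 1/d) for comparison.

```python
import math
from collections import Counter

def leading_digit(n):
    """First digit of a positive number, e.g. 19000 -> 1."""
    n = abs(int(n))
    while n >= 10:
        n //= 10
    return n

def benford_expected(d):
    """Benford's predicted frequency for leading digit d (1..9)."""
    return math.log10(1 + 1 / d)

# Invented sample of effort estimates (man-days) to illustrate the check.
estimates = [120, 1400, 95, 180, 11000, 300, 175, 1900]
observed = Counter(leading_digit(e) for e in estimates)
```

In this toy sample the digit 1 leads 6 of 8 estimates, versus Benford's predicted ~30%; with so few numbers that is not alarming, but on a large estimate sheet a strong deviation from the predicted curve is a signal worth double-checking.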

1.8x more

Sounds familiar? A 2x overhead compared to the initial estimation? This pattern emerged in the computer world, or at least it became popular over the recent 25 years. "The first 90 percent of the code accounts for the first 90 percent of the development time. The remaining 10 percent of the code accounts for the other 90 percent of the development time." Tom Cargill, Bell Labs. The pattern is called 90/90, or the ninety-ninety rule. It expresses both the rough allocation of time to the easy and hard portions of a programming project and the cause of the lateness of many projects (that is, failure to anticipate the hard parts). In other words, it takes both more time and more coding than expected to make a project work. How does it help you? Just multiply your number by a factor of 1.8. It increases the probability of successful execution within the estimation (efforts/cost/budget, schedule). It can be applied at the stage when you have obtained estimations for subsystems. Especially when you get an estimate with probability p50, use 90/90 to increase the probability to p80 and beyond. The rule can be applied on top of everything as well. It cannot be distilled to exact reasons; it is just observable, so respect it and use it as a tool for fine-tuning your numbers.
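The 1.8x adjustment is purely mechanical; a tiny Python sketch (subsystem names and numbers are hypothetical) applying it to a set of p50 subsystem estimates:

```python
def ninety_ninety(p50_estimates, factor=1.8):
    """Inflate p50 effort estimates by the empirical 90/90 overrun factor."""
    return {name: effort * factor for name, effort in p50_estimates.items()}

# Hypothetical subsystem estimates in man-days
adjusted = ninety_ninety({"ui": 10, "backend": 30, "integration": 20})
```

The same factor can be applied once on top of the project total instead; the point is to make the adjustment explicit rather than buried in someone's head.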

Elegance, Beauty

This is my favorite. As the aircraft designer Antonov said: "A beautiful aircraft flies perfectly." It is my translation, but the essence is unchanged. Beautiful things work better. If some thing works pretty well but looks ugly, it means there is an opportunity to improve its design. If some beautiful thing works well, then probably that is it: the thing is at its evolutionary end, and there is no way to improve its design. Back to estimation and computer systems. If the architecture (components, modules, network, database, deployment, etc.) visually sucks, it means one of two things: the representation sucks, or the subject itself sucks and must be improved. An elegant solution wakes up positive emotions. Your clients and partners will prefer elegant and beautiful solutions, even if they claim they need something quick and dirty. That is bullshit. We are all people, and when we go to the store, we do not buy quick and dirty stuff, neither pants nor shoes. We select something better, according to our taste. When we go to lunch, we may eat quickly, but definitely not a dirty meal. You've got the idea. The Unknown that you got for estimation should be perfectly beautiful by now, because you dissected it with your analysis, saw the true picture, understood the goals, desiderata, limitations, alternatives, etc., built an elegant solution, and estimated it properly. If not, then your analysis did not succeed; you overlooked something important. Go back and look at the unknown again, until you see the beauty. After that you can estimate.



It is possible to estimate quickly. Just learn from Mother Nature: respect emergent patterns, recognize them in your computerized business, work, being. We cannot explain those patterns, neither prove nor disprove them. We are able to observe them and agree that they were, they are, and they will be around and within us. As you probably understood, they are applicable not solely to estimation, but to everything. The binding to estimation was done to ease your estimation exercises.


Estimate @ Speed of Thought, Part II

This is Part II, continued. Part I available here.

Divide and Conquer

I hope by this time it is clear that we are capable of discovering as many facets of the unknown as possible. The next step is to follow Caesar's principle, which is as simple as "divide and conquer": apply more attention to the uniform sets [it's Alexander the Great on the picture]. Circles are separated from squares. Within circles you comfortably deal with all circles, because they are all similar. Within squares you are good with the analysis of the entire set too. Because the sets are uniform, by knowing one item you can reason about the entire set with a good probability that your conclusions will be correct. In the analysis of a software project or an entire solution (software + hardware + everything else) you will come up with many sets of uniform items. The picture below shows only circles and squares.

In real life you could apply different classifiers for how to separate objects of one type from objects of another type. This multiplicity of classification choices is depicted as separation lines. They are schematic, but they speak for themselves: there are options for how to isolate uniform things and group them together. In our industry those items are UI forms, function points, workflows, use cases, files, tables, users, B2B interfaces, coding, management, UX design, etc. Everything that takes place, independent of categories. We apply independent, orthogonal views; that's why the UX design activity is as good a measure as the number of tables or so. Recall how you looked at Saturn: you discovered it from all possible sides and did not overlook anything.

What to do with those sets of uniform items?

First, check what constitutes the scope of the project. Select those sets that correspond to 100 percent of the scope. If you did a good analysis before, you will have alternatives for how to build the scope. Some sets will be reused, while others could be interchangeable. E.g., the set of B2B interfaces could be reused, while some core functionality could be hidden within the set of use cases, or the set of workflows, or the set of forms. And the more options you have, the better you did before! It will pay off.

Now we are ready to count the items within every set, then sum them all together and get the scope. Repeat the exercise for the other alternatives of the scope and come up with other results. Look at them and feel the numbers. If you did a good dissection of the uncertainty, your numbers will look similar. If not, go back and analyze more; you overlooked something important. This check is very easy, but it is extremely important for the trustworthiness of your numbers. At this stage it is important to get the range: e.g., are we dealing with tens, or hundreds, or thousands, or millions, and so on. Select 10^2, 10^3, 10^4, etc. as checkpoints and figure out into which range your initial estimate falls. All refinements come after that.
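The power-of-ten checkpoint can be computed directly; a small Python helper (a sketch, not a methodology requirement):

```python
import math

def magnitude_bucket(estimate):
    """Power-of-ten range an estimate falls into, e.g. 740 -> (100, 1000)."""
    exp = math.floor(math.log10(estimate))
    return 10 ** exp, 10 ** (exp + 1)
```

For example, `magnitude_bucket(740)` returns `(100, 1000)`: the estimate lives in the hundreds. If two scope alternatives land in different buckets, that is the cheap signal to go back and analyze more.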

Who are you Dr. Fermi?

Is it even possible to calculate firmly?! Yes, it is. Dr. Fermi was a physicist; here is the wiki page of Enrico Fermi. He worked on the first nuclear reactor, nuclear bombs and other dangerous stuff. Fermi received the Nobel Prize in Physics at the age of 37. I would like to highlight his excellent quick calculation of the strength of a nuclear explosion. Fermi was present as an observer of the Trinity test on July 16, 1945. As the shock wave hit Base Camp, Aeby [an engineer] saw Enrico Fermi with a handful of torn paper. "He was dribbling it in the air. When the shock wave came it moved the confetti. He thought for a moment." Fermi had just estimated the yield of the first nuclear explosion. It was in the ballpark: Fermi's strips-of-paper estimate was 10 kilotons of TNT; the actual yield was about 19 kilotons. Almost a 2x difference, but how fast he did it! You can do the same. Think like Fermi on your "paper pieces" within uniform bins and produce a good-enough calculation very fast. Below are situations that are not as unknown as you might think.

  • how many frames are in the Tarzan movie?
  • what is the weight of a Boeing 747?
  • to which car does that wheel belong?
  • how many leaves are on that tree? or where does that tree grow?

There are many variations. The question about the Tarzan movie is simple, because you are instantly dealing with a uniform set of objects: frames. You simply take a typical duration for the movie, e.g. 1 hour; then the number of frames per second, which is either 25 or 30 (in different TV standards); multiply, and you are all set. With the Boeing it is more complicated, because the full scope does not come from one set of uniform objects. You will have to calculate the people (who are uniform), the luggage (which is uniform), the fuel (also uniform), the aircraft itself, etc. More complicated than the Tarzan movie, but doable. With the yellow wheel and the blue car the situation is different again. You could apply Google Image Search to find something similar and reverse-engineer the car. It could be a Subaru Impreza STI, a crazy rally monster of a car. Remember: look from all possible dimensions. Do not limit yourself to a few of them. This is a methodology; I have just reminded you of it again.
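The Tarzan calculation fits in one line of Python; a sketch of the Fermi arithmetic just described, with the 1-hour duration as the assumed input:

```python
def movie_frames(duration_hours=1, fps=25):
    """Fermi estimate: total frames in a movie of the given length and frame rate."""
    return duration_hours * 3600 * fps
```

With the defaults this gives 90,000 frames; at 30 fps, 108,000. Either way the range is "on the order of 10^5", which is the level of precision a Fermi estimate aims for.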

How can I improve the numbers?

Apply the Pareto principle within your sets! Check for the Pareto principle at the 100 percent scope level. It is Nature in everything, because everything is a product of evolution. Computers and software systems are also products of our evolution. The same laws work everywhere, including the law of Normal distribution. If the number of items within your sets is sufficient (say, bigger than 50), then you can apply the Pareto rule and get better results. When you are OK with the calculation of every scope alternative, you can play with your final numbers. Look at the differences and judge why they happened. Include other people in the estimation, give them the same methodology, compare the results. Pay big attention to the biggest deviations and ignore similar numbers.
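One way to check whether a set of items is Pareto-shaped is to measure what share of the total the top 20% of items carry; a Python sketch on invented effort numbers:

```python
def pareto_share(efforts, top_fraction=0.2):
    """Fraction of total effort contributed by the top `top_fraction` of items."""
    ranked = sorted(efforts, reverse=True)
    k = max(1, int(len(ranked) * top_fraction))
    return sum(ranked[:k]) / sum(ranked)

# Invented effort list: 10 heavy items and 40 light ones
sample = [100] * 10 + [5] * 40
```

Here `pareto_share(sample)` is about 0.83: the top 20% of items carry ~83% of the effort, so this set follows the 80/20 shape and the heavy items deserve most of the estimation attention.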

Other ways of refining the numbers are in the methods of judgement. You could drop the min and max estimations and average the remaining ones, but this is too simple. I propose to plug probabilities in here. For every alternative estimate you should provide the probability that it is correct. The method for judging the probability is simple: modify the number until you become unsure whether the project is doable or undoable within the given efforts. In other words, always find the p50, the effort at which the project will succeed or fail with 50/50 probability. Then move to a higher probability of success. Stop where you think it is worth stopping. You will have various estimations with different probabilities, like p60, p90. Melt the numbers together again, giving more weight to the more probable numbers.
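"Melting" with weights is just a probability-weighted average; a minimal Python sketch, where the (effort, probability) pairs are hypothetical alternative estimates:

```python
def melt_estimates(alternatives):
    """Probability-weighted blend of (effort, probability) alternative estimates."""
    total = sum(p for _, p in alternatives)
    return sum(effort * p for effort, p in alternatives) / total

# Hypothetical alternatives: effort in man-days with the estimator's confidence
blended = melt_estimates([(100, 0.5), (140, 0.9), (120, 0.6)])
```

The 140 man-day alternative carries the most confidence, so it pulls the blend up to 124 man-days, above the plain average of 120.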

How can I get even better numbers?

Make sure you looked for an analogous project done within your organization or in your previous experience. Use those numbers (adjusted) as one additional estimate. Use estimation tools, such as QSM SLIM, if you have a license. Melt those numbers together with the other alternatives you calculated. There are more ways to improve the numbers even further, without expensive tools like COCOMO or SLIM. I will tell you about them in the next post. In the meantime, try to calculate the number of apples in the world:)


Estimate @ Speed of Thought

Decided to share my research here, published for the first time somewhere at the beginning of 2010. The motivation is to teach others how to do ballpark estimations very quickly. This is extremely important in a consulting job, when you have to calculate really quickly and accurately.

Table of Contents

  • How to see the unknown
  • Who are you, Dr. Fermi?
  • Emergence of patterns

What is Unknown? What is the Unknown?

Some mechanisms in our brains classify things as Unknown. It happens for multiple reasons. The brain classifies as Unknown everything that we call Unclear, Invisible, Complicated, Scary and so on. Below are visualizations of what I mean.

Unclear. What do you see in this Martian picture: a face or a pyramid? Unclear again. What sign do you see?


Invisible. What do you see below? A nude woman or an elephant?


Human perception works in such a way that we see in detail what is in front of our nose, almost without detail what is to the side, and very blurrily what is distant from us. This applies to all objects we look at. The people below see three modules: the structure of the first module in detail, some high-level structure of the second module, and almost nothing of the distant third module. In our case, this is an abstract representation of some real-life objects, e.g. an airport with multiple terminals, an enterprise software system, etc.


Complicated. Are you comfortable with your understanding of the diagram?


Scary. Scary leads to unknown, because it blocks the brain from creative thinking. The unpleasant, non-aesthetic, and negative is processed less efficiently than positive things. I could be wrong with the biological terminology, but the fact that positive things work better is widely researched in the UX (User Experience) field and is confirmed by such gurus as Don Norman. It is Marilyn Manson in the picture below, not Don:) And many people consider him scary, though the music is good. I played "Rock is Dead" and "Sweet Dreams" with pleasure. Life is strange indeed. Nevertheless, scary things lead to, or nurture, the unknown.


Why Unknown is a Top Problem in Estimation?

There are obvious quick answers; an incomplete list follows. The IT industry evolves very fast, and industry people hardly keep pace with that evolution. Things that have passed by fall into the Unknown automatically. Many youngsters are involved in the work, and as in any other industry, all youngsters are inexperienced. What is pretty straightforward to mature wolves happens to be unknown to a newbie. Answers required for yesterday: sounds familiar? Or another situation: everything is almost understandable, but there are nuances, because it is a different business domain. That domain's specifics redefine the whole picture, and known things fall into the unknown category very quickly. The match of skills and problems (or the mismatch of them) leads to more unknowns. Add your own reasons from your experience here! Come on! I would conclude this section by saying that the unknown is relative, but permanent. Relative means that for somebody it is unknown, for somebody else it is known. Permanent means the persistence of the problem in the IT industry. It was here yesterday, it is present today, and tomorrow we will face it again. C'est la vie.

How to Look at the Unknown?

We concluded that the problem is recurring; hence we need a solution for dealing with it. How do you tame the unknown? You should look at it as you look at a diamond. Look at the unknown from all possible facets. Rotate it and look again. Bring it close to your eyes and look. Put it one meter from you and look. Look in the morning and in the afternoon. Look together. Look, look and look! This is the solution. Actually, it leads to the solution. Let's do an exercise, a small workshop for you.

Workshop for You. Saturn!

If you are not an astronomer, this should be a good unknown thing for you. Its name is Saturn. The planet Saturn. To make decisions about Saturn you have to know at least something about it. What could you tell me about Saturn? Quickly and reliably? Look.


OK, some things you really see. Some things you know from your school memories. It is a sphere and there is a ring around it. Most people would stop here, but we go further! The sphere is not uniform. We should notice it and remember it, because it might affect some decision-making in the future. It is always important to classify uniform things and deal with them separately. Do not mix apples and dogs. Divide and conquer: isolate apples into one bin and deal with them, isolate dogs into another bin and deal with them there. Well, back to Saturn. There are many rings, not just one! This is also a result of just slightly deeper observation of the same image. Invest slightly more time in analysis and it will pay off. But look differently from how the majority of people look. Dig into all possible details that may (or may not) lead to better context for the decision-making afterwards.

Good, we came up with some info. What else? How to get even more info? The solution is to look. Look again. Look in a different spectrum! Use the infrared spectrum, for example. Perhaps you will see something new?


Wow, the temperature is different across the planet. Almost like on our Earth! Notice this fact. It is a sphere, like Earth. It has a temperature distribution, like Earth. The principle of analogy could be applied here. It is the easiest principle, and it can help you a lot, but there is a big risk if you apply it where there is no analogy. Hence, you need more details to be sure there is an opportunity to apply the analogy.

What else? As we mentioned our mother Earth, let's benefit from it. Look again, look so that you see Earth and Saturn together. What can you tell now about Saturn?


Mamma mia! Saturn is huge! This observation brings numbers for the diameter, mass, density etc. Just take the known Earth numbers and calculate the Saturn numbers based on the ratios you observe. The temperature from the infrared spectrum could also be converted into numbers. So, after a few different looks you’ve got plenty of numbers. And numbers are good, we can calculate with them. Exactly what we need during estimations. We need numbers and ranges of numbers to play with. An interim conclusion worth mentioning here is orthogonality. We have used three orthogonal views of the same thing. Orthogonal means independent. We applied independent looks, grabbed some information from them, and converted it into numbers.
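The ratio trick can be sketched in a few lines of Python. The observed ratios below are rough, eyeballed assumptions (Saturn looks roughly 9.5 times wider than Earth; its mass is about 95 Earth masses). The point is the method, not the precision:

```python
# Fermi-style estimation: derive Saturn numbers from known Earth
# numbers and the ratios you can observe by "just looking".

EARTH_DIAMETER_KM = 12_742      # well-known reference number
EARTH_DENSITY_G_CM3 = 5.51      # well-known reference number

observed_diameter_ratio = 9.5   # eyeballed: Saturn is ~9-10x wider
known_mass_ratio = 95           # remembered: ~95 Earth masses

# The diameter follows directly from the observed ratio.
saturn_diameter_km = EARTH_DIAMETER_KM * observed_diameter_ratio

# Volume scales with the cube of the linear ratio.
volume_ratio = observed_diameter_ratio ** 3

# Density = mass / volume, expressed purely through the two ratios.
saturn_density = EARTH_DENSITY_G_CM3 * known_mass_ratio / volume_ratio

print(round(saturn_diameter_km))   # 121049 km, vs the real ~120,500
print(round(saturn_density, 2))    # 0.61 g/cm3, vs the real ~0.69
```

Even with eyeballed inputs, the density estimate lands within about 15% of the real value, which is exactly the kind of ballpark an estimation needs.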

Good so far. Moving forward. Can you guess what else is possible? A lot of hidden views are still available. Let’s use them to get more information in the same context. Look closer. Zoom in.


It is alive! Saturn is not a dead stone. There is some spiral movement. A kind of cyclone? Something is going on there. For us it means dynamics. We might count on it later during some decision making. Several more looks at Saturn…


Now you should catch the idea. It is really possible to squeeze much more information from the given context. It is possible to See the Unknown. It is possible to convert new information into numbers and use them in your further calculations. You can observe a lot. Then you should divide and conquer the observations. Then work within the homogeneous sets. It makes analysis easier. Substitute the word “Saturn” with the word “Project” or “Prospect” and apply this in your work. You will immediately see a lot more useful info about your Projects and Prospects. For sure, you could use that info properly. Ain’t it fun?

Who are you, Dr. Fermi?

Look forward to the next post. It will be Part II. See you here soon!


WHERE 2.0, Wednesday, Final

The presenter from Bitly was announced as one who “will share pure geo porn”.

bitly, by Hilary Mason, Chief Scientist

time == f(space), or more precisely, time == f(distance). Now, with Twitter, time == f(attention).
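The “time == f(attention)” idea can be made concrete as the half life of a link: the time by which half of all clicks on a link have happened. A minimal Python sketch; the timestamps are invented sample data (seconds since the link was posted):

```python
# Rough sketch of a bitly-style "half life of a link" calculation.

def link_half_life(click_times):
    """Return the time by which 50% of clicks have occurred."""
    ordered = sorted(click_times)
    # index of the click that crosses the 50% mark
    midpoint = (len(ordered) - 1) // 2
    return ordered[midpoint]

# Invented click timeline: most attention arrives early, a long tail follows.
clicks = [30, 60, 90, 150, 300, 600, 1800, 7200, 14400]
print(link_half_life(clicks))  # 300
```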

A link on Twitter keeps being clicked for about 2 hours; on YouTube, for about 7 hours. The Egypt stats showed the Internet going down, clearly the revolution. Bitly has clicks. All this data is generated by clicks. They have timestamps too, hence they analyze clicks in real time. Social data and location data are noisy filters. New geographies of social data. Plus time.


Closure panel.

Across WHERE conferences, the trend went from ‘how to collect data’ to ‘how to make impact’. What was the biggest surprise at WHERE this year? Or the biggest non-surprise? People are changing behavior based on geolocation data; the next 2-3 years (by Waze). Smart cities. Sensors. M2M between the phone and the building. Cities will have an API. The interplay of different kinds of sensors is exciting (by Bitly). A voice interface to reduce cognitive load during interaction (by a VC). Greater and smarter context. Time to build new innovation. People assume that the current model will stay (e.g. search). Does the current search paradigm make any sense on my device? We need ‘smartness’. BodyMedia was mentioned in the talk about smart things. Real-time exchange: what will your context be along the way, in real time? The mobile phone is a trusted identity, but losable, stealable. Solve the problems of real people, make them come back. Foursquare, Yelp, Facebook, Twitter generate data from what users put on top of them. Apple has a clear biz model of selling devices. Do you own your own data? Facebook is not a geo powerhouse. The next geo powerhouse will be social. Twitter is on the list. eBay is interesting with its recent acquisitions of RedLaser and Milo.


my conclusion: WHERE 2011 was stronger than WHERE 2012.



WHERE 2.0, Wednesday, Part III – Native vs. HTML5


A 5,000 year old idea.
Post a request and you will be connected with someone who can fulfill it. Pay only when you get the goods. Tell us what you want to sell and you will be notified anytime somebody asks for it. Now they are building an environment for safe buying and selling between people. It is a meta-marketplace. Crazy requests don’t work, but if you ask for something sane, it works. Ask for what you want and how much you can pay, and the work will be done by somebody near you. Some similarity to TaskRabbit + craigslist.

Within a few days, $10K in transactions came in on a daily basis. The product has changed since that time. The product became ‘on-purpose’ instead of being ‘the necessity’. It was a native iOS app. After 2 weeks, questions such as “why not HTML5?” were raised! After a year, HTML5 won in terms of deployment, supporting both iPhone and Android.

The first 3 months were decision making. Why did we deploy native? Why do we do HTML5 now?
Who uses the native iPhone Facebook app today?!

An issue with UX exists. There is the same Tab Bar at the top of the screen on both iPhone and Android. For Android it is OK, as it conforms to the Android UI guidelines. But for the iPhone it is a style problem. The Apple HIG specifies the Tab Bar to be at the bottom. Hence, Mobile Web doesn’t work if you want to be 100% conformant with every platform. Having two codebases (one for Android, one for iPhone) is a solution, known as Dedicated Web. Almost the same code, but different positioning of the Tab Bar (and other specific elements of UI that are religiously different). Or, brand your bar, use it always in the same place, and f..k the guidelines.
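The Dedicated Web approach boils down to detecting the platform and picking the right layout variant. A deliberately naive Python sketch of that server-side decision; the User-Agent matching here is a toy assumption, not production-grade detection:

```python
# Pick the Tab Bar placement per platform guidelines of the time:
# Apple HIG puts tab bars at the bottom, Android put tabs on top.

def tab_bar_position(user_agent: str) -> str:
    """Return where the Tab Bar should go for this client."""
    ua = user_agent.lower()
    if "iphone" in ua or "ipad" in ua:
        return "bottom"   # Apple HIG convention
    if "android" in ua:
        return "top"      # Android UI guidelines convention
    return "top"          # default for everything else

print(tab_bar_position("Mozilla/5.0 (iPhone; CPU iPhone OS 5_1)"))  # bottom
print(tab_bar_position("Mozilla/5.0 (Linux; U; Android 4.0.3)"))    # top
```

The “brand your bar” alternative from the talk is simply the `return "top"` default applied unconditionally: one codebase, one placement, guidelines ignored.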


WHERE 2.0, Wednesday, Part II – Healthcare & Location

Healthcare session

INTRO highlights from four panelists.

Geomedicine is emerging. Location data is needed “to make medical records more enriched”. The environment is a critical player, and the environment is significantly related to geography. A new medical framework is needed.

Place information is to be brought into the story. Bring mobility into the story, as location traces. Location over time (physical activity) matters. How do I communicate a chronic disease? Traces of daily lives. Pulling features out of daily traces. Looking for standardised platforms to use. Looking for mobilized innovations in healthcare.

The intersection of mobile activity data as a way to deconstruct, genome-like, why people do what they do and how they do it. Get data from telecoms, Twitter, Facebook and analyze it. Track diseases through social and location networks. Incorporate this into doctors’ diagnosis tools.

Tools for healthy behavior changes. How tools change daily behavior patterns. Places, environment, time etc. matter. How to utilize technology in healthcare. Location data seems very attractive for unlocking hidden patterns in disease and people data.

APPLICATIONS of new approaches to healthcare.

Eating. If somebody is prohibited from eating fast food, the app will discover that they go to Taco Bell and warn them not to eat there. Maybe the app will even tell the doctor that the patient is violating the rules of nutrition. Therapy tools. Tools keep track and make recommendations. The tool knows you spent 5 hours in a shopping mall and how it affects insomnia. There are 20,000 toxic places in the States. 10M pounds of chemicals produced per day… it is an issue for the environment. Public health. You are informed, you make decisions. Further: how do we bring other specific points of data into the framework?

BARRIERS in geomedicine

Privacy in healthcare is a huge barrier. Being overwhelmed with information. He-he, BigData comes to healthcare:) Sensemaking out of the data. The profession of data scientist comes to healthcare. An ecosystem is needed.


Location is an opportunity. In the long term – understanding human activity at the level of context, to encourage doing something. Change people’s awareness, people’s attitude. Autonomous tracking of behavior patterns. Sensors are not really working yet. Where do you go? Whom do you contact? You can understand people and their health-related behavior. E.g. do you eat in front of the TV, how often do you eat at home, how fast do you eat, the time of eating, gym facilities. Interventions are bound to the context – what exactly to push to the patient. Indoor locations within hospitals and clinics. The geodistance of your day. Those things impact medication. Analysis of daily and weekly traces gives the answers. Parkinson’s disease is location specific. Self-diagnosis via summarizing information. For medical tools: “one size does not fit all anymore”. Different personalities, research done.


conclusion for mHealth: Public Health, Disease Management and Mobile EMR were confirmed as distinguished categories (out of ~10 other categories) of mHealth solutions.



WHERE 2.0, Wednesday, Part I

So, the second day of the conference. Looking for some interesting keynotes.


News through Data by Jer Thorp
Data taken from NASA’s Kepler project, the search for exo-planets.
Cool video demo.
Back to the GPS path from the hidden file on the iPhone, shown last year.
You can propose a project there.


by the most beautiful speaker so far, Leila Janah
Photographers woke up when she came onto the stage.
The idea is that data projects are divided into small pieces. A sample is the handling of POI data tasks. Project management in San Francisco. Work in Africa at $1-3 per hour.

her conclusion: “take messy data, address poverty”
my experienced friend’s conclusion: “too expensive a dress for that presentation”


A platform for real-time location, messaging, and analytics.
Geofencing, location-based analytics, location tracking. The geofencing solution is battery safe.
A brand new game, powered by Geoloqi – played on real territory. Two teams compete to capture the most points on the gameboard. The gameboard, in this case, is the city streets of the neighborhood the players are in. To play, all you need is a phone, a city street, park or college campus, and some friends.
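At the heart of such games sits a geofence check: is the player within R meters of a capture point? A minimal Python sketch of a circular geofence via the haversine distance; the coordinates and the radius are made-up sample values, not anything from Geoloqi itself:

```python
# Circular geofence: great-circle distance vs. a radius threshold.
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_M = 6_371_000

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in meters."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 \
        + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * EARTH_RADIUS_M * asin(sqrt(a))

def inside_geofence(player, fence_center, radius_m):
    """True if the player's (lat, lon) is within radius_m of the fence center."""
    return haversine_m(*player, *fence_center) <= radius_m

fence = (45.5231, -122.6765)   # a sample point in Portland
print(inside_geofence((45.5233, -122.6767), fence, 50))   # True
print(inside_geofence((45.5300, -122.6900), fence, 50))   # False
```

The battery-safe part of a real geofencing service lies elsewhere: in how rarely the phone wakes up to run a check like this, not in the distance math.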


Capture things that interest and inspire you from the real world. Organize your posts into “kullections” based on topics. Discover what your friends are sharing and what’s nearby. Share and discover. Creators, curators. 1st place out of ~10 startups.


Gaming Reality
by Will Wright.
A lot of slides, a lot of words. The conclusion is to build a unique experience per person and environment. One more confirmation of personal UX.


A ~14,000-year history of mapmaking:
inventions and discoveries of new maps, artifacts, etc.


Remember the “Minority Report” movie? The technology has advanced. MIT Media Lab “Minority Report” –> Boeing Saudi Aramco –> Mezzanine g-speak SDK. The g-speak SOE (spatial operating environment) is Oblong’s radically new platform. The SOE made its public debut in the film Minority Report, whose bellwether interface one of Oblong’s founders designed. But its full history extends back through three decades of research at the MIT Media Lab. Video demo.

Mezzanine demo. Move objects through the meeting room (space). It seems the objects can then be drag’n’dropped between ALL devices in the room (through the space), like an iPad, laptop, phone.

Live demo with Kinect, by Pete and Tom. Tracking hands and FINGERS with Kinect. Shown on a Mac screen put vertically:) Control from the phone, like a remote control unit for a TV. If there were other screens (or projection devices), they would all be within the “space” and content from one screen could be thrown to another screen smoothly.
and “we are hiring” at the end…


Wireless Stars
building Egypt 2.0

LBS used to build Egypt 2.0
IntaFeen is a foursquare for the Arab world. Cultural translation. IntaFeen was used during the revolution and after the revolution. Crowdsourcing everything: planting trees, police, repairing, cleaning. OneYad connects people based on interest & location. OneYad means “one hand”. From social networking to social working.

BTW, OneYad shows why Facebook sucks for everything except sharing.


by Brian McClendon, VP of Google Geo, former Keyhole founder

Ingredients of Modern Map Service
* sub-meter imagery (see somebody’s house)
* aerial imagery
* increase in OBLIQUE (what is that?)
* historical imagery (before vs. after)
* street view imagery (only 5 cities in 2007, now 35 countries)
* the Alps and the Amazon rainforest (a camera on top of the train and the boat:))
* unique problems (frog on the camera lens)
* art project (151 museums, indoors rock)
* Japan, tsunami… the only way to see how the city looked before the tsunami is Google Maps’ older imagery…
* vector data (with plenty of hidden data). Internal tool for editing and debugging
* 20 countries with streetview
* map maker, empower users
* users share: “there is a field of marijuana here” and pin on the map
* 1.6M QPS peak tile serving, 75ms avg tile serving latency, 99.995% availability over 3 years
* API, geocoding, directions, places, street view
* Dynamic tiles, for enabling dynamic features
* future is 3D

Brian’s conclusion for future: “it will be 3D”.
my conclusion: “all those free maps are far away from Google… Google is a growing monster”
