Category Archives: Mobile

Internet of Things @ Stockholm, Copenhagen

This is an IoT story combined from what was delivered during Q1’15 in Stockholm, Copenhagen and Bad Homburg (Frankfurt).

We are stuck in the past

When I first heard Peter Thiel talk about our technological deceleration, it collided in my head with the technological acceleration described by Ray Kurzweil. It seems both gentlemen are right, and we [humanity] follow a multi-dimensional spiral pathway. Kurzweil reveals shorter intervals between spiral cycles. Thiel reveals that we are moving in a negative direction within the single current spiral cycle. Let's drill down within the current cycle.

Cars are getting more and more powerful (in terms of horsepower), but we don't move faster in them. Instead we move slower, because of so many traffic lights, speed limits, traffic jams and gridlocks. It is definitely not cool to stare at a red light and wait. It is not cool either to brake because your green light has ended. In Copenhagen the majority of people use bikes. That means they move at a speed of 20 kph or so, way slower than our modern cars would allow. Isn't that strange?

Aircraft are faster than cars, but air travel is slow too. We have strange connections. A trip from Copenhagen to Stockholm takes one full day because you have to fly Copenhagen-Frankfurt, wait, and then fly Frankfurt-Stockholm. That's how airlines work and suck money from your pocket for each air mile. Now add long security lines, weather precautions and weather cancellations of flights. Add union strikes. Dream about the decommissioned Concorde... retired 12 years ago already.

Smartphone computing power equals Apollo mission levels, so what? The smartphone is mainly used to browse people and play games. At some moment we will start using it as a hub: to connect tens of devices, to process tons of data before submitting them into the Cloud (because the Data will soon not fit into the Cloud). But for now we under-use smartphones. I am sick of charging every day. I am sick of all those wires and adapters. That's ridiculous.

Cancer, Alzheimer's and HIV are still not defeated. And there is no optimistic mid-term forecast yet.

We [thinking people] admit that we are stuck in the past. We admit our old tools are not capable of bringing us into the future. We admit that we need to build new tools to break into the future. The Internet of Things is such a macro trend: building the new tools that will break us through into the future.

Where exactly are we stuck?

We are within the 3rd wave of IoT, called Identification, and at the beginning of the 4th wave, called Miniaturization. The two slightly overlap.

Miniaturization is evidence of Moore's Law still working. Pretty small devices are capable of running the same calculations as not-so-old desktops. Connecting industrial machinery via a small man-in-the-middle device is on the rise. It is known as Machine-to-Machine (M2M). There are two common scenarios here: a wire protocol – break into a dumb machine's wires and hook in there for readings and control; and an optical protocol – read from analog or digital screens and do optical recognition of the information.

More words about the optical protocol in M2M. Imagine you are running a biotech lab. You have good old centrifuges, doing their layering job perfectly. But they are not connected, so you need to read from the display and push the buttons manually. You don't want to break into well-working machines, so you decide to read from their screens or panels optically, doing optical identification of the useful information. The centrifuges become connected: M2M without wires.

Identification is also on the rise in manufacturing. Just attach a small device with the right sensors to a dumb machine and identify/measure whatever you are interested in: vibration, smoke, volume, motion, proximity, temperature etc. Identification is on the rise in lifestyle too. It is the Wearables we put onto ourselves to measure various aspects of our activity. Have you ever wondered how many methods exist to measure temperature? Probably more than 10. Your (or my?) favorite wearables usually have thermistors (BodyMedia) and IR sensors (Scanadu).

Optical identification, as a powerful field within the entire Internet of Things Identification wave, deserves a special section. Continue reading.

Optical identification

Why is optical so important? Guess: at what bandwidth do our eyes transmit data into the brain?
It is definitely more than 1 megabit per second and maybe (maybe not) slightly less than 10 megabits per second. For you geeks, that is not-so-old Ethernet speed. With 100 video sensors we end up with 1+ terabyte during business hours (~10 hours). That's a hell of a lot of data. It's better to extract useful information out of those data streams and continue with information rather than with data. The volume reduction could be 1000x and much more if we deal with relevant information only. Real-time identification is vital for self-driving cars.
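
A quick back-of-envelope check of that volume claim (a sketch; the per-sensor bitrate range is an assumption taken from the eye-bandwidth guess above):

```python
# Raw data volume from N video sensors over a working day.
def terabytes(sensors, mbit_per_sec, hours):
    bits = sensors * mbit_per_sec * 1e6 * hours * 3600
    return bits / 8 / 1e12  # bits -> bytes -> terabytes

# 100 sensors streaming at eye-like rates for ~10 business hours:
low = terabytes(100, 1, 10)    # ~0.45 TB at 1 Mbit/s per sensor
high = terabytes(100, 10, 10)  # ~4.5 TB at 10 Mbit/s per sensor
print(f"{low:.2f} TB .. {high:.2f} TB")
```

So anywhere in that bitrate range, the daily haul lands around a terabyte or several, which is exactly why reducing data to information at the source matters.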

Even for already accumulated media archives this is all very relevant. How do you index a video library? How do you index an image library? It is work for machines: crawl and parse each frame and sequence of frames, classify what is there, remember the timestamp and clip location, make a thumbnail and give this information to the users of the media archive (other apps, services and people). Usually images are identified/parsed by Convolutional Neural Networks (CNN) or an Autoencoder + Perceptron. For various business purposes, a good way to start doing visual object identification right away is Berkeley's Caffe framework.
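
The indexing loop itself is simple; the hard part is the classifier. The sketch below stubs out the classifier (in a real pipeline that would be a trained CNN, e.g. a Caffe model) just to show the crawl-classify-timestamp shape of the job:

```python
from collections import defaultdict

def classify_frame(frame):
    # Stub standing in for a real CNN classifier (e.g. a Caffe model);
    # here we just read a fake label baked into the toy frame.
    return frame["label"]

def index_video(frames, fps=25):
    """Build a label -> list of timestamps (seconds) index for an archive."""
    index = defaultdict(list)
    for i, frame in enumerate(frames):
        label = classify_frame(frame)
        index[label].append(i / fps)
    return dict(index)

# Toy clip: 3 frames of a cat, then 2 of a dog.
clip = [{"label": "cat"}] * 3 + [{"label": "dog"}] * 2
print(index_video(clip))
# {'cat': [0.0, 0.04, 0.08], 'dog': [0.12, 0.16]}
```

Swap the stub for a real network and point the loop at decoded video frames, and you have the skeleton of a media-archive indexer.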

Ever heard of DeepMind? They are not on Kaggle today, but they were there much earlier. One of them, Volodymyr Mnih, won a prize in early 2013. DeepMind invented some breakthrough technology and was bought by Google for $400 million (Facebook was another potential buyer). So what is interesting about them? Well, the acquisition was conditional on Google not abusing the technology. A special Ethics Board was set up at Google to validate uses of DeepMind technology. We could try to figure out what their secret sauce is. All I know is that they went beyond dumb predefined machine learning by applying more neuroscience, which unlocked learning from one's own experience, with nothing predefined a priori.

Volodymyr Mnih was featured in a recent (at the moment of this post) issue of Nature, with affiliation to DeepMind in the references. Read what they did: they built a neural network that learns game strategy, ran it on old Atari games and outperformed human players on 43 games! It is a CNN with a time dimension (four chronological frames given as input). Besides the time dimension, another big difference from a classic CNN is the delayed-rewards learning mechanism, i.e. true strategy built from your previous moves. The algorithm is called Deep Q-learning, and the entire network is called a Deep Q-learning Network (DQN). It is only a question of time before DQN can handle more complicated graphical screens than old Atari. They have tried Doom already. Maybe StarCraft is next. And soon it will be business processes and workflows...
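
To get a feel for the delayed-rewards mechanism underneath DQN, here is plain tabular Q-learning on a toy problem (my own illustration, not DeepMind's code; DQN replaces this table with a convolutional network reading raw pixels):

```python
import random

# Toy 1-D chain: states 0..4, actions: 0 = left, 1 = right.
# Reward +1 only on reaching state 4 -- a delayed reward the agent
# must propagate back through earlier moves.
N, GOAL = 5, 4
Q = [[0.0, 0.0] for _ in range(N)]
alpha, gamma, eps = 0.5, 0.9, 0.1
random.seed(0)

def step(s, a):
    s2 = max(0, min(N - 1, s + (1 if a else -1)))
    return s2, (1.0 if s2 == GOAL else 0.0)

for _ in range(500):  # training episodes
    s = 0
    while s != GOAL:
        # epsilon-greedy: mostly exploit, sometimes explore
        a = random.randrange(2) if random.random() < eps else max((0, 1), key=lambda x: Q[s][x])
        s2, r = step(s, a)
        # Q-learning update: value of (s, a) moves toward
        # reward + discounted best value of the next state.
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

# After training, "right" should dominate in every state.
print([max((0, 1), key=lambda a: Q[s][a]) for s in range(GOAL)])
```

The single update line is the whole trick: reward from the end of the game leaks backwards through the Q-values until early moves "know" their long-term worth.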

Those who subscribe to Nature, log in and read the main article, especially the Methods part. Others could check out the reader-friendly New Yorker post. Pay attention to the Nature link there; you might be lucky enough to access the Methods section on the Nature site without a subscription. Check out the DQN code in Lua, DQN + Caffe, and the DQN/Caffe ping-pong demo.

Who eats whom?

All right, with the importance of optical identification covered, it's time to switch back to the high level and continue on the Internet of Things as a macro trend, at global scale. Many of you are used to the statement that Software is eating the World. That's correct in two respects: hardware flexibility is being shifted to software flexibility, and fabricators are making hard goods from digital models.

Shifting flexibility from hardware to software is a huge cost reduction in maintenance and reconfiguration. The evidence of hardware being eaten by software is all those SDX, Software Defined Everything: SDN aka Software Defined Networking, SDR aka Software Defined Radio, SDS aka Software Defined Storage, and so on for the Data Center etc. The Tesla car is a pretty software-defined car.

But this World has not even been eaten by hardware yet! Miniaturization of electric digital devices allows Hardware to eat the World today and tomorrow. The penetration and reach of devices into previously inaccessible territories is stunning. We establish stationary identification devices (surveillance cameras, weather sensors, industrial meters etc.) and launch movable devices (flying drones, swimming drones, balloons, UAVs, self-driving cars, rovers etc.). Check out the excellent hardware trends for 2015. Today we put plenty of remora devices onto our cars and ourselves. Further miniaturization will allow us to take devices inside ourselves. The evidence is that Hardware is eating the World.

Wait, there are fabricators, or nanofactories, producing hard goods from 3D models! 3D-printed goods and the 3D-printed hamburger are evidence of Software directly eating the World. The conclusion, then, could be that Software is eating the World previously eaten by Hardware, while Hardware is eating the rest of the World at a higher pace than Software is eating it via fabrication.

Who eats whom? Second pass

Things are not so straightforward. We [you and me] are stuck in the silicon world. Those ruminations are true for electrical/digital devices and technologies. But things are not limited to the digital and electrical. The biohacker movement can't be ignored. Those guys are doing garage bio experiments on $5K equipment, exactly as Jobs and Woz did electrical/digital experiments in their garage at the birth of the PC era.

Biohackers are also eating the World. I am not talking about the standard boring initiation [of a biohacker] of making something glow... There are amazing achievements. One of them is night vision. The electrical/digital approach to night vision is an infrared camera, a cooler and an analog optical picture into your eyes, or a radio scanner and analog/digital reconstruction of the scene for your eyes. The bio approach is injection of Chlorin e6 drops into your eyes. With the aid of Ce6 you can see in the darkness at a range of 10 to 50 meters. Though there is some controversy around that Ce6 experiment.

The new conclusion for the “Eaters Club” is this:

  • Software is eating the World previously eaten by Hardware
  • Hardware is eating the rest of the World, a much bigger part than has already been eaten, at a high pace
  • Software is eating the World slowly through fabrication and nanofactories
  • Biohackers are eating the World, ignoring both Hardware & Software eaters

Will the convergence of hardware and bio happen as it happened with software and hardware? I bet yes. For remote devices it could be very beneficial to take energy from the ambient environment, which could potentially be implemented via biological mechanisms.

sw_hw_bio

Blending it all together

Time to put it all together and emphasize the practical consequences. Small and smaller devices are needed to wrap entire businesses (machines, people, areas). Many devices are needed: 50 billion by 2020. Networking is needed to connect those 50 billion devices. Data flow will grow from the 50 billion devices and within the network. The Data Gravity phenomenon, when data attracts apps, services and people to itself, will become more and more observable. Keep reading for details.

The Internet of Things is a sweet spot at the intersection of three technological macro trends: Semiconductors, Telecoms and Big Data. All three parts work together, but evolve at a different pace. That leads to new rules of 'common sense' emerging within IoT.
1_IoT

Remote devices need networking, good networking. And we have an issue, one which will only intensify. The pace of evolution for semiconductors is 60% annually, while the pace of evolution of networks is 50%. The pace of evolution of storage technology is even faster than 60% annually. It means that newly acquired data will fit into the network less and less over time [less chance for data to get into the Cloud]. It means that more and more data will be left beyond the network [and beyond the Cloud].
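
The gap compounds year over year. A tiny sketch of the arithmetic (using the 60%/50% rates from above; the horizon is just for illustration):

```python
# If compute grows ~60%/year and network bandwidth ~50%/year,
# the compute-to-bandwidth ratio compounds -- data generated at
# the edge increasingly cannot all be shipped to the Cloud.
def gap(years, compute_rate=0.60, network_rate=0.50):
    return (1 + compute_rate) ** years / (1 + network_rate) ** years

for y in (1, 5, 10):
    print(f"after {y:2d} years the gap is {gap(y):.2f}x")
```

Even a modest 6.7% annual gap turns into nearly a 2x mismatch after a decade, which is the quantitative reason data will be stranded off-Cloud.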

Off-the-Cloud data must be handled in place, at or near the location of acquisition. That means huge growth in Embedded Programming. All those small and smaller devices will have to acquire, store, filter, reduce and sync data. It is Embedded Programming with an OS and without an OS. It is distributed and decentralized programming. It is programming of dynamic mesh networks. It is connectivity from device to device without a central tower. It is a new kind of cloud programming, closest to the ground, called Fog. Hence Fog Programming, Fog Computing. Dynamic mesh networks, plenty of DSP, and potentially applicable distributed technologies for the business logic foundation such as BitTorrent, Telehash and Blockchain. Interesting times in Embedded Programming are coming. And this is just the Internet of Things Miniaturization phase. Add smart sensing on those P2P-connected small and smaller devices in the Fog, and the Internet of Things Identification phase will be addressed properly.
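
A taste of that decentralized, no-central-tower style: the sketch below floods a message through a toy mesh of devices, each node forwarding only to neighbors it can reach by radio (the topology is made up for the demo):

```python
from collections import deque

# Toy mesh: device -> devices within radio range (no central tower).
mesh = {
    "sensor-a": ["sensor-b", "gateway"],
    "sensor-b": ["sensor-a", "sensor-c"],
    "sensor-c": ["sensor-b"],
    "gateway":  ["sensor-a"],
}

def flood(mesh, origin):
    """Deliver a message device-to-device by flooding; returns reach order."""
    seen, order = {origin}, [origin]
    queue = deque([origin])
    while queue:
        node = queue.popleft()
        for peer in mesh[node]:
            if peer not in seen:
                seen.add(peer)
                order.append(peer)
                queue.append(peer)
    return order

print(flood(mesh, "sensor-c"))
# ['sensor-c', 'sensor-b', 'sensor-a', 'gateway']
```

Real mesh protocols add deduplication, TTLs and routing tables on top, but the hop-by-hop relay with no coordinator is the essence.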

The Reference Architecture of IoT has seven layers (because 7 is a lucky number?).
5_7layers

Conclusion

We are building new tools that we will use to build our future. We're doing it through digitization of the World. Everything physical becomes connected and reflected in its digital representation. Don't overfocus on Software; think about Hardware. Don't overfocus on Hardware; think about Bio. Expect the convergence of software-hardware-bio as the most stable and eco-friendly foundation for those 50 billion devices by 2020.

Recall Peter Thiel and today's business frustrations. With a digitized, connected World we will turn the negative direction within the current spiral cycle into a positive one. And of course we will continue with long-term acceleration. The future looks exciting.

PS.
Music for reading and thinking: from the near future, Blade Runner, Los Angeles 2019


Security in IoT

@ 4th Annual Nordic Cloud & Mobile Security Forum in Stockholm

 

Internet of Things
as I understand it

IoT emerges at the intersection of Semiconductors, Telecoms, Big Data and their laws. Moore's Law for Semiconductors, observed as a 60% annual computing power increase. Nielsen's Law for Telecoms, observed as a 50% annual network bandwidth increase. Metcalfe's Law for networks, observed as the value of a network being proportional to the squared number of connected nodes (humans and machines, many-to-many). The Law of Large Numbers, observed as known average probabilities for everything, so that you don't need statistics anymore. On a Venn diagram IoT looks smaller than any of its three foundations – Semiconductors, Telecoms and Big Data – but in reality IoT is much bigger: it is the digitization and augmentation of our physical world, in both business and lifestyle.

1_IoT

How do people recognize IoT? Probably some see only one web, some see another web, others see a few webs. There are six well-known webs: Near, Hear, Far, Weird, B2B, D2D [aka M2M]. Near is the laptop, the PC. Hear is the smartphone, smartwatch, armband, wristband, chestband, Google Glass, shoes with some electronics. Far is the TV, kiosk, projection surface. Weird is the voice and gesture interface to Near and Far, with potential new features emerging. B2B is app-to-app or service-to-service. D2D is device-to-device or machine-to-machine.

People used to sit in front of the computer; now we sit within a big computer. In 3,000 days there will be a super machine, let's call it One, according to Kevin Kelly. Its operating system is the web. One identifies and encodes everything. All screens look into One. One can read it all, all data formats from all data sources. To share is to gain – yep, the sharing economy. No bits live outside of One. One is us.

2_6webs

Where we are today
or five waves of IoT

Today we are at the Identification of everything, especially visually, and the Miniaturization of everything, especially with wearables and M2M. High hopes rest on visual identification and recognition. On the one hand, ubiquitous identification is simply needed. On the other hand, visual recognition and classification is probably the way to security in IoT. Instead of enforcing tools or rules, there are policies and some control of how those policies are applied. The rationale is straightforward: technologies change too fast, hence to build something lasting, you should build policies. Policies are empowered by some technology, but remain agnostic of other technologies.

3_5waves

The fifth wave is the augmentation of life with software and hardware...

Who is IoT today? Let's take Uber. Today it is not IoT. In several years, with self-driving cars, it will be. Tim O'Reilly perfectly described IoT as an ecosystem of things and humans. Below is a comparison, with a significantly extended outlook for tomorrow.

4_uber

It is a great step towards personalized experience that Uber linked Spotify to your cab, so that you experience your individual stage in any Uber car. More about personal experience in my previous post Consumerism via IoT, delivered in Munich.

IoT Reference Architecture
or magic of seven continues

Well, high-level mind-washing stuff is interesting, but is there a canonical architecture for IoT? What could I touch as an engineer? There is a reference architecture [revealed several weeks ago by Cisco, Intel and others], consisting of seven layers, shown below:

5_7layers

Notice that the upper part is Information Technology, which is non-real-time and must be personalized. The lower part is Operational Technology, which is real-time or near-real-time, local and geo-spread. The central part is Cloud-aware; it is IT, centralized with strategic geo-distribution, with data centers at primary internet hubs and user locations.

From an infosec point of view, the top level is broken, i.e. people are broken. They continue to do stupid things, they are lazy, so it's not rational to try to improve people. They will drive you crazy with BYOD, BYOA and BYOT (bring your own device/app/technology). It is better to invest in technologies which are secure by design. Each architectural layer has its own technological & security standards, reinforced by industry standards. Really? Yes for the upper part, and not so obviously for the lower...

Pay attention to the lower part, from Edge Computing downwards. It is blurred technology as of today; it could be called Fog. Anyway, Cisco calls it Fog. The Fog perfectly reflects the closest cloud to the ground; it encapsulates plenty of computing, storage and networking functionality. Fog provides localization and context awareness with low latency. Cloud provides global centralization, probably with some latency and less context. Experience on top of Cloud & Fog should provide profiling and personalization, a personal UX. The World is flat. The World is not flat. It depends on which layer of IoT you are on now.

Edge of computing
or computing at the Edge

Data grows so fast that in many scenarios it simply can't be moved to the Cloud for intelligence; hence BI comes to the Data. Big Data has big gravity: it attracts apps and services to itself. And hackers too. Gathering, filtering, normalizing and accumulating data at the location, or elsewhere outside the cloud, is called Edge Computing. It is often embedded programming of single-board computers or other hardware (controllers, Arduino, Raspberry Pi, Tessel.io, or smartphones when much computing power is required).
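
What "filtering and reducing at the edge" looks like in miniature (a sketch; the thresholds and readings are made up):

```python
# Edge-side data reduction: instead of shipping every raw sensor
# reading to the Cloud, forward only an aggregate plus any anomalies.
def reduce_at_edge(readings, low=10.0, high=40.0):
    anomalies = [r for r in readings if not (low <= r <= high)]
    summary = {
        "count": len(readings),
        "mean": sum(readings) / len(readings),
        "min": min(readings),
        "max": max(readings),
    }
    return summary, anomalies  # this is all that leaves the device

raw = [21.0, 21.5, 22.0, 99.9, 21.7, 21.4]  # one raw spike
summary, anomalies = reduce_at_edge(raw)
print(summary, anomalies)  # 6 raw values reduced to 4 numbers + 1 anomaly
```

Scale the window from 6 readings to millions and the same shape of program is what keeps the network (and the Cloud bill) from exploding.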

6_fog0

 

Fog Computing
or cloud @ data sources

Fog Computing is a virtualized distributed platform that provides computing, storage and networking services between devices and the cloud. Fog Computing is widespread, decentralized, interconnected. Fog Computing is location-aware, real-time/near-real-time, geo-spread, large-scale, multi-node, heterogeneous. Check out http://www.slideshare.net/MichaelEnescu/michael-enescu-cloud-io-t-at-ieee

6_fog1

The Fog is hot for infosec, because plenty of logic and data will sit outside of the cloud, outside of the office, somewhere in the field... very vulnerable, because of the immaturity of IoT technologies at that low level.

Secure Fog Fabric
or security by design

How do we find or build technologies for Fog Computing that are secure by design? That would live quite long, like TCP/IP :) Is it possible? Do some candidate technologies exist already? Potentially they should be built on top of proven open-source tools & technologies, to keep trust and credibility. It all must synergize at a large collaboration scale to break through with a proper tech fabric. So what do we have today? Fog is about computing, storage and networking, just a bit different from the same stuff in the cloud or in the office.

Computing. Which computing is secure, transactional and distributed? And could fit onto a Raspberry Pi? Ever thought about Bitcoin? Ha! Bitcoin's blockchain is exactly such a secure transactional distributed engine, even a platform. Instead of computing numbers for encryption and mining Bitcoins, you could do a more useful computing job. The technology has all the necessary features included by design. Temporary and secure relations are established between smartphones, gadgets and devices, and transactions happen. Check out the blockchain details.
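
The core trick that makes a blockchain tamper-evident fits in a few lines (a toy sketch: no mining, no consensus, no network):

```python
import hashlib
import json

def block_hash(prev_hash, payload):
    # Each block's hash covers the previous block's hash, chaining them.
    return hashlib.sha256(json.dumps([prev_hash, payload]).encode()).hexdigest()

def make_block(prev_hash, payload):
    return {"prev": prev_hash, "payload": payload,
            "hash": block_hash(prev_hash, payload)}

def valid(chain):
    for i, b in enumerate(chain):
        prev = chain[i - 1]["hash"] if i else "genesis"
        if b["prev"] != prev or b["hash"] != block_hash(prev, b["payload"]):
            return False
    return True

chain = [make_block("genesis", "device A: temp=21.4")]
chain.append(make_block(chain[-1]["hash"], "device B: temp=22.0"))
print(valid(chain))                          # True
chain[0]["payload"] = "device A: temp=99.9"  # tamper with history
print(valid(chain))                          # False
```

Changing any past record invalidates every hash after it, which is why the structure is attractive for untrusted, decentralized Fog devices.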

Storage. Data sending & receiving. Which technology is distributed, efficient on low-bandwidth networks, reliable and proven? BitTorrent! BitTorrent is not for pirates, it is for Fog Computing. For mesh networks and efficient data exchange on many-to-many topologies, built over a P2P protocol. BitTorrent is good for video streaming too. Check out the BitTorrent details.
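
The property that makes BitTorrent safe among untrusted peers is per-piece hashing: publish the hashes once, then verify every piece independently, whoever delivered it. A minimal sketch (real torrents use SHA-1 over 16 KB+ pieces inside a bencoded metainfo file; this demo shrinks everything):

```python
import hashlib

PIECE = 4  # absurdly small piece size, just for the demo

def piece_hashes(data):
    # Split the payload into fixed-size pieces and hash each one.
    return [hashlib.sha256(data[i:i + PIECE]).hexdigest()
            for i in range(0, len(data), PIECE)]

def verify_piece(index, piece, hashes):
    # A piece from ANY untrusted peer can be checked in isolation.
    return hashlib.sha256(piece).hexdigest() == hashes[index]

data = b"sensor log 2015-03-01"
hashes = piece_hashes(data)
print(verify_piece(0, b"sens", hashes))  # True  -- genuine piece
print(verify_piece(0, b"XXXX", hashes))  # False -- corrupted/forged
```

That is exactly the shape of guarantee Fog storage needs: trust the small list of hashes, not the many devices serving the data.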

Identification. Well, maybe it's not identification of everything and everyone, but authentication and authorization are needed anyway, and needed right now. Do we have such a technology? Yes, it is Telehash! Good for mesh networks, based on JSON, enables secure messaging. Check out the Telehash details.

6_fog2

Fog Computing is a new field; we have to use applicable secure technologies there, or create new, better ones. It looks like it is going to be a hybrid: something applied, something invented. Check out the original idea from IBM Research for the original arguments and ideation.

Security for IoT

The proposal is to go ahead with the OWASP Top 10 for IoT. Just google for OWASP and a code like I10 or I8. You will get a page with recommendations on how to secure a certain aspect of IoT. The list of ten doesn't match the seven layers of the reference architecture precisely, though some relevance is obvious. Some layers are matched. Some security recommendations are cross-cutting, e.g. Privacy.

7_OWASP

For Fog Computing, pay attention to I2, I3, I4, I7, I9 and I10. All those recommendations can be googled by those names, though they are slightly different on the OWASP site. Below is a list of hyperlinks for your convenience. Enjoy!

I1 Insecure Web Interface
I2 Insufficient Authentication/Authorization
I3 Insecure Network Services
I4 Lack of Transport Encryption
I5 Privacy Concerns
I6 Insecure Cloud Interface
I7 Insecure Mobile Interface
I8 Insufficient Security Configurability
I9 Insecure Software/Firmware Updates
I10 Poor Physical Security

More about the Internet of Things, especially from the user's point of view, can be found in my recent post Consumerism via IoT.


A Story behind IoE @ THINGS EXPO

This post is related to the published visuals from my Internet of Everything session at THINGS EXPO in June 2014 in New York City. The story is relevant to the visuals, but there is no firm affinity to particular imagery. Now the story is more like a standalone entity.

How many things are connected now?

Guess how many things (devices & humans) are connected to the Internet? Guess who knows? The one who produces the routers that move your IP packets across the global web – Cisco. Just navigate to http://newsroom.cisco.com/ioe and check the counter in the top right corner. The counter doesn't look beautiful, but it's live, it works, and I hope it will continue to work and report the approximate number of connected things within the Internet. Cisco predicts that by 2020 the Internet of Everything has the potential to connect 50 billion things. You could check for yourself whether the counter is already tuned to show 50,000,000,000 on the 1st of January 2020...

Internet of Everything is next globalization

Good old globalization was already described in The World is Flat. With the rise of smartphones with local sensors (GPS, Bluetooth, Wi-Fi), the flatness of the world has been challenged. Locality unflattened the world. New business models emerged as "the power of local". The picture got mixed: on one hand we see the same burgers, Coca-Cola and blue jeans everywhere, consumed by many; on the other hand we already consume services tailored to locality. Even hard goods are tailored to locality, such as cars for Alaska vs. Florida. Furthermore, McDonald's proposes a locally augmented/extended menu, and Coca-Cola wraps its bottles with locally meaningful images.

Location itself is insufficient for the next big shift in business and life. Context is the breakthrough personalizer. And that personal experience is achievable via more & smaller electronics, broadband networks without the roaming burden, and analytics from Big Data. The new globalization is all about personal experience, everywhere for everyone.

Experience economy

Today you have to take your commodities, together with made goods and services, bring it all onto the stage and stage a personal experience for the client. It is called the Experience Economy. Nowadays clients/users want experiences. Repeatable experiences, like in Starbucks or a lobster restaurant or a soccer stadium or a taxi ride. I already have a post on the Transformation of Consumption. The healthcare industry is one of the early adopters of IoT, hence it deserves a separate mention: there is a post on the Next Five Years of Healthcare.

So you have to get prepared for higher prices... That is the cost of staging a personal experience. A very differentiated offering at a premium price. That's the economic evolution. Just stick to it and think how to fit in there with your stuff. Augment your business models correspondingly. Allocate hundreds of MB (or soon GB) per user profile. You will need to store a lot about everybody to be able to personalize.
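
A quick sizing sketch for that storage claim (the user count and per-profile size are illustrative assumptions):

```python
def profile_storage_tb(users, mb_per_profile):
    # Total profile storage in terabytes (1 TB = 1e6 MB here).
    return users * mb_per_profile / 1e6

# Say a service has 10 million users with 200 MB profiles each:
print(f"{profile_storage_tb(10_000_000, 200):,.0f} TB")  # 2,000 TB = 2 PB
```

Hundreds of MB per user quietly becomes petabytes at consumer scale, so the profile store is an infrastructure decision, not an afterthought.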

Remember that it's not all about the consumer. There are many things around the consumer. They are part of the context, service, ecosystem. Count on them as well. People use those helper things [machines, software] to improve something in a business process or in lifestyle: cost, quality, time or emotions. Whatever it is, the interaction between people, and between people and machines, is crucial for proper abstraction and design for the new economy.

Six Webs by Bill Joy

Creator of Berkeley Unix, creator of the vi editor, co-founder of Sun Microsystems, partner at KPCB – Bill Joy – outlined six levels of human-human, human-machine and machine-machine interaction. That was about 20 years ago.

  • Hear – human & intimate device like phone, watch, glasses.
  • Near – human & personal while less intimate device like laptop, car infotainment.
  • Far – human & remote machines like TV panels, projections, kiosks.
  • Weird – human-machine via voice & gesture interfaces.
  • B2B – machine-machine at high level, via apps & services.
  • D2D – machine-machine as device to device, mesh networks.

About 10 years ago Bill Joy revisited the Six Webs. He pointed to "The Hear Web" as the most promising and exciting for innovation.

“The Hear Web” is anatomical graphene

The human body has been anatomically the same for hundreds of years. Hence the ergonomics of wearables and handhelds is predefined. Bracelets, wristwatches and armbands are the gadgets we can wear on our arms for now. The difference is in the technology. Earlier it was mechanical, now it is electrical.

Vitruvian

We are still not there with human augmentation to speak about under-skin chips... but that stuff is being tested already, on dogs & cats. A microchip with information about rabies vaccination is put under the skin. Humans also pioneer some things, but it is still not mainstream enough to talk much about.

For sure "The Hear Web" was a breakthrough in recent years. The evolution of smartphones was amazing. The emergence of wrist-sized gadgets was pleasant. We have yet to get clarity on what will happen with glasses. Google experiments a lot, but there is a long way to go until the gadget is polished. That's why Google experiments with contact lenses. Because Glass still looks awkward...

The brick design of the touch smartphone is not the true final one. I've figured out a significant issue with the iPhone design. LG is experimenting with the bendable Flex, but that's probably not a significantly better option. High hopes are on graphene. Nokia sold its plastic phone business to Microsoft, because Nokia got a multi-billion grant to research graphene wearables. Graphene conducts electricity well, and is highly durable, flexible and transparent. It is much better for the new generation of anatomically friendly wearables.

BTW, there will probably be no space for full-blown HTML5/CSS3/JavaScript browsers on those new gadgets. Hence forget about SaaS and think about S+S. Or client-server, which is called client-cloud nowadays. The programming language could be JavaScript, as it already works on small hardware, without fat browsers running spreadsheets. Check out Tessel. The pathway for the current medium between gadgets & clouds is: smartphone –> Raspberry Pi –> Arduino.

D2D

D2D stands for Device-to-Device. There must be standards. High hopes are on Qualcomm. They are a respected chipset & patent maker. In recent years they have proposed AllJoyn – an open source approach for connecting things. All common functionality, such as discovery/onboarding, Wi-Fi comms and data streaming, is to be standardized and adopted by the developer community.

The AllSeen Alliance is an organization of supporters of the open source initiative for IoT. It is good to see names like LG, Sharp, Haier, Panasonic and Technicolor (Thomson) there as premier members, and Cisco, Symantec and HTC as community members. And it is really nice to see one of the exemplars of Wikinomics – Local Motors!

For sure Google will try to push Android onto as many devices as possible, but Google must understand that they are players in plastic gadgets. It's better to invest money into hardware & graphene companies and support the alliance with money and authority. IoT needs standards, especially at the D2D/M2M level.

How to design for new webs?

If you know how, then go ahead. Otherwise, design for personal experience. The Internet of Everything includes semiconductors, telecoms and analytics from Big Data.

IoT

 

Assuming you are in the software business, let semiconductors continue with Moore's Law, let telecoms continue with Metcalfe's Law, while you concentrate on Big Data to unlock the analytics potential, for context handling, for staging personal experience. Just consider that Metcalfe's Law can be spread over human users and machines/devices alike.
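
Spreading Metcalfe's Law over devices is worth a quick number (the node counts are pure illustration):

```python
def metcalfe_value(nodes):
    # Metcalfe's Law: network value grows ~quadratically with nodes.
    return nodes * nodes

humans = 1000
devices_per_human = 5  # assumption: each user brings a few connected things

v_humans_only = metcalfe_value(humans)
v_with_devices = metcalfe_value(humans * (1 + devices_per_human))
print(v_with_devices / v_humans_only)  # 36.0 -- 6x the nodes, 36x the value
```

That quadratic kick is why counting connected things, not just users, changes the economics of a network.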

Start the design of the Six Graphs of Big Data from the Five Sources of Big Data. The relation between graphs and sources is many-to-many. Blending the graphs is not trivial. Look into Big Data Graphs Revisited. A conceptualization of the analytics pipeline is available in Advanced Analytics, Part I. The most interesting graphs are Intention & Consumption, because the first is a plan and the second is a fact. When they begin to match, your solution begins to rock. Write it down and follow it: data is the next currency. 23andme and Uber log so much data beyond the cap of the service you see and consume...

Where are we today?

There are clearly five waves of IoT. Some of those waves overlap, especially ubiquitous identification of people and things, indoors and outdoors. If an object is big enough to be labeled with an RFID tag or a visual barcode, then it is easy. But small objects are labeled neither with a radio chip nor with an optical code. No radio chip because it is not good money-wise; e.g. bottles and cans of beer are not labeled because it's too expensive per item. The pallets of beer bottles are labeled for sure, while a single bottle is not. There is no optical code either, so as not to spoil the design and brand of the label. Hence the need for an alternative identification – optical – via image recognition.

The third wave includes image recognition, which is not new, but is still tough today. Google has trained its Street View brain to recognize house numbers and car plates at such a high level of accuracy that it could crack captchas now. But you are not Google, and you will get 75-78% with OpenCV (properly tuned) and 79-80% with deep neural networks (if trained properly). Building the training set for deep learning is a PITA. You will need to go to each store & kiosk and take pictures of the beer bottles under different light conditions, rotations, distances etc. Some footage could be shot in the lab (like Amazon shoots products from 360 degrees), but plenty of the work is your own.
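To get a feel for why the training set is a PITA, here is a small sketch enumerating the capture plan for such a dataset; the concrete condition values (lightings, angles, distances) are assumptions for illustration, not a recipe:

```python
from itertools import product

# Illustrative capture plan for a recognition training set: every label must
# be shot under varied conditions. Condition values are made up for the sketch.
lightings = ["daylight", "store fluorescent", "kiosk incandescent", "dim"]
rotations = range(0, 360, 30)          # every 30 degrees around the bottle
distances = ["close-up", "shelf", "aisle"]

def capture_plan(labels):
    """All (label, lighting, rotation, distance) shots to be taken."""
    return [(label, light, angle, dist)
            for label, light, angle, dist
            in product(labels, lightings, rotations, distances)]

plan = capture_plan(["brand_A_bottle", "brand_B_can"])
# 2 labels * 4 lightings * 12 rotations * 3 distances = 288 photos
```

Even two products under modest variation means hundreds of photos; multiply by a real store assortment and the field work becomes the dominant cost.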

 

FiveWavesIoT

The fourth wave is about total digitization of the world; the newer world will then work with digital things via telepresence & teleoperations. Hopefully we will dispense with all those power adapters and wires by that time. “Software is eating the World”. All companies become software companies. Probably you are comfortable with digital music (both consuming and authoring), digital publishing, digital photos and digital movies. But you could have concerns about digital goods, when you pay for a 3D model and print it on a 3D printer. While the atomic structure of the printed goods is different, your concern is valid; but as soon as the atomic structure is identical to [or even better than] what the old original good has, your concern becomes pointless. Read more in Transformation of Consumption.

With 3D printing of hard goods it’s more or less understandable. Let’s switch to 3D printed food. Modern Meadow printed a burger a year ago. It cost $300K, approximately as much as Sergey Brin (Googler) invested into Modern Meadow. Surprised? Think about printing the newest or personal vaccines and so forth…

Who is IoT? Who isn’t?

Is Uber IoT or not? With human drivers it is not. When human-driven cabs are substituted by self-driving cabs, then Uber will become IoT. There is an excellent post by Tim O’Reilly about the Internet of Things & Humans. Box.com CEO Aaron Levie tweeted: “Uber is a $3.5 billion lesson in building for how the world *should* work instead of optimizing for how the world *does* work.” IoT is not just more data [though RedHat said it is]; IoT is how this world should work.

How to become IoT?

  • Yesterday it was sensors + networks + actuators.
  • Today it becomes sensors + networks + actuators + cloud + local + UX.
  • Tomorrow it should be sensors + networks + actuators + cloud + local + social + interest + intention + consumption + experience + intelligence.
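The three formulas above can be written as sets to see exactly what each era adds (note that tomorrow’s list swaps UX for experience):

```python
# The three "formulas" above as sets, to compare what each step adds.
yesterday = {"sensors", "networks", "actuators"}
today = yesterday | {"cloud", "local", "UX"}
tomorrow = yesterday | {"cloud", "local", "social", "interest",
                        "intention", "consumption", "experience",
                        "intelligence"}

added_today = today - yesterday        # what the current wave brought
added_tomorrow = tomorrow - today      # what the next wave must bring
```

The diff makes the point of the section: the hardware stack stays, while the future layers are all about people and context.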

Next 3000 days of the Web

It was a vision for 5000 days, but today only 3000 days are left. Check it out.

Next 6000 days of the Web

Check out There will be no End of the World. We will build such a big and smart web that we as humans will prepare the world for the phase shift. Our minds are limited. Our bodies are weird: they survive in a very narrow temperature range; they are afraid of radiation and gravity. We will not be able to go into deep space, to continue the reverse engineering of this World. But we are capable of creating the foundation for a smarter intelligence, which could get there and figure it out. Probably we would not even grasp what it was… But today the IoT pathway brings better experiences, more value, more money and more emotions.

PS.

Let’s check the Cisco Internet of Things counter. ~300,000 new things have connected to the Internet while I wrote this story.


Big Data Graphs Revisited

Some time ago I outlined the Six Graphs of Big Data as a pathway to the individual user experience. Then I did the same for the Five Sources of Big Data. But what lies between them remained untold. Today I am going to give my vision of how different data sources allow building different data graphs. To make it less dependent on those older posts, let’s start from a real-life situation and business needs, then bind them to data streams and data graphs.

 

Context is King

The same data in different contexts has different value. When you are late for a flight and you get a message that your flight was delayed, it is valuable – in comparison to receiving the same message two days ahead, when you are not late at all. Such a message might be useless if you are not traveling at all, but the airline has your contacts and sends the message about a flight you don’t care about. There was only one dimension – time to flight. That was a friendly description of context, to warm you up.

Some professional contexts are difficult for the unprepared to grasp. Let’s take a situation from the office of some corporation. A department manager intensified his email communication with the CFO, started to use the phone more frequently (also calling the CFO and other department managers), went to the CFO’s office multiple times, skipped a few lunches, and stayed at work till 10PM for several days. Here we have multiple dimensions (five), which could be analyzed together to define the context. Most probably that department manager and the CFO were doing some budgeting: planning or analysis/reporting. Knowing that, it is possible to build and deliver individual prescriptive analytics to the department manager, focused on helping to handle the budget. Even if that department has other escalated issues, such as the release schedule, the severity of the budgeting is much higher right now, hence the context belongs to budgeting for now.

By having data streams for each dimension we are capable of building the run-time individual/personal context. The data streams for that department manager were essentially time series: events with attributes. Email is a dimension we are tracking; peers, timestamps, type of the letter, size of the letter, types and number of attachments are attributes. Phone is a dimension; names, times, durations, number of people etc. are attributes. Location is a dimension; own office, CFO’s office, lunch place, timestamps, durations, sequence are attributes. And so on. We have defined potentially useful data streams. It is possible to build an exclusive context out of them, from their dynamics and patterns. That was the more complicated description of context.
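The event model described above can be sketched in a few lines; the field and attribute names are illustrative assumptions, not a schema proposal:

```python
from dataclasses import dataclass, field

# A sketch of the event model: each data stream is a time series of events;
# "dimension" names the stream, "attrs" carries its attributes.

@dataclass
class Event:
    dimension: str            # "email", "phone", "location", ...
    timestamp: float          # epoch seconds
    attrs: dict = field(default_factory=dict)

def stream(events, dimension):
    """Pull one dimension's stream out of the merged event log, in time order."""
    return sorted((e for e in events if e.dimension == dimension),
                  key=lambda e: e.timestamp)

log = [
    Event("email", 1000.0, {"peer": "CFO", "attachments": 2}),
    Event("location", 1010.0, {"place": "CFO office", "duration_min": 40}),
    Event("email", 1200.0, {"peer": "CFO", "attachments": 1}),
]
emails = stream(log, "email")
```

A single merged log with per-dimension extraction keeps the sensors simple: each sensor only appends events, and the context-building layer slices by dimension later.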

 

Interpreting Context

Well, well, but how do we interpret those data streams, how do we interpret the context? What we have: multiple data streams. What we need: to identify the run-time context. So, the pipeline is straightforward.

First, we have to log the data from each dimension of interest. It could be done via software or hardware sensors. Software sensors are usually plugins, but could be more sophisticated, such as object recognition from surveillance cameras. Hardware sensors are GPS, Wi-Fi, turnstiles. There could be combinations, like a check-in somewhere. So, a lot can be done with software sensors alone. For the department manager case, it’s a plugin to Exchange Server or Outlook to listen to emails, a plugin to the ATS to listen to phone calls and so on.

Second, it’s time for low-level analysis of the data. It’s Statistics, then Data Science. Brute force to establish what is credible and what is not, then looking for emerging patterns. The bottleneck of Data Science is the human factor: somebody has to look at the patterns to decrease false positives and false negatives. This step is more about discovery, probing and preparing the foundation for the more intelligent next step. More or less everything is clear with this step. Businesses have already started to bring up their data science teams, but they still don’t have enough data for the science :)

Third, it’s Data Intelligence. As MS said some time ago, “Data Intelligence is creating the path from data to information to knowledge”. This should be described in more detail, to avoid ambiguity. From Techopedia: “Data intelligence is the analysis of various forms of data in such a way that it can be used by companies to expand their services or investments. Data intelligence can also refer to companies’ use of internal data to analyze their own operations or workforce to make better decisions in the future. Business performance, data mining, online analytics, and event processing are all types of data that companies gather and use for data intelligence purposes.” Some data models need to be designed, calibrated and used at this level. Those models should work almost in real time.

Fourth is Business Intelligence. Probably the first step familiar to the reader :) But we look further here: past data and real-time data meet together. Past data is individual to the business entity. Real-time data is individual to the person. Of course there could be something in the middle. Go find a comparison between statistics, data science and business intelligence.

Fifth, finally, it is Analytics. Here we are within the individual context of the person. There should be a snapshot of ‘AS-IS’ and recommendations of ‘TODO’; if the individual wants, there should be reasoning of ‘WHY’ and ‘HOW’. I have described it in detail in previous posts. The final destination is the individual context. I’ve described it in the series of Advanced Analytics posts; here is the link to Part I.

Data Streams

Data streams come from data sources. The same source could produce multiple streams. Some ideas are below; the list is unordered. Remember that special Data Intelligence must be put on top of the data from those streams.

Indoor positioning via Wi-Fi hotspots contributes to the mobile/mobility/motion data stream: where the person spent most time (at the working place, in meeting rooms, in the kitchen, in the smoking room), when the person changed location frequently, plus directions, durations, sequence etc.

Corporate communication via email, phone, chat, meeting rooms, peer to peer, source control, process tools, productivity tools. It all makes sense for analysis, e.g. because at the time of a release there should be no creation of new user stories. Or consider the volume and frequency of check-ins to source control…

Biometric wearable gadgets like BodyMedia log the intensity of mental (or physical) work. If there is a low calorie burn during long bad meetings, that could be revealed. If there is not enough physical workload, then for the sake of better emotional productivity it could be suggested to take a walk.

 

Data Graphs from Data Streams

Ok, but how do we build something tangible from all those data streams? The relation between Data Graphs and Data Streams is many-to-many. Look, it is possible to build the Mobile Graph from very different data sources, such as face recognition from a camera, authentication at an access point, IP address, GPS, Wi-Fi, Bluetooth, check-ins, posts etc. Hence when designing the data streams for some graph, you should think in terms of one-to-many relations: one graph can use multiple data streams from the corresponding data sources.

To bring more clarity into the relations between graphs and streams, here is another example: the Intention Graph. How could we build the Intention Graph? Somebody’s intentions could be totally different in different contexts. Is it a week day or the weekend? Is the person static in the office or driving a car? Who are the peers the person has communicated with a lot recently? What is the type of communication? What is the time of day? What are the person’s interests? What were the previous intentions? As you see, there could be data logged from machines, devices, comms, people, profiles etc. As a result we will build the Intention Graph and will be able to predict or prescribe what to do next.

 

Context from Data Graphs

Finally, having multiple data graphs, we can work on the individual context, the personal UX. Technically, it is hardly possible to deal with all those graphs at once. It is not possible to simply overlay two graphs; each graph has its own modality (as one PhD taught me). Hence you must split the work and operate on a single modality. Select the graph most important for your needs and use it as a skeleton. Convert relations from the other graphs into attributes you can apply to the primary graph. Build an intelligence model for the single-modality graph with plenty of attributes from the other graphs. Obtain the personal/individual UX at the end.
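A minimal sketch of this skeleton-plus-attributes approach, with made-up social and interest data; the names and the flat-dict graph representation are illustrative assumptions:

```python
# Pick one graph (here a person-to-person social graph) as the skeleton,
# then fold relations from another graph (an interest graph) down to node
# attributes, instead of trying to overlay two graphs of different modality.

social_edges = [("ann", "bob"), ("bob", "carol")]            # primary modality
interest_edges = [("ann", "cycling"), ("bob", "cycling"),    # other modality
                  ("carol", "chess")]

def build_skeleton(edges):
    graph = {}
    for a, b in edges:
        graph.setdefault(a, {"peers": set(), "interests": set()})["peers"].add(b)
        graph.setdefault(b, {"peers": set(), "interests": set()})["peers"].add(a)
    return graph

def fold_interests(graph, interests):
    for person, topic in interests:
        if person in graph:                # a relation becomes an attribute
            graph[person]["interests"].add(topic)
    return graph

g = fold_interests(build_skeleton(social_edges), interest_edges)
```

After folding, every model runs over a single graph whose nodes are enriched with attributes borrowed from the other modalities, which is exactly the "skeleton" trick described above.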


On the Internet of Everything

Five Waves of the “Internet of Things” on its Way of Transforming into “Internet of Everything”

http://united.softserveinc.com/blogs/software-engineering/may-2014/internet-of-things-transforming/


Advanced Analytics, Part V

This post is about the details of visualizing information for executives and operational managers on the mobile front-end: what is descriptive, what is predictive, what is prescriptive, how it looks, and why. The scope of this post is the cap of the information pyramid. Even if I start with something detailed, I still remain at the very top, at the level of the most important information, without details on the underlying data. Previous posts contain the introduction (Part I) and the pathway (Part II) of the information to the user, especially executives.

Perception pipeline

The user’s perception pipeline is: RECOGNITION –> QUALIFICATION –> QUANTIFICATION –> OPTIMIZATION. During recognition the user just grasps the entire thing, starts to take it as a whole; ideally we should deliver personal experience, so the information will be valuable but probably delivered slightly differently from the previous context. More on personal experience in the chapter below. As soon as the user grasps/recognizes, she is capable of classifying or qualifying by commonality. The user operates with categories and scores within those categories. The scores are qualitative and very friendly for understanding, such as poor, so-so, good, great. Then the user is ready to reduce subjectivity and turn to numeric measurements/scoring. That is quantification: converting good & great into numbers (absolute and relative). As soon as the user is all set with numeric measurements, she is capable of improving/optimizing the business or process or whatever the subject is.
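The qualification-to-quantification step can be sketched in a few lines; the numeric scale is an assumption, chosen only to show the conversion:

```python
# Map the friendly qualitative scores onto numbers (the scale is made up).
QUAL_TO_SCORE = {"poor": 0.25, "so-so": 0.5, "good": 0.75, "great": 1.0}

def quantify(qualitative_scores):
    """Turn category -> qualitative label into category -> number."""
    return {category: QUAL_TO_SCORE[label]
            for category, label in qualitative_scores.items()}

scores = quantify({"delivery": "good", "budget": "poor", "morale": "great"})
```

Once the scores are numeric, the optimization step has something to minimize or maximize; the qualitative layer remains the one shown to the user.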

Default screen

What should be rendered on the default screen? I bet it is a combination of the descriptive, predictive and prescriptive, with a large portion of space dedicated to the descriptive. Why is descriptive so important? Because until we build AI, the trust and confidence in those computer-generated suggestions is not at the right level. That’s why we have to show the ‘AS IS’ picture, to deliver how everything works and what happens without any decorations or translations. If we deliver such a snapshot of the business/process/unit/etc., the issue of trust between human and machine might be resolved. We used to believe that machines are pretty good at tracking tons of measurements, so let them track it and visualize it.

There must be an attempt by the machines to advise the human user. It could be done in the form of a personalized sentence, on the same screen, along with the descriptive analytics. So putting some TODOs there is absolutely OK, while believing that the user will trust and follow them is naive. The user will definitely dig into the details of why such a prescription is proposed. It’s normal that the user is curious about the root-cause chain. Hence be ready to provide almost the same information with additional details on the reasons/roots, trends/predictions, classifications & pattern recognition within KPI control charts, and additional details on the prescriptions. If we visualize [on top of the inverted pyramid] with a text message and a stack of vital signs, then we have to prepare an additional screen to answer that list of mentioned details. We will still remain at the top of the pyramid.

default_screen

 

Next screen

If we got ‘AS IS’ then there must be ‘TO BE’, at least for symmetry :) The user starts on the default screen (recognition and qualification) and continues to the next screen (qualification and quantification). The next screen should have more details. What kind of information would be logically relevant for the user who got the default screen and looks for more? Or better to say – looks for ‘why’? Maybe it’s time to list them as bullets for more clarity:

  • dynamic pattern recognition (with a highlight on the corresponding chart or charts) of what is going on; it could be one of the seven performance signals, or one of the three essential signals
  • highlighting the area of the significant event [dynamic pattern/signal] on the other charts, to juxtapose what is going on there and to foster thinking on potential relations; it’s still the human who thinks, while the machine assists
  • parameters & decorations for the same control charts, such as min/max/avg values, identification of quarters or months or sprints or weeks
  • the normal range (also applicable to the default screen), or even several ranges, because they could differ across quarters or years
  • a trend line, using the most applicable method for approximation/prediction of future values; e.g. a release forecast
  • clickable parts for digging from relative values/charts into absolute values/charts for even more detailed analysis; from qualitative to quantitative
  • your ideas here

signal

Recognition of signals as dynamic patterns is identification of the roots/reasons for something. Predictions and prescriptions could be driven by those signals. Prescriptions could be generic, but it’s better to make them personalized. Explanations could be designed for the personal needs/perception/experience.
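One classic way to recognize such a signal in a KPI control chart is a run rule: a long enough run of consecutive points on one side of the center line indicates a shift. A sketch, with the run length of 8 as a tunable assumption in the spirit of the Western Electric-style rules:

```python
# Detect a shift signal in a control chart: a run of `run_length` consecutive
# points strictly on one side of the center line. Run length 8 follows the
# classic run rule; treat it as a tunable threshold.

def shift_signal(series, center, run_length=8):
    """Return the index where the qualifying run completes, or None."""
    run, side = 0, 0
    for i, value in enumerate(series):
        current = 1 if value > center else (-1 if value < center else 0)
        if current != 0 and current == side:
            run += 1
        else:
            side, run = current, (1 if current != 0 else 0)
        if run >= run_length:
            return i
    return None

kpi = [10, 11, 9, 10, 12, 13, 12, 14, 13, 12, 13, 14, 12, 13]
# with a center line of 10, the tail of this series sits above it long enough
```

The returned index is exactly the spot to highlight on the chart, and the same index can drive the cross-chart juxtaposition described in the bullets above.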

 

Personal experience

We consume information in various contexts. If it is the release of a project or product, then the context is different from the start of sprint zero. If it’s a merger & acquisition, then the expected information is different from a quarterly review. It all depends on the user (from the CEO to CxOs to VPs to middle management to team management and leadership), on the activity, and on the device (Moto 360 or iPhone or iPad or car or TV or laptop). It matters where the user is physically; location does matter. Empathy does matter. But how to achieve it?

We could build the user’s interests from social networks and from interactions with other services. Interests are relatively static in time. It is possible to figure out intentions. Intentions are dynamic and useful only while they are hot. Business intentions are observable from business comms. We could sense the intensity of communication between the user and the CFO and classify it as a context related to budgeting or budget review. If we use sensors on the corporate mail system (or mail clients), combined with GPS or Wi-Fi location sensors/services, or with a manual check-in somewhere, we could figure out that the user indeed intensified comms with the CFO and they worked together face-to-face. Having such a dynamic context, we are capable of delivering the information in that context.
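A rule-of-thumb sketch of that sensing logic; the thresholds, signal names and context labels are my assumptions for illustration, not a product spec:

```python
# Classify the dynamic context from two signals: email intensity with the CFO
# versus the user's baseline, and co-presence in the CFO's office.
# Thresholds (3x baseline, 30 minutes) are illustrative assumptions.

def classify_context(emails_to_cfo_today, baseline_per_day,
                     minutes_in_cfo_office):
    intensified = emails_to_cfo_today >= 3 * max(baseline_per_day, 1)
    co_located = minutes_in_cfo_office >= 30
    if intensified and co_located:
        return "budgeting"
    if intensified:
        return "finance-related"
    return "unknown"

context = classify_context(emails_to_cfo_today=12, baseline_per_day=2,
                           minutes_in_cfo_office=45)
```

In a real system the hand-written rule would be replaced by a model calibrated on the data streams, but the shape stays the same: several weak signals combined into one context label.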

The concept of personal experience (or personal UX) is similar to a braid (the hairstyle). Each graph of data helps to filter relevant data. Together those graphs allow locating the real-time context. Having such a personal context, we could build and deliver the most valuable information to the user. More details on how to handle the interest graph, intention graph, mobile graph, social graph, and on which sensors could bring the new data, are available in my older posts. So far I propose to personalize the text message for the default screen and the next screen, because it’s easier than vital signs, and it fits into wrist-sized gadgets like the Moto 360.


Advanced Analytics, Part I

I’m writing this post in very difficult times. Russia declared war on Ukraine and annexed Crimea, a significant part of Ukrainian territory. God bless Ukraine!

This post melts together multiple things, partially described in my previous posts. There will be references to the details by distinguished topics.

From Data to Information

We need information. Information is “knowledge communicated or received concerning a particular fact or circumstance“; the citation is from Wikipedia. The modern useful information technique [in business] is called analytics. There is descriptive analytics, an AS-IS snapshot of things. There is predictive analytics, or WHAT-WILL. Predictive analytics is already on Gartner’s Plateau of Productivity. Prescriptive analytics goes further, answering WHAT & WHY. Decision makers [in business and life] need information in the Inverted Pyramid manner: the most important information on top, then major facts & reasons, and so on downstairs…

But at the beginning we have data: tons of data bits generated by a wide variety of data sources. There are big volumes of classical enterprise data in ERPs, CRMs, legacy apps. That data is primarily relational, SQL friendly. There are big volumes of relatively new social data: public and private user profiles, user timelines in social networks, mixed content of text, imagery, video, locations, emotions, relations, statuses and so forth. There are growing volumes of machine data, from access control systems with turnstiles in the office or parking lot to M2M sensors on a transport fleet or quantified-self individuals. Social and machine data is not necessarily SQL friendly. Check out Five Sources of Big Data for more details.

Processing Pipeline

Everything starts from proper abstraction & design. Old-school methods still work, but modern methods unlock even more potential for creating information out of raw data. Abstraction [of the business models or life models] leads to the design of data models, which are often some kinds of graphs. It is absolutely normal to have multiple graphs within a solution/product. E.g. people relations are straightforwardly abstracted into a Social Graph, while machine data might be represented as Network Graphs or a Mobile Graph. There are other common abstractions, such as a Logistic Graph, a Recommendations Graph and so on. More details can be found in Six Graphs of Big Data.

The key concept of the processing can be abstracted as a funnel. On the left you’ve got raw data; you feed it into the funnel and get some kind of information on the right. This is depicted at a high level in the diagram.

funnel

What makes it Advanced?

An interesting question… Modern does not always mean advanced. What makes it advanced is another technology, related to the user experience: mobiles and wearables. As soon as predictive and prescriptive analytics is delivered in real time at your fingertips, it can be considered advanced.

There are several technological limitations and challenges. Let’s start with the mobiles and wearables. The biggest issue is the screen size. The entire visualization must be designed from scratch. Reuse from big screens does not work, despite our browsing of full-blown web sites on those small screens… The issue with wearables is related to their newness: nobody is yet aware enough of how to design for them. The paradigms will emerge as soon as the adoption rate starts to decelerate. Right now we are observing the boom of wearables. There is insight on wearables in Wearable Technology and Wearable Technology, Part II. A lot will change there!

The requirement of real-time or near-real-time information delivery assumes high-performance computing at the backend; some data massaging and pre-processing must be done in advance, and then the bits must be served out of memory. It is a client-cloud architecture, where the client is a mobile or wearable gadget, and the cloud is a backend with plenty of RAM holding ready-made bits. This is depicted in the diagram; read it from left to right.

funnel3

So what?

This is a new era of tools and technologies to enable and accelerate the processing pipeline from data to information in your pocket/hand/eyes. There is a lack of tools and frameworks to melt old and modern data together. Hadoop is doing well there, but things are not as smooth as install & run. There is a lack of data platform tools. There is a lack of integration and aggregation tools. Visualization is totally absent; there are still no good executive dashboards even for PC screens, not to mention smartphones. I will address those opportunities in more detail in future posts.


Mobile Programming is Commodity

This post is about why programming for smartphones and tablets is a commodity.

Mobiles are no longer a novelty.

Mobiles are substituting for PCs. As we programmed in VB and Delphi 15 years ago, the same way we program in Objective-C and Java today. The adoption rate of the cell phone as a technology (in the USA) is the fastest of all technologies, and the scale of adoption surpassed 80% in 2005. Smartphones are being adopted at the same pace, surpassing 35% in 2011, just a few years after the iPhone revolution happened in 2007. Go check out the evidence from the New York Times since 2008 for cell phones, the evidence from Technology Review since 2010 for smartphones, and more details from Harvard Business Review on accelerated technology adoption.

Visionaries look further. O’Reilly.

The list of hottest conferences by direction from visionary O’Reilly:

  • BigData
  • New Web
  • SW+HW
  • DevOps

BigData still matters, matching Gartner’s “peak of inflated expectations”. Strata, Strata Rx (Healthcare flavor), Strata Hadoop. http://strataconf.com/strata2014 Tap into the collective intelligence of the leading minds in data—decision makers using the power of big data to drive business strategy, and practitioners who collect, analyze, and manipulate data. Strata gives you the skills, tools, and technologies you need to make data work today—and the insights and visionary thinking O’Reilly is known for.

JavaScript got out of the web browser and penetrated all domains of programming. Expectations and progress for HTML5. Web 2.0 abandoned, Fluent created. Emerging technologies for the new Web Platform and the new SaaS. http://fluentconf.com/fluent2014 O’Reilly’s Fluent Conference was created to give developers working with JavaScript a place to gather and learn from each other. As JavaScript has become a significant tool for all kinds of development, there’s a lot of new information to wrap your head around. And the best way to learn is to spend time with people who are actually working with JavaScript and related technologies, inventing ways to apply its power, scalability, and platform independence to new products and services.

“The barriers between software and physical worlds are falling”. “Hardware startups are looking like the software startups of the previous digital age”. The Internet of Things has a longer cycle (according to Gartner’s hype cycle), but it is coming indeed: connected machines, machine-to-machine, smart machines, embedded programming, 3D printing and DIY to assemble the machines. Solid. http://solidcon.com/solid2014 The programmable world is creating disruptive innovation as profound as the Internet itself. As barriers blur between software and the manufacture of physical things, industries and individuals are scrambling to turn this disruption into opportunity.

DevOps & Performance is popular. Velocity. Most companies with outward-facing dynamic websites face the same challenges: pages must load quickly, infrastructure must scale efficiently, and sites and services must be reliable, without burning out the team or breaking the budget. Velocity is the best place on the planet for web ops and performance professionals like you to learn from your peers, exchange ideas with experts, and share best practices and lessons learned.

Open Source matters more and more. Open Source is about sharing partial IP for free, according to Wikinomics. OSCON. http://www.oscon.com/oscon2014 OSCON is where all of the pieces come together: developers, innovators, business people, and investors. In the early days, this trailblazing O’Reilly event was focused on changing mainstream business thinking and practices; today OSCON is about how the close partnership between business and the open source community is building the future. That future is everywhere you look.

Digitization of content continues. TOC.

Innovation in leadership and processes. Cultivate.

Visionaries look further. GigaOM.

The list of conferences by direction from GigaOM:

  • BigData
  • UX
  • IoT
  • Cloud

BigData. STRUCTURE DATA. http://events.gigaom.com/structuredata-2014/ From smarter cars to savvier healthcare, today’s data strategies are driving business in compelling new directions.

User Experience. ROADMAP. http://events.gigaom.com/roadmap-2013/ As data and connectivity shape our world, experience design is now as important as the technology itself. It covers (and will cover) ubiquitous UI, wearables and HCI with all those new smarter machines (3D printed & DIY & embedded programming).

Internet of Things. MOBILIZE. http://event.gigaom.com/mobilize/ Five years ago, Mobilize was the first conference of its kind to outline the future of mobility after Apple’s iPhone exploded onto the scene. We continue to track the hottest early adopters, the bold visionaries and those about to disrupt the ecosystem. We hope that you will join us at Mobilize and be the first in line to ride this next wave of innovation. This year we’ll cover: The internet of things and industrial internet; Mobile big data and new product alchemy; Wearable devices; BYOD and mobile security.

Cloud. STRUCTURE. http://event.gigaom.com/structure/ Structure 2013 focused on how real-time business needs are shaping IT architectures, hyper-distributed infrastructure and creating a cloud that will look completely different from everything that’s come before. Questions we answered at Structure 2013 included: Which architects are choosing open source solutions, and what are the advantages? Will to-the-minute cloud availability be an advantage for Azure? What are the lessons learned in building a customized enterprise PaaS? Where is there still space to innovate for next-generation leaders?

Conclusion.

To be a strong programmer today you have to be able to design and code for smartphones and tablets, just as your father and mother did 20 years ago for PCs and workstations. Mobile programming is shaped by the trends described in Mobile Trends for 2014.

To be a strong programmer tomorrow you have to tame the philosophy, technologies and tools of BigData (despite Gartner’s prediction of inflated expectations), Cloud, Embedded and the Internet of Things. It is much less Objective-C but probably still plenty of Java. It seems the future is better suited for Android developers. IoT is positioned last in the list because its adoption rate is significantly lower than for cell phones (after the 2000 dotcom burst).


Mobile Trends for 2014

MOBILE FIRST
S+S or CLIENT-CLOUD
WEARABLES
BYOD, BYOA, BYOT
PERSONAL EXPERIENCE
UBIQUITOUS UI
PERSONALIZED HEALTHCARE

 

http://united.softserveinc.com/blogs/mobility/december-2013/mobile-trends-2014/

http://united.softserveinc.com/blogs/mobility/december-2013/mobile-trends-2014-part2/