Smart Home, IoT, Data and Analytics – Reading List

Smart homes and the Internet of Things
A smart home is one in which the various electric and electronic appliances are wired up to a central control system so they can be switched on and off at certain times. Most homes already have a certain amount of “smartness,” because many appliances contain built-in sensors or electronic controllers. Virtually all modern washing machines have programmers that make them follow a distinct series of washes, rinses, and spins depending on how you set their various dials and knobs when you first switch on. If you have a natural-gas-powered central heating system, most likely you also have a thermostat on the wall that switches it on and off according to the room temperature, or an electronic programmer that activates it at certain times of day whether or not you’re in the house.
https://www.explainthatstuff.com/smart-home-automation.html

8 sensors to help you create a smart home
A list of sensors that you can deploy in your home to help keep it safe during a storm.
https://www.ibm.com/blogs/internet-of-things/sensors-smart-home/

Six Evolving Business Models for the Smart Home
Smart companies today understand that traditional models of one-time sell-through of products, or standalone products that cannot be monitored intelligently, provide little or no opportunity to create a relationship with consumers and establish brand loyalty. They also realize that in today’s market, their products can serve as revenue generators with the data they gather based upon a homeowner’s energy usage, security connectivity and/or home entertainment preferences.
https://www.iotforall.com/business-models-for-smart-home/

The anatomy of an IoT solution: Oil, data and the humble washing machine
A lot of people think of data as the new gold, but a better analogy is that data is the new oil. When oil comes out of the ground it is raw; it has intrinsic value, but until that oil is refined into petrol and diesel, its true value is not realized. Data from sensors is very similar. The data that comes from a sensor is raw; to gain insight from it, the data needs to be refined. Refining the data is at the heart of a successful Internet of Things project, which leads to business growth and transformation.
https://www.ibm.com/blogs/internet-of-things/washing-iot-solution/

The best smart home systems: Top ecosystems explained
Apple, Google and Amazon are the major players in the smart home arena now, with their smart speakers, ecosystems and voice assistants on hand to not only make controlling your connected tech easier, but to make home automation a doddle.
https://www.the-ambient.com/guides/smart-home-ecosystems-152

How data analytics is adding value in the smart home
Analytics can be expected to foster whole-home integration of various Internet of Things devices by increasing awareness across multiple facets of the home, from thermostats to door locks to refrigerators to solar panels. Having insight across the entire home can enable machine learning and artificial intelligence technologies to eventually create smarter, more intuitive systems that not only make consumers’ lives more convenient, but also play a role in smart cities and the larger digitalized grid.
https://www.smartcitiesdive.com/news/how-data-analytics-is-adding-value-in-the-smart-home/446406/

Navigating Smart Home Data Security Concerns
Smart homes are no longer science fiction. But many consumers are slow to adopt. They’re worried about smart home data security breaches—and rightly so.
https://www.iotforall.com/smart-home-data-security/

User Perceptions of Smart Home IoT Privacy
Smart home Internet of Things (IoT) devices are rapidly increasing in popularity, with more households including Internet-connected devices that continuously monitor user activities. A report that analyzes smart home owners’ reasons for purchasing IoT devices, their perceptions of smart home privacy risks, and the actions they take to protect their privacy from those external to the home who create, manage, track, or regulate IoT devices and/or their data.
https://arxiv.org/pdf/1802.08182.pdf


Data Infrastructure, Data Pipeline and Analytics – Reading List – Sep 27, 2016

Splunk vs ELK: The Log Management Tools Decision Making Guide
Much like promises made by politicians during an election campaign, production environments produce massive files filled with endless lines of text in the form of log files. Unlike election periods, they’re doing it all year round, with multiple GBs of unstructured plain-text data generated each day.
http://blog.takipi.com/splunk-vs-elk-the-log-management-tools-decision-making-guide/

Building a Modern Bank Backend
https://monzo.com/blog/2016/09/19/building-a-modern-bank-backend/

An awesome list of microservice-architecture-related principles and technologies.
https://github.com/mfornos/awesome-microservices#api-gateways--edge-services

Stream-based Architecture
Part of the Stream Architecture Book. An excellent overview on the topic.
https://www.mapr.com/ebooks/streaming-architecture/chapter-02-stream-based-architecture.html

The Hardest Part About Microservices: Your Data
Of the reasons we attempt a microservices architecture, chief among them is allowing your teams to work on different parts of the system at different speeds with minimal impact across teams. So we want teams to be autonomous, capable of making decisions about how best to implement and operate their services, and free to make changes as quickly as the business may desire. If we have our teams organized to do this, then the reflection in our systems architecture will begin to evolve into something that looks like microservices.
http://blog.christianposta.com/microservices/the-hardest-part-about-microservices-data/

New Ways to Discover and Use Alexa Skills
Alexa, Amazon’s cloud-based voice service, powers voice experiences on millions of devices, including Amazon Echo and Echo Dot, Amazon Tap, Amazon Fire TV devices, and devices like Triby that use the Alexa Voice Service. One year ago, Amazon opened up Alexa to developers, enabling you to build Alexa skills with the Alexa Skills Kit and integrate Alexa into your own products with the Alexa Voice Service.
http://www.allthingsdistributed.com/2016/06/new-ways-to-discover-and-use-alexa-skills.html

Happy Learning!

Data Infrastructure, Data Pipeline and Analytics – Reading List – Sep 20, 2016

Hadoop architectural overview
An excellent series of posts covering Hadoop and related components, and the key metrics to monitor in production.
https://www.datadoghq.com/blog/hadoop-architecture-overview/

Surviving and Thriving in a Hybrid Data Management World
The vast majority of our customers who are moving to cloud applications also have a significant current investment in on-premises operational applications and on-premises capabilities around data warehousing, business intelligence and analytics. That means that most of them will be working with a hybrid cloud/on-premises data management environment for the foreseeable future.
http://blogs.informatica.com/2016/08/19/surviving-thriving-hybrid-data-management-world/#fbid=dlbfZB7A1Sd

Data Compression in Hadoop
File compression brings two major benefits: it reduces the space needed to store files, and it speeds up data transfer across the network or to or from disk. When dealing with large volumes of data, both of these savings can be significant, so it pays to carefully consider how to use compression in Hadoop.
http://comphadoop.weebly.com/
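
The space/speed trade-off described above is easy to demonstrate with any general-purpose codec. Here is a minimal sketch using Python’s standard gzip module rather than a Hadoop codec; the log line is made up, but repetitive, log-like data of this kind compresses dramatically:

```python
import gzip

# Repetitive, log-like data of the kind big-data jobs typically store and shuffle.
raw = ("2016-09-20 INFO request served in 12ms\n" * 10000).encode("utf-8")

compressed = gzip.compress(raw)

# Compression trades CPU time for storage space and network/disk transfer time.
print(f"raw: {len(raw)} bytes, gzipped: {len(compressed)} bytes, "
      f"ratio: {len(compressed) / len(raw):.4f}")
```

In Hadoop the same trade-off is tuned per job: fast codecs such as Snappy or LZO give modest ratios cheaply, while gzip and bzip2 compress harder at more CPU cost, and splittability determines whether a compressed file can be processed in parallel.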

What is “Just-Enough” Governance for the Data Lake?
Just-enough governance is similar to the Lean Startup methodology concept of building of a Minimum Viable Product (MVP). From an enterprise perspective, just-enough governance means building only the process and control necessary to solve a particular business problem.
https://infocus.emc.com/rachel_haines/just-enough-governance-data-lake/

Mind map on SAP HANA
https://www.mindmeister.com/353051849/sap-hana-platform

Should I use SQL or NoSQL?
Every application needs persistent storage — data that persists across program restarts. This includes usernames, passwords, account balances, and high scores. Deciding how to store your application’s important data is one of the first and most important architectural decisions to be made.
https://www.databaselabs.io/blog/Should-I-use-SQL-or-NoSQL

Happy Learning!

Developing a Robust Data Platform: Key Considerations

Developing a robust data platform definitely requires more than HDFS, Hive, Sqoop and Pig. Today there is a real need to bring data and compute as close together as possible. More and more requirements are forcing us to deal with high-throughput/low-latency scenarios. Thanks to in-memory solutions, these things definitely seem possible right now.

One of the lessons I have learnt in the last few years is that it is hard to resist developing your own technology infrastructure while developing a platform. It is always important to remind ourselves that we are here to build solutions, not technology infrastructure.

Some of the key questions that need to be considered while embarking on such a journey:

  1. How do we handle the ever growing volume of data (Data Repository)?
  2. How do we deal with the growing variety of data (Polyglot Persistence)?
  3. How do we ingest large volumes of data as we start growing (Ingestion Pipelines/Write Efficient)?
  4. How do we scale in terms of faster data retrieval so that the analytics engine can provide something meaningful at a decent pace?
  5. How do we deal with the need for Interactive Analytics with a large dataset?
  6. How do we keep our cost per terabyte low while taking care of our platform growth?
  7. How do we move data securely between on premise infrastructure to cloud infrastructure?
  8. How do we handle data governance, data lineage, data quality?
  9. What kind of monitoring infrastructure would be required to support distributed processing?
  10. How do we model metadata so that we can address domain specific problems?
  11. How do we test this infrastructure? What kind of automation is required?
  12. How do we create a service delivery platform for build and deployment?

One of the challenges I am seeing right now is the urge to use multiple technologies to solve similar problems. Though this gives my developers the edge to do things differently/efficiently, from a platform perspective it would increase the total cost of operations.

  1. How do we support our customers in production?
  2. How can we make the lives of our operations teams better?
  3. How do we take care of the reliability, durability, scalability, extensibility and maintainability of this platform?

I will talk about the data repository and possible choices in the next post.

Happy Learning!

Data Infrastructure, Data Pipeline and Analytics – Reading List – Sep 12, 2016

Three incremental, manageable steps to building a “data first” data lake
Applications have always dictated the data. That has made sense historically, and to some extent, continues to be the case. But an “applications first” approach creates data silos that are causing operational problems and preventing organizations from getting the full value from their business intelligence initiatives.
http://www.networkworld.com/article/3098937/analytics/three-incremental-manageable-steps-to-building-a-data-first-data-lake.html

Azure SQL Data Warehouse: Introduction
Azure SQL Data Warehouse is a fully-managed and scalable cloud service.
https://www.simple-talk.com/cloud/azure-sql-data-warehouse/

The Informed Data Lake: Beyond Metadata
Historically, the volume and extent of data that an enterprise could store, assemble, analyze and act upon exceeded the capacity of their computing resources and was too expensive. The solution was to model some extract of a portion of the available data into a data model or schema, presupposing what was “important,” and then fit the incoming data into that structure.
https://hiredbrains.wordpress.com/2016/05/13/the-informed-data-lake-beyond-metadata/

Real Time Streaming with Spring XD, Apache Geode (Gemfire), and Greenplum
Spring XD is a unified, distributed, and extensible service for data ingestion, real-time analytics, batch processing, and data export.
http://zdatainc.com/2016/01/real-time-streaming-with-spring-xd-apache-geode-gemfire-and-greenplum-earthquake-data-demo/

Data Orchestration using Hortonworks DataFlow (HDF)
Hortonworks DataFlow (HDF), powered by Apache NiFi, is the first integrated platform that solves the real-time complexity and challenges of collecting and transporting data from a multitude of sources, be they big or small, fast or slow, always connected or intermittently available.
http://zdatainc.com/2016/02/hello-nifi-data-orchestration-using-hortonworks-dataflow-hdf/

Happy Learning!

Data Infrastructure, Data Pipeline and Analytics – Useful Links – September, 2016

Some of the interesting links I have read in the last couple of weeks around data, in-memory databases, pipeline development and analytics.

In Search of Database Nirvana
An excellent post providing an in-depth look at the possibilities and the challenges for companies that long for a single query engine to rule them all.
https://www.oreilly.com/ideas/in-search-of-database-nirvana
http://www.slideshare.net/RohitJain0813/in-search-of-database-nirvana-the-challenges-of-delivering-hybrid-transactionanalytical-processing

Aerospike Vs Cassandra Comparison
A comparison of Aerospike with Apache Cassandra. Cassandra is a columnar NoSQL database that is great for ingesting and analyzing hundreds of terabytes of data stored on rotational disks. Aerospike is an in-memory NoSQL database, a key-value store that can run purely in RAM and is also optimized for storing data in Flash (SSDs).
http://www.aerospike.com/when-to-use-aerospike-vs-cassandra/

An overview of Apache Streaming Technologies
A very good comparison of technologies around simple event processors, stream processors, and complex event processors.
https://databaseline.wordpress.com/2016/03/12/an-overview-of-apache-streaming-technologies/

Flow-Based Programming
Flow-Based Programming defines applications using the metaphor of a “data factory”. It views an application not as a single, sequential process, which starts at a point in time, and then does one thing at a time until it is finished, but as a network of asynchronous processes communicating by means of streams of structured data chunks, called “information packets” (IPs).
http://www.jpaulmorrison.com/fbp/introduction.html
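
The “network of asynchronous processes communicating by streams of information packets” metaphor can be sketched in a few lines. This is my own toy illustration in Python (the packet shape and process names are invented, not from the book), with two processes connected by streams:

```python
import queue
import threading

SENTINEL = None  # marks the end of a stream

def producer(out_stream):
    # Emit five information packets (IPs), then close the stream.
    for n in range(5):
        out_stream.put({"value": n})
    out_stream.put(SENTINEL)

def doubler(in_stream, out_stream):
    # An independent process: transform each IP as it arrives.
    while (packet := in_stream.get()) is not SENTINEL:
        out_stream.put({"value": packet["value"] * 2})
    out_stream.put(SENTINEL)

a, b = queue.Queue(), queue.Queue()
workers = [threading.Thread(target=producer, args=(a,)),
           threading.Thread(target=doubler, args=(a, b))]
for w in workers:
    w.start()

results = []
while (packet := b.get()) is not SENTINEL:
    results.append(packet["value"])
for w in workers:
    w.join()
print(results)  # [0, 2, 4, 6, 8]
```

Each process owns no shared state and knows nothing about its neighbours beyond the streams it reads and writes, which is exactly what makes flow-based networks easy to rewire.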
Hadoop Deployment Cheat Sheet
If you are using, or planning to use the Hadoop framework for big data and Business Intelligence (BI) this document can help you navigate some of the technology and terminology, and guide you in setting up and configuring the system.
http://jethro.io/hadoop-deployment-cheat-sheet/

Amazon Redshift for Custom Analytics
An experience summary on building custom analytics on top of Redshift.
https://www.alooma.com/blog/custom-analytics-amazon-redshift

Building Analytics at 500px
An experience summary of how 500px built its analytics ecosystem.
https://medium.com/@samson_hu/building-analytics-at-500px-92e9a7005c83#.pdkk7xrui

How Artificial Intelligence Will Kickstart the Internet of Things
IoT will produce a tsunami of big data: as the rapid expansion of devices and sensors connected to the Internet of Things continues, the sheer volume of data being created by them will increase to an astronomical level. This data will hold extremely valuable insights into what’s working well and what’s not.
https://datafloq.com/read/Artificial-Intelligence-Kickstart-Internet-Things/1776

Happy Learning!

Scaling data operations with in-memory OLTP

Data has become the center of our universe in the modern digital world. Applications are designed to store and collect more and more data. Companies are looking to integrate and analyse the data to generate insights and take action.

Data is a precious thing and will last longer than the systems themselves ~ Tim Berners-Lee

Can an existing relational database scale to high ingestion rates and improved read performance?

In-Memory OLTP seems to be the direction forward, considering your existing technology investments. Of course, if the company is open to changing technologies, there would be more options.

I found a couple of very good posts related to SQL Server In-Memory OLTP. It looks like SQL Server 2016 fixes most of the issues with In-Memory OLTP.

I just think it is an amazing technology, and if we can use it in the right way, it will definitely yield great results for your customers.
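
The posts below cover the T-SQL specifics (memory-optimized tables, natively compiled procedures). As a rough, engine-neutral sketch of why keeping the working set in memory speeds up OLTP-style ingestion, here is a comparison of an in-memory SQLite database with a disk-backed one; the `orders` schema is hypothetical and the absolute numbers will vary by machine:

```python
import os
import sqlite3
import tempfile
import time

def timed_inserts(conn, n=20000):
    """Create a small OLTP-style table and time a batch of inserts."""
    conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL)")
    start = time.perf_counter()
    with conn:  # commit everything in one transaction
        conn.executemany("INSERT INTO orders VALUES (?, ?)",
                         ((i, i * 1.5) for i in range(n)))
    return time.perf_counter() - start

mem_conn = sqlite3.connect(":memory:")
t_mem = timed_inserts(mem_conn)

fd, path = tempfile.mkstemp(suffix=".db")
os.close(fd)
disk_conn = sqlite3.connect(path)
t_disk = timed_inserts(disk_conn)

rows = mem_conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
print(f"in-memory: {t_mem:.4f}s  on-disk: {t_disk:.4f}s  rows inserted: {rows}")

disk_conn.close()
os.remove(path)
```

In SQL Server the analogous step is creating the table WITH (MEMORY_OPTIMIZED = ON); the articles below walk through that syntax, its restrictions, and when it actually pays off.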

Introducing SQL Server In-Memory OLTP
https://msdn.microsoft.com/en-in/library/dn133186.aspx
https://www.simple-talk.com/sql/learn-sql-server/introducing-sql-server-in-memory-oltp/
http://blog.sqlauthority.com/2014/08/08/sql-server-introduction-to-sql-server-2014-in-memory-oltp/

The Use Cases for SQL Server 2014 In-Memory OLTP
http://sqlturbo.com/the-use-cases-for-sql-server-2014-in-memory-oltp/

SQL Server In-Memory OLTP Internals Overview
https://msdn.microsoft.com/en-us/library/dn720242.aspx

The Promise – and the Pitfalls – of In-Memory OLTP
https://www.simple-talk.com/sql/performance/the-promise---and-the-pitfalls---of-in-memory-oltp/
https://msdn.microsoft.com/en-us/library/dn246937.aspx

SQL Server 2016 : In-Memory OLTP Enhancements
http://sqlperformance.com/2015/11/sql-server-2016/in-memory-oltp-enhancements

Speeding up Business Analytics Using In-Memory Technology
https://blogs.technet.microsoft.com/dataplatforminsider/2015/12/08/speeding-up-business-analytics-using-in-memory-technology/

Dynamic Data Masking in SQL Server 2016
http://www.codeproject.com/Articles/1084808/Dynamic-Data-Masking-in-SQL-Server
https://blogs.technet.microsoft.com/dataplatforminsider/2016/01/25/use-dynamic-data-masking-to-obfuscate-your-sensitive-data/

Happy Learning!