Tuesday, December 16, 2014

IDC 3rd PLATFORM ADOPTION PREDICTIONS AND GUIDANCE

Description:
A short 3.5 minute IDC video predicting company adoption of 3rd Platform technologies into 2020. Published on YouTube on Dec 9, 2014.

Disclaimer:
While IT Sales vendors will continue to make their sales numbers by staying focused on selling 2nd Platform technologies…
Vendors must not close their eyes to customers' latent testing and adoption of 3rd Platform technologies.

Notables: 
What is different for 3rd Platform adopters?
  1. How they communicate with their customers (think more real-time and targeted communications thanks to mobile app instrumentation, live clickstream data and near-line data mining & sentiment analysis)
  2. Quicker Time-To-Market and thus Time-To-Value from products (think Agile methodologies for Application Development as well as DevOPs & Continuous Integration & PaaS)
  3. How they innovate (think new technologies mean ways of doing things that just were not possible before, plus a better / more timely appreciation of customer needs & wants)
  4. Increased reliability of Operations & Resiliency  (think Cloud Services,  redundancy now found in app layer as well as fast recovery due to SDDC & geo-distribution of data)

Mention of: shadow IT is LARGER than CIOs believe. This hints…
  • You will need to be talking/selling to LoB contacts outside IT.
  • If you ARE talking to IT … if possible, let them know your company can help design a strategy that properly lands workloads according to security, functionality and budget

Saturday, November 22, 2014

DATABASES - SQL, CRUD AND ACID - 102


REVIEW:

In the Database 101 post we discussed how databases are used for tracking and managing objects using records. Sometimes those records are digital objects like .pdf documents stored directly in the database file. Sometimes the records in the database simply hold metadata about objects, such as employees, that exist outside of the database file. We also learned how databases store the records in a file on disk but use smaller sorted indexes stored in RAM for fast searching. Indexes contain only one small piece of metadata as well as the location on disk of the complete record that the index refers to.

In this post we will discuss two acronyms that stand for properties that databases demonstrate called CRUD and ACID.   


CRUD:

CRUD is an acronym for Create, Read, Update and Delete. These are the four basic operations that the DBMS must allow on the records in the database. It may seem obvious that if you have a collection of records for managing objects, you may want to:
1. create new records
2. read back records that you stored in the database file
3. update those records when some property of the tracked object changes
4. delete a record for an object that you no longer want in the collection.

An example of CRUD for a database of employee records would be the following. You will want to CREATE a record for each employee in the company, READ back all employee records for those who work in a particular office, UPDATE an employee record when their location moves to a new office, and DELETE employees' records when they leave the company.



SMITH indexes point to SMITH record locations
CRUD operations are performed not only on the object records in the database but also on the indexes used to search those records. For example, when you update an employee record to change their office location you must also update the sorted indexes. When we discuss ACID next, you'll see that implementing CRUD operations can get tricky when multiple CRUD operations overlap each other in time.




Database files by themselves simply store records. It's the DBMS application that presents an interface to the user to display and modify those records. For relational DBMSes there is an agreed-upon method or language for how the user communicates with the DBMS. One such language is SQL, or Structured Query Language. As you can imagine, the language has words for all four basic CRUD operations.



Example of SQL statements
The SQL language, or any query language for that matter, is crucial to the usability of DBMSes. Application programmers, the people who create application software in programming languages (Java, Python or maybe C++), are the primary users of databases, not database administrators. I say this because they want a simple method of storing and retrieving records no matter what the underlying database is (Microsoft SQL, Oracle 12c or maybe IBM Informix 10.2). By learning the SQL language they have everything they need to perform CRUD operations on database records. I have included a picture of what the SQL language actually looks like. You can see that it is quite readable and understandable, even for those who have never seen the language before. SQL is called a "declarative" language as it simply declares what data you want rather than telling the DBMS how to get it.
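As a rough sketch of those statements in practice, here is each CRUD operation issued through SQL against a small, hypothetical employee table, using Python's built-in sqlite3 module as a stand-in DBMS (the table and names are made up for illustration):

```python
import sqlite3

# In-memory database; a real DBMS (SQL Server, Oracle, Informix) speaks the same SQL.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE employees (id INTEGER PRIMARY KEY, name TEXT, office TEXT)")

# CREATE: insert a new employee record
cur.execute("INSERT INTO employees (name, office) VALUES (?, ?)", ("Smith", "Boston"))

# READ: declare WHAT you want; the DBMS decides HOW to find it
rows = cur.execute("SELECT name FROM employees WHERE office = 'Boston'").fetchall()

# UPDATE: the employee moves to a new office
cur.execute("UPDATE employees SET office = 'Denver' WHERE name = 'Smith'")

# DELETE: the employee leaves the company
cur.execute("DELETE FROM employees WHERE name = 'Smith'")
conn.commit()
```

Note how each statement maps directly onto one of the four CRUD operations, and nothing in the query says which index to search or how to scan the file.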



MULTIPLE USERS AND CONCURRENCY:

If you are creating a database that only has one user who issues SQL statements one at a time, your database implementation would be quite simple. However, most DBMSes are the backend record storage for front-end applications that have multiple users modifying records at the same time. This presents several problems that ACID attempts to define and solve.





ACID:

ACID stands for Atomicity, Consistency, Isolation and Durability. Let's look at each one-by-one.



A brief explanation follows:


  • Atomicity - pronounced "atom-miss-city". By "atomic" we mean "as a single atom or unit". This is a method of grouping data together as a single unit so that the atomic unit of data is either entered into the database or not. It's "all or nothing", because partially updating a record in the database can have bad results in real life. Imagine you deposit $1M into your bank account but the deposit transaction updates the database with your account number and then fails before increasing your balance. You would be very unhappy. The bank teller using the deposit application needs instant feedback when hitting submit: either the entire transaction completed successfully, or it failed completely and nothing about the deposit was added to the database. Because the transaction was all or nothing, on failure the teller is free to attempt it again without fear of adding a second $1M deposit.
  • Consistency - can be thought of as... will an update to a database record field leave the database in a good, or consistent, state? Imagine you designed your US customer database to store a customer's phone number. Here in the US, we use a 10 digit phone number consisting of a 3 digit area code, a 3 digit local prefix and a 4 digit number that uniquely identifies a phone endpoint in that locality, for example 605-475-6968. Now imagine a request to create a record with only a four digit number, 6968, came into the database. This erroneous piece of data would leave that particular record in a logically inconsistent, or corrupted, state.
  • Isolation - Because databases allow multiple updates to occur at the same time, each transaction must be isolated from the others to prevent unwanted side effects of transactions "stepping" on one another. An example of when this can be tricky: a database deletes a record, and before the associated index can be removed, a second transaction searching for records matching certain criteria finds the index entry and goes to retrieve the record that was just deleted. This would cause an error because the two transactions stepped on one another in time.
  • Durability - Once a transaction is stored in the database it must be durable, in the sense that it will still exist even in the face of a computer power outage (reboot). Modern databases often hold received data in RAM because its speed better matches that of the CPU. However, RAM is volatile and loses its contents if power is lost. To be durable, the transaction is often written to a database log file on disk before being acknowledged as "written" to the application that asked to store the data.
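The all-or-nothing behavior described under Atomicity can be sketched with a transaction that fails midway, again using Python's sqlite3 as a stand-in DBMS (the account and amounts are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (acct INTEGER PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES (1, 0)")
conn.commit()

try:
    with conn:  # BEGIN ... COMMIT on success, ROLLBACK on any exception
        conn.execute("UPDATE accounts SET balance = balance + 1000000 WHERE acct = 1")
        raise RuntimeError("simulated failure before the deposit completes")
except RuntimeError:
    pass  # the whole transaction rolled back: all or nothing

balance = conn.execute("SELECT balance FROM accounts WHERE acct = 1").fetchone()[0]
# balance is still 0, so the teller can safely retry without double-depositing
```

Because the failed transaction left no partial update behind, retrying it is always safe.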

TAKEAWAYS:

Modern DBMSs store metadata about entities that front-end applications submit using SQL. DBMSs store their data in a database file for persistence and provide abilities such as query, indexing, CRUD, ACID and concurrency. In future posts we will discuss aspects of different databases in terms of CRUD and ACID.

Friday, November 21, 2014

DATABASES - RECORDS AND INDEXES - 101


The term "database" was first used in 1962 and coincided with storing data on disk drives as opposed to tape. To me, the term "database" referred specifically to the storage, updating and retrieval of information stored in a file on disk rather than on tape. Randomly accessed disk drives opened a door to a new way to manage data, very different from sequential tape access. A file full of data serves as the "base" that you can get "data" from. This "database" is often combined with a running application process that serves as the single process able to access and modify the database file. Humans "query" this application process to locate and retrieve data from inside the "base" file on their behalf. We collectively refer to the application process and its associated data file as a DBMS, or DataBase Management System. Humans simply write a question, referred to as a "query", for a particular piece of data in the collection, and the DBMS has methods of fast search and retrieval of data matching the query description. Oracle's 12c, Microsoft's SQL Server 2014 and IBM's DB2 v10.5 are specific examples of DBMSs.


COLLECTIONS OF DATA:

Collection of books
Let's face it, human beings have been collecting and tracking objects in collections long before the 1962 "database" term or the invention of disk drives, which came only a few years prior, in 1957. We can all easily think of use cases for why early humans wanted to collect information about objects and search that metadata. Recall that metadata is often called "data about data".



HISTORICAL COLLECTIONS:

Library of Alexandria in Ancient Egypt

Glance backward through history to the time of the "Library of Alexandria", which existed in Egypt roughly 2300 years ago. The Library of Alexandria was early mankind's attempt to gather up, or "collect", all the written knowledge in the world: the first known effort to preserve an understanding of the natural world and its history. The library existed as a physical location, full of shelves storing possibly a half million writings on scrolls. Anytime you collect more objects than you can reliably track in your head, you need to develop a system to track and organize the objects for fast search and retrieval.


MODERN-DAY LIBRARIES:



Library Card Catalog
If we jump forward to modern day libraries circa 1990, they used bibliographical records: cards containing summarized information about the books they refer to. We refer to these "bibliographical records" simply as metadata, since they are NOT the actual objects being tracked but, by definition, "data about data". Specifically, metadata about the books in the library's collection. It is interesting to note that if we scanned all of the books into PDF files and stored them directly in a modern-day database, the database records would no longer hold metadata about books that exist in the real world; they would hold the ACTUAL objects being collected and managed. When the things being collected and managed are digital, a database often contains those digital objects, stored inside the database files. When the entities being tracked are not digital but real-world physical objects, we give them a unique identifier (an EmployeeID if we are tracking real-world employees) and use database records that hold the EmployeeID as well as metadata about that specific object.



Library's Online Database
Many readers may recognize the above photo of a library card catalog. I recall searching an "author's name" catalog or "Book Title" catalog for a particular book. As I flipped through the index cards I could not help but think... Wow, what a lot of work to type up all these index cards on a typewriter and then insert them into their sorted location and keep them sorted! This is the exact type of repetitive, tedious work databases were created to do. 


THE WORK DBMSs DO:

Work Automated by DBMS
Thinking about this a bit... every physical book exists in only one Floor-Aisle-Shelf location in the library. That being true, we can create multiple metadata catalogs. A catalog is a set of metadata cards, each with only two pieces of information on them: a location that uniquely references a single book in the library, and the "other" piece of information the cards are sorted on. One catalog could contain cards sorted by "author's last name", another by "book title" or even "publication year". The cards in the catalog serve as an "index" which points to the location of the physical book the card was derived from. I am using this "library of books" analogy because it is a collection of things we are all familiar with, and it was not long ago that databases took over how books are searched and tracked in modern-day libraries. Database management systems are software application processes that accept requests for a set of records matching a particular set of criteria. The DBMS process searches the catalogs of indexes that it builds and maintains. That is the work a database does. I think of a database as a robot that stores, indexes and retrieves particular pieces of data according to my query.

DEWEY DECIMAL SYSTEM:

Dewey Decimal Location Marker
Since I'm using a library analogy, the 1876 Dewey Decimal System may come to mind here. The DDS used a system of logical decimal numbers that point to a physical location. Since books are physical objects, humans find it useful to group books of similar topics together, to facilitate browsing the shelf to the left and right of the book you located.




Dewey Decimal Top-Level Classes
Consider a lesser method: simply giving each shelf location a number that starts at one and increases as you add library shelves. While this expandable system of numbered locations would allow for adding shelves to the library, it would require re-numbering for additions and leave empty shelf locations when a book is removed from the library's collection. The DDS instead allows books of a similar topic to be physically located in areas of the library grouped by class. The whole number in front of the decimal point represents a particular "Class", which can be further divided into "Divisions" and even further into "Sections". The photo above of a location starting with 341.237 would be part of the 300 "Social Sciences" Class, the 340 "Law" Division and the 341 "Law of Nations" Section.



Dewey Decimal Number Decoder
This clever system of decimal location numbers gives library users the ability to "browse books of similar topic" simply by going to the library aisle for that topic.

However, when the DBMS stores digital objects directly in its database file, there is no need for browsing. Objects are stored at a byte offset from the start of the database file. If the digital records are each 100 bytes long and you want to retrieve the 5th record, simply read 100 bytes of data starting 400 bytes in from the start of the file (the first four records occupy bytes 0 through 399).
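A minimal sketch of that offset arithmetic, using an ordinary file of made-up fixed-length records:

```python
import os
import tempfile

RECORD_SIZE = 100  # every record is padded to a fixed length

# Build a stand-in database file holding ten 100-byte records.
path = os.path.join(tempfile.mkdtemp(), "books.db")
with open(path, "wb") as f:
    for i in range(1, 11):
        f.write(f"record {i}".encode().ljust(RECORD_SIZE))

# Retrieve the 5th record: records 1-4 occupy bytes 0-399, so seek to 400.
with open(path, "rb") as f:
    f.seek((5 - 1) * RECORD_SIZE)
    record = f.read(RECORD_SIZE).rstrip()
# record == b"record 5"
```

No searching happens at all; locating record N is pure multiplication, which is exactly why fixed-length records make direct retrieval so fast.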



DATABASE FILE STRUCTURE:



Database File Record Offsets
The picture to the right shows three records, each with only two fields. The records are stored inside a database file, which is normally visualized as a long linear string of bytes but is shown here stacked for display purposes. In real life there can be many more columns of data about each entity, but for display purposes we show only two. This is how the records are actually stored in the database file. Just like in an Excel spreadsheet, each row contains columns of data about a single entity.



Single index is smaller than full book record it points to on disk
Recall that the "job" of a database is NOT just to store the records row by row in a database file but to allow humans to query for specific records in that file. The DB file could potentially contain millions of rows of records. When the DBMS receives a query from a human asking for all the books with a "publication date" after 2012, it would take too long to simply read through each of the 1 million records describing the books, pulling out the ones that match the requested criteria (>2012). The database needs to build a digital version of a library card catalog. Instead of scanning, the DBMS can search its indexes, which hold a single piece of sorted metadata plus the location of the specific book. Because the indexes are sorted and hold far fewer columns of data than the actual records they point to, they can be searched much more quickly than reading through the whole table of full records. Searching a catalog of index records sorted by "publication year" allows the DBMS to quickly locate all the books published after 2012.
Last_Name index shows record location

Indexes sorted alphabetically in RAM

While the database records are stored on disk, the indexes, due to their smaller size, can be stored in RAM. As each new book is added to or removed from the library, the DBMS must update each of the book's associated indexes. Because the indexes are in memory, this process is much faster than if they were stored on disk. The requirement of keeping the indexes in memory is one of the main reasons database servers require lots of RAM. It should be noted that the use of indexes in addition to the records themselves is a duplication of data. Having many different indexes, say by First_Name, Last_Name, Hire_Date and Office_Location, adds to the duplication and the work, since every time you modify or insert a record you must also update the indexes. It should also be noted that if the database records are never modified, the indexes never need to change. Performance tests are often done to determine whether adding another index will have a positive or negative effect on database performance.
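A toy sketch of the idea: a small, sorted in-memory index of (Last_Name, record location) pairs that is binary-searched, then used to fetch the full record from the "disk" (the names and records here are made up):

```python
import bisect

# On "disk": full records at known locations (here, list positions stand in
# for byte offsets in the database file).
records = [
    ("Jones", "Ann", "Boston"),
    ("Smith", "Bob", "Denver"),
    ("Adams", "Cal", "Austin"),
]

# In RAM: a sorted index of (last_name, record location) pairs; one small
# piece of metadata plus a pointer to the full record.
index = sorted((rec[0], i) for i, rec in enumerate(records))

def lookup(last_name):
    """Binary-search the sorted index, then fetch the full record by location."""
    pos = bisect.bisect_left(index, (last_name,))
    if pos < len(index) and index[pos][0] == last_name:
        return records[index[pos][1]]
    return None

# lookup("Smith") returns the full record without scanning the whole table
```

The binary search touches only log(N) index entries, which is why a sorted in-RAM index beats scanning millions of full records on disk.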




TAKEAWAYS:



Modern DBMSs either store records full of metadata about entities that exist in the real world, or the database records are the actual digital objects themselves. DBMSs store their data records in a database file and give humans the ability to query for a specific set of records matching some criteria. The DBMS keeps sorted indexes in RAM to allow for fast location and retrieval of the requested set of records, called a "recordset". While the database term may have been coined in 1962 to refer to methods of storing and retrieving digital data, concepts such as indexing and metadata have existed for millennia. In future blog posts, DATABASE 102 & 103, we will investigate the database concept further. In even more database blog posts, I will investigate the different types of databases such as relational, NoSQL and NewSQL as well as their use cases.

Sunday, September 7, 2014

BIG DATA 101

Big Data is one of the 4 technologies (Mobile, Cloud, Social and Big Data) that make up 3rd Platform.

Big Data is defined as any data that cannot be handled using traditional IT methods.


Big Data Challenges


Calling it BIG data makes people think it's only about the VOLUME of data, but Big Data encompasses fast-velocity data as well as varied types of data.








3 Vs



Volume

Velocity

Variety 



Volume:

Data is free to create, but NOT free to store. IT has been storing digital data for 50+ years. As the cost of data storage devices decreases, the trend toward retaining low-value data has been on the rise. After all, there is value in ALL data if you can just find a way to extract it. While it's hard to store 100 million pennies, if you find a way to cash them in... it's still a million bucks. The trick is finding a low-cost method to store, manage and extract value from the data.
Hard drive capacities have increased, making it easier to store large amounts of data on fewer drives. The cost, however, is not simply the device that stores the data but managing the data. Large volumes of low-value data require you to leverage technologies like...

Data Compression


Compression - a method of storing the same amount of information in less space. Not all data is compressible: media files like pictures and video are already compressed and cannot be stored in a smaller size, and encrypted data is not compressible either.
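A quick illustration of both points using Python's zlib: repetitive data compresses dramatically, while random-looking data (a stand-in for encrypted or already-compressed files) does not shrink at all:

```python
import os
import zlib

text = b"the same line of log data repeated " * 1000  # highly redundant data
random_bytes = os.urandom(16384)  # stands in for encrypted / already-compressed media

compressed_text = zlib.compress(text)
compressed_random = zlib.compress(random_bytes)

# Redundant text shrinks to a tiny fraction of its size; random-looking data
# actually grows slightly, since the compressor adds its own overhead.
print(len(text), "->", len(compressed_text))
print(len(random_bytes), "->", len(compressed_random))
```

This is why compression helps most with logs, text and database dumps, and not at all with media libraries or encrypted archives.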






Data Deduplication


Deduplication - never store the same data twice. Save the cost and simply recreate the original data from the unique pieces you did store.
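A toy sketch of the idea: chunks of data are identified by their hash, each unique chunk is stored only once, and the original data is recreated from a "recipe" of hashes (real dedup systems are far more sophisticated, but the principle is the same):

```python
import hashlib

store = {}  # one copy per unique chunk, keyed by its content hash

def write(chunks):
    """Store only unseen chunks; return a recipe that can rebuild the data."""
    recipe = []
    for chunk in chunks:
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)  # stored only if never seen before
        recipe.append(digest)
    return recipe

def read(recipe):
    """Recreate the original data from the stored unique chunks."""
    return b"".join(store[d] for d in recipe)

data = [b"block-A", b"block-B", b"block-A", b"block-A"]  # duplicates!
recipe = write(data)
# the store holds just 2 unique blocks, yet read(recipe) recreates all 4
```

Four blocks in, two blocks stored: the more duplicate data in the stream, the bigger the capacity savings.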





Scalability



Scalable Data Containers - the storage container can simply grow to accommodate data growth. Can your data container grow as your data volume does? Individual storage device limits force the use of many storage devices, which drives up complexity and cost.





Data Protection Methods




Efficient Data Protection Methods - keeping a separate copy of each piece of data to plan for a device failure forces you to have 2X the storage capacity; your Big Data problem just doubled. You need more efficient data protection algorithms.
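One such algorithm is RAID-style parity: instead of a full second copy, N data blocks share a single XOR parity block, so any one lost block can be rebuilt at roughly 1/N extra capacity rather than 2X. A toy sketch with three made-up blocks:

```python
def xor_blocks(a, b):
    """XOR two equal-length blocks byte by byte."""
    return bytes(x ^ y for x, y in zip(a, b))

data_blocks = [b"AAAA", b"BBBB", b"CCCC"]  # three equal-size data blocks

# Compute one parity block covering all three (33% overhead, not 100%).
parity = b"\x00" * 4
for blk in data_blocks:
    parity = xor_blocks(parity, blk)

# The device holding block 1 fails; rebuild it from the survivors plus parity.
rebuilt = parity
for i, blk in enumerate(data_blocks):
    if i != 1:
        rebuilt = xor_blocks(rebuilt, blk)
# rebuilt == b"BBBB"
```

Real systems use more robust erasure codes that survive multiple failures, but the capacity math is the same: protection overhead shrinks as you spread parity across more blocks.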





Power Costs



Power & Cooling Costs - storage devices must reduce the amount of power they consume, or you will pay more to power and cool your data than the value locked inside it.









Variety:

Applications are the automation of business processes. Those processes (applications) create data. Business apps such as ERP, CRM and sales ordering applications create data using input forms whose fields fit perfectly into the tables of a database. These applications don't just store their data but frequently search it, relying on the relational database to perform fast searches. The applications make requests to the relational database via SQL statements, and the database returns a set of records (a recordset) matching the SQL query. To summarize, the structure of the backend database conforms to what is mandated by the forms users fill in to enter data in the applications.


Not all Apps are HUMAN


Nest Home Thermostat
Not all applications enter their data using nice form fields that fit perfectly into a database table. Heck, for some applications it's not even humans creating the data; an example would be the Nest home automation thermostat (Nest was just acquired by Google for $3.2B). Nor do all applications need a database to constantly search, aggregate and display previously saved data the way, say, a CRM app listing all of a customer's previous orders does.

Unlock the Value Hidden in Big Data

Unlocking Information Hidden in Data
Herein lies the problem. These new devices and applications don't use a database but instead simple files to store their data. Businesses typically collect all their DB data into a data warehouse (a large collection of DB data) for analysis and reporting. The DW (data warehouse) reports track KPIs (Key Performance Indicators) that help leaders make business decisions. Business leaders must make decisions every day, and those with the most information make the best decisions. That information is hidden in the data and must be extracted with analysis. What about all the varieties of data that don't fit in a database and so never make it into your data warehouse for analysis? You are leaving lots of valuable data, and the information it would give you, hidden in those files. Big Data is about getting access to that information to make your business more competitive, productive and profitable.


Business Justification to Drill
Tapping into big data is like drilling for oil. When the oil first comes out of the ground it takes a small amount of effort and money to extract, making it profitable to go after. When oil extraction and refinement costs exceed the price oil fetches on the open market, there is no business justification to extract it. The oil, just like the big data in files, will remain there until the value goes up or a cheaper way to extract the oil (information) is created.

Let the Drilling Begin!
Enter FREE open source NoSQL databases and cost efficient scale-out block & file systems running on commodity hardware. Suddenly, the cost to extract the data has come way down.
Enter Social Media and its hugely valuable customer preference data.
Suddenly there is a business justification to go after the information locked in these unstructured files.




Social Networking and the internet have created many new data sets or streams that are unstructured data (not structured like in a database). If you want to analyze this type of data you will need something different than a relational database. Recent advances in non-relational databases have given IT shops the ability to easily analyze unstructured or semi-structured data.

What is a relational database? A relational database is a collection of related tables of data. These tables are under the control of the RDBMS - Relational DataBase Management System. The RDBMS is the database application where the database is the collection of related tables of data.



Velocity:

How Many Tweeters?
Human beings create data by typing at, say, 60 words a minute. A million Twitter users can create data at 60 million words a minute. Velocity is about consuming all the tweets of millions of people in real time.

IoT - Internet of Things
Humans are not the only ones creating data. Smart machines now send their data over the internet; we call this concept the IoT, or Internet of Things. Gone are the days of calling your customer to ask how they are using your products. Smart companies embed simple, inexpensive internet hardware into their products that streams useful information home to product teams. How could your company benefit from this type of data? Extracting value from this data often requires analyzing it in real time. The data is not structured, so you can't take the time to reshape it to fit into your database or data warehouse. Even writing the data to disk and reading it back for analysis may simply be too slow. New methods have appeared that use large amounts of RAM to store and query the data for analysis.
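A toy sketch of one such in-RAM method: a sliding window of recent sensor readings kept entirely in memory, with running counts that can be queried instantly and never touch disk (the readings are made up):

```python
from collections import Counter, deque

WINDOW = 5                     # keep only the last 5 readings in RAM
window = deque(maxlen=WINDOW)  # old readings fall off automatically
counts = Counter()             # running tally over the current window

def ingest(reading):
    """Consume one stream reading, keeping the window counts up to date."""
    if len(window) == WINDOW:
        counts[window[0]] -= 1  # account for the reading about to be evicted
    window.append(reading)
    counts[reading] += 1

# A burst of thermostat status readings arriving off the wire:
for r in ["ok", "ok", "hot", "ok", "hot", "hot", "hot"]:
    ingest(r)

# counts now reflects only the 5 most recent readings, queryable instantly
```

Each reading is processed in constant time and nothing is written to disk, which is the whole point when the stream arrives faster than storage can absorb it.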

Stream Analysis
Your business's competition is building IT solutions to retrieve these fast data streams and analyze them in real-time to make better decisions about how to interact with their customers. Companies that can innovate the fastest by leveraging technology win.





Conclusion:

The era of Big Data is here. Companies large and small are being disrupted by competitors who are leveraging Big Data. There are examples everywhere of how to use it. There is a learning curve to dealing with Big Data, and it's one you will want to get ahead of, or you may find yourself chasing your competition.

Monday, September 1, 2014

CLOUD 101

Cloud is one of the 4 technologies (Mobile, Cloud, Social and Big Data) that make up 3rd Platform.

Cloud is... a self-service, automated, virtual data center environment. While I'm sure there are exceptions to that rule, when things are not well defined you pick a simple definition and go with it.

A Data Center is made of up of:
  • Physical Servers - 1U or 2U rack servers or blade style servers are common.
  • Physical Networking - cables, switches and routers to direct packets of data
  • Physical Storage - Block and File storage arrays as well as local disk storage in servers
  • Security both physical (building, cameras) and digital (firewalls, intrusion detection systems, etc)
  • Power & Cooling + backup generators for emergency
Server virtualization is often the first step in trying to gain control over costs and complexity while adding much-needed agility. Step 1: choose your hypervisor platform.
Hypervisor
Hypervisor Installed on top of Physical System
A hypervisor is a sort of tiny operating system that gets installed on the bare physical server and allows the administrator to logically segment the physical server into many virtual servers running on the same hardware. Each virtual server has its own operating system, such as Windows or Linux, and is not aware that it has been virtualized. Popular companies and their hypervisor platforms are:

VMware's vSphere         Citrix's XenServer       Red Hat's KVM       Microsoft's Hyper-V

Virtual servers allow you to pool the physical server's resources such as CPU, memory and NICs (Network Interface Cards). Most applications require only about 10% of the resources of a single physical server. By installing a hypervisor on a physical server you can run, on average, eight virtual servers that together consume 80% of the available resources. This is where you get the majority of your capital expenditure savings; CAPEX savings is often given as the reason for virtualizing.
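The consolidation arithmetic above can be sketched in a few lines (the 80-server estate is a made-up example; real sizing depends on actual workload profiles):

```python
# Back-of-envelope consolidation math: if a typical app uses ~10% of a
# physical server, packing eight VMs per host reaches ~80% utilization.
physical_servers_before = 80   # hypothetical estate, one app per server
util_per_app = 0.10            # each app uses ~10% of a host
vms_per_host = 8               # consolidation ratio from the text

hosts_after = physical_servers_before // vms_per_host  # hosts needed after virtualizing
utilization_after = vms_per_host * util_per_app        # per-host utilization

print(hosts_after)          # 80 apps now fit on 10 hosts
print(utilization_after)    # each host runs at ~80% utilization
```

Going from 80 lightly loaded servers to 10 well-utilized hosts is where the CAPEX saving comes from.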

After virtualizing your server environment you quickly realize that operationally everything is still a manual process. When a business unit requests that an application be deployed, IT must still go through the manual steps to deploy it, all while the business waits.
  1. Select a physical server running a hypervisor that has enough unused resources to support the creation of a VM (Virtual Machine) on it.
  2. Configure the VM container resource amounts (CPU, RAM, Storage, Networking) 
  3. Load an OS (Operating System) on the Virtual machine.
  4. Configure the Networking for the VM. 
  5. Provision the Storage for the VM.
  6. Add the VM to the list of IT monitored VMs
  7. Delete or archive VM when no longer needed by business
Cloud technology is the automation of the steps above and allows IT to deliver applications to the business faster.  
Cloud Deployment Models:
  1. Private Clouds - automation is done by IT in their own private DC (Data Center). 
  2. Public Clouds - automation is done by a CSP (Cloud Service Provider) in the CSP's own data center not the private customer's location
  3. Hosted Private Clouds - a CSP dedicates a portioned-off set of servers, networking and storage for exclusive use and administration by a private company's IT.
  4. Hybrid Clouds - moving VMs between compatible private and public clouds.
Hybrid Clouds


While a private cloud allows the business to keep 100% control and security of their applications and data, it requires them to build and operate the private cloud (both CAPEX and OPEX). Public clouds allow the company to pay only for what they use, with the downside of losing control over the operation and security of their data. Hosted private clouds let the CSP purchase, configure and be the "hands and eyes" on the servers, networking and storage, while the business knows the hardware is for their exclusive use. The exclusive-use part is important because it improves security and allows the CSP to grant IT some level of administrative access to the configuration and operation of the hosted private cloud.
The last deployment model, hybrid cloud, is the best of both worlds. Some workloads (applications) may run on the private cloud because of security or controllability; often the most business-critical applications require high uptime and rapid response when anything does impact the application's availability. Less critical workloads may benefit from being deployed on a public cloud. Hybrid cloud computing gives the ability to move workloads back and forth between private and public clouds, sometimes even non-disruptively to the users of the application.

Service Models - 'X - aaS'

Cloud Service Models
  1. IaaS - Infrastructure as a Service - upload or select & configure your VM on the IaaS cloud. Examples: VMware's vCloud Air, AWS EC2, Microsoft Azure.
  2. PaaS - Platform as a Service - write your custom application on and/or upload it to a private or public PaaS cloud. Examples: Cloud Foundry, Heroku, AWS Elastic Beanstalk, AppFog, etc.
  3. SaaS - Software as a Service - pay a fee and get a username/login to use a CSP-provided application. Examples: Salesforce.com, Cisco WebEx, ADP, etc.
Consumption / Pricing Models:
  • Pay as you go - order a VM on IaaS, deploy an app on PaaS or get a login on SaaS; pay for only what you use and discontinue the service at any time without penalty. While this pricing model is the most flexible, it can come at the cost of unpredictable service levels.
  • Contract basis - sign a short-term 30-day contract or a longer multi-year contract. CSPs can and often will guarantee some level of service in exchange for the longer contract.
Service Level Agreement:
     As part of the cloud service, CSPs will offer an SLA (Service Level Agreement): a contract stating what users of the cloud service can expect for bandwidth, compute resources, service uptime, problem resolution times, etc.
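An uptime percentage in an SLA maps directly to a downtime allowance. A quick sketch of the arithmetic (the 30-day billing period is an assumption for illustration):

```python
def allowed_downtime_minutes(uptime_percent, days=30):
    """Minutes of downtime a given uptime SLA permits over a billing period."""
    total_minutes = days * 24 * 60
    return total_minutes * (100 - uptime_percent) / 100

# A "three nines" (99.9%) SLA allows about 43 minutes of downtime per month,
# while 99.99% allows only about 4 minutes.
```

This is worth doing before signing: the difference between 99.9% and 99.99% sounds small on paper but is a factor of ten in allowed outage time.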

Self Service:
        Cloud technology automation makes it possible to have a Service Catalog (a webpage listing all the services offered for rent/lease). Users of the cloud can simply select a service from a menu of choices and have that service provisioned automatically in the CSP's datacenter or locally in IT's own private cloud datacenter.
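A service catalog is essentially a menu that maps each offered service to an automated provisioning action. A minimal sketch, with made-up catalog entries and prices:

```python
# Hypothetical service catalog: item name -> resource spec and price.
CATALOG = {
    "small-vm": {"cpus": 1, "ram_gb": 2, "monthly_cost": 20},
    "large-vm": {"cpus": 8, "ram_gb": 32, "monthly_cost": 160},
}

def order_service(item):
    """Look the item up in the catalog and provision it automatically."""
    if item not in CATALOG:
        raise KeyError(f"{item!r} is not offered in the catalog")
    spec = CATALOG[item]
    # In a real cloud this step would call the provisioning automation.
    return {"item": item, **spec, "status": "provisioned"}
```

The key point is that the user never files a ticket: choosing a menu item is the whole request, and the automation behind it does the rest.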

Business Value of Cloud:
      Competition is everywhere these days. Businesses must innovate and bring their ideas to market faster than everyone else to gain market share and profits while keeping costs at a minimum. Business needs technology, and it needs IT to deploy it. IT is in the critical path of nearly everything a business does these days. Cloud technology is allowing IT to move at the speed of business.


Sunday, June 22, 2014

SOCIAL NETWORKING 101

Social Networking is one of the 4 technologies (Mobile, Cloud, Social and Big Data) that make up the 3rd Platform.
Social Networking Connecting People

Social networking is a method of communicating with people using web and/or mobile technology over the internet.

Ray Tomlinson Email Inventor
When I think about where social networking came from, it's like most technologies: it did not simply pop into existence but evolved from earlier communication technologies. The very first way humans communicated on the internet was email. In fact, when the first few computers were connected to form the early internet around 1971, Ray Tomlinson sent the first email from one computer to another by inserting an @ symbol into the address. Before this, email was only sent between users on the same computer system.


UseNet News 9 Top-Level Groups
Usenet news was another early method of communication. Users could post messages to a series of groups whose names centered around topics of interest, abbreviated as comp. ("computers"), sci. ("science"), news., talk. and soc. ("social"). These messages were all grouped together and copied between servers on the internet. Usenet readers could download some or all of the groups using a Usenet news client similar to an email reading program.


Bulletin Board Systems
BBSs, or "Bulletin Board Systems", were a lot like Usenet news except they were generally not copied between servers but hosted on a single computer, often requiring people to connect to the BBS server by dialing into it with a modem. CompuServe was the first to offer a real-time chat service, which was a big hit with users.

Chat programs sprang up everywhere, with names like IRC, AIM (AOL Instant Messenger) and ICQ. Many of these chat programs quickly added features like group chats and even file sharing.

Email, Usenet and BBSs were NOT real-time communications. IRC, AIM and ICQ were a big step forward from a user's perspective.

Personal Blog/Website

As personal websites became popular, readers of these early sites wanted a way to communicate with their authors, and that led to websites having a comment section. I think of this as the beginning of blogs like the one you are reading now.

Sharing stories is how early humans evolved, and social networking software and websites are just our way of using technology to share stories faster, even globally. Today, social networking sites such as Facebook, Twitter, SnapChat and LinkedIn are just a small sample of what is available.

Avatar
Many of these online social services revolve around creating a personal profile with an avatar, a digital representation of yourself. People are very conscious of how others view them, so your digital representation is very important for those who choose to create one. Once your profile is created, you can search to find others you wish to follow and attract them to follow you. Friendster was an early example of this.

Social networking is having a massive effect on our global culture. Today any human being with a cell phone can stay connected to what is going on anywhere in the world. Twitter can be used to spread news around the globe in seconds. There are many famous examples of people using instant communication to gather in protest or just for fun, such as "Flash Mobs".
1/2 Price Coffee


I imagine that when the first business was created, so were the first attempts to market to potential customers. Before social networking, phone calls, road signs, snail-mail flyers and window signs were how companies marketed to potential customers. Today businesses have a way to connect with any human carrying a smartphone, laptop or tablet. Marketing requires 2 things. 1. Locate the people you have identified as being most likely to purchase your product. 2. Get your pitch in front of them.
Businesses today identify the keywords associated with their perfect potential buyer and can simply search billions of digital profiles on the web to locate their model audience. Sometimes it's not enough just to find the users; you need to hit them with your pitch at the perfect moment. By gathering information from the phone, such as its current location or recent search keywords, companies can target their marketing efforts to those likely to purchase their goods or services at exactly the right moment. Imagine being able to pop up a 1/2-price coffee offer on a user's phone as they walk past your coffee shop. Google has made a fortune from AdWords ads that appear in your customer's search results.
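That walk-past coffee offer boils down to a proximity check: compare the phone's reported location to the shop's and trigger the offer when the phone is inside a small radius. A rough sketch using the standard haversine distance formula (the coordinates and the 100 m radius are made-up examples):

```python
from math import radians, sin, cos, asin, sqrt

def distance_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points (haversine formula)."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))  # mean Earth radius ~6371 km

def should_send_offer(phone, shop, radius_km=0.1):
    """Trigger the 1/2-price offer when the phone is within ~100 m of the shop."""
    return distance_km(*phone, *shop) <= radius_km
```

A real campaign would run this server-side against location updates the mobile app opts into sharing, but the core trigger is nothing more than this distance test.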

If you are not seeking out and getting in front of your customers on social networks, your competitors are. Buyers are making their decisions using technology such as their phone and the web; only after a decision has been made do they drive to the store. Traditional marketing groups will find their nontechnical marketing attempts becoming less effective going forward.