Sunday, February 28, 2016

Topic 4 / Post 3 – Technology Infrastructure Architecture Layer / Cloud ETA and Green IT

February 28, 2016 / Dennis Holinka

Topic 4 – Technology Infrastructure Architecture Layer

This week's posts cover the Technology Infrastructure Architecture Layer, various perspectives on it, and related reflections.

Post 3 - Cloud ETA and Green IT

The advent of cloud enterprise technology architecture has given environmentally and ecosystem-conscious companies the opportunity to capture the financial savings that come with the greening of IT.  Cloud computing allows technology infrastructure planners to consolidate their IT footprints through greater sharing of hardware resources and improved utilization, spreading fixed costs across a smaller number of infrastructure assets and reducing power usage, which is among the largest costs in a data center.  It is estimated that up to 90% of data center costs go to power, exceeding even the square footage costs of constructing and housing the assets within it.  Green IT allows both power and square footage to be reduced, thereby providing a payback for aligning the data center to the cloud.  The cloud enables optimal allocation of workloads across the infrastructure, which amounts to having automated agents consolidate workloads onto a denser set of infrastructure, both statically by way of initial consolidation and dynamically throughout the work day.  The initial consolidation reduces the server footprint in square feet and removes the idle electricity drawn by underutilized servers.  Dynamic consolidation yields a further, ongoing reduction in idle servers, saving both the electricity to power them and the energy to cool the facility during their idle cycling.
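To make the arithmetic concrete, here is a minimal Python sketch of how static consolidation shrinks both the server count and the power draw. The utilization figures and wattages are purely illustrative assumptions, not measurements from any particular data center:

```python
# Minimal sketch: estimating power savings from static workload consolidation.
# All capacity figures and wattages below are illustrative assumptions.

def consolidate(workloads, host_capacity):
    """First-fit bin packing: place each workload (CPU demand, 0-1 scale)
    onto the first host with room, opening a new host when none fits."""
    hosts = []  # each entry is the summed utilization of one host
    for demand in sorted(workloads, reverse=True):
        for i, used in enumerate(hosts):
            if used + demand <= host_capacity:
                hosts[i] = used + demand
                break
        else:
            hosts.append(demand)
    return hosts

# Assume 100 workloads averaging ~15% utilization, one per server today.
workloads = [0.15] * 100
before = len(workloads)                      # one underutilized server each
after = len(consolidate(workloads, 0.80))    # consolidate to an 80% target

IDLE_W, LOADED_W = 200, 350                  # assumed per-server draw (watts)
watts_before = before * (IDLE_W + 0.15 * (LOADED_W - IDLE_W))
watts_after = after * (IDLE_W + 0.80 * (LOADED_W - IDLE_W))
print(f"servers: {before} -> {after}, power: {watts_before:.0f}W -> {watts_after:.0f}W")
```

Under these assumptions the same work runs on 20 servers instead of 100, and the cooling load shrinks along with the direct power draw.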


The universal strategy of the cloud-enabled data center is to reduce its “footprint” and thereby the true costs of the data center.  The footprints involved include thermal, cooling, energy, physical equipment density, physical equipment space, rack space, system architecture space, software instance space, desktop clients, mobile clients, operating systems, system images, underutilized capacity (leakage), vulnerability (under-provisioned capacity for disaster recovery, high availability, and business continuity), IT labor, and service operations costs.  The new model for the next-generation data center must incorporate the following themes in ascending order: Consolidate – simplify IT delivery; Automate – rapid, improved, and manageable deployment of IT services; and Innovate – business-innovation-driven IT alignment.  Several existing metrics can help data centers optimize and improve the energy efficiency of their facilities and guide new data center deployments.  Data center power and cooling are two of the critical issues facing IT organizations today because they are tied to the largest cost, namely power.  By controlling these costs, IT can absorb growth in compute, network, and storage along with the increased expenses that come with growing use.  By utilizing cloud computing technologies and approaches, IT can remain competitive while meeting its future growth needs.
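Two of the best-known of those efficiency metrics are PUE (Power Usage Effectiveness) and its reciprocal DCiE. The sketch below computes both; the sample power readings are illustrative assumptions:

```python
# Minimal sketch of two standard data center efficiency metrics.
# The sample readings are illustrative assumptions.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """PUE = total facility power / IT equipment power (ideal is 1.0)."""
    return total_facility_kw / it_equipment_kw

def dcie(total_facility_kw: float, it_equipment_kw: float) -> float:
    """DCiE = IT equipment power / total facility power, as a percentage."""
    return 100.0 * it_equipment_kw / total_facility_kw

# Example: a facility drawing 1,800 kW overall to power 1,000 kW of IT load.
print(f"PUE  = {pue(1800, 1000):.2f}")    # 1.80 -> 0.8 kW of overhead per IT kW
print(f"DCiE = {dcie(1800, 1000):.1f}%")  # 55.6% of power reaches IT equipment
```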

 Figure: Green IT - Forrester Research
https://www.forrester.com/dl/The+Value+Of+A+Green+IT+Maturity+Assessment/-/E-RES55365/pdf

The use of cloud IT to attain greener IT starts with the data center and facilities and moves on to cloud-enabling the enterprise and its supply chain.  Eventually, cloud enablement will extend into the enterprise's ecosystem to include other entities in its external environment.  It follows that green public infrastructure initiatives will require enterprises to be cloud-integration ready in order to interface with them.  This has expanded the definition of cloud enablement and the greening of IT to reach from the Enterprise Technology Architecture up to the Enterprise Business Architecture, so that integration can occur at higher levels of abstraction.  Forrester has prepared what it calls a Green IT Maturity Model to guide the various layers of IT toward the greener state.  The example provided by Forrester applies the maturity model to sample use cases.

 Figure: Green IT Maturity Model - Forrester Research

The Green IT Maturity Model and the advent of the cloud have put pressure on ETA services to reach higher levels of sophistication that capture the economic efficiencies brought by emerging cloud technologies delivered as services.  Cloud Green IT has made the development, management, and rationalization of the ETA even more complicated than before, with the hope that these new compute services will deliver still further benefits to the enterprise through IT.

  Figure: Green IT - Gartner

 


Topic 4 / Post 2 – Technology Infrastructure Architecture Layer / ETA Portfolio Rationalization Challenges

February 28, 2016 / Dennis Holinka

Topic 4 – Technology Infrastructure Architecture Layer

This week's posts cover the Technology Infrastructure Architecture Layer, various perspectives on it, and related reflections.

Post 2 - ETA Portfolio Rationalization Challenges

Recent efforts at the company where I am employed have demonstrated the large hurdles that emerge when doing Enterprise Technology Architecture (ETA) portfolio rationalization.  The work and methodology provided by Gartner allow ETA teams to organize the various and sprawling technologies in use across the enterprise.  Technical components, domains, patterns, and services provide an excellent methodology for organizing the chaos that exists in many enterprises, including my own.  However, we have found that we lack a proper inventorying system for the existing digital assets within the ETA's scope.  Many of the point-solution repositories lack complete inventories, and technologies are constantly being declared as "discovered" within the environment.  It has reached the point where we are adding more technologies to the inventory, from both new requests and discoveries, than we can organize and rationalize.  The domain and component taxonomy is bewildering in a Fortune 100 insurance and financial services company such as ours, and it has led many to believe that our company has bought at least one of everything, if not two or three of the same.  We have patterned our domain and component taxonomy on this approach and are in the process of adopting the industry-standard taxonomy of ETA domains, components, and services provided by BDNA.
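As a rough illustration of what such an inventorying system must do, the following sketch normalizes discovered product names against a taxonomy of domains and components. The taxonomy entries and discovered items are hypothetical stand-ins, not actual BDNA Technopedia data:

```python
# Minimal sketch of rationalizing a raw technology inventory against a
# domain/component taxonomy. All entries are illustrative stand-ins.

from collections import defaultdict

# Hypothetical taxonomy: maps product aliases to (domain, component).
TAXONOMY = {
    "oracle database": ("Information Management", "Relational DBMS"),
    "oracle db": ("Information Management", "Relational DBMS"),
    "websphere mq": ("Integration", "Message-Oriented Middleware"),
    "ibm mq": ("Integration", "Message-Oriented Middleware"),
}

def rationalize(discovered: list[str]) -> dict:
    """Group discovered technologies by taxonomy component, flagging
    anything the taxonomy cannot classify for manual review."""
    by_component = defaultdict(set)
    unclassified = []
    for name in discovered:
        key = name.strip().lower()
        if key in TAXONOMY:
            by_component[TAXONOMY[key]].add(name)
        else:
            unclassified.append(name)
    return {"classified": dict(by_component), "unclassified": unclassified}

result = rationalize(["Oracle DB", "Oracle Database", "IBM MQ", "HomeGrownETL"])
# Two aliases collapsing onto one component hints at redundant purchases.
for (domain, component), names in result["classified"].items():
    print(f"{domain} / {component}: {sorted(names)}")
print("needs review:", result["unclassified"])
```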

Beyond the difficulty of inventorying the redundancy among ETA assets lies the challenge of comparing and contrasting the total cost of ownership (TCO) of each item in the inventory.  Rationalization can be done to reduce costs or to increase capabilities that will increase revenue or prevent its loss.  Either way, a financial analysis is required to understand which of the choices is economically expedient.  Though some technologies have reached the end of the road, most have continuing, independent roadmaps and require cross-comparison to determine which offerings within each domain, component, and service category to converge upon.  It is true that patterns help simplify the use of these domain components and services in explicit use cases, but the decisioning required to pick among the many alternatives looms large in these efforts.
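A minimal sketch of such a cross-comparison follows, using assumed cost figures; notice how a one-time migration cost makes the incumbent cheaper over a short horizon while the convergence candidate wins over longer ones:

```python
# Minimal sketch of a TCO comparison between two redundant technologies in
# the same component category. All cost figures are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Technology:
    name: str
    license_per_year: float
    support_per_year: float
    labor_per_year: float      # admin / operations staff cost
    migration_cost: float      # one-time cost to move workloads onto it

    def tco(self, years: int) -> float:
        """Total cost of ownership over a planning horizon."""
        annual = self.license_per_year + self.support_per_year + self.labor_per_year
        return self.migration_cost + annual * years

incumbent = Technology("DBMS A (incumbent)", 120_000, 30_000, 90_000, 0)
candidate = Technology("DBMS B (converge)",   80_000, 20_000, 60_000, 150_000)

for horizon in (1, 3, 5):
    a, b = incumbent.tco(horizon), candidate.tco(horizon)
    winner = incumbent.name if a < b else candidate.name
    print(f"{horizon}y: ${a:,.0f} vs ${b:,.0f} -> {winner}")
```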

 Figure:  BDNA Technopedia - Taxonomy of Domain, Component, and Services

In order to make proper economic tradeoffs, financial information needs to be collected and analyzed alongside the inventory data.  The difficulty here lies in tracing and allocating the costs of technologies that were in the enterprise prior to any ETA effort.  If there is no chargeback, or even showback, of financial information on technology usage, efforts to make economic decisions across the existing technologies will be difficult if not impossible.  My recommendation and conclusion is therefore that EA cannot be done properly without economic and financial data to inform the tradeoffs.  There are, however, situations where the lowest-cost alternative for implementing a new capability can be chosen more easily because there is no incumbent.  The challenges of performing ETA will remain difficult, and will lead to anecdotal reasoning, unless the work is accompanied by a services financial model of both existing and proposed domain components, patterns, and services.
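A showback calculation can be as simple as a proportional allocation of a shared cost against metered usage, as in this sketch with hypothetical business units and figures:

```python
# Minimal sketch of "showback": allocating a shared technology's cost to the
# business units that consume it, in proportion to metered usage.
# The usage numbers and unit names are illustrative assumptions.

def showback(total_cost: float, usage_by_unit: dict[str, float]) -> dict[str, float]:
    """Split a shared cost across business units proportionally to usage."""
    total_usage = sum(usage_by_unit.values())
    return {unit: total_cost * used / total_usage
            for unit, used in usage_by_unit.items()}

# Hypothetical: a $300k/year shared database platform, metered by query hours.
allocation = showback(300_000, {"Claims": 420.0, "Underwriting": 260.0, "Actuarial": 120.0})
for unit, cost in allocation.items():
    print(f"{unit}: ${cost:,.0f}")
```

Even this crude visibility is enough to start ranking rationalization candidates by the spend they would free up.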

  Figure:  Making Enterprise Architecture based decisions: An Economic Approach -
Institute For Enterprise Architecture Developments
http://www.enterprise-architecture.info/Images/Presentaties/How%20valuable%20is%20EA%204U-06-2005.PDF

As with all such efforts, success lies in the details of execution.  Providing a cost-benefit analysis and tradeoffs among the various decisions required in EA needs an information repository that captures the various cost elements.  It has occurred to me that one of the most important viewpoints required to do EA properly is a Financial Viewpoint for each of the architectures under analysis.  Further research and inquiry will be needed into how to properly create and populate such a repository for evaluating architectures and for choosing among the alternative tradeoffs in performing an effective and efficient portfolio rationalization.

Topic 4 / Post 1 – Technology Infrastructure Architecture Layer / Tracing ETA to Business Strategy

February 28, 2016 / Dennis Holinka

Topic 4 – Technology Infrastructure Architecture Layer

This week's posts cover the Technology Infrastructure Architecture Layer, various perspectives on it, and related reflections.

Post 1 - Tracing ETA to Business Strategy

The enterprise architecture (EA) approach outlined by Gartner warns that doing Technology Infrastructure Architecture apart from the rest of EA will result in technology infrastructure planning at best and failure at worst.  Planning technology infrastructure without understanding the applications required to run on it, the information flows, and the data persistence within those applications leads to doing technology for technology's sake.  The business will criticize IT for pursuing technology that doesn't improve or benefit the business.  The process of EA emerged to provide the modeling and lineage between the layers of architecture so that such autonomous and disjointed activities can be prevented and architecting the enterprise can be holistic.  However, even under the most rigorous of EA processes and lineage modeling, the traceability of ETA to the business strategy isn't present.  There have been multiple attempts at meta-models that provide a high-level concept of traceability but fall short of providing the actual thing.  One of the best lineage meta-models is the Enterprise Business Motivation Model (EBMM), but it too falls short of providing actual, practical lineage when used in practice.

The traceability provided by the EBMM model below shows lineage between the business capabilities of an enterprise and the information technology that automates them.  The application that logically provides the business system processes is enabled by platform/technology components, which are supplied by the Technology Infrastructure Architecture.  It would appear that the EBMM's traceability provides clarity of lineage, because there is a relationship between business capability and the business or information tool in this model.  However, it is this very relationship that is fuzzy and needs to be further decomposed into a definitive relationship that shows the economic value linkage.
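One way to make that decomposition concrete is to record the lineage as an explicit graph that can be traversed from capability down to infrastructure. The sketch below uses hypothetical node names; it is an illustration of the idea, not the EBMM itself:

```python
# Minimal sketch of machine-checkable lineage from a business capability down
# to technology components. All node names are hypothetical examples.

# Directed edges: each architecture element lists what realizes/enables it.
LINEAGE = {
    "Capability: Claims Processing": ["Application: ClaimsSystem"],
    "Application: ClaimsSystem": ["Platform: Java App Server", "Platform: RDBMS"],
    "Platform: Java App Server": ["Infrastructure: x86 Virtual Cluster"],
    "Platform: RDBMS": ["Infrastructure: x86 Virtual Cluster", "Infrastructure: SAN"],
}

def trace(element: str, depth: int = 0) -> None:
    """Walk the lineage graph, printing every layer that supports an element."""
    print("  " * depth + element)
    for child in LINEAGE.get(element, []):
        trace(child, depth + 1)

# Tracing a capability shows exactly which ETA investments it depends on,
# so an infrastructure spend can be justified in business terms.
trace("Capability: Claims Processing")
```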

 Figure:  Enterprise Business Motivation Model (EBMM)
http://motivationmodel.com/download/EBMM%20Report%20v5.pdf

Enterprise architects should take notice that the EBMM presents an unclear relationship: the box for business or information tools is a type of application and is linked to business capabilities.  The "or" in that box is the key to its lack of a definitive relationship.  A business application can be traced unequivocally to a business capability, but the issue emerges when IT tools are traced to business capabilities.  Furthermore, since IT tools and platform/technology are intertwined, it follows that tracing IT tools and platforms/technology to business capabilities is fuzzy as well.  Much further work is required to trace all of the relationships, but the reason tracing is difficult is that there is a duality of enterprises defined in the business-or-IT-tools box.  IT is a business and an enterprise unto itself; it requires an EBMM of its own, which must then be linked to a separate EBMM for the business enterprise being architected.  Otherwise, the end business enterprise, such as an insurance company, will begin to question its decisions and investments in the IT enterprise and how they link to its benefit.  In other words, an investment in configuration management for all of IT is very much a part of the business architecture of IT, and it is used to build the business tools an insurance company needs when a system is being developed.  If such a complicated relationship is not made plain or simplified, then ETA for the insurance business will come under suspicion of malinvestment.

 Figure: Forrester Business Services Architecture

Recent Forrester research pointed out that the missing linkage between business capabilities and applications is a services architecture.  However, it does not provide the detailed mapping of the IT services an IT organization delivers that links to the formation of the business applications which enable business capabilities.  It is the detail of that relationship that logically explains how platform/technology component investments are linked to the business strategies of both the IT enterprise and the end business enterprise, such as insurance.  Further research and work are required to flesh out the relationships linking IT-for-IT to IT-for-Business, so that ETA can be conducted strategically to improve the enterprise's outcomes.

Sunday, February 14, 2016

Topic 3 / Post 3 – Information Architecture layer / Stewardship, Data Governance, and Big Data Modeling

February 14, 2016 / Dennis Holinka

Topic 3 – Information Architecture layer

This week's posts cover the Information Architecture layer, various perspectives on it, and related reflections.

Post 3 - Data Stewardship, Data Governance, and Big Data Modeling

Data stewardship is the role that people throughout the enterprise are given in recognition that data is an asset and/or a liability and should be managed accordingly.  The stewardship role reflects the expectation that, since data is important and its lifecycle must be properly maintained, a person should be appointed to take care of, or steward, the quality and entry of the data within a particular enterprise domain.  The data steward role is shared between the business people who use the data in their daily contexts and the IT people who perform the daily care and feeding of the systems of record where the data under their charge is operated, maintained, and persisted.  Since data interactions span beyond individuals into and across groups, precise means of collaboration become essential and require coordination.  That coordination requires data governance, with RACI roles defined so it is known who does what, when, where, and why regarding data, and who is responsible for the activities surrounding it beyond general stewardship.
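A RACI assignment can be captured as a simple lookup structure, which also makes it easy to check the governance rules themselves. The roles and activities below are illustrative examples:

```python
# Minimal sketch of a data governance RACI matrix: for each data activity,
# who is Responsible, Accountable, Consulted, and Informed.
# Roles and activities are illustrative examples.

RACI = {
    "Define data quality rules":    {"R": "Data Steward", "A": "Data Owner",
                                     "C": "DBA",          "I": "Business Analyst"},
    "Approve schema changes":       {"R": "DBA",          "A": "Data Owner",
                                     "C": "Data Steward", "I": "App Team"},
    "Remediate quality exceptions": {"R": "Data Steward", "A": "Data Owner",
                                     "C": "App Team",     "I": "Compliance"},
}

def who_is(letter: str, activity: str) -> str:
    """Look up a single RACI assignment, e.g. who is Accountable for a task."""
    return RACI[activity][letter]

print(who_is("A", "Approve schema changes"))   # -> Data Owner

# A quick integrity check: every activity must have exactly one Accountable role.
assert all("A" in roles for roles in RACI.values())
```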

Next enters Big Data, with its promise of extracting knowledge, and perhaps wisdom-based insight, from all the structured and unstructured data and content across the enterprise as well as from external information streams and stores.  The emergence of big data makes the role of data stewardship and the activities of data governance that much more expansive.  The work of data stewardship and governance is already a large effort; now, with big data added, the undetermined and uncertain sources of information make these roles nearly impossible.  Since all of this data needs to be designed to work with, and relate to, much if not all of the other data within the big data Data Lake, it will require specific types of modeling to prevent the Data Lake from turning into a data cesspool.  This demands that information architects use their enterprise architecture skills and their understanding of the extended enterprise, and apply them to extended information architecture boundaries that span the enterprise into its overarching ecosystem and into its various cross-enterprise knowledge domain relationships.

 Figure:  Traditional Relational Database Model that forces graph concepts into its paradigm

 
As if things weren't hard enough already, next enters the modeling of new big data structures: NoSQL models of key-value pairs that are distributed yet related to other key-value, table-like structures, partitioned across multiple servers in highly available, redundant data stores that relax ACID consistency in favor of eventual consistency.  In addition, big data practice has adopted graph databases to store and analyze graph-based data, instead of trying to shoehorn graph and/or NoSQL data into relational stores in the form of traditional ERDs.  The new modeling approaches, of which I have shown one example, are more robust and can more naturally depict the relationships in the problems being solved.  From concept to stewardship to governance to abstract depictions of the knowledge being modeled and governed, we are led to the conclusion that the work of the information architect is becoming more complex and requires extensive modeling skills to take on the emerging areas beyond the traditional role and the relational structures of the past.
 
Figure:  Graph Database Model that is more akin to the domain relations being developed
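To illustrate the difference between the two figures, here is a toy property-graph model in plain Python, with hypothetical insurance-domain nodes: relationships are first-class elements rather than foreign keys buried in join tables:

```python
# Minimal sketch of a property-graph model, of the kind a graph database
# stores natively rather than forcing into relational tables.
# The insurance-flavored nodes and edges are illustrative examples.

nodes = {
    "p1":   {"label": "Person", "name": "Alice"},
    "pol1": {"label": "Policy", "type": "Auto"},
    "c1":   {"label": "Claim",  "amount": 2500},
}

# Edges are (source, relationship, target) triples.
edges = [
    ("p1", "HOLDS", "pol1"),
    ("c1", "FILED_AGAINST", "pol1"),
    ("p1", "FILED", "c1"),
]

def neighbors(node_id: str, rel: str) -> list[dict]:
    """Traverse outgoing edges of one relationship type from a node."""
    return [nodes[dst] for src, r, dst in edges if src == node_id and r == rel]

# "Which policies does Alice hold?" is a one-hop traversal, no joins needed.
for policy in neighbors("p1", "HOLDS"):
    print(policy)
```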

Topic 3 / Post 2 – Information Architecture layer / Enterprise Information Ontology – the foundation for EIA, EMM, EIM

February 14, 2016 / Dennis Holinka

Topic 3 – Information Architecture layer

This week's posts cover the Information Architecture layer, various perspectives on it, and related reflections.

Post 2 - Enterprise Information Ontology – the foundation for EIA, EMM, EIM

The work of creating an Enterprise Information Architecture (EIA), and the subsequent work of creating Enterprise Metadata Management (EMM) on the way to Enterprise Information Management (EIM) and its related discipline processes, flows from foundational information architecture activities.  The work of many groups and disciplines comes into conflict because of differences that stem from the lack of a foundational Enterprise Information Ontology.  According to Wikipedia, enterprise ontology engineering "aims at making explicit the knowledge contained within software applications, and within enterprises and business procedures for a particular domain."  An ontology is a precise approach to structuring the knowledge of a particular domain: it describes the domain, the entities within it, and the relationships between those entities.  It uses a specific methodology and documentation approach to describe relationships among entities, such as whether the relationships between terms are conceptually equivalent or only partially overlap in semantic meaning.  Ontologies also allow dynamic linking between terms in different domains, so that terms can be related to one another in a discovery-based fashion.  This approach allows terms in one silo of information to relate to terms in other silos by way of a central domain ontology vocabulary on which they are all based.  Ontologies allow the central mapping of semantic terms to alternative expressions of the same domain of knowledge, expressed in the form of a taxonomy, or to separate domains of semantic terms in other taxonomies.
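A minimal sketch of such a hub-and-spoke term mapping, with illustrative silos, terms, and relation types, might look like this:

```python
# Minimal sketch of an ontology mapping silo-local terms onto central
# concepts, with a typed semantic relation on each mapping.
# Terms, silos, and relations here are illustrative examples.

# Each mapping: (silo, local_term) -> (central_concept, relation)
# where relation is one of: "equivalent", "part_of", "overlaps".
ONTOLOGY_MAP = {
    ("claims", "cust_nm"):    ("Customer.Name", "equivalent"),
    ("billing", "first_nm"):  ("Customer.Name", "part_of"),
    ("billing", "last_nm"):   ("Customer.Name", "part_of"),
    ("marketing", "contact"): ("Customer.Name", "overlaps"),
}

def related_terms(concept: str) -> list[tuple]:
    """Discover every silo-local rendition of a central concept."""
    return [(silo, term, rel)
            for (silo, term), (c, rel) in ONTOLOGY_MAP.items() if c == concept]

# Any silo can find its counterparts in every other silo via the hub,
# instead of maintaining point-to-point term mappings.
for silo, term, rel in related_terms("Customer.Name"):
    print(f"{silo}.{term} is {rel} Customer.Name")
```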

 
Figure:  Semantic spectrum of an Ontology and the time/budget required for its development

The use of an ontology for foundational EIA work lies in the information architect's ability to structure the knowledge domain of an enterprise into a set of common terms and to relate the multiple tribal renditions of those terms to each central term.  By documenting the relationships semantically, in forms such as equivalence, part/whole, or overlapping-but-distinct terms, an improved understanding of the information technology work that emanates from the mapping can occur.  A discoverable, perhaps even automated, form of ETL comes within reach once the relationships between domains are documented across islands of information.  Isn't that what happens when ETL jobs are created between information silos today, except that it is done on the fly, without the big-picture view of a central semantic set of terms for the entire enterprise?  As an example, an ETL architect must map between information silos and describe, by way of the ETL tool, the semantic relationship and the exact programmatic transformation between those terms: name in silo A corresponds to lastnm, firstnm, and mi in silo B as an equivalent whole, while name in silo C matches only the first-name portion of silo A's name.  What this amounts to is that the ETL architect ends up mapping the semantic meaning between terms and expressing the precise operation to convert between them.  In an ontology, the relationship and precision of terms can be described to an excruciating level of detail, and a futuristic ETL could construct the conversion operations from those descriptions (ETL by way of ontology).  Defining a central semantic meaning and relationship of terms means the information architecture can be modeled, the metadata management of terms can be documented for dynamic lookup, and the precise operations used in Enterprise Information Management can be constructed.  Perhaps a top-down approach to information can help us structure and organize the work all the way down into the details.
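Here is a toy rendition of that idea, following the silo A/B/C name example above. The mechanism of deriving transforms from declared relations is a hypothetical illustration, not an existing tool:

```python
# Minimal sketch of "ETL by way of ontology": transformation functions are
# derived from declared semantic relations between silo schemas rather than
# hand-coded per job. Field names follow the silo A/B/C example in the text.

# Declared relation: silo A's "name" is equivalent to the composition of
# silo B's lastnm, firstnm, mi.
RELATIONS = {
    ("silo_b", "silo_a"): {
        "name": lambda row: f"{row['firstnm']} {row['mi']}. {row['lastnm']}",
    },
    ("silo_a", "silo_c"): {
        # Silo C's "name" is only the first-name portion of silo A's "name".
        "name": lambda row: row["name"].split()[0],
    },
}

def convert(row: dict, source: str, target: str) -> dict:
    """Apply the ontology-declared transforms to move a record between silos."""
    return {field: fn(row) for field, fn in RELATIONS[(source, target)].items()}

b_row = {"lastnm": "Holinka", "firstnm": "Dennis", "mi": "J"}
a_row = convert(b_row, "silo_b", "silo_a")   # {'name': 'Dennis J. Holinka'}
c_row = convert(a_row, "silo_a", "silo_c")   # {'name': 'Dennis'}
print(a_row, c_row)
```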

Topic 3 / Post 1 – Information Architecture layer / Enterprise Information Integration is now the Data Virtualization Fabric

February 14, 2016 / Dennis Holinka

Topic 3 – Information Architecture layer

This week's posts cover the Information Architecture layer, various perspectives on it, and related reflections.

Post 1 - Enterprise Information Integration is now the Data Virtualization Fabric

Enterprise Information Integration (EII) is an attempt to integrate enterprise information in a way similar to how services are integrated in an Enterprise Service Bus (ESB).  In other words, it is an ESB for information, except that the ESB uses WS-* as its main standards-based integration stack.  In traditional client/server and N-tier architectures, the standard integration is point-to-point between application consumers and the information service provider, using SQL as the means of intermediation (e.g., ODBC, JDBC, CLI), and this remains the de facto standard integration between caller and callee service participants.  Since 2006, EII has evolved into a full suite of data integration capabilities supporting an extensive set of protocols beyond SOA WS-* SOAP-based service calls.  This extensive set of information services capabilities has been named the Data Virtualization Fabric by the industry.  This approach to data integration, in the form of a data integration platform, has combined key information integration techniques much as enterprise integration platforms combined ESBs, EAI, SOA mediation gateways, and UDDI.  The Data Virtualization Fabric has expanded beyond native SQL integration to other protocols, in a form of EAI for data, using many new invocation techniques.


Figure:  Cisco Data Virtualization Platform
 
The importance of the data virtualization platform is found in its ability to abstract the underlying sources and locations of data while honoring the original contracts (i.e., interfaces) used by existing application code.  The biggest reason underlying data service engines become obsolete and are not modernized is that the calling applications are tightly coupled to the engine providing the data service.  The intermediation abstraction layer allows information architects to continue honoring existing interfaces, including updated versions of those interfaces (e.g., Oracle 8i to 11g), while expanding the existing services with the added features required by a nexus-of-forces application using data analytics and big data.  Recently, I have encountered disruptive issues arising from the deprecation of old technology, the need to honor existing interfaces, and the need to move to newer versions or new technologies that can break those interfaces.  This approach to data integration can minimize disruption by letting the data integration layer act as a mediator in a plug-and-play paradigm, treating large data services engines as interchangeable parts while honoring even the most primitive protocols those engines use.  Minimizing interface disruption leads to vast savings on implementation and testing during upgrades, migrations, and data services replacements.
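The following sketch captures the core of that idea with hypothetical engines and a stand-in query contract: callers bind to a stable mediation layer, and the engine behind it can be swapped without touching or retesting them.

```python
# Minimal sketch of the data virtualization idea: a mediation layer that
# keeps honoring an application's existing query contract while the backend
# engine behind it is swapped out. Engines and query shapes are hypothetical.

from abc import ABC, abstractmethod

class DataEngine(ABC):
    """Any backend capable of answering a customer lookup."""
    @abstractmethod
    def fetch_customer(self, customer_id: str) -> dict: ...

class LegacyEngine(DataEngine):
    def fetch_customer(self, customer_id: str) -> dict:
        return {"id": customer_id, "source": "legacy-8i"}   # stand-in result

class ModernEngine(DataEngine):
    def fetch_customer(self, customer_id: str) -> dict:
        return {"id": customer_id, "source": "modern-11g"}  # stand-in result

class VirtualizationLayer:
    """Applications call this stable contract; the engine behind it can be
    replaced without changing the callers."""
    def __init__(self, engine: DataEngine):
        self._engine = engine

    def get_customer(self, customer_id: str) -> dict:      # the honored contract
        return self._engine.fetch_customer(customer_id)

    def swap_engine(self, engine: DataEngine) -> None:     # plug-and-play backend
        self._engine = engine

layer = VirtualizationLayer(LegacyEngine())
print(layer.get_customer("C-42"))        # callers are unchanged...
layer.swap_engine(ModernEngine())        # ...when the engine is migrated
print(layer.get_customer("C-42"))
```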