Wednesday, November 28, 2012
I've recently moved from a shared-service organization into a customer-facing IT group. It didn't take long to realize that the group was very project and solution focused. There was very little service orientation - that is, once a solution was created it went directly into 'support', which meant keeping the solution up, running and maintained, and then the team went on to the next project. The issue with this approach is that there ends up being little re-use, service improvement, or strategic thinking. If you want a change, you have to start another project, with all of the associated overhead. Sometimes a project is just an application and this way of thinking is fine. However, in many cases, a project leads to a service and should be managed as a service, with a clear service owner, customer boards, roadmap planning, meaningful service metrics, continuous improvement, SLAs, etc. Below is a model I am proposing as a strategic shift to enable re-use, speed and customer-focused services rather than a pure application focus. Post a comment - I'd like to hear your thoughts.
Tuesday, November 27, 2012
8V Spider - Big Data Assessment Model
I was sitting on a plane a couple of weeks ago, and my new boss was due to give a big presentation the following day to help people understand 'big data'.
Well, everyone talks about the 3 "V"s of big data... Volume, Velocity, Variety. I remember reading an article about a 4th V - Veracity: http://dsnowondb2.blogspot.com/2012/07/adding-4th-v-to-big-data-veracity.html
If we're going to add 1 V, why not a few more... and while we are at it, let's put it into a model that helps us make some decisions. So I call this the 8V Spider model for assessing big data projects.
Well, it turns out the model only made it to the appendix the following day, but it was recently circulated for use in an industry presentation, so I thought I ought to describe it here in case anyone is interested.
So we should all know the Volume, Velocity and Variety elements already. And if you read Dwaine Snow's article, you'll have a good idea of the Veracity component.
The trick behind the model was to get the wording right so that all of the V's would sit at the outermost point of the scale. So how Valuable is the data - can you really leverage it as an asset, as if it were currency? It may of course take some manipulation and analytics to turn raw data into value, but you get the point.
One might challenge the difference between Variety, Veracity and Variability. Variety implies various types of data - structured, unstructured, semi-structured. Veracity really addresses data quality - if you get the data from Facebook or Twitter, how good is the quality, really? That leaves Variability - is the data standardized? Are there industry standards for this type of data that can be used across multiple data sources - ontologies and semantics? Are there standards for naming compounds and associated attributes, for example?
Visualization is an interesting one - is the data easily visualized - either directly (an image or a chemical structure) or indirectly, through statistical graphical tools?
There is a bit of a debate on whether Viscosity refers to greasing the gears of the engine - that is, is the data actionable - can it be used to improve processes? See http://blog.softwareinsider.org/2012/02/27/mondays-musings-beyond-the-three-vs-of-big-data-viscosity-and-virality/ and http://h30507.www3.hp.com/t5/Around-the-Storage-Block-Blog/The-best-big-data-presentation-I-ve-ever-seen/ba-p/119375 For now, let's assume it means actionable: if the data allows you to make decisions because it is trusted, we are referring to it as viscous. A bit of a stretch, I know.
There are a number of additional V's out there, but the point isn't how many V's you add to this model, it is the model itself.
So, as with all good radar / spider web diagrams, you can show the extreme case against various other use cases. In this example, we have a little data coming at us fast from a number of different sources (structured, semi-structured, unstructured), but with relatively good data quality and standards. It is of high value and can be readily leveraged in decision making.
Once you characterize the data in this way, you can begin to create technology solutions to process the data appropriately - e.g. architectural patterns.
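To make this concrete, here is a minimal sketch of how a data source might be scored against the eight V's and drawn as a spider diagram. The 0-5 scale, the scores and the matplotlib styling are my own assumptions for illustration; only the dimension names come from the model above.

```python
# Sketch: score one (hypothetical) data source on the 8 Vs and plot a spider chart
import numpy as np
import matplotlib.pyplot as plt

dimensions = ["Volume", "Velocity", "Variety", "Veracity",
              "Value", "Variability", "Visualization", "Viscosity"]

# Hypothetical characterization of a data source (0 = low, 5 = high)
scores = [2, 4, 5, 2, 3, 1, 2, 3]

# One angle per dimension, then repeat the first point to close the polygon
angles = np.linspace(0, 2 * np.pi, len(dimensions), endpoint=False).tolist()
angles += angles[:1]
values = scores + scores[:1]

ax = plt.subplot(111, polar=True)
ax.plot(angles, values, linewidth=2)
ax.fill(angles, values, alpha=0.25)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(dimensions)
ax.set_yticks(range(0, 6))
ax.set_ylim(0, 5)
ax.set_title("8V Spider - example data source")
plt.show()
```

Plotting several data sources on the same axes makes it easy to compare use cases at a glance, which is really the point of the spider format.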
More to come.
Saturday, September 15, 2012
Evergreen - keeping your systems updated
The dirty little secret
Most large corporations have a big problem with staying evergreen - and when they don't, there is a hefty price tag and a lot of disruption to get back in sync. Interestingly, there is very little useful research on evergreen and no one really talks about it much; they just deal with it periodically when it becomes a big problem.
The world is changing and some of the more modern systems seem to have it under control (time will tell). On the backend, for example, take some big Cloud players like Google applications, Yammer or Facebook - they just work, and they are constantly being updated with limited disruption to a massive user base running many browsers and devices. On the client device side of things, both Apple's iOS and Google's Android OS seem to have things under control with OS releases and the App Store concept.
So what can large corporations learn from these evolving paradigms to make 'evergreen' a normal way of working, and is it really a problem that needs fixing?
Why do we care about staying Evergreen?
· To avoid periodic large-scale, costly and disruptive upgrades caused by necessary platform upgrades (e.g. browser, desktop OS, server OS, etc.). For example, the latest Windows 7 upgrade cost £xM for application remediation
· To ensure our systems are supported by our vendors
· To maximize our investment by utilizing the latest features for which we have paid
· To avoid security breaches by having the latest security fixes
· To avoid some systems that don't care about upgrading holding back other interdependent systems that require upgrades
So, while it seems sensible to keep your systems up to date, it is much harder than one would expect.
When does Evergreen become a problem?
Evergreen typically becomes a problem when the Operating System, Browser, Java version, Hardware or some other underlying technology gets updated before all the applications (which depend on that underlying technology) are updated. On a single device with only a few applications, this is not a big problem. However, when there are thousands of people using the same application and the corporate desktop is upgraded before the application is ready and the application fails, it becomes a bigger problem. This is further complicated when there are thousands of applications that need to be tested / upgraded each time there is an upgrade to the corporate desktop.
What is holding us back from staying Evergreen?
· There is often unclear or limited business value to upgrade individual applications
· It's often not 'easy' to upgrade applications - it usually takes time, money and may cause isolated business disruption. Consider the car analogy - it takes time and money to change your oil every 5-10k miles, but if you don't, you have a much bigger problem later.
· There is often unclear accountability for ensuring individual applications are updated
· Some of our software suppliers are very slow in upgrading their software to run on the current release of the browser or operating system
· Applications are dependent on shared services AND shared services are dependent on applications. If ALL applications are not up to date to run on the current browser/desktop or server, then the shared services cannot be upgraded to the current release.
What is Changing?
This is a very old problem, but some things are changing.
· In the consumer world, we have seen the idea of the app store take off, where small, incremental upgrades are made available to the user and they do the upgrade as appropriate
· Also in the consumer world, we have seen operating system upgrades and browser upgrades become easy for the user to implement, with wide-scale success across millions of users
· Internally, we are seeing a more serious approach to managing our application portfolio
· Backward compatibility of applications is getting better
· Backward compatibility of operating environments is getting better
Where do we go from here?
We have 4 general options:
1. Status Quo: Push platform/infrastructure upgrades out periodically by careful planning and coordination before the new platform is pushed. Address the cost and disruption created by this approach as one-off events.
2. Cloud model: Push platform/infrastructure upgrades out as small, incremental upgrades over time without really telling your users it is happening. This is really only viable when a fully backward compatible architecture is in place or there are no significant application dependencies. For example, Yammer, Google and Facebook update their software often with notice, but without choice. Microsoft SharePoint could not do this today because the architecture is not designed in a way to support it, and many applications developed in the SharePoint environment would have to be retested.
3. Consumer model: Enable users to pull platform/infrastructure upgrades, made available (as soon as they are stable) in an easy-to-install fashion, with a warning that some apps may not work if they upgrade. Put a fixed date by when the upgrade must occur - and ensure application owners are aware of that date too. Let the 'consumer' drive the application owners to compliance. Apple's iOS environment and the Android OS are good examples where this is working reasonably well for millions of devices and applications.
4. Hybrid model: Similar to the consumer model, with four key differences (the warning step is sketched in code after this list):
- Application owners are actively and centrally coordinated to deliver to the date prescribed
- Users are told specifically which applications have not yet been tested, with warning messages during the pull period allowing them to further drive demand (sorry, we can't give you this upgrade because xyz application installed on your device isn't ready; please call the application owner, John Smith, to get his plans for upgrading his application)
- Application owners not ready at the cut-off date will get cross-charged for remediation and must pay a premium to host in a separate, isolated environment (e.g. Citrix)
- A push deployment happens when the cut-off date is reached - critical business applications are remediated before the cut-off date; non-critical applications are remediated after the cut-off date if they fail to work in the new environment
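Here is a minimal sketch of the warning step in the hybrid model, assuming a hypothetical compatibility registry that records whether each application has been tested against the new platform and who owns it. The application names, owners and registry structure are all made up for illustration.

```python
# Sketch: before offering the platform upgrade, check the user's installed
# applications against a (hypothetical) compatibility registry and explain
# exactly what is blocking them and who to chase.

compatibility = {
    # application -> (tested on the new platform?, application owner)
    "ExpenseTracker": (True, "Jane Doe"),
    "LabNotebook": (False, "John Smith"),
}

def upgrade_blockers(installed_apps):
    """Return user-facing warnings for apps not yet tested on the new platform."""
    warnings = []
    for app in installed_apps:
        tested, owner = compatibility.get(app, (False, "an unknown owner"))
        if not tested:
            warnings.append(
                f"Sorry, we can't give you this upgrade because {app} installed on "
                f"your device isn't ready. Please contact the application owner, "
                f"{owner}, for their upgrade plans."
            )
    return warnings

for message in upgrade_blockers(["ExpenseTracker", "LabNotebook"]):
    print(message)
```

The point of surfacing the owner's name to the end user is to let consumer demand, rather than central IT alone, drive application owners to compliance.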
How green is evergreen?
It doesn't always make sense to be running the latest and greatest. Sometimes the new environment isn't really ready. Microsoft is notorious for releasing 'buggy' software, and it's been a general rule of thumb to wait for a few patch releases or a point release before pushing upgrades to the general population. Microsoft isn't alone. It is a shame that, with all the testing the software suppliers do, we cannot just assume it will work in our environment; unfortunately, it often does not.
The general view amongst my colleagues is that Evergreen is N and N-1, where N is the current 'stable' release of the software and N-1 is the previous release. So today IE 9 is the current stable release of the Microsoft Internet Explorer browser, and IE 8 could be considered part of the evergreen environment. This is because there is usually a transition period where both browsers will need to be supported concurrently.
Of course N+1 would be IE 10, which isn't yet released, and even when it is released, there will need to be some time for it to be tested. So, we will define a 'stable' release of software as software that has been running in the mainstream environment (outside of our company) for at least 6 months and has released patches to address reported issues. At that stage, judgment must be used to determine if it is indeed stable based on industry reaction.
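As a worked example of the N / N-1 rule, here is a minimal sketch that treats release numbers as simple integers (as with IE 8 and IE 9); real version schemes would of course need a more careful comparison.

```python
def is_evergreen(deployed_version: int, current_stable: int) -> bool:
    """True if the deployed version is N (the current stable release) or N-1."""
    return deployed_version in (current_stable, current_stable - 1)

# With IE 9 as the current stable release (N), IE 8 is still evergreen (N-1)
# and IE 7 falls outside the evergreen environment.
print(is_evergreen(9, 9))  # True
print(is_evergreen(8, 9))  # True
print(is_evergreen(7, 9))  # False
```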
Bottom Line Recommendations
1. Adopt a structured hybrid approach to evergreen as outlined above until your application portfolio is modern enough to leverage a cloud and consumer model for evergreen.
2. Rationalize your application portfolio, giving preference to applications that demonstrate modern features (e.g. browser independence, backward OS compatibility, etc.)
3. Manage your application portfolio - every application has an owner; every owner understands their responsibilities. Clear guidelines are in place to select or migrate to modern application environments over time so that evergreen becomes the norm.
4. Ensure clear communications and roadmaps are provided when the underlying platforms are changing, with ample lead time to test if necessary.
5. Create a 'jail' for bad applications that can't or won't be compliant. Move applications there with a financial penalty of paying for the 'jail'. Manage the stay within the jail carefully so it doesn't become too full. Try to minimize the number of users coming to visit inmates, as this could be expensive (i.e. providing Citrix licenses to users to access applications in jail).
6. Carefully address validated applications. Know the rules around validation and don't overdo the testing of the app when it isn't absolutely required. Consider a specialized service that is focused on this.