Wednesday, September 23, 2015

Bringing Data Agility to Docker Containers

First, I think Docker is fantastic. But it seemed like at every DevOps conference I attended, I heard the same gist: "Docker is great, but I need an easier way to get production data (app and DB) to my containers, and keep it fresh. I don't find bugs until way too late in my cycle." Every time I heard that, I was reminded of the great blog post by my colleague Neil Batlivala titled "Why Docker Is Not Enough." And if you don't understand why that is hard or important, then I really encourage you to read his post. It is most excellent.

What I finally determined to do was demonstrate, in a simple and straightforward manner, how to leverage Delphix to do this. So I recorded a video that shows how easy it is to configure Delphix to deliver fresh data to Docker containers using out-of-the-box features. That setup is done in about 5 minutes, and then I demonstrate how to leverage all of the self-service tools of Delphix against Docker.

Delphix Setup/Config

  1. Add the source SugarCRM server and the target Docker server as environments in Delphix.
  2. Add the SugarCRM application owner (i.e., www-data) as an Environment User to both environments.
  3. Link Delphix to the SugarCRM source database. In this case it was a MySQL database.
  4. Create an Unstructured Files dataset for the webroot of the SugarCRM application and then link Delphix to it as the application. In my case, this directory was /var/sugarcrm/apps/sugarcrm/htdocs and the owner was www-data. Notice it is just the SugarCRM web content, not Apache, PHP, etc. The files in the htdocs folder are the same ones gathered when using the SugarCRM backup utility.
  5. After the initial link was completed, here is what my source environment looked like in Delphix:
It is after this setup that the video commences. More information on how I configured Landshark to run SugarCRM and Docker is at the bottom of the page.
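
Before moving on to the video, it is worth a quick sanity check that the www-data Environment User from step 2 can actually read everything in the webroot from step 4. A minimal sketch, using the path and owner from my setup (adjust both for yours):

  # Confirm ownership of the SugarCRM webroot and look for anything www-data cannot read.
  ls -ld /var/sugarcrm/apps/sugarcrm/htdocs
  sudo -u www-data find /var/sugarcrm/apps/sugarcrm/htdocs ! -readable -print | head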


Link to video: https://vimeo.com/140216365

Landshark Modifications to Run SugarCRM

Below are the steps I personally took to get SugarCRM up and running on my Landshark 2.2 installation. I made choices that simplified my path to demonstrating the use case, and they may not have always been "the most administratively correct." To learn more about Delphix Express and Landshark 2.2, please see my colleague Kyle Hailey's blog post here: http://datavirtualizer.com/delphix-express-free-version-of-delphix-available/

Landshark Linuxtarget Setup/Config:

  1. ssh into linuxtarget as root
  2. useradd -u 33 -g 33 www-data
  3. sed -i -e 's|tape|www-data|' /etc/group
  4. passwd www-data
    1. set to delphix
  5. visudo
    1. change the "delphix ALL" line to "delphix ALL=NOPASSWD:ALL"
    2. Add the following entries to the end of the file (after the delphix entries):
      Defaults:www-data !requiretty
      www-data ALL=NOPASSWD:/bin/mount, /bin/umount, /bin/mkdir, /bin/rmdir, /bin/ps, /usr/bin/docker
  6. chkconfig docker on
  7. service docker start
  8. exit
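
For anyone who prefers to script this, below is a rough consolidation of steps 2-7 into a non-interactive sketch. Note that it appends to /etc/sudoers directly instead of going through visudo, so treat it as illustrative and review it before running it as root on anything other than a throwaway Landshark VM.

  # Run as root on linuxtarget; consolidates steps 2-7 above (sketch only).
  useradd -u 33 -g 33 www-data
  sed -i -e 's|tape|www-data|' /etc/group        # rename the gid-33 "tape" group to www-data
  echo 'www-data:delphix' | chpasswd             # step 4: set the password to "delphix"

  # Step 5: sudoers changes (done interactively with visudo above)
  sed -i -e 's|^delphix ALL.*|delphix ALL=NOPASSWD:ALL|' /etc/sudoers
  echo 'Defaults:www-data !requiretty' >> /etc/sudoers
  echo 'www-data ALL=NOPASSWD:/bin/mount, /bin/umount, /bin/mkdir, /bin/rmdir, /bin/ps, /usr/bin/docker' >> /etc/sudoers

  # Steps 6-7: enable and start Docker
  chkconfig docker on
  service docker start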

Landshark Linuxsource Setup/Config:

  1. ssh into linuxsource as root
  2. repeat steps 2-8 above
  3. ssh into the linuxsource as delphix
  4. curl -L -O https://s3-us-west-2.amazonaws.com/landshark/linuxsource_public_setup.sh
  5. chmod +x linuxsource_public_setup.sh
  6. ./linuxsource_public_setup.sh
  7. This will take several minutes to complete, and you will be prompted for a password. The password is delphix.
  8. If the environments are already in Delphix, then refresh them.
  9. After it is finished, launch a web browser to http://<IP of linuxsource>:8080, e.g., http://192.168.2.111:8080
  10. username/pass = admin/delphixdb
  11. Complete the wizard. The only information required from you is an email address. Sugar is running and you are ready to complete the Delphix Config/Setup above.

Friday, August 14, 2015

OODA Loop + DevOps + Delphix = High Speed, Low Drag



Back in the 1950s, USAF Colonel John Boyd came up with a new combat decision methodology. He broke combat strategy into four stages: Observe, Orient, Decide, Act. Completing those four stages returned you to the Observe phase, where the process would begin again. This process is known as the OODA Loop. Boyd maintained that the way to defeat an enemy was to complete your OODA Loop faster than your enemy can complete theirs.

As is often the case, what is a good strategy for winning on the battlefield is also a good strategy for winning in the marketplace. We see this in the DevOps mantras of today: "Continuous Feedback," "Fail Fast," "Agile Development," and "SCRUM." All of these "DevOpsy" things fit snugly in the OODA Loop model.

And indeed, DevOps has proven itself invaluable in expediting a company's OODA Loop. Thanks to DevOps tools and methods, companies like Amazon push a distinct code change to production once every 11.6 seconds. That's over 7,000 times a day! They are able to observe market trends and user feedback in real time, make a decision based on that information, release new features, and then observe the effect of those changes. This loop is completed many times (thousands) a day, which allows them to outpace their competition.

DevOps has delivered those quicker OODA Loops mainly by eliminating the numerous touchpoints required in software delivery: support desk, infrastructure, ops, DBA, storage, security, project management, etc. Great tools like Puppet, Jenkins, Chef, and Ansible have codified and automated the process flow, allowing companies to trim environment requests from days/weeks/months down to hours/days. In addition to the speed gains, the continuous feedback made possible by DevOps has allowed companies to treat infrastructure as code and leverage version control to raise the overall quality of their products and projects.



That means we can go as fast as we like, right up until we hit an impasse, because application projects still require a lot of waiting. There's a US Navy term for that: High Speed, High Drag. It's like having the world's fastest race car but the world's slowest pit crew. Development and modernization projects, datacenter and cloud migration projects, COOP/DR failover exercises, data masking and auditing, BI reporting, etc. all require waiting (hours, days, or weeks) during database and application resets and refreshes as terabytes of data are restored and copied across the network. That means the refresh/reset process is likely the longest task in the schedule. A ten-minute destructive/failover test of your database/application can require a reset process that takes 10x-100x longer than the actual test, if such tests are even attempted at all given the level of effort and "time suck" required.

With Delphix, those application reset/refresh activities come down to minutes, performed in a few clicks of a mouse or automated with your other DevOps tools. It doesn't matter if it is 5MB or 5TB; it is done with a few clicks and in just minutes. That means your feedback cycles just became more than twice as fast. When your application/database environments have fresh data near-instantly and on demand, you no longer wait; you spend your time Observing, Orienting, Deciding, and Acting. And that is the coveted "High Speed, Low Drag" for your OODA Loop that is needed to defeat the enemy and beat the competition.

More information about Delphix can be found here:
http://www.delphix.com/

Friday, August 7, 2015

DevOps: An Engine in a Horseless Carriage

DevOps is fantastic, but the one constraint DevOps has been unable to address is data management and delivery. DevOps can automate the delivery of the ones and zeros of your applications and databases, but those bits and bytes can only travel so fast on the network. DevOps has allowed us to maximize efficiencies to nearly the breaking point of what physics allows for new environments. We build faster networks and accelerators to squeeze the last drops of speed out of silicon, copper, and fiber, with diminishing returns. Even Moore's Law is being re-evaluated, as technology just isn't able to make the same speed gains with our current understanding of physics. But what if we could change the equation?



I liken DevOps to the automobile. It was a tremendous innovation, replacing the horse-drawn carriage with the "horseless carriage," but that is truly what it was. They got rid of the horse and improved or eliminated the many touch points associated with arriving at your destination (care, feeding, stabling, dying of typhoid fever, etc.), yet you still had the same number of miles to cover on the same roads. But, like DevOps, you were now able to do it at breakneck speeds. That was all the current knowledge of science and physics would allow.

Before the turn of the 20th century, the Wright brothers saw technology like the automobile, marveled at it, but asked themselves: "The engine is fantastic, but is this as good as it gets? What if there is a better way to use the engine? What if we changed the equation? What if we could eliminate the constraint of roads?" Can you imagine the skepticism and derision they must have endured? I even recall reading that people referred to their "flying machines" as witchcraft. Despite all the naysaying, people came from many miles away to attend their air show demonstrations just to behold the miracle. The Wright brothers persevered, got the science right, and completely changed the way we perceive the world.

I draw a parallel to when we invented Data Virtualization at Delphix. We asked, "What if we could eliminate the constraint of moving the same data over and over again? What if we could make this happen in minutes? Why do we move data around the same way we have for decades? What if we could take the engine of DevOps and liberate it from the constraints holding it back from its real potential?" Our questions were similar to those the Wright brothers must have asked themselves. And once we figured out the science of how to do this, we set out on a mission to share our creation with the world.

I truly love sharing that message. A first meeting with a customer usually begins with a complete lack of knowledge that Data Virtualization exists (flight), almost always follows with a proclamation that Delphix is some sort of "magic" (witchcraft), and then a request to see it live and working in their environment (the air show). After they have witnessed Delphix in action, they begin telling everyone they know about "the next big thing" they just witnessed.




And as amazing as our technology is, Delphix is far more amazing because of the people who work to make it happen. I am really proud of all the people I work with. They are some of the most talented and dedicated people I have ever known. And we owe all of our success to people like the Wright brothers, who paved the way before us and dared to ask "Why?" Fittingly, I am writing this while on a flight home from one of those meetings. A tip of the hat to you both, Wilbur and Orville. Thank you for allowing our dreams to take flight.

Monday, April 6, 2015

Migrating My Enterprise App to the [anywhere but here] (Part 3 of the "The Cure for What Ails Your Remedy" series)

Part 1 of this Series
Part 2 of this Series

I cannot recall a DoD or Fortune 500 customer in the last five years that has told me they are not actively pursuing a move to the cloud or to a new datacenter. Many even have a strategy, and perhaps even an architecture. But when I ask them, "How do you plan to get there?", the response is often "I don't know yet." Why? Because while acquiring and leveraging cloud-based resources has never been easier, the path to the cloud is still fraught with roadblocks that insert great risk, elongate project schedules, and overrun budgets, putting your mission or business in jeopardy. This means the cloud migration that was supposed to deliver great advantages is now an expensive, ticking time bomb of risk. Will you make it before the timer expires? Which do you cut first, the green wire or the red? I firmly believe that Delphix eliminates the obstacles keeping you from achieving your cloud goals. I invite you to read this blog post about an application migration project where I use Delphix to move a prevalent enterprise application, BMC Remedy, to the cloud. At the conclusion of the article, you will have a better understanding of how Delphix enables greater mission and business success by letting you securely get your applications to the cloud ahead of schedule and under budget, all while mitigating the risk of failure.

If you are just jumping into this series, you can get caught up by reading the two links at the top of the page.

Let's start with a recap. In the first installment, I discussed the common and costly constraints of complex enterprise application projects, namely ITSM, and how we could apply Delphix to eliminate those constraints. The second installment was more technical in nature and chronicled my experience virtualizing a new application (BMC Remedy) for the first time. At the end of the second post, I realized the environment in my lab was not sufficient to run multiple instances of Remedy and I needed to migrate my work to the cloud. I decided to turn this to my advantage as a teaching moment. In this post, I am going to discuss some of the common constraints of application migration projects, how Delphix eliminates those constraints, and include a link to a video of my cloud migration of Remedy, where I use Delphix to move my Remedy installation from my lab to Amazon Web Services.

When I reflect back on what kept my customers from successfully migrating their applications, I see three main constraints.

Upside Down Plans
Your application migration plan likely resembles something like this: back up all application environments (AEs), copy all AEs to the new facility, get non-production application environments operational (the guinea pigs), stage new production, rehearse new production, validate new production, back up old production, stage new production for final cutover, cut over to new production. That simple, right? Anyone who has been part of an application modernization or migration project knows that each step, even the first one, can take months to complete, and some steps will go through multiple iterations (rehearsal and validation).

You are spending, or planning on spending, a lot of time and resources to migrate your applications to the cloud in order for your mission or business to reap the agility, speed, and/or cost benefits that the cloud offers. Ironically, your migration plan is likely to take you a huge step backwards. But, you realize this and are willing to take the step, because you are hoping for two steps forward afterward. Hope is not a strategy. Why not just move forward the whole time?

With Delphix, you can turn your application migration project right-side up. Place a Delphix engine in your new location; Delphix is cloud-ready, so you can even put it in AWS or AWS GovCloud. Delphix will make a non-disruptive, compressed, and filtered copy of your production application and database in the new location, and it will even work across a slow WAN link. Going forward, Delphix stays in near real-time sync with your production applications and databases, building a sort of time flow that gives you a copy of your production application in your new datacenter from virtually every point in time after that initial copy.

With your production application data in Delphix, located in the new datacenter, Delphix can now share the underlying data blocks across your non-prod environments to create as many virtual copies as you need. Instead of stopping work, backing up, and then moving all of your non-prod environments, you can hydrate all of your non-production environments with fresh virtual copies of production in minutes. These copies are fully autonomous, read/write, and consume a fraction of the storage (around a 90% reduction). The virtual applications feel and behave exactly the same way, so your developers do not have to change how the application interacts with the database or how they currently perform their development.

But the real power of how Delphix changes the process and economics of these projects is in the self-service tools. Delphix gives your DBAs and developers the ability, with a few clicks of a mouse, to perform fast refresh, rewind, restore, archive, bookmark, and branching of their applications, all completed in minutes.

Rational FUD

But nothing slows down, or completely paralyzes, a project like fear, uncertainty, and doubt (FUD). With all of the migration horror stories about business outages, cost overruns, and vendor lock-in, a healthy dose of FUD is warranted when using traditional migration methods. After all, it's only called paranoia if there isn't a reason to be worried.

Delphix allows you to increase your rehearsal and validation activity by 1000%, using fresh production data each time. No longer are you conducting rehearsal and validation on weeks- or months-old data, with tests that take minutes to run and days or weeks to reset for another attempt. Provision a copy of your production application to the rehearsal environment with a few clicks, and in minutes it's ready. Perform your validation activity. Something didn't go right, or you want to tweak a setting and try again? Restore that application to the pre-test state with a few clicks and it is ready for testing again in a few minutes. If validation was successful, or if you had some interesting rehearsal results that you want to share with others or revisit later, just click "bookmark" and you have a complete copy of your application environment that you can call on at any time in the future (and produce within minutes).

Do you need your non-production data to be masked? Delphix Agile Masking can replace all of the PII, PCI, or any other sensitive information in your production data with realistic data (think randomized SSNs, etc.), so your developers can verify that their new query pulls the details of the entire fleet's personnel records without exposing anyone's actual information. And since Delphix can handle all of this automatically, developers no longer wait for someone to scrub the new copy of production before they can have it, and you no longer have to hope the masking has actually been done. All of this is set up with a web browser, without requiring specialized programming skill sets.

Once you have successfully completed all validation activities and are ready for cutover, you can have Delphix unvirtualize the AE. Delphix will create a full copy of that AE in your new production environment so you can complete any final steps needed for cutover. Once new production is up and running, you can have Delphix link to that environment and leverage the data from new production for all of your downstream copies.

Would your anxiety level go down and stakeholder confidence go up if you were able to conduct 1000% more high-quality tests with fresh production data, without adding a day to the schedule?

Dragging a Sled

But if the migration project has already begun, even when the customer wants all of these benefits, I commonly hear a variation of this irrational FUD statement: "I can't introduce a new product into my migration because we have already budgeted our funds, nor can we allow the project to slip to the right any further." Every time I hear this, it reminds me of a response I once heard an executive give a now-customer. It went something like this:
I understand where you are coming from, I do. You and your entire team are heads-down, working your hardest to drag this project across the finish line. You have accepted that you won't get it there in time, but you are working your hardest to ensure "the application sled" delivers your application as soon as possible. But I am offering you wheels and an engine, not a fancier sled. Wheels and an engine turn your sled into an automobile. And you will actually deliver your application on time and under budget, with no losses along the way. You have to take the time now to make the changes needed to speed up your projects. Putting on wheels after you cross the finish line is pointless.

It usually takes only a few days to get Delphix installed and operational in your environment. Most of my customers begin reaping the benefits of Delphix inside of a week. That means that in as few as five days, your team can be resetting rehearsal environments in minutes and performing many more tests, while considerably increasing the quality of those tests. In five days you can be driving your application project, not dragging it.

Real-Life Example
A real customer engagement exemplifies Delphix addressing all three of these areas. A Fortune 500 consumer products company had undertaken an application modernization and migration project. In addition to moving to Oracle Exadata hardware, the customer was also changing MSPs. That, in itself, is a double whammy: not only are you migrating datacenters, you are also changing contracts and operations personnel. As you can imagine, this became a political nightmare. Very quickly, milestones were being missed and costs were escalating. It got so bad that the company had come to the conclusion that the only way to get the migration done was to plan for a week-long outage, rent forklifts to load their equipment into semi-trucks, and drive them across the country to the new location! At about this time, Delphix entered into discussions with the company. The customer purchased Delphix and implemented it in their migration project. The end results were astounding. The customer eliminated the "lift and shift" of four out of five of their non-production environments and reduced their overall migration and ongoing operational costs for those environments by 80%. The customer finished their migration project ahead of schedule and under budget, and experienced zero downtime all the way through cutover.

Below is a link to my video where I actually move my BMC Remedy installation into the Amazon cloud using the features I spelled out in this post. The video is aimed at the more BMC Remedy-knowledgeable viewer, but its basic concepts can be applied to your enterprise applications.

Tuesday, March 17, 2015

Virtualizing Remedy (Part 2 of the "The Cure for What Ails Your Remedy" series)

I am finally back from vacation and have had a chance to focus on virtualizing BMC ITSM (Remedy). This second installment in the blog series journals my experiences virtualizing the Remedy application. If you have never heard of Delphix before, watch this quick video and then come back here. Beyond this summary, I wax into some technical details that may only appeal to a more technical audience intimately familiar with enterprise software and virtualization minutiae. Overall, the virtualization of Remedy was straightforward and quick to accomplish. It took me longer to record, edit, and blog about this process than it actually did to complete it. In the end, I was able to successfully produce a fully functional virtual copy of Remedy in under 15 minutes with Delphix; 12 of those minutes were just waiting for Remedy to start. Below are some notes on my experiences, both captured in the video (link at the bottom) and done "off camera":


Satisfy Prerequisites

It sounds obvious, I know, but the first step was to ensure the systems I was using met the prerequisites of the application, such as libraries and other dependencies. In my case, since I was installing my source application for the first time as well, I just had to repeat the same preparatory steps on the system(s) that would be receiving the virtual copies of the application.

I also took the steps necessary to satisfy the Delphix prerequisites. In a nutshell, I have to provide Delphix with an account that can read and access the data on the source I want to virtualize, and an account that can read and execute the data on the target system(s). This can be an existing account, but I am security minded and like to keep separate accounts for different functions, so I created a non-privileged user called 'delphix' on both machines (they did not have to be the same). On the source, I gave that user the ability to read the database and the necessary schema info (Delphix provides a script to do this for you, if you wish to leverage it) and also the ability to read the file system directories of the source application. On the target system, I gave that user the ability to execute a few limited commands like mounting and unmounting a specified directory.
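
For reference, the target-side portion of that setup boils down to something like the sketch below. The sudo command list mirrors the one I used in my earlier Docker post (mount, umount, mkdir, rmdir, ps); it is only an illustration, and you should tailor it to your own security policy, ideally through visudo.

  # Create the non-privileged delphix user on the target and grant it a
  # narrow set of sudo commands (sketch only; adjust to your policy).
  useradd delphix
  passwd delphix
  echo 'Defaults:delphix !requiretty' >> /etc/sudoers
  echo 'delphix ALL=NOPASSWD:/bin/mount, /bin/umount, /bin/mkdir, /bin/rmdir, /bin/ps' >> /etc/sudoers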

A full list of system requirements can be found on our publicly accessible product documentation page:
Oracle Support and Requirements


Tidy Things Up

I examined my installed source application and noticed that although I had specified one directory (/u02/bmc), the installer had also put necessary files in the default directory (/opt/bmc). I also tracked down all the other supporting file locations (/etc/arsystem). There are a few ways to handle the disparate locations. In my case, I opted to relocate everything into the installation directory and place symlinks on the system at the original locations. This makes it easier to track, maintain, and update applications in general, so this is how I opted to arrange things before bringing the application into Delphix.
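
In concrete terms, with the application stopped, the tidy-up amounted to something like the sketch below. The directories are the ones named above; the subdirectory names I move them into are arbitrary choices of mine, not anything Remedy or Delphix requires.

  # Consolidate the stray Remedy locations under the install directory and
  # leave symlinks behind so the application still finds them (sketch only).
  mv /opt/bmc /u02/bmc/opt-bmc && ln -s /u02/bmc/opt-bmc /opt/bmc
  mv /etc/arsystem /u02/bmc/etc-arsystem && ln -s /u02/bmc/etc-arsystem /etc/arsystem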


Bring the Application Into Delphix

Once the application was ready, it was time to start bringing the data into Delphix. I first told Delphix where the data was housed by providing it with the IP address (no DNS in my little network) and the credentials of the delphix user I created. Delphix then logged in via ssh and discovered some basic information about my system, my Oracle installation, the arsys DB instance, and my PostgreSQL instance (which I had forgotten was on there). Delphix copies over some temporary files to accomplish this, executed as the non-privileged user. This took about 30 seconds. Once the discovery was complete, I told Delphix where to find my two application components (Mid-Tier and ARSystem).

Now that Delphix is aware of the three data sources (DB, ARSystem, and Mid-Tier), it's time to start bringing in the data from those systems. I linked Delphix to each of those sources, creating what is known as a dSource. At that point, Delphix starts bringing a filtered, compressed, and deduplicated copy of those three sources into the Delphix engine. It took about 7 minutes for the initial copy of roughly 12GB of data to complete, which isn't bad for my little home network. That 12GB of data consumes only around 8GB inside Delphix: a ratio of about 1.5:1 on the apps and 5.5:1 on the Oracle database. Going forward, Delphix non-disruptively copies only changed data blocks from the source system.
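
If you want a rough sense of how much application data an initial link will pull before you kick it off, a quick check on the source host is enough. This is just a sketch using my install directory, where everything lives after the tidy-up above (the database size comes from Oracle itself, not from du):

  # Rough on-disk size of the consolidated Remedy install directory on the source.
  du -sh /u02/bmc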

For more information on how Delphix works, see the "How Delphix Works" page on Delphix.com:
http://pages.delphix.com/rs/delphixcorp/images/Delphix_Jet-Stream_DS.pdf


Provision a Virtual Copy of the Application

Now that the data is in Delphix, I am ready to create virtual copies of it. Since I haven't already done so, I add my target environment into Delphix. It does the same discovery and identifies my Oracle and PostgreSQL installations, but there are no databases running on those instances. Now I have a place to run my virtual applications and database. I first provision my virtual database (VDB) by selecting my Remedy DB dSource, choosing a point in time, and clicking "Provision." I specify a new SID, unique name, and database name, and I change the memory target that was gathered from my source Remedy database (my target instance doesn't have as much memory available). I can also change or specify other Oracle parameters here, such as cursors, etc. I click Next and give it a name to identify it in Delphix ("Remedy DB - Dev"), the group I want to put it in ("Dev"), and my default snapshot policy for this virtual database (I choose "None"). I accept the remaining default values and click Finish on the summary screen. It takes me less than 10 seconds to kick off the virtual database provisioning process.

I repeat similar steps for the other two dSources. The only difference worth noting is that I obviously don't have SIDs or Oracle parameters to specify for these apps. It takes around 30 seconds to provision the virtual application data and a little less than 3 minutes to provision a running virtual instance of the arsys database. At this point, the database is running, but the application is not.


Rinse, Lather, Repeat

Now that I have all of the necessary components of Remedy provisioned to the target system, I need to reconfigure the data so that the application will run on the new system. This is a full copy of production: it has all my tickets, changes, data, patches, and configuration files. I am not a Remedy expert (I did proclaim this in my previous post), so this part is going to take Google and a few emails to former colleagues of mine. In short, I know I am going to mess up a lot and need to revert destructive changes, and I will want to be sure I can take all three components back to the same synchronized point in time. For this, I used Delphix's JetStream component. I created a Remedy template in JetStream that identified the three tiers of the Remedy application and the source of the data for those tiers (the dSources). I then added my three virtual components, created in the last section, into a container of that template and called it "Development Instance." This allows me, with just one or two clicks of a button, to quickly rewind, restore, refresh, or bookmark all three components to the exact same point in time.

After numerous mistakes, purely from my own fumbling around, I finally figured out all of the steps needed to reconfigure the Remedy application to run on the target system. I then logged into the Mid-Tier, made the necessary changes, and validated everything by logging into the ARSystem.

You can find additional information about JetStream in this data sheet:
http://pages.delphix.com/rs/delphixcorp/images/Delphix_Jet-Stream_DS.pdf

Bookmark

After all of the reconfiguration was done, I used JetStream to bookmark all three components. I now have all three tiers of the Remedy stack archived in a crash-consistent copy. That means I have a known-good, ready-to-go virtual copy of Remedy that will run on my target system, and I can get back to that known-good state in just three minutes, then start up the ARSystem and the Mid-Tier; no other reconfiguration is needed. I could do something stupid like recursively delete the installation directory or drop all the tables from the database, and with one click of a button ("Rewind" in JetStream), all of my data and applications will be back to their ready state and good to go. I can also take all three tiers back to any point in time, in just minutes. I can roll back to a previous state to check something out, and then roll forward again. All of this point-in-time data would consume 160GB on traditional media; in Delphix it consumes less than 1.5GB, a ratio of over 100:1.




Additional Thoughts/Notes

  • I chose to manually start/stop the Remedy application components for the purpose of the video and demonstration. Delphix has the ability to call automation ("Hooks" in Delphix parlance) and to be called by existing automation (e.g., BladeLogic, Puppet, or Atrium Orchestrator); a sketch of what such a hook might look like follows this list.
  • Though I chose the simplest installation possible due to my lack of Remedy expertise and system resources, Delphix could have ingested a multi-AR Server/Mid-Tier instance with a RAC database. Which leads me to my next point.
  • I don't have enough system resources! It was nearly impossible to run both my "production" and "non-production" Remedy instances simultaneously at a tolerable speed. I am going to need to migrate to the cloud for my next sessions, which is a great opportunity for us. My next blog post will be one of necessity: Using Delphix to Migrate Remedy to the Cloud.
  • Once that blog post is complete, I will post a demonstration of how Delphix eliminates the rest of the constraints I spoke about in my previous blog post.
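
As noted in the first bullet above, the start/stop steps I performed by hand could instead live in a hook that Delphix runs before and after a refresh or rewind. The sketch below is only illustrative: the install path is mine, and the Mid-Tier stop/start scripts are hypothetical placeholders for however your Mid-Tier (typically Tomcat) is started.

  # Hypothetical pre-refresh hook: stop Remedy before the virtual data changes.
  /u02/bmc/ARSystem/bin/arsystem stop          # AR System control script (path assumed)
  /u02/bmc/mid-tier/stop-midtier.sh            # hypothetical Mid-Tier (Tomcat) stop script

  # Hypothetical post-refresh hook: bring everything back up afterward.
  /u02/bmc/ARSystem/bin/arsystem start
  /u02/bmc/mid-tier/start-midtier.sh
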
The video journaling this experience can be found here:

Thursday, February 26, 2015

The Cure for What Ails Your Remedy


Information Technology Service Management (ITSM) is, in a nutshell, an approach to managing everything in the IT lifecycle as a service, as opposed to just assets and technology. Most enterprises end up with huge ITSM systems (think BMC Remedy, HP ServiceCenter, CA Intellicenter, or ServiceNow). There are a few ITSM suites out there that can deliver the framework and required core capabilities for your environment, but your enterprise also has a lot of unique requirements, and that means a lot of customization in your ITSM systems. And, just like ERP systems, ITSM customizations aren't only upfront; they are constantly ongoing. The necessary evil of customization brings about immense pain that manifests itself in production outages due to insufficient test cycles, exorbitant project costs, and crumbling RTO and RPO.

I had the privilege and honor of spending many years with the fine folks at BMC, designing many architectures and integrations around their ITSM platform (Remedy) and their Cloud Lifecycle Management platform. I will share my POV from my experience with Remedy, but everything I discuss also applies to the other enterprise ITSM systems like HP ServiceCenter.

The ITSM teams are often plagued with data constraint issues that can be categorized as follows:
  • Insufficient Environments - Everybody shares one instance; perhaps just one non-prod instance total. (Again, I know this from direct experience.) This results in everyone waiting in line to do their work, and the release process is always dependent on the last person finishing their tasks before the next stage can commence (it can't go to staging until the last QA engineer is finished). And when I say everyone, I mean Break/Fix, Dev, Training, etc. I can personally name only a few places where I have seen more than two non-production environments (a total of three ITSM environments).
  • Inadequate Data - The data in the non-prod environment(s) is usually synthetic (typically the Calbro Services sample data plus some synthetic transactions). And if it does contain real data, it is a subset and almost never refreshed. ITSM environments can be huge! Having to copy and move that much data becomes a very labor-intensive task. On top of that, ITSM systems by nature contain the most sacred details of your enterprise: product serial numbers, asset locations, fleet info, vendor contracts, service level agreements, employee SSNs, birthdays, dependent information, salary info, etc. No CIO/CSO I know wants multiple copies of that data around.
  • Inefficient Processes - How long does it take you to spin up a new ITSM environment? How long does it take you to back out of a failed patch? How long does it take to restore or reset the environment? How long does it take to refresh your break/fix instance with production data when an issue arises? What about on a Friday at 4:30 PM? For any of those questions, can you describe the process? My guess is that if you are a Remedy manager, developer, or tester, your answer (days, weeks, months) would be far different from the answer of your sysadmin, DBA, or storage admin (minutes or hours). Why? Because your process likely starts with "I file a ticket…" and includes statements like "I then have to wait for… and then I have to wait for… (*n) …and then finally…" The sysadmins, DBAs, and storage admins are awesome, but the process is inherently slow, even if the individual actions are fast.

I have known a lot of customers over the years that have struggled with the huge project costs and timelines associated with upgrading Remedy to a new version or migrating to a new datacenter. We're talking $MM services engagements here. What if we could turn that on its head and cut those timelines and costs in half?


Over the next few weeks I am going to blog about how Delphix completely changes the process and economics of your enterprise applications. First, I will be chronicling my journey of virtualizing BMC Remedy with video and blog posts. This won't be so much a tutorial on the best practices of Remedy development/testing or a "how to" guide for virtualizing Remedy with Delphix (full disclaimer: I am not a Remedy guru). Consider this a documented exploration of a solution to the problems listed above. The ultimate goal of this series is for the reader to have an epiphany about what is now possible because of Delphix.