Sunday, July 8, 2018

Deploying Data as Code (Delphix + Terraform + Amazon RDS)

Architecture diagram from Delphix.

Last year, Delphix blogged about how the Dynamic Data Platform can be leveraged with Amazon RDS (link here). Subsequently, they released a knowledge article outlining how the solution can be accomplished (link here).

I thought I would take the work I have been doing developing a Terraform plugin and create a set of blueprints that could easily deploy a working example of the scenario. I also took that a step further and created some Docker containers that package up all of the requirements to make this as simple as possible.

This demonstration requires the Delphix Dynamic Data Platform and Oracle 11g. You will need to be licensed to use both.

TL;DR

  1. Build the Packer template delphix-centos7-rds.json via the instructions found here: https://github.com/delphix/packer-templates
  2. Via Terraform, build the blueprints found here: https://github.com/delphix/delphix-terraform-blueprints-rds

Walk Through

This example automates the deployment of the solution described in KBA1671.

This example requires that you possess the proper privileges in an AWS account, access to Oracle 11g software, and access to version 5.2 of the Delphix Dynamic Data Platform in AWS.

Consult https://github.com/delphix/delphix-terraform-blueprints-rds for details on the prerequisites.

Automation products used:
  • Ansible
  • Packer
  • Terraform
  • Docker
  • Delphix Dynamic Data Platform

Building the AMI

In this example, we will use a simple configuration with Oracle 11g as the backend.
We will first create an Amazon AMI that is configured with Oracle 11g and ready to use with Delphix.
We will build the image using a Docker container running Packer and Ansible.

We will follow the instructions here to build the delphix-centos7-rds.json template.




We will be using the cloudsurgeon/packer-ansible Docker container to build our AMI.
See the full description on https://hub.docker.com/r/cloudsurgeon/packer-ansible for usage details.


  1. First we clone the repo, then navigate into the directory.
  2. Next, copy the .example.docker file to .environment.env and edit the values to reflect our environment.
  3. Now we run the Docker container against the delphix-centos7-rds.json template to create our AMI (the assembled command is sketched after this list).
    Details of the command:
    • docker run – invoking docker to run a specified container

    • --env-file .environment.env – passing in a file whose values will be set as environment variables inside the container
    • -v $(pwd):/build – mount the current working directory to /build inside the container
    • -i - run the container in interactive mode
    • -t – allocate a pseudo-TTY
    • cloudsurgeon/packer-ansible:latest - use the latest version of this image
  4. When the container starts, it will download the necessary Ansible roles required to build the image.
  5. After downloading the Ansible roles, the container executes Packer to start provisioning the infrastructure in AWS to prepare and create the machine image. This process can take around 20 minutes to complete. 
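Putting steps 1 through 3 together, the build sequence looks roughly like the sketch below. This is a hedged outline rather than the authoritative procedure; the exact arguments (for example, how the delphix-centos7-rds.json template is selected inside the container) are defined by the packer-templates repo and the cloudsurgeon/packer-ansible image, so consult their documentation.

    # Clone the Packer templates repo and enter it (step 1)
    git clone https://github.com/delphix/packer-templates.git
    cd packer-templates
    # Copy the example environment file and edit the values for your environment (step 2)
    cp .example.docker .environment.env
    # Run the packer-ansible container against the current directory to build the AMI (step 3)
    docker run --env-file .environment.env -v $(pwd):/build -i -t cloudsurgeon/packer-ansible:latest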

Build the Demo environment with Terraform


Now that we have a compatible image, we can build the demo environment.

  1. First we clone the repo, then navigate into the directory.
  2. Next, copy the .example.docker file to .environment.env and edit the values to reflect our environment.
  3. See the Configuring section of the README for details on the variables.
  4. We will be using the cloudsurgeon/rds_demo Docker container to deploy our demo environment. See the full description on https://hub.docker.com/r/cloudsurgeon/rds_demo for usage details.
  5. Run the rds_demo container to initialize the directory:
    docker run --env-file .environment.env -i -t -v $(pwd):/app/ -w /app/ cloudsurgeon/rds_demo init
  6. Run the rds_demo container to build out the environment (the full command is sketched after this list).
    Details of the command:
    • docker run – invoking docker to run a specified container
    • --env-file .environment.env – passing in a file that will be instantiated as environment variables inside the container
    • -v $(pwd):/app – mount the current working directory to /app inside the container
    • -w /app/ - use /app as the working directory inside the container 
    • -i - run the container in interactive mode
    • -t – allocate a pseudo-TTY
    • cloudsurgeon/rds_demo:latest – use the latest version of this image
    • apply -auto-approve – pass the apply action along to Terraform and automatically approve the changes (avoids typing yes a few times)
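Assembled, the full demo build looks something like the sketch below (a hedged outline based on the flags described above; the repo README remains the authoritative reference):

    # Clone the Terraform blueprints repo and enter it (step 1)
    git clone https://github.com/delphix/delphix-terraform-blueprints-rds.git
    cd delphix-terraform-blueprints-rds
    # Copy the example environment file and edit the values for your environment (step 2)
    cp .example.docker .environment.env
    # Initialize the working directory (step 5)
    docker run --env-file .environment.env -i -t -v $(pwd):/app/ -w /app/ cloudsurgeon/rds_demo init
    # Build all three phases and automatically approve the Terraform changes (step 6)
    docker run --env-file .environment.env -i -t -v $(pwd):/app/ -w /app/ cloudsurgeon/rds_demo apply -auto-approve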

This repo is actually a set of three Terraform blueprints that build sequentially on top of each other, due to dependencies.
The sequence of automation is as follows:
Phase 1 - Build the networking, security rules, servers, and RDS instance. This phase will take around 15 minutes to complete, due to the time it takes AWS to create a new RDS instance.

Phase 2 - Configure DMS and Delphix, and start the DMS replication task.
Phase 3 - Create the Virtual Database copy of the RDS data source.

Using the Demo

Once phase_3 is complete, the screen will present two links. One is to the Delphix Dynamic Data Platform; the other is to the application portal you just provisioned.

  1. Click the “Launch RDS Source Instance” button. The RDS Source Instance will open in a new browser tab.
  2. Add someone, like yourself, as a new employee to the application.
  3. Once your new record is added, go back to the application portal and launch the RDS Replica Instance.
  4. You are now viewing a read-only replica of our application data. The replica is a data pod running on the Delphix Dynamic Data platform. The data is being sync’d automatically from our source instance in RDS via Amazon DMS.
  5. Go back to the application portal and launch the Dev Instance.
    The backend for the Dev Instance is also a data pod running on the Delphix Dynamic Data Platform
    It is a copy of the RDS replica data pod.
    Notice we don’t see our new record.
    That is because we provisioned this copy before we entered our new data.
    If we want to bring in the new data, we simply need to refresh our Dev data pod.
    While we could easily do that using the Dynamic Data Platform web interface, let's do it via Terraform instead.
  6. In the terminal, we will run our same docker command again, but with a slight difference at the end.
    This time, instead of apply -auto-approve, we will pass phase_3 destroy -auto-approve (see the sketch after this list).
    Details of the new parts of the command:
    • phase_3 – apply these actions only to phase_3
    • destroy – destroy the assets 
    • --auto-approve – assume ‘yes’

    Remember, phase_3 was just the creation of our virtual copy of the replica. By destroying phase_3, Terraform is instructing the DDP to destroy the virtual copy.
  7. If you log in to the DDP (the username is delphix_admin; the password is in your .environment.env file), you will see the dataset being deleted in the actions pane.
  8. If you close and relaunch the Dev Instance from the application portal again, you will see that the backend database is no longer present.

  9. Now we run our Docker container again with the apply command, and it rebuilds phase_3 (also shown in the sketch after this list).

  10. If you close and relaunch the Dev Instance from the application portal again, you will see that the backend database is present again and this time includes the latest data from our environment.
  11. When you are finished playing with your demo, you can destroy all of the assets you created with the following docker command:
    docker run --env-file .environment.env -i -t -v $(pwd):/app/ -w /app/ cloudsurgeon/rds_demo destroy -auto-approve
  12. It will take about 15-20 minutes to completely destroy everything.
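For reference, the refresh cycle described in steps 6 through 10 boils down to two commands, following the same pattern as above (a hedged sketch; argument handling is ultimately up to the cloudsurgeon/rds_demo container):

    # Destroy only phase_3, i.e. the virtual copy of the replica (step 6)
    docker run --env-file .environment.env -i -t -v $(pwd):/app/ -w /app/ cloudsurgeon/rds_demo phase_3 destroy -auto-approve
    # Re-run apply to rebuild phase_3 with the latest data (step 9)
    docker run --env-file .environment.env -i -t -v $(pwd):/app/ -w /app/ cloudsurgeon/rds_demo apply -auto-approve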

Thursday, June 21, 2018

Creating a Test Data Catalog with Delphix

Test environment data is all over the place, slowing down your projects and injecting quality issues. It doesn't have to be this way.



According to the TDM Strategy survey done by Infosys in 2015, up to 60% of application development and testing time is devoted to data-related tasks. That statistic is consistent with my personal experience with the app dev lifecycle, as well as my experience with the world’s largest financial institutions.

A huge contributor to the testing bottleneck is data friction. Incorporating people, process, and technology into DataOps practices is the only way to reduce data friction across organizations and to enable the rapid, automated, and secure management of data at scale.

For example, by leveraging the Delphix Dynamic Data Platform as a Test Data Catalog, I have seen several of my customers nearly double their test frequency while reducing data-related defects. The Test Data Catalog is a way of leveraging Delphix to transform manual, event-driven testing organizations into automated testing factories, where everyone in testing and dev, including the test data engineers, can leverage self-service to get the data they need and to securely share the data they produce.

Below you will find two videos I recorded to help illustrate and explain this concept. The first is an introduction that goes a little deeper into the problem space. In the second video, I demonstrate how to use Delphix as a Test Data Catalog.





Reach out to me on Twitter or LinkedIn with your questions or if you have suggestions for future videos.

Wednesday, June 20, 2018

Solving CI/CD (Continuously Interrupted/Continuously Disappointed)

Continuous — (adj.) forming an unbroken whole; without interruption.

Continuous Integration and Continuous Deployment are two popular practices that have yielded huge benefits for many companies across the globe. Yet, it’s all a lie.

Although the benefits are real, the idea behind CI/CD is largely aspirational for most companies and would more properly be titled, "The Quest for CI/CD: A Not-So-Merry Tale."

Because, let’s face it, there is still a lot of waiting in most CI/CD. To avoid false advertising claims, perhaps we should just start adding quiet disclaimers with asterisks, like so: CI/CD**.

The waiting still comes from multiple parts of the process, but most frequently, teams are still waiting on data. Waiting for data provisioning. Waiting for data obfuscation. Waiting for access requests. Waiting for data backup. Waiting for data restore. Waiting for new data. Waiting for data subsets. Waiting for data availability windows. Waiting for Bob to get back from lunch. And even when devs just generate their own data on the fly, QA and Testing get stuck with the bill. (I am talking to three F100 companies right now where this last issue is the source of some extreme pain.)
I wish I could say that any one technology could solve all data issues (I have seven kids and that fact alone would pay for their entire college fund). But, I can say that Delphix solves some very real and very big data issues for some of the world’s biggest and best known brands, through the power of DataOps. It allows organizations to leverage the best of people, process, and technology to eliminate data friction across all spectrums.

Here I share a video of how I tie Jenkins together with Delphix to provision, back up, restore, and share data in an automated, fast, and secure manner. This video explains how I demonstrated some of the functionality in my Delphix SDLC Toolchain demo.


**excluding those things that we obviously have to wait for.

Monday, November 13, 2017

Delphix Toolchain Integration

Hey Everyone!

I know I have been talking about this for a while, but with the DevOps Enterprise Summit kicking off, I thought it was time to finally do it! Below you will find a video of Delphix integrated into a typical toolchain consisting of tools like Datical, Maven, git, Jenkins, and Selenium.

In this video, I walk through a form of "A Day in the Life" of the SDLC, where we want to introduce a new feature to our employee application: we want to record employees' Twitter handles. To make this simple change, we will need to introduce database object changes (a column to store the handle) and application-level changes to display and record the new field. This is a simple application with a Java + Apache front end and an Oracle 12c backend.

Below is a general swim-lane diagram of the flow, as well as the video. More details on the "how" of the components next week! (I will replace this video with a better quality video, but my computer crashed last night with all the changes, and I had to reproduce everything on a loaner system. Crazy story.)




Delphix as a part of the DevOps toolchain demo from Adam Bowen on Vimeo.

Thursday, September 7, 2017

Easily Moving Data between Clouds with the Delphix Dynamic Data Platform

Hey everyone! I’m back in the “demonstration saddle” again to showcase how easy it is to replicate data from one cloud to another. Data friction abounds, and there are few places where you feel as much data friction as in cloud migration projects. Getting data into the cloud can be a challenge, and adding security concerns can make it seem almost impossible. DataOps practices can ensure that data friction is not the constraint keeping you from leveraging the cloud. I recorded this video to demonstrate how the Delphix Dynamic Data Platform (DDP) works across the five facets of DataOps (governance, operations, delivery, transformation, and version control) to make migrations "friction free."

In this video, you will see me replicate data from Amazon Web Services (AWS) into Microsoft Azure, and also from Azure to AWS. Since the actual steps to replicate are very few and only take a matter of seconds, I spend time in the video explaining some of the different aspects of the DDP. I also highlight leveraging the DDP’s Selective Data Distribution, which only replicates data that has been sanitized as part of an approved and automated masking process. At the conclusion of the video, I create a copy of the masked virtual database (VDB) and demonstrate how quickly you can do a destructive test and recover.


Here is a high-level diagram to understand the layout of what I am working with:


arch.png


And the video:

Sunday, May 7, 2017

He Ain't Heavy, He's My Data

Man collecting data into funnel
bigstock/monsitj

The explosion of data in recent years has had some knock-on effects. For example:
  • Data theft is far more prevalent and profitable now than ever before. Ever heard of Crime-as-a-Service?
  • There is now more pressure than ever before to modernize our applications to take advantage of the latest advances in DevOps and Cloud capabilities.
But the problem is that data growth is actually encumbering most companies' ability to modernize applications and protect customer information. The effect is exacerbated in environments leveraging containerization where application stacks are spun up in seconds and discarded in minutes. Through no fault of their own, the DBA/Storage Admin can't even initiate a data restore that quickly. This has painted data as the bad guy.

Thief hiding behind handcuffs
bigstock/andrianocz

The consequence of this is that Dev/Test shops have moved towards eliminating the 'bad guy' by using subsetted or purely synthetic data throughout their SDLC. After all, it kills two birds with one stone: data is small and easy to get when they need it, and nothing of value exists to be stolen.

But the implication of this well-meaning act is that application quality decreases and their application projects are just as slow, if not slower, than before. Their use of non-realistic datasets results in an increase in data-related defects. Then they try to combat the self-inflicted quality issues by creating a whole new data program lifecycle around coverage mapping, corner cases, data quality, data release, etc. The net result is that they spend at least as much human and calendar time on data as they did before... yet they still have self-inflicted data-related quality issues.

We need to stop the madness. Data is not the enemy, rather it is the lifeblood of our companies. The true enemy is the same enemy we have been tackling with DevOps: Tradition. The traditional way that we have been dealing with the culture, process, and technology around data is the enemy. At Delphix we help our customers quickly flip this on its head and eliminate the true enemy of their business. By enabling our customers to provision full masked copies of data in minutes via self-service automation, they now have data that moves at the speed of business. Their applications release over 10X faster, their data-related defects plummet, and their surface area of data-risk decreases by 80%. And one of the beautiful things is that, in most cases, Delphix is delivering value back to the business inside of two weeks.

bigstock/pryzmat
When you only address the symptoms of a problem, the problem remains. Data is not your enemy; serving data like you did for the last two decades is the enemy. Your data is more than ready to be your business-enabling partner; you just need to unshackle it with Delphix.


Wednesday, March 22, 2017

The Missing Ingredient


Last week, Delphix held its annual Company Kickoff (CKO). It reminded me of what makes Delphix such a fantastic company and energized everyone about what’s to come this new fiscal year. There are many observations and takeaways, with a lot to share, so I’ll break this into two or three blog posts. First, allow me to share some personal reflections on the past year.


Having been at Delphix for over three years, I have enjoyed the ride that being a part of a disruptive startup has to offer. There have been successes, and there have also been setbacks. We have had some easy wins, and we have also had some scrappy battles. While there is no doubt of our success, being the pioneers and masters of our space, or the value we deliver to our customers, I couldn’t help feeling that we were lacking something. We were a company on the cusp of greatness, yet that golden ring seemed to be just beyond our fingers. Of a surety, something had to change. And 2017 ushered in an abundance of change: we filled all of our senior leadership vacancies, had a few organizational realignments, and even product realignments. And we are definitely better for it; people were excited, bustling, and busy. Yet coming into this new fiscal year, something still felt like it was missing.




The evening before the official start of CKO, we had a welcome reception for everyone who had already arrived. The only way for me to describe the scene and do it justice is to liken it to a distant family reuniting to celebrate a holiday. And why not such a description? For us at Delphix, this is a time of celebration: we reflect on the previous year’s tremendous accomplishments and also share our plans and dreams for the future. And true to the analogy, there were many warm embraces, huge smiles, and boisterous bursts of laughter between those who are normally separated by thousands of miles. We had food, drink, and friends. Yes, indeed this was a joyous event.

The next day, our CKO was kicked off by our CEO, Chris Cook. Our event opened with a video of Chris taking a car full of Delphix employees on a “Carpool Karaoke” drive from our HQ in Redwood City, California to the CKO location. The video showed a more personal side of Delphix, and the laughter among the Delphix family was infectious. As the video ended and Chris took the stage to a standing ovation, I looked around me to see everyone with grins from ear to ear. It was at this very moment I realized that the transformation of Delphix was happening, and that I was witnessing a metamorphosis before my very eyes. For the first time that I had ever witnessed, all of Delphix were in the moment... together. This was my first aha moment (more on that later).



As everyone returned to their seats, Chris began to share with us “The Delphix Story,” painting a vision where data is no longer the constraint, but instead moves at the speed of business. He shared with us what a world looks like where data is as easy and instant to conjure as a snap of the finger. He then challenged us to make that vision a reality, to execute on our mission to reduce the weight of data, accelerate the pace of discoveries and breakthroughs, and inspire more aha moments.


Wait. What are aha moments? Chris explained that this is the moment when people discover something profound where they previously had no knowledge. This happens a lot with Delphix customers. It is so pronounced that you can see it physically emoted in many cases. To be honest, this is one of the best parts of my job. To see someone finally grasp Delphix and then get slammed with the realization of the huge impact Delphix will have on their lives is absolutely amazing. I have seen people go wide-eyed, shake their heads in disbelief, get out of their seats and walk around, and just stare at me with their mouths agape. Watching those life-changing moments is just as life changing for me.


This was my second aha moment. Chris was ushering in a fundamental shift for Delphix. It’s not our job to sell software; instead, it’s our job to change people’s lives for the better with our technology, unlocking as much innovation and potential as possible while freeing them from the shackles of their data constraints. This is a mission that I cannot refuse.





After his talk, Chris brought in former Blue Angels pilot John Foley to speak to us that morning. John shared some amazing stories about his exploits as a pilot, including functioning as a goodwill ambassador during times of heightened tensions between the United States and Russia. He taught us the value of his #gladtobehere initiative, where a heart of thanks and gratitude is the bedrock of character and success. But the thing that stuck out the most to me was the precision needed to achieve the level of excellence that makes the Blue Angels the best. Yes, that precision takes thousands of hours of practice, teamwork, dedication, and skill. But what set the Blue Angels apart was their unity in focus and purpose: the hours of daily drills where they would sit in a briefing room and talk through the day’s flight, literally turn by turn, maneuver by maneuver. Why? Because they were all committed to precise execution; the consequences of even slight miscalculations could be catastrophic. That was their promise to each other: to be of one mind, focused on their execution and on those around them, in order to collectively be the best at what they do.


And it was at the conclusion of John Foley’s presentation that I realized we finally had what we had been missing: one vision, one mission, one focus, one promise, one team… Unity. We have a lot of unbelievable talent at Delphix – you won’t find better anywhere else. We’ve been doing our best individually or as small groups, and that’s gotten us to a great place. Now that we’re unified, we are poised and ready to transform the way the world deals with its data.