Thursday, December 15, 2016

Minding the [Data] Gap

I am fortunate enough to find myself in London, England once again this year. If you have been to London and have ridden "the tube," then you are familiar with the phrase “Please mind the gap.” For those who may be unfamiliar with this phrase, it is repeated at every stop to remind departing passengers not to step in the space between the train and the platform. And, like most constantly repeated sound advice, we tend to hear it the first time and then tune it out. True to form, ignoring that advice usually comes back to bite us in the end. That is what almost happened to me today, as “the gap” was twice as big as it normally is. I have never been so thankful to have such large feet.
The events played over and over again through my mind on the remainder of my journey back to the hotel. And then the thought hit me: this is exactly what happens in our SDLC (though often with a more unfortunate outcome). We have learned to live with the peril of old, stale, subsetted, or purely synthetic data (the data gap) in our day-to-day lives and completely forget about its presence...until it is much bigger than we assume and almost kills us (or at the least causes us some embarrassment and bruises).
We have acknowledged the data gap in our SDLC and have managed to just work around it ... that is, until we don't. All of us have experienced injury from the data gap in our projects. Here are some typical injuries:
  • We plan for a two-week database provision time, but it takes four weeks. Project delay and cost overrun.
  • We plan for three days for a database refresh, but it takes five days. Teams wait, features drop, testing cycles shrink.
  • We don't plan refreshes, so our projects don't suffer downtime; but the weeks- or months-old data causes us to miss detecting a P1 defect.
  • We write back-out scripts/steps to reset our dev/test environments and avoid five-day refreshes; but they fail unknowingly, introducing bugs and productivity loss.
  • We don't mask non-prod copies, because masking is hard and takes too long. Dev gets compromised.
  • We run purely synthetic data in non-prod, but we miss corner cases, introducing bugs into late-cycle dev or into Prod.
There are even more data gap pains we have all faced around processes like subsetting and break/fix activities. Just like in my tube experience, we knew the gaps were there. In fact, we counted on the gap being there, but in those moments the gaps were far larger than we planned. We planned to march forward with our data in place, but instead we plunged into the abyss.
While Delphix can't heal every peril in your SDLC, let's examine just a few of the places where Delphix can remediate the pain:

Provisioning new data

Today, if you are like most traditional shops, you wait days or weeks to get new environments, and additional days/weeks to get those new environments provisioned with data. If you are a more modern DevOps/Automation shop, you can get environments in minutes, but you still wait hours or days for data. After all, even if you automate the request, copying/transferring 60TB of data only happens so fast (thanks, physics). With Delphix, you can eliminate the words "days", "weeks", and "hours" as descriptors for waiting for data. Yes, that is even for a 60TB database. This can either be done ad hoc by the developer/tester/DBA via the Delphix self-service tools, or can be integrated right into your automation/DevOps processes with very little effort.
In the diagram below, I depict a situation where you are already using configuration automation, such as Ansible, Puppet, Chef, or SaltStack, to build your infrastructure and supporting applications. In this case, you can easily tell those tools to automatically call Delphix to provision the data after the infrastructure is in a ready-state.
Flow diagram of provisioning data with and without Delphix
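To make the hand-off concrete, here is a minimal sketch of a script your configuration automation could run once the infrastructure is ready, driving the engine over its REST interface with only the Python standard library. The engine address, credentials, object references, and the Oracle-flavored payload shapes are placeholder assumptions; check them against your engine's API documentation before use.

```python
import json
import urllib.request

ENGINE = "http://delphix-engine.example.com"  # hypothetical engine address


def provision_payload(vdb_name, group_ref, source_ref):
    """Build a minimal provision request body (Oracle-style illustration)."""
    return {
        "type": "OracleProvisionParameters",
        "container": {
            "type": "OracleDatabaseContainer",
            "name": vdb_name,
            "group": group_ref,
        },
        "timeflowPointParameters": {
            "type": "TimeflowPointSemantic",
            "container": source_ref,
            "location": "LATEST_SNAPSHOT",
        },
    }


def post_json(opener, path, body):
    """POST a JSON body to the engine and return the parsed response."""
    req = urllib.request.Request(
        ENGINE + path,
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with opener.open(req) as resp:
        return json.load(resp)


def provision(opener, payload):
    # The engine expects a session handshake and a login before real calls.
    post_json(opener, "/resources/json/delphix/session", {
        "type": "APISession",
        "version": {"type": "APIVersion", "major": 1, "minor": 6, "micro": 0},
    })
    post_json(opener, "/resources/json/delphix/login", {
        "type": "LoginRequest", "username": "delphix_admin", "password": "***",
    })
    return post_json(opener, "/resources/json/delphix/database/provision", payload)
```

In practice the opener should be built with a cookie processor (for example, `urllib.request.build_opener(urllib.request.HTTPCookieProcessor())`) so the session cookie persists across the three calls.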

Refreshing Data

The constraints that afflict data provisioning in your environment likely afflict data refreshes as well, though in some cases they may be somewhat lessened (days instead of weeks). The same technology that Delphix uses to provision environments also applies to refreshes. That means a refresh takes the same seconds/minutes that it took to provision the first copy. The same self-service and automation capabilities that were available to provision are also available to refresh. Also, Delphix stays in near real-time sync with production, so you can refresh your non-prod copy from three-seconds-old production data in just a few minutes' time, at will. In the time it would normally have taken you to shoot your friendly DBA an email to request the refresh, you could already have the data. How does that impact your project timelines? If every pull from git, or every commit gate triggered on TFS, automatically refreshes your database (including applying any DDL/DML that needs to occur), how does that affect your quality?
The diagram below depicts a real account from one of our Wall Street financial customers. Because production data was cumbersome to deliver to non-prod, development would occur on months-old environments. Changes to production occurred outside of development, courtesy of hot fixes, etc. Over time, this added more and more inconsistencies between production and development data, which resulted in more and more bugs making it to production. Routinely refreshed data in development results in more defects being fixed early in the SDLC, where they are easier to fix. Here I show refreshes happening on a weekly schedule, but they could be set to any interval or triggered by another tool, such as a git hook.
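As a sketch of what a git-triggered refresh could look like: a post-merge hook can invoke a small script like the one below. The module paths follow the delphixpy v1_6_0 namespace used elsewhere in this blog, but the engine address, credentials, VDB name, and the exact refresh parameter object are assumptions you should validate against the delphixpy documentation for your engine version.

```python
def find_reference(objects, name):
    """Return the engine reference of the first object whose name matches."""
    for obj in objects:
        if obj.name == name:
            return obj.reference
    raise LookupError("no object named %r" % name)


def refresh_vdb(vdb_name):
    """Refresh a VDB from the latest point on its parent (illustrative)."""
    # delphixpy is imported lazily so find_reference() is usable without it.
    from delphixpy.v1_6_0.delphix_engine import DelphixEngine
    from delphixpy.v1_6_0.web import database
    from delphixpy.v1_6_0.web.vo import RefreshParameters

    engine = DelphixEngine("delphix.example.com", "delphix_admin",
                           "password", "DOMAIN")
    vdb_ref = find_reference(database.get_all(engine), vdb_name)
    database.refresh(engine, vdb_ref, RefreshParameters())


# From .git/hooks/post-merge one would simply call: refresh_vdb("devdb")
```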

Resetting Data

Some tests are destructive by intentional design, and some tests are unintentionally destructive. In either case, you need a way to get back to a "test-ready" state. That really leaves only a couple of choices: either refresh the data, or back out the changes. But backing out the changes implies a couple of very important assumptions. First, you have to be aware that changes were made to your data. If your development or tests were not designed to be destructive, are you even scrutinizing whether Field A2354 on Form 234 now points to a different column in table XYZ? You simply don't know what you don't know.
But if you are running intentionally destructive tests, are you sure you are backing out all of the changes? How much time and energy are you spending on your back-out/reset procedures? Do you subject those scripts/procedures to the same level of QA as the application you are developing? If you are, I commend you. But there is still a better way. Once your non-prod environments are virtualized in Delphix, you can have crash-consistent copies of your applications that are as easy to access as rewinding a movie on Netflix or flipping pages on your Kindle. You have already provisioned your data with Delphix in minutes. You do some development that did not yield the results you wanted. Just click "Rewind" to go back to the point in time you want. This can be either a literal timestamp or something more canonical, like a bookmark titled "Step 5 complete." The process takes about as long as it takes to restart your application/database. If you no longer have to develop, test, and maintain reset scripts, and the reset happens in minutes, what productivity and quality gains are delivered to your projects?
In the diagram below, I have depicted a typical process for testing the application of package updates to a composite application with multiple data sources, or to an ERP system like SAP. In a traditional test, if you are applying a series of SAP packages and one fails catastrophically, you likely have to wipe and start from scratch. That process takes weeks. Our customers that use Delphix for SAP are able to revert to the last successful step in minutes and are ready to resume their testing with the click of a button.
Flow diagram of resetting test environments with and without Delphix
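Here is a hedged sketch of what a scripted rewind could look like: a pure helper that chooses the snapshot to return to, plus an illustrative delphixpy call. The rollback call, parameter objects, and credentials are assumptions based on the engine's documented rollback operation; treat this as a starting point, not a recipe.

```python
def latest_at_or_before(snapshots, timestamp):
    """Pick the most recent (reference, time) pair at or before timestamp."""
    candidates = [s for s in snapshots if s[1] <= timestamp]
    if not candidates:
        raise LookupError("no snapshot at or before %s" % timestamp)
    return max(candidates, key=lambda s: s[1])


def rewind_vdb(vdb_ref, snapshot_ref):
    """Roll a VDB back to a prior snapshot (illustrative delphixpy call)."""
    # delphixpy is imported lazily so the helper above runs without it.
    from delphixpy.v1_6_0.delphix_engine import DelphixEngine
    from delphixpy.v1_6_0.web import database
    from delphixpy.v1_6_0.web.vo import RollbackParameters, TimeflowPointSnapshot

    engine = DelphixEngine("delphix.example.com", "delphix_admin",
                           "password", "DOMAIN")
    params = RollbackParameters()
    params.timeflow_point_parameters = TimeflowPointSnapshot()
    params.timeflow_point_parameters.snapshot = snapshot_ref
    database.rollback(engine, vdb_ref, params)
```

A bookmark such as "Step 5 complete" is just a friendlier way of naming one of these points in time; the selection helper is where you would map that name to a snapshot reference.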

Data Masking and Anonymization

Security is paramount to protecting our businesses, missions, patients, and consumers. Non-production copies, with few exceptions, should never contain sensitive data. I know that we all know this; yet we all have worked (or are working) somewhere where banking/patient/customer information was strewn about in many places. If masking were easy, everyone would do it, everywhere, all the time. With Delphix, masking is easy. Furthermore, with Delphix, Agile Masking for non-prod copies can be automated, eliminating the potential for a process breakdown whereby a developer gets an unmasked copy of production. Leveraging role-based access control, every time a developer clicks "provision," "refresh," or "rewind," the request is supplied from a pre-masked copy of production. Yes, pre-masked. So the tax has already been paid for that eight-hour masking job by the time your developers get into the office at 8 AM, and they have fresh masked data available from the previous day's close. Delphix Agile Masking is easy to set up and use, requires no programming expertise, and can even analyze your data for possible sensitive information. With the complexity and time constraints removed from masking, how can you afford not to mask anymore?
In the diagram below, I show a typical process where a new copy of masked data is requested and the time and manual touch points that it takes before the data is delivered. In the Delphix scenario, security can establish and review a masking policy that is automatically applied by Delphix. Delphix automatically updates with a masked copy of production on a specified interval. At any time, and without impacting the data delivery chain, security can review any of the automatically masked copies to ensure compliance and satisfy audits. The requestor only has access to request data from the certified masked copy and can get it delivered via self-service in minutes. This application of masked data delivery can be applied to any of the above scenarios I described, as well.
Flow diagram of masking data with and without Delphix
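The compliance review in this flow can itself be automated. The sketch below is a tiny, engine-agnostic helper (the field names and inventory data are hypothetical) that an auditor could run against an inventory of non-prod copies to flag anything not provisioned from the certified masked copy.

```python
def non_compliant(copies, certified_masked_ref):
    """Return the names of copies whose provisioning parent is not the
    certified masked copy."""
    return [c["name"] for c in copies if c["parent"] != certified_masked_ref]


# Example inventory (hypothetical data):
inventory = [
    {"name": "dev1", "parent": "MASKED-PROD"},
    {"name": "dev2", "parent": "MASKED-PROD"},
    {"name": "perf1", "parent": "RAW-PROD"},  # provisioned from raw prod!
]
print(non_compliant(inventory, "MASKED-PROD"))  # -> ['perf1']
```

An empty result is what you would hand to the auditors: every copy in the chain descends from the certified masked source.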

These are just a few of the scenarios where Delphix can be inserted into your SDLC. I have previously blogged about customers that leverage Jenkins or ServiceNow (SNOW) orchestration to call Delphix provisioning and complete their CI pipeline. The key point is to look at your SDLC and identify points where you are waiting. If you are waiting, it is likely for data. If it is indeed data for which you are waiting, then Delphix can help. Delphix is Data Delivered.

Tuesday, December 13, 2016

Fundamentals of DevOps: The Servant Leadership Gene

farakos/BigStock.com

I have been privileged to be a part of the technology sector these last two decades. In the last four years, we have seen a fantastic shift in the ability of companies to innovate, thanks to what has been aptly called "DevOps". Drastically oversimplifying, DevOps is the unification of the Operations and Development groups inside an organization, leveraging Culture, Automation, Lean, Measurement, and Sharing (CALMS) to rapidly accelerate software from Development to Production. Companies like eRetail startup Etsy have used DevOps to rapidly develop their products and capture huge market share; likewise, DevOps has brought light-speed agility to established giants such as Amazon, Apple, Facebook, and Fidelity, enabling them to deploy thousands of times a day. In the face of such demonstrable results, it is hard to see how companies that aim to compete in the marketplace can do so without embracing DevOps.

And since software rules the world, we tend to look to software to improve our situation. Indeed, software has allowed us to automate, measure, and lean "all the things" to achieve some amazing results. Yet every day, companies seem to be waking up to the realization that software alone isn't enough. A simple Google search for "DevOps failures" gives several pages of new listings from the last month. It seems that these companies are just late to learn what Patrick Debois discovered near the beginning of the DevOps movement: “DevOps is a human problem”. Fittingly, IT Revolution Press bookends the DevOps acronym of core principles with two people-centric items: Culture and Sharing. But even some of those that have put people first are among those who have failed. So then, what is the missing ingredient that hinders IBM's success with DevOps and enables the Etsys? I am afraid I don’t know of any spell to conjure, but my meditation on this subject has led me to three magic letters: DNA.

In reading numerous interviews of some of the DevOps Elite, I have noticed a recurring pattern: Servant Leadership. Ken Blanchard breaks down Servant Leadership into a threefold role: servant, steward, and shepherd:

The Servant – seek to meet the needs of others
The Steward – take great care and consideration of what has been entrusted to you
The Shepherd – protect, guide, and nurture those under your sphere of influence

In the preface of The DevOps Handbook, Jez Humble, Gene Kim, Patrick Debois, and John Willis each give a brief account of how they got involved with DevOps. Though I have only met Gene Kim a couple of times, and know none of them personally, I do not believe they were motivated by a quest for glory or self-interest. The common theme among their accounts was that they saw their peers struggling and, thus, felt compelled to find a better way to help their community. This required many years of swimming upstream against a long-established IT culture of anti-patterns rewarding fiefdoms, silos, and lone wolves. Those of us who have been in the industry any real length of time have been either participants in or victims of this culture (or perhaps both).

Servant Leadership, isn’t that just culture? No, though if you have a culture of Servant Leadership, that is a beautiful thing. Culture is the result of group action and thinking, and each of the aforementioned pioneers had to initially go it alone. Such was their isolation that, in their brief few paragraphs, each of them noted the moment when they encountered like-minded individuals. The realization that you are not alone in the world is a life-changing moment.

What then would make them do an about-face and sacrifice their own interests to swim upstream for the greater good? I submit that it is the same thing that drives the salmon upstream: DNA. Not that these individuals were endowed with some sort of “altruism” gene; but somewhere along the way they developed a sense of purpose that extended beyond themselves. This could have been instilled in them at home when they were young, or perhaps as a result of counseling from a great mentor in the workplace. Subscribing to Dan Pink’s theory of what motivates us: because of that purpose, they leveraged their autonomy and mastery in pursuit of the solution to this complex problem.

And I think this is a commonly missing component across the technology sphere, whether you are “DevOps’ing” or not. One cannot simply list “Servant Leadership” as a core value in the employee handbook and reap the rewards in a few quarters. To truly get your organization to go against the current, begin to openly collaborate and share, and work to a common business objective, you are going to have to rely on individuals that have the Servant Leadership DNA, even if this requires a transplant. This is needed at all levels, or your servant leaders will leave. With top-level servant leaders in place, your front-line servant leaders will have the support they need to continue to face cultural adversity for the sake of everyone under their watch.

I believe the heart of a Servant Leader can only be taught by example. I am living proof of that statement. I owe Dave Lavanty, now VP of Public Sector at Adobe, a debt of gratitude. When he met me, I was a quick-tempered lone-wolf upstart. I am certain I was a challenge that caused him to lose a few winks on occasion. And the entirety of the lessons he taught me still hasn’t fully soaked in; I am still learning from those lessons today. If it weren’t for his persistent Servant Leadership towards me, I am certain that my current state would be far worse than the one I enjoy today. And because of this truth, I do my best to be a servant leader in all things I do, both in and out of the workplace.

Wednesday, October 26, 2016

Data Delivery: Do you have the Ch(at)Ops?

Recently, I had the privilege of engaging in a lively conversation with a Delphix customer from Wall Street. The conversation was around all the ways that data virtualization via Delphix has improved their day-to-day operations, SDLC, TDM, CI/CD, and DevOps initiatives. They also shared how Delphix has fundamentally catalyzed their ability to deliver and innovate at unprecedented levels. As we started discussing the "Art of the Possible" for their journey with Delphix, they asked about integrating Delphix into their ChatOps environment. Based on that conversation, I submit the following:


Links I reference in my video:
My GitHub with examples
Meet Will, the friendliest, easiest-to-teach bot you've ever used.
HipChat
Delphixpy Tutorial 1
Delphixpy Tutorial 2
Delphixpy Tutorial 3

Thursday, June 16, 2016

More Fun with Delphix, delphixpy part 3: Snapshots and Async

Delphix Express is no longer offered, but these examples will work with your Delphix installation



Alright. This is the third part in the series. I pick up right where the last blog left off, so if you are just joining in the fun, I suggest you go back and read/watch the first two parts (links below).
Part 1
Part 2

Also, as I add to the series, I am adding the example scripts to the delphixpy-examples repo. So, if you originally cloned the repo in part one, be sure to update it with the latest version.

In this blog we cover the following:

  1. Working with Groups
  2. Obtaining a specific database
  3. Understanding required vs. optional parameters in the API documentation
  4. Performing a Snapshot/Sync on a Database
  5. Performing a Snapshot/Sync on a group
  6. Performing Asynchronous jobs in Delphix
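For readers who want a feel for the code before watching, the sketch below previews the group-snapshot portion of the video. The group filter is pure Python; the snapshot loop shows the rough shape of the delphixpy calls, with the engine address and credentials as placeholder assumptions. (delphixpy also provides a job-context helper for running these jobs asynchronously instead of one at a time, which the video covers.)

```python
def databases_in_group(databases, group_ref):
    """Pure helper: keep only the databases belonging to one group."""
    return [d for d in databases if d.group == group_ref]


def snapshot_group(group_name):
    """SnapSync every database in a named group (illustrative)."""
    # delphixpy is imported lazily so the helper above runs without it.
    from delphixpy.v1_6_0.delphix_engine import DelphixEngine
    from delphixpy.v1_6_0.web import database, group

    engine = DelphixEngine("delphix.example.com", "delphix_admin",
                           "password", "DOMAIN")
    group_ref = next(g.reference for g in group.get_all(engine)
                     if g.name == group_name)
    for db in databases_in_group(database.get_all(engine), group_ref):
        database.sync(engine, db.reference)  # one snapshot job per database
```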
Before you watch the video below, I want to hear from you! Reach me on Twitter @CloudSurgeon with your delphixpy projects. Some of you have done so already. I love it! Keep them coming.






Friday, June 10, 2016

Working with Delphix python module (delphixpy) part 2

Delphix Express is no longer offered, but these examples will work with your Delphix installation

Ack. Sorry for taking so long to get this second part out, everyone! In this blog I explain the very basics of how to translate what you do in the Delphix GUI to python via the delphixpy module. Here is what we cover:
  1. List the databases in our Landshark environment via the GUI
  2. Discuss the GUI-to-CLI mapping
  3. List the databases in our Landshark environment via the CLI
  4. Enable tracing and then repeat the CLI listing
  5. Discuss the tracing output
  6. Discuss the CLI-to-python mapping
  7. Establish a connection to Delphix via python
  8. Discuss invoking the delphixpy namespace operators
  9. Work with the delphixpy value objects
Below is my video; but first, here are some links that I reference in my video that you may want to quickly glance over before you watch. Feel free to explore with the cookbooks on our documentation page after you watch the video.
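As a preview of that mapping, here is roughly what "establish a connection and list databases" looks like in delphixpy. I am using the v1_6_0 namespace referenced later in this post; the engine address and credentials are placeholders, and the constructor arguments are my reading of the module, so verify them against the delphixpy documentation.

```python
def database_names(databases):
    """Pure helper: pull the display names off delphixpy value objects."""
    return sorted(db.name for db in databases)


def list_databases():
    """Connect to the engine and print every database it manages."""
    # delphixpy is imported lazily so the helper above runs without it.
    from delphixpy.v1_6_0.delphix_engine import DelphixEngine
    from delphixpy.v1_6_0.web import database

    # Assumed signature: DelphixEngine(address, user, password, namespace);
    # "DOMAIN" is the namespace regular (non-sysadmin) users log in to.
    engine = DelphixEngine("delphix.example.com", "delphix_admin",
                           "password", "DOMAIN")
    for name in database_names(database.get_all(engine)):
        print(name)
```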


Also, as an aside, if you are using Delphix Express, you may need to specify an older version of the module. When going into the CLI in the tutorial, type version and hit enter. That is the API version you are using. Replace delphixpy.v1_6_0.delphix_engine in the tutorial with delphixpy.vX_X_X.delphix_engine, where X_X_X is the X.X.X number from the version command.
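That substitution can even be scripted. The helper below is my own convenience sketch (not part of delphixpy): it converts the CLI's version output into the matching module path and imports it dynamically.

```python
import importlib


def namespace_for(version_string):
    """Map CLI 'version' output such as '1.6.0' to a delphixpy namespace."""
    return "delphixpy.v" + version_string.strip().replace(".", "_")


def engine_module(version_string):
    """Dynamically import the matching delphix_engine module."""
    return importlib.import_module(namespace_for(version_string) + ".delphix_engine")


print(namespace_for("1.6.0"))  # -> delphixpy.v1_6_0
```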



I hope this is helpful in getting you going. My next blog in the series should come next week and will be about using the delphixpy module to create snapshots of VDBs.

Wednesday, June 1, 2016

We Value Our Partners, and It Shows!

Partners, we love you!

Wow! What a whirlwind of success I have had the privilege to personally witness Delphix enjoy over these last three years. We at Delphix realize that this has only been possible by forming successful partnerships with the world's premier system integrators, value-added resellers, and trusted services providers. There is no way we could boast over 30% of the Fortune 100 without the dedication, determination, and trust of our partners. At Delphix, it is well known that we believe our people are our most important resource, and we value our partners to the same degree.

We can do better

Yet, when I looked at the Delphix business toward the end of our last fiscal year, I kept asking myself, "Yeah, this has been an incredible year, but why couldn't it have been two or three times that?" So I started working through our sales data and identifying constraints. Partners were doing incredibly well selling Delphix, but the lead time for the first deal was taking too long for my liking. When I started analyzing the statistical data for correlations, there seemed to be a common thread: too many "touch points" for new partners. What I mean by that is that partner onboarding was too bespoke and required heavy assistance until the partner could sustain themselves. To that end, I want to pause for a moment and thank all of my colleagues who have spent so many hours enabling our partners. Well done.

Constraints

Following the theory of constraints, I knew that adding more staff or reducing our influx of new partners would not actually solve the issue. The constraint was (mainly) around enablement. That was something I knew we could fix. So I reached out to one of our Channel Directors and asked him if he agreed with me on the constraint. He did. Then I boldly told him my plan to fix it, and asked him if he would let me attempt to do so in his territory. Thankfully, he did. In March, I went to our partner summit in Rome and gathered the "needs and wants" list from our partners, finding out what worked and what didn't. I made an open commitment to our partners then that we would work to rapidly address the problem. And we sure did. The road less traveled...

The Essential Path

A couple of weeks ago, we introduced the Sales and Presales tracks of our "Essential Path for Delphix Partners." This enablement is designed to quickly give our partners a command of the core Delphix competencies. After completing this track, and coupled with our strategic touch points, our partners will be able to autonomously explain both "Why Delphix?" and "How does Delphix do it?" for 80% of the use cases they will encounter. The overarching goal is to accelerate our partners' first 90 days with Delphix, so that they are closing more deals, and bigger deals, earlier in their journey. The new enablement boasts 27 courses, over 60 new chapters, over 400 minutes of new content, and a variety of perspectives from the US, South America, Europe, the UK, and Australia. We are really excited at what has been accomplished in so short a time!

But wait... there's more!

But this is just the beginning. The Essential Path is currently being extended to include services delivery, and the Advanced Path and Master Path are also being added. Plus, for some of our select strategic alliances, we will soon be offering our Delphix Partner University (which my great pal, Woody Evans, is leading). The Delphix Partner University will be based upon the same courses as the Essential Path, but will also include additional courses tailored for partners that are making considerable investments in establishing Centers of Excellence and creating core business units around Delphix inside their organizations. If you are one of our partners and are reading my blog, please accept my personal and heartfelt THANK YOU! You are one of the many reasons that it is such a great time to be at Delphix. Well, I am off to fix the next constraint in the chain... talk to you soon!

Friday, April 29, 2016

Getting started with the Delphix python module

Delphix Express is no longer offered, but these examples will work with your Delphix installation

Allo everyone. By popular demand, I am going to start a short series on using the Delphix python module, "delphixpy". If you have been following my work any time over the last two years and have downloaded the Landshark environment along with Delphix Express, then you have been a beneficiary of delphixpy.


In short, delphixpy is a way to call the Delphix API from within python so that you can leverage all that object-oriented goodness that python provides. It also allows you to treat JSON as dictionaries, among many other powerful things in python. I have posted three common examples for DevOps, CI/CD, and Enterprise Automation shops, and I tried to meticulously comment them. I have also recorded an intro video where I walk you through getting your Landshark environment set up to run the scripts, then run the examples against a Landshark environment so that you can easily follow along wherever you are.
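To illustrate the "JSON as dictionaries" point with a hypothetical, abbreviated response body: python's json module turns an engine reply straight into dicts and lists you can index naturally.

```python
import json

# A hypothetical (abbreviated) engine response body:
payload = ('{"type": "ListResult", "result": '
           '[{"name": "devdb", "reference": "ORACLE_DB_CONTAINER-1"}]}')

reply = json.loads(payload)   # JSON object -> Python dict
first = reply["result"][0]    # JSON array  -> Python list
print(first["name"], first["reference"])  # -> devdb ORACLE_DB_CONTAINER-1
```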

There will be much more information to come very soon, we are just getting started!

You can get my examples from GitHub here
You can post questions, etc on the Delphix Community Site here
Landshark and Delphix Express download and setup instructions here


Summary of preparation commands:
  1. ssh into the Landshark LinuxTarget as the delphix user.
  2. assume root: su -
  3. yum install git -y
  4. pip install virtualenv
  5. exit root (you should now be the delphix user again)
  6. git clone https://github.com/CloudSurgeon/delphixpy-examples
  7. virtualenv --system-site-packages ~/landshark
  8. source ~/landshark/bin/activate
  9. pip install --upgrade -r ~/delphixpy-examples/requirements.txt

Tuesday, March 1, 2016

The Professional Latchkey Kids

This is a bit of a read, but I hope it will help someone that is going, or has been, through being a Latchkey employee. I hope it will open all leaders' eyes to the impact they can make in someone's life. TL;DR: Slow down, look up, be the change you want to see.

Of late, I have been spending increasingly more time reflecting upon my former "professional self." I suppose you could chalk this up to age and maturity, as the adages of generations have assured us that age begets wisdom; but I have been around too many immature mid-lifers to lift a glass and sing that song. And though I make no claims to be wise, I know enough to attribute whatever wisdom I possess to the people that have kept vigil over my life. To all of those, present and past, who have done so, I am forever indebted to you.

And there is one person I want to call out in particular: my good friend and mentor, Ted. When I met him almost ten years ago, he saw something that others didn't in this scrappy, uncouth hilljack from WV. While I wouldn't go as far as to call myself a diamond, he saw something of worth hidden beneath all the "rough" that others couldn't look past. Though I wasn't in his chain of command (he was the VP of Federal Sales, and I was a professional services consultant), he proactively took me under his wing. There I found protection from the mistakes I made, as well as correction; someone I could vent to when I was frustrated; and someone who offered me sound advice, especially when I didn't seek it (a.k.a., when I needed it most). Reflecting on this reminded me of the Latchkey kid programs that were part of my early childhood in the Detroit metro area, before moving to WV. If you are not familiar with the term "Latchkey kid," it describes children who would return to an empty home after school (using their own key to unlatch the door) until their parents returned from work.

And then it hit me: "That's it!" If you spend any sort of time on dedicated self-reflection, you will frequently ask yourself an important question: "Why in the world did I do that (or act that way)?" I am not proud of my scrappy past, and I often wish I hadn't been so caustic and aggressive towards my peers and superiors. I have consistently analyzed why I behaved that way at work when, outside of work, I was docile and jovial. And it finally came to me as I was reflecting on how Ted's mentorship saved me from myself: my bad behavior had been shaped by being a Latchkey employee.

I was a young adult from a rural part of the country, without a college education, trying to make my way in the world and establish myself. I had an uphill battle from the very beginning. I had no mentorship or leadership, only a bunch of bullying, condescension, and discouragement from peers, managers, and bosses. When you are the low man on the totem pole, with little status and no connection to someone of stature, your only choice is to fight your way through. Even though the opportunity was meager, when you only have crumbs you'll do anything you can to get just that slice of the pie. And, because of the scarcity of reward, every contribution you made in an effort to be noticed, or to gain a raise, had to be vigorously defended from jealous peers and managers that were eager to take credit for your work (and your slice of the pie). There was no one "home" to ensure that employees were getting the proper nutrition, tutelage, and counseling they needed to grow professionally. The only teacher was "The School of Hard Knocks" that my father so famously listed as his higher education credential. I can assure you, that school changes a man (or woman).

I was living and working in a cutthroat professional caste system. It took many years to be able to achieve enough velocity to escape the toxic gravity. Unfortunately, although I escaped that environment by the time I had met Ted, I was still there in my mind. Every coworker was still out to take credit for my work, every manager was still looking for a way to justify why I couldn't have a raise, every boss was still scheming to lead me along with empty promises of future recognition or gains.

And this is what I saw so commonly growing up as a Latchkey kid in the Detroit metro area. Thankfully, I didn't personally experience this downward spiral as a Latchkey kid. But an all-too-common outcome for many Latchkey kids (including many of my childhood friends) was declining behavioral health, a real result of life in the urban jungle. Moms and dads were not there to watch over us kids, so the rules of the streets took over. You have something someone else wants? They take it from you unless you fight (literally) to keep it. You start getting a little popular among the kids on your block? Someone will be along shortly to challenge that status and knock you down a few notches (with their fists). You want protection? You quickly find that the price of said protection escalates into something you were not originally willing to pay. This translated into behaviors that followed my friends into adulthood, even though they were no longer in that situation. For some of them, it unfortunately resulted in prison or death.

But some of my friends found rescue at the hands of a grandparent, a neighbor, or the Big Brothers Big Sisters network. The true action of importance was that someone took time out of their schedule to reach out and help someone who had lost their way find themselves. Someone to whom they owed nothing, but who still gave freely. They are difference makers. Ted was a difference maker.

Today, thanks to my mentor and others, I find myself at a place that was unimaginable two years ago: a leader with a global role inside one of the most exciting software startups to have come along since VMware. Now, I take personal responsibility for the success of my peers, as opposed to treating them as would-be thieves in the night. I spent two years analyzing why a certain executive and I just couldn't seem to get along, and then finally we clicked, and it was fantastic. This would have been impossible for the "old me." And now, when I find that I can't stand someone at work, the question I ask myself is, "Do you only feel this way because they are somewhere you want to be, or have something you want?" If the truth is told, sometimes that answer is "yes," and the response is "ouch."

To be clear, I don't feel this absolves me of past conduct. And I try not to ask myself what I could have done differently, but what I can do differently going forward. This revelation has caused me to redouble my efforts to show understanding in the workplace for those who are just starting out, or don't fit in, or perhaps just can't seem to get along with others. Perhaps they too just need someone to reach out and extend them some time and compassion. As leaders, if we would all just stop, close our email, and open our eyes, then maybe, just maybe, we too will find our next innovative leader trapped in a latchkey employee's frame of reference.

Be well. 

Sunday, February 7, 2016

Unlocking Development Streams with Jetstream and Continuous Masking

Recently I have become inundated with requests from all over the country to talk to people about leveraging their Delphix installations to remove the constraints in their SDLC and DevOps initiatives. I don't know the last time I got to speak about something I am this passionate about.

One of the commonly recurring themes I hear is how the masking process is a huge hurdle. That hurdle takes all shapes and forms: using synthetic data or subsetting because the masking process takes too long; using synthetic data because masking is too complicated; using home-grown scripts that worked at one point back in time; or, very commonly, "We know we should, but we don't."

I enjoy the enthusiastic discussions with my customers about addressing these pain points. In the end, we come down to the critical remedy: developers and testers need self-service tools to get the data they need, when they need it, where they need it; and that data needs to be masked. Some time ago, I put together this little video demonstration highlighting three or four "day in the life of a developer" scenarios to help my customers visualize what happens when your application teams are no longer hindered by yesterday's problems. I thought I would share this with everyone. I re-recorded the audio, as the original quality wasn't great, so there are a couple of places where I am just a second ahead of the demonstration. I hope this helps!


SugarCRM Jetstream Demo from Adam Bowen on Vimeo.

Tuesday, January 26, 2016

Getting child node names from xmlstarlet

OK. Just throwing this out there, because I was banging my head on my desk this morning trying to figure this out. I couldn't find any simple examples of this, but that could just be due to my inability to find things. I would have likely fared better if I had asked my wife to find it for me. (It certainly works for my car keys.) Here's hoping this helps someone.


cat <xml file> | xmlstarlet sel -t -m "//path/to/child::node()" -v "name()" -n

with XML like this:

<?xml version="1.0" encoding="UTF-8"?>
<landshark_properties VERSION="2.3.1">
  <general>
    <stuff/>
  </general>
  <groups>
    <vdb_groups>
      <vdb_group>
        <value>Dev Copies</value>
      </vdb_group>
    </vdb_groups>
    <dsource_group>
      <value>Sources</value>
      <type>String</type>
      <description>The ONE group Landshark setup will create for the dSources - No need to change</description>
    </dsource_group>
  </groups>
  <environments>
    <things/>
  </environments>
  <engine>
    <morethings/>
  </engine>
  <content>
    <sugarcrm/>
    <employee_11/>
  </content>
</landshark_properties>

Run against the file above, it produces output like this (the blank lines come from the whitespace-only text nodes between the elements, which child::node() also matches):

cat text.xml | xmlstarlet sel -t -m "//content/child::node()" -v "name()" -n

sugarcrm

employee_11
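If xmlstarlet isn't handy, a rough equivalent can be sketched with Python's standard-library xml.etree.ElementTree (the trimmed-down XML below is just an illustration of the file above; adjust the path expression to taste). Iterating over an element yields only its child elements, so you don't get the blank lines that child::node() produces for whitespace text nodes:

```python
# List the names of the child elements under <content>, similar to:
#   xmlstarlet sel -t -m "//content/child::node()" -v "name()" -n
import xml.etree.ElementTree as ET

xml_doc = """<?xml version="1.0" encoding="UTF-8"?>
<landshark_properties VERSION="2.3.1">
  <content>
    <sugarcrm/>
    <employee_11/>
  </content>
</landshark_properties>"""

root = ET.fromstring(xml_doc)          # use ET.parse("file.xml").getroot() for a file
content = root.find(".//content")      # first <content> element anywhere in the tree
names = [child.tag for child in content]  # child elements only, no text nodes
print(names)  # ['sugarcrm', 'employee_11']
```

The same idea applies to any node: find the parent element, then iterate it to enumerate its children by tag name.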